From f2570214e4c0d9be4ef3c4990787e886f0fb3432 Mon Sep 17 00:00:00 2001
From: jrobble
Date: Thu, 23 May 2024 17:38:55 -0400
Subject: [PATCH] Update copyright to 2024.

---
 Dockerfile | 4 +-
 LICENSE | 2 +-
 build-site.sh | 4 +-
 docker-entrypoint.sh | 4 +-
 docs/docs/Admin-Guide.md | 2 +-
 docs/docs/CPP-Batch-Component-API.md | 2 +-
 docs/docs/CPP-Streaming-Component-API.md | 2 +-
 docs/docs/Component-API-Overview.md | 2 +-
 docs/docs/Component-Descriptor-Reference.md | 2 +-
 docs/docs/Contributor-Guide.md | 2 +-
 docs/docs/Derivative-Media-Guide.md | 2 +-
 docs/docs/Development-Environment-Guide.md | 2 +-
 docs/docs/Feed-Forward-Guide.md | 2 +-
 docs/docs/GPU-Support-Guide.md | 2 +-
 docs/docs/Health-Check-Guide.md | 2 +-
 docs/docs/Install-Guide.md | 2 +-
 docs/docs/Java-Batch-Component-API.md | 2 +-
 docs/docs/License-And-Distribution.md | 2 +-
 docs/docs/Markup-Guide.md | 2 +-
 docs/docs/Media-Segmentation-Guide.md | 2 +-
 docs/docs/Node-Guide.md | 2 +-
 docs/docs/Object-Storage-Guide.md | 2 +-
 docs/docs/OpenID-Connect-Guide.md | 2 +-
 docs/docs/Python-Batch-Component-API.md | 2 +-
 docs/docs/Release-Notes.md | 2 +-
 docs/docs/Roll-Up-Guide.md | 2 +-
 docs/docs/TiesDb-Guide.md | 2 +-
 docs/docs/Trigger-Guide.md | 2 +-
 docs/docs/User-Guide.md | 2 +-
 docs/docs/Workflow-Manager-Architecture.md | 2 +-
 docs/docs/index.md | 2 +-
 docs/site/Admin-Guide/index.html | 2 +-
 docs/site/CPP-Batch-Component-API/index.html | 2 +-
 .../CPP-Streaming-Component-API/index.html | 2 +-
 docs/site/Component-API-Overview/index.html | 2 +-
 .../Component-Descriptor-Reference/index.html | 2 +-
 docs/site/Contributor-Guide/index.html | 2 +-
 docs/site/Derivative-Media-Guide/index.html | 2 +-
 .../Development-Environment-Guide/index.html | 2 +-
 docs/site/Feed-Forward-Guide/index.html | 2 +-
 docs/site/GPU-Support-Guide/index.html | 2 +-
 docs/site/Health-Check-Guide/index.html | 2 +-
 docs/site/Install-Guide/index.html | 2 +-
 docs/site/Java-Batch-Component-API/index.html | 2 +-
 docs/site/License-And-Distribution/index.html | 2 +-
 docs/site/Markup-Guide/index.html | 2 +-
 docs/site/Media-Segmentation-Guide/index.html | 2 +-
 docs/site/Node-Guide/index.html | 2 +-
 docs/site/Object-Storage-Guide/index.html | 2 +-
 docs/site/OpenID-Connect-Guide/index.html | 2 +-
 .../Python-Batch-Component-API/index.html | 2 +-
 docs/site/Release-Notes/index.html | 2 +-
 docs/site/Roll-Up-Guide/index.html | 2 +-
 docs/site/TiesDb-Guide/index.html | 2 +-
 docs/site/Trigger-Guide/index.html | 2 +-
 docs/site/User-Guide/index.html | 2 +-
 .../Workflow-Manager-Architecture/index.html | 2 +-
 docs/site/index.html | 4 +-
 docs/site/search/search_index.json | 54 +++++++++----------
 59 files changed, 89 insertions(+), 89 deletions(-)

diff --git a/Dockerfile b/Dockerfile
index 6c72d1c15057..dd5f373f82f8 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -7,11 +7,11 @@
 # under contract, and is subject to the Rights in Data-General Clause #
 # 52.227-14, Alt. IV (DEC 2007). #
 # #
-# Copyright 2023 The MITRE Corporation. All Rights Reserved. #
+# Copyright 2024 The MITRE Corporation. All Rights Reserved. #
 #############################################################################

 #############################################################################
-# Copyright 2023 The MITRE Corporation #
+# Copyright 2024 The MITRE Corporation #
 # #
 # Licensed under the Apache License, Version 2.0 (the "License"); #
 # you may not use this file except in compliance with the License. #
diff --git a/LICENSE b/LICENSE
index 589a35dffc3a..ead8284c5755 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,5 +1,5 @@
 /******************************************************************************
- * Copyright 2023 The MITRE Corporation *
+ * Copyright 2024 The MITRE Corporation *
 * *
 * Licensed under the Apache License, Version 2.0 (the "License"); *
 * you may not use this file except in compliance with the License. *
diff --git a/build-site.sh b/build-site.sh
index 69139f4b6f82..8ce1cb388b62 100755
--- a/build-site.sh
+++ b/build-site.sh
@@ -7,11 +7,11 @@
 # under contract, and is subject to the Rights in Data-General Clause #
 # 52.227-14, Alt. IV (DEC 2007). #
 # #
-# Copyright 2023 The MITRE Corporation. All Rights Reserved. #
+# Copyright 2024 The MITRE Corporation. All Rights Reserved. #
 #############################################################################

 #############################################################################
-# Copyright 2023 The MITRE Corporation #
+# Copyright 2024 The MITRE Corporation #
 # #
 # Licensed under the Apache License, Version 2.0 (the "License"); #
 # you may not use this file except in compliance with the License. #
diff --git a/docker-entrypoint.sh b/docker-entrypoint.sh
index aeb9414718cb..243a841875b6 100755
--- a/docker-entrypoint.sh
+++ b/docker-entrypoint.sh
@@ -7,11 +7,11 @@
 # under contract, and is subject to the Rights in Data-General Clause #
 # 52.227-14, Alt. IV (DEC 2007). #
 # #
-# Copyright 2023 The MITRE Corporation. All Rights Reserved. #
+# Copyright 2024 The MITRE Corporation. All Rights Reserved. #
 #############################################################################

 #############################################################################
-# Copyright 2023 The MITRE Corporation #
+# Copyright 2024 The MITRE Corporation #
 # #
 # Licensed under the Apache License, Version 2.0 (the "License"); #
 # you may not use this file except in compliance with the License. #
diff --git a/docs/docs/Admin-Guide.md b/docs/docs/Admin-Guide.md
index 87474e0d2a99..95bae82a4e49 100644
--- a/docs/docs/Admin-Guide.md
+++ b/docs/docs/Admin-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

WARNING: Please refer to the User Configuration section for changing the default user passwords.

diff --git a/docs/docs/CPP-Batch-Component-API.md b/docs/docs/CPP-Batch-Component-API.md
index 84b3395455a1..3b5809fbca04 100644
--- a/docs/docs/CPP-Batch-Component-API.md
+++ b/docs/docs/CPP-Batch-Component-API.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 # API Overview
diff --git a/docs/docs/CPP-Streaming-Component-API.md b/docs/docs/CPP-Streaming-Component-API.md
index 142be9578a5e..30bb5fc091fc 100644
--- a/docs/docs/CPP-Streaming-Component-API.md
+++ b/docs/docs/CPP-Streaming-Component-API.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

WARNING: The C++ Streaming API is not complete, and there are no future development plans. Use at your own risk. The only way to make use of the functionality is through the REST API. It requires the Node Manager and does not work in a Docker deployment.

diff --git a/docs/docs/Component-API-Overview.md b/docs/docs/Component-API-Overview.md
index c4e568c0c9eb..1fff81d5b10a 100644
--- a/docs/docs/Component-API-Overview.md
+++ b/docs/docs/Component-API-Overview.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 # Goals
diff --git a/docs/docs/Component-Descriptor-Reference.md b/docs/docs/Component-Descriptor-Reference.md
index 4fee2ead745d..fbc48e5d1c07 100644
--- a/docs/docs/Component-Descriptor-Reference.md
+++ b/docs/docs/Component-Descriptor-Reference.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 ## Overview
 In order to be registered within OpenMPF, each component must provide a JavaScript Object Notation (JSON) descriptor file which provides contextual information about the component.
diff --git a/docs/docs/Contributor-Guide.md b/docs/docs/Contributor-Guide.md
index 6adfe29f1732..0e71e16f64b7 100644
--- a/docs/docs/Contributor-Guide.md
+++ b/docs/docs/Contributor-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 # High-level Overview
diff --git a/docs/docs/Derivative-Media-Guide.md b/docs/docs/Derivative-Media-Guide.md
index 4c069f8b123a..0a2954221a36 100644
--- a/docs/docs/Derivative-Media-Guide.md
+++ b/docs/docs/Derivative-Media-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 # Introduction
diff --git a/docs/docs/Development-Environment-Guide.md b/docs/docs/Development-Environment-Guide.md
index 024dfed2c4a4..6957b4db88e0 100644
--- a/docs/docs/Development-Environment-Guide.md
+++ b/docs/docs/Development-Environment-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 WARNING:
diff --git a/docs/docs/Feed-Forward-Guide.md b/docs/docs/Feed-Forward-Guide.md
index c1d651c5ab2e..214b8550bb16 100644
--- a/docs/docs/Feed-Forward-Guide.md
+++ b/docs/docs/Feed-Forward-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 # Introduction
diff --git a/docs/docs/GPU-Support-Guide.md b/docs/docs/GPU-Support-Guide.md
index d032c424c6f5..e2592d01a6ba 100644
--- a/docs/docs/GPU-Support-Guide.md
+++ b/docs/docs/GPU-Support-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 # Introduction
diff --git a/docs/docs/Health-Check-Guide.md b/docs/docs/Health-Check-Guide.md
index 61c01a0ba8a4..a9c4c96885c6 100644
--- a/docs/docs/Health-Check-Guide.md
+++ b/docs/docs/Health-Check-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract,
-and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023
+and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024
 The MITRE Corporation. All Rights Reserved.
diff --git a/docs/docs/Install-Guide.md b/docs/docs/Install-Guide.md
index 22b545f98ced..bcbfbecaa4e3 100644
--- a/docs/docs/Install-Guide.md
+++ b/docs/docs/Install-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 # Docker
diff --git a/docs/docs/Java-Batch-Component-API.md b/docs/docs/Java-Batch-Component-API.md
index e5623a45fa69..1998e6d5cfe0 100644
--- a/docs/docs/Java-Batch-Component-API.md
+++ b/docs/docs/Java-Batch-Component-API.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 # API Overview
diff --git a/docs/docs/License-And-Distribution.md b/docs/docs/License-And-Distribution.md
index 27cff30a804e..0da0e0e7aa60 100644
--- a/docs/docs/License-And-Distribution.md
+++ b/docs/docs/License-And-Distribution.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.
 License Considerations
diff --git a/docs/docs/Markup-Guide.md b/docs/docs/Markup-Guide.md
index 6b21123442c5..8e5d8255a4a8 100644
--- a/docs/docs/Markup-Guide.md
+++ b/docs/docs/Markup-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 # Overview
diff --git a/docs/docs/Media-Segmentation-Guide.md b/docs/docs/Media-Segmentation-Guide.md
index f4a366593077..dc5095663de6 100644
--- a/docs/docs/Media-Segmentation-Guide.md
+++ b/docs/docs/Media-Segmentation-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 # Detection Chaining
diff --git a/docs/docs/Node-Guide.md b/docs/docs/Node-Guide.md
index bea0b61f6a12..a4f9879c96dc 100644
--- a/docs/docs/Node-Guide.md
+++ b/docs/docs/Node-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

WARNING: This guide is for non-Docker deployments only, such as for local development environments. In a Docker deployment components are run as Docker-managed containers.

diff --git a/docs/docs/Object-Storage-Guide.md b/docs/docs/Object-Storage-Guide.md
index 48c097f99a35..2277e2bcd1af 100644
--- a/docs/docs/Object-Storage-Guide.md
+++ b/docs/docs/Object-Storage-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 # Object Storage Overview
diff --git a/docs/docs/OpenID-Connect-Guide.md b/docs/docs/OpenID-Connect-Guide.md
index cea4e57f5a1d..07fb03de85d6 100644
--- a/docs/docs/OpenID-Connect-Guide.md
+++ b/docs/docs/OpenID-Connect-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract,
-and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023
+and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024
 The MITRE Corporation. All Rights Reserved.

 # OpenID Connect Overview
diff --git a/docs/docs/Python-Batch-Component-API.md b/docs/docs/Python-Batch-Component-API.md
index c500686e7d95..a0b7435b9df3 100644
--- a/docs/docs/Python-Batch-Component-API.md
+++ b/docs/docs/Python-Batch-Component-API.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 # API Overview
diff --git a/docs/docs/Release-Notes.md b/docs/docs/Release-Notes.md
index aa39fba4c89a..9ddf3116ca4b 100644
--- a/docs/docs/Release-Notes.md
+++ b/docs/docs/Release-Notes.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 # OpenMPF 9.0.x
diff --git a/docs/docs/Roll-Up-Guide.md b/docs/docs/Roll-Up-Guide.md
index a452e9327b49..45dde37c9946 100644
--- a/docs/docs/Roll-Up-Guide.md
+++ b/docs/docs/Roll-Up-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract,
-and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023
+and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024
 The MITRE Corporation. All Rights Reserved.
diff --git a/docs/docs/TiesDb-Guide.md b/docs/docs/TiesDb-Guide.md
index 99fc71f33725..d93dd523b050 100644
--- a/docs/docs/TiesDb-Guide.md
+++ b/docs/docs/TiesDb-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract,
-and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023
+and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024
 The MITRE Corporation. All Rights Reserved.
 # TiesDb Overview
diff --git a/docs/docs/Trigger-Guide.md b/docs/docs/Trigger-Guide.md
index 897e161a32c8..81418df7d619 100644
--- a/docs/docs/Trigger-Guide.md
+++ b/docs/docs/Trigger-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract,
-and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023
+and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024
 The MITRE Corporation. All Rights Reserved.
diff --git a/docs/docs/User-Guide.md b/docs/docs/User-Guide.md
index e2aefe3e74af..58a48c719e1b 100644
--- a/docs/docs/User-Guide.md
+++ b/docs/docs/User-Guide.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

INFO: This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.

diff --git a/docs/docs/Workflow-Manager-Architecture.md b/docs/docs/Workflow-Manager-Architecture.md
index b0c9281a5529..3283feba9a0f 100644
--- a/docs/docs/Workflow-Manager-Architecture.md
+++ b/docs/docs/Workflow-Manager-Architecture.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

INFO: This document describes the Workflow Manager architecture for C++ and Java batch processing. The Python batch processing architecture and C++ stream processing architecture use many of the same elements and concepts.

diff --git a/docs/docs/index.md b/docs/docs/index.md
index e03e0bff9daa..aa18afc9afeb 100644
--- a/docs/docs/index.md
+++ b/docs/docs/index.md
@@ -1,5 +1,5 @@
 **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 # Overview
diff --git a/docs/site/Admin-Guide/index.html b/docs/site/Admin-Guide/index.html
index 4b55d6231d9b..b60bcc58a29e 100644
--- a/docs/site/Admin-Guide/index.html
+++ b/docs/site/Admin-Guide/index.html
@@ -255,7 +255,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

WARNING: Please refer to the User Configuration section for changing the default user passwords.

INFO: This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.

diff --git a/docs/site/CPP-Batch-Component-API/index.html b/docs/site/CPP-Batch-Component-API/index.html
index 8de5259d83d9..ecd45a742504 100644
--- a/docs/site/CPP-Batch-Component-API/index.html
+++ b/docs/site/CPP-Batch-Component-API/index.html
@@ -278,7 +278,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

API Overview

In OpenMPF, a component is a plugin that receives jobs (containing media), processes that media, and returns results.

The OpenMPF Batch Component API currently supports the development of detection components, which are used to detect objects in image, video, audio, or other (generic) files that reside on disk.

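To make the jobs-in, detections-out contract concrete, here is a minimal sketch of a batch detection component. Python is used for brevity (this page covers the C++ API, which is analogous); the mpf_component_api names follow the Python Batch Component API covered later in this patch, and detect_faces() is a hypothetical stand-in, so treat the exact signatures as assumptions rather than a definitive implementation.

```python
# A hedged sketch of a batch detection component (assumed mpf_component_api names).
import mpf_component_api as mpf

def detect_faces(media_path):
    # Hypothetical detector; a real component would run its algorithm here
    # and yield (x, y, width, height, confidence) tuples.
    return []

class SampleFaceDetectionComponent:
    def get_detections_from_image(self, image_job):
        # image_job.data_uri is the path to the media file on disk;
        # image_job.job_properties is a string-to-string map of job properties.
        return [
            mpf.ImageLocation(x, y, w, h, confidence, {'CLASSIFICATION': 'FACE'})
            for (x, y, w, h, confidence) in detect_faces(image_job.data_uri)
        ]
```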
diff --git a/docs/site/CPP-Streaming-Component-API/index.html b/docs/site/CPP-Streaming-Component-API/index.html
index f5911c747c37..8b92e1d2d16e 100644
--- a/docs/site/CPP-Streaming-Component-API/index.html
+++ b/docs/site/CPP-Streaming-Component-API/index.html
@@ -284,7 +284,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

WARNING: The C++ Streaming API is not complete, and there are no future development plans. Use at your own risk. The only way to make use of the functionality is through the REST API. It requires the Node Manager and does not work in a Docker deployment.

API Overview

diff --git a/docs/site/Component-API-Overview/index.html b/docs/site/Component-API-Overview/index.html
index a882dd3a2485..df3a47026524 100644
--- a/docs/site/Component-API-Overview/index.html
+++ b/docs/site/Component-API-Overview/index.html
@@ -252,7 +252,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

Goals

The OpenMPF Component Application Programming Interface (API) provides a mechanism for integrating components into OpenMPF. The goals of the document are to:

diff --git a/docs/site/Component-Descriptor-Reference/index.html b/docs/site/Component-Descriptor-Reference/index.html
index b5311edccff9..2a8c72541a1a 100644
--- a/docs/site/Component-Descriptor-Reference/index.html
+++ b/docs/site/Component-Descriptor-Reference/index.html
@@ -242,7 +242,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    Overview

    In order to be registered within OpenMPF, each component must provide a JavaScript Object Notation (JSON) descriptor file which provides contextual information about the component.

    This file must be named "descriptor.json".

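As a rough illustration of such a descriptor, the sketch below writes out a minimal descriptor.json from Python. Only "trackType" is confirmed by this patch (the Release Notes mention algorithm.trackType); every other field name here is an assumption based on typical descriptors, not a definitive schema.

```python
# A hedged sketch of a minimal descriptor.json; field names other than
# algorithm.trackType are assumptions.
import json

descriptor = {
    'componentName': 'SampleFaceDetection',   # assumed field name
    'componentVersion': '9.0',                # assumed field name
    'sourceLanguage': 'python',               # assumed field name
    'algorithm': {
        'name': 'SAMPLEFACEDETECTION',        # assumed field name
        'description': 'Detects faces in images and videos.',
        'actionType': 'DETECTION',            # assumed field name
        'trackType': 'FACE',                  # algorithm.trackType per the Release Notes
    },
}

with open('descriptor.json', 'w') as f:
    json.dump(descriptor, f, indent=2)
```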
diff --git a/docs/site/Contributor-Guide/index.html b/docs/site/Contributor-Guide/index.html
index 9adc98e7d932..314f6868b92b 100644
--- a/docs/site/Contributor-Guide/index.html
+++ b/docs/site/Contributor-Guide/index.html
@@ -267,7 +267,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    High-level Overview

    We're excited that you're considering contributing to the OpenMPF project! If you have any questions about the process or how to get involved, please feel free to send us an e-mail with your question.

    We encourage you to read the remainder of the guide as well as review the project's License and other Documentation.

diff --git a/docs/site/Derivative-Media-Guide/index.html b/docs/site/Derivative-Media-Guide/index.html
index 18c76f5fc48d..cc628e131b1d 100644
--- a/docs/site/Derivative-Media-Guide/index.html
+++ b/docs/site/Derivative-Media-Guide/index.html
@@ -256,7 +256,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    Introduction

 This guide covers the derivative media feature, which allows users to create pipelines where a component in one of the initial stages of the pipeline generates one or more derivative (aka child) media from the source (aka parent)
diff --git a/docs/site/Development-Environment-Guide/index.html b/docs/site/Development-Environment-Guide/index.html
index 52763e9c1e9c..08c24daeea88 100644
--- a/docs/site/Development-Environment-Guide/index.html
+++ b/docs/site/Development-Environment-Guide/index.html
@@ -274,7 +274,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

 WARNING: For most component developers, these steps are not necessary. Instead,
diff --git a/docs/site/Feed-Forward-Guide/index.html b/docs/site/Feed-Forward-Guide/index.html
index ccc86e3dfb97..133dc0a59588 100644
--- a/docs/site/Feed-Forward-Guide/index.html
+++ b/docs/site/Feed-Forward-Guide/index.html
@@ -260,7 +260,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    Introduction

    Feed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be directly “fed into” the next stage. It differs from the default segmenting behavior in the following major ways:

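As a rough sketch of how this behavior might be requested, the job properties below enable feed-forward on a job submission. FEED_FORWARD_TOP_QUALITY_COUNT is named in the Release Notes in this patch; FEED_FORWARD_TYPE, its value, and the pipeline name are assumptions for illustration.

```python
# A hedged sketch of a job request enabling feed-forward between stages.
job_request = {
    'pipelineName': 'MOTION PREPROCESSOR FACE DETECTION PIPELINE',  # hypothetical name
    'media': [{'mediaUri': 'file:///data/example.mp4'}],
    'jobProperties': {
        'FEED_FORWARD_TYPE': 'SUPERSET_REGION',        # assumed property and value
        'FEED_FORWARD_TOP_QUALITY_COUNT': '5',         # named in the Release Notes
    },
}
```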
diff --git a/docs/site/GPU-Support-Guide/index.html b/docs/site/GPU-Support-Guide/index.html
index e99162f46e35..78b44dc5648f 100644
--- a/docs/site/GPU-Support-Guide/index.html
+++ b/docs/site/GPU-Support-Guide/index.html
@@ -251,7 +251,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    Introduction

 A subset of OpenMPF components are capable of running on NVIDIA GPUs. GPU support is through the NVIDIA CUDA libraries and runtime. This guide provides information needed for new component developers that would like to use NVIDIA GPUs
diff --git a/docs/site/Health-Check-Guide/index.html b/docs/site/Health-Check-Guide/index.html
index 417f984957b3..cdfa76e4ed8f 100644
--- a/docs/site/Health-Check-Guide/index.html
+++ b/docs/site/Health-Check-Guide/index.html
@@ -245,7 +245,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract,
-and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023
+and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024
 The MITRE Corporation. All Rights Reserved.

    Health Check Overview

 The C++ and Python component executors can be configured to run health checks on components prior
diff --git a/docs/site/Install-Guide/index.html b/docs/site/Install-Guide/index.html
index 4b1a2627710d..57a0364646be 100644
--- a/docs/site/Install-Guide/index.html
+++ b/docs/site/Install-Guide/index.html
@@ -239,7 +239,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    Docker

    OpenMPF is installed using the Docker container platform.

    To use prebuilt Docker images, refer to the "Quick Start" section of the documentation for the OpenMPF Workflow Manager image on DockerHub.

diff --git a/docs/site/Java-Batch-Component-API/index.html b/docs/site/Java-Batch-Component-API/index.html
index d2c3b23a3c49..bbecc0115464 100644
--- a/docs/site/Java-Batch-Component-API/index.html
+++ b/docs/site/Java-Batch-Component-API/index.html
@@ -276,7 +276,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    API Overview

    In OpenMPF, a component is a plugin that receives jobs (containing media), processes that media, and returns results.

    The OpenMPF Batch Component API currently supports the development of detection components, which are used to detect objects in image, video, audio, or other (generic) files that reside on disk.

diff --git a/docs/site/License-And-Distribution/index.html b/docs/site/License-And-Distribution/index.html
index 14baa1fd9f56..ccd3d807c49a 100644
--- a/docs/site/License-And-Distribution/index.html
+++ b/docs/site/License-And-Distribution/index.html
@@ -249,7 +249,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    License Considerations

    We are not lawyers and provide this information to the best of our ability in an attempt to honor all licensing agreements and clarify the potential responsibilities of OpenMPF users.

diff --git a/docs/site/Markup-Guide/index.html b/docs/site/Markup-Guide/index.html
index 2c5e13011b3f..f26b4eb874ff 100644
--- a/docs/site/Markup-Guide/index.html
+++ b/docs/site/Markup-Guide/index.html
@@ -251,7 +251,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    Overview

 OpenMPF provides a Markup component that can be used to draw bounding boxes and labels on images and videos. The component provides one task called OCV GENERIC MARKUP TASK that can be added to the end of any image and/or video
diff --git a/docs/site/Media-Segmentation-Guide/index.html b/docs/site/Media-Segmentation-Guide/index.html
index 20db10dccd3e..b74b3427bc0a 100644
--- a/docs/site/Media-Segmentation-Guide/index.html
+++ b/docs/site/Media-Segmentation-Guide/index.html
@@ -258,7 +258,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    Detection Chaining

    The OpenMPF has the ability to chain detection tasks together in a detection pipeline. As each detection stage in the pipeline completes, the volume of data to be processed in the next stage may be reduced. Generally, any detection tasks executed prior to the final detection task in the pipeline are referred to as preprocessors or filters. For example, consider the following pipeline which demonstrates the use of a motion preprocessor:

    Detection Chaining Diagram

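To make chaining concrete, here is a hedged sketch of submitting a job that runs a chained pipeline through the Workflow Manager web services. The /rest/jobs endpoint, field names, pipeline name, and credentials are assumptions for illustration, not a definitive API reference.

```python
# A hedged sketch of submitting a chained-pipeline job over REST.
import requests

response = requests.post(
    'http://localhost:8080/rest/jobs',  # assumed endpoint
    json={
        'pipelineName': 'MOTION PREPROCESSOR FACE DETECTION PIPELINE',  # hypothetical
        'media': [{'mediaUri': 'file:///data/surveillance.mp4'}],
    },
    auth=('mpf', 'mpf123'),  # assumed default credentials; see the password warning above
)
print(response.json())
```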
diff --git a/docs/site/Node-Guide/index.html b/docs/site/Node-Guide/index.html
index 6f4287630cc9..9c059e00d953 100644
--- a/docs/site/Node-Guide/index.html
+++ b/docs/site/Node-Guide/index.html
@@ -239,7 +239,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    WARNING: This guide is for non-Docker deployments only, such as for local development environments. In a Docker deployment components are run as Docker-managed containers.

    JGroups Communication

diff --git a/docs/site/Object-Storage-Guide/index.html b/docs/site/Object-Storage-Guide/index.html
index fa56af55eda0..5f1d2d41f803 100644
--- a/docs/site/Object-Storage-Guide/index.html
+++ b/docs/site/Object-Storage-Guide/index.html
@@ -248,7 +248,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    Object Storage Overview

 By default, OpenMPF will write markup files, JSON output objects, and extracted artifacts to directories in $MPF_HOME/share. For multi-node deployments, $MPF_HOME/share points to a directory on a network share.
diff --git a/docs/site/OpenID-Connect-Guide/index.html b/docs/site/OpenID-Connect-Guide/index.html
index b84a73312c5c..89d0ff9fcac3 100644
--- a/docs/site/OpenID-Connect-Guide/index.html
+++ b/docs/site/OpenID-Connect-Guide/index.html
@@ -247,7 +247,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract,
-and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023
+and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024
 The MITRE Corporation. All Rights Reserved.

    OpenID Connect Overview

 Workflow Manager can use an OpenID Connect (OIDC) provider to handle authentication for users of
diff --git a/docs/site/Python-Batch-Component-API/index.html b/docs/site/Python-Batch-Component-API/index.html
index 161f31428ac9..0f40c93e67b4 100644
--- a/docs/site/Python-Batch-Component-API/index.html
+++ b/docs/site/Python-Batch-Component-API/index.html
@@ -296,7 +296,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    API Overview

    In OpenMPF, a component is a plugin that receives jobs (containing media), processes that media, and returns results.

 The OpenMPF Batch Component API currently supports the development of detection components, which are used to detect
diff --git a/docs/site/Release-Notes/index.html b/docs/site/Release-Notes/index.html
index 9629c9cff095..6e19e1ee74c3 100644
--- a/docs/site/Release-Notes/index.html
+++ b/docs/site/Release-Notes/index.html
@@ -290,7 +290,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    OpenMPF 9.0.x

    9.0.0: May 2024

diff --git a/docs/site/Roll-Up-Guide/index.html b/docs/site/Roll-Up-Guide/index.html
index 8df9bc34016b..b75637635dad 100644
--- a/docs/site/Roll-Up-Guide/index.html
+++ b/docs/site/Roll-Up-Guide/index.html
@@ -242,7 +242,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract,
-and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023
+and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024
 The MITRE Corporation. All Rights Reserved.

    Roll Up Overview

 The Workflow Manager can be configured to replace the values of track and detection properties after
diff --git a/docs/site/TiesDb-Guide/index.html b/docs/site/TiesDb-Guide/index.html
index ac22e16bc234..293ea5aa5d17 100644
--- a/docs/site/TiesDb-Guide/index.html
+++ b/docs/site/TiesDb-Guide/index.html
@@ -258,7 +258,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract,
-and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023
+and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024
 The MITRE Corporation. All Rights Reserved.

    TiesDb Overview

 Refer to https://github.com/Noblis/ties-lib for more information on the Triage Import Export
diff --git a/docs/site/Trigger-Guide/index.html b/docs/site/Trigger-Guide/index.html
index 8d0e068dec6b..621a0bc6a64b 100644
--- a/docs/site/Trigger-Guide/index.html
+++ b/docs/site/Trigger-Guide/index.html
@@ -257,7 +257,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract,
-and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023
+and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024
 The MITRE Corporation. All Rights Reserved.

    Trigger Overview

 The TRIGGER property enables pipelines that use feed forward to have
diff --git a/docs/site/User-Guide/index.html b/docs/site/User-Guide/index.html
index 40a3608f8977..46e49d600b89 100644
--- a/docs/site/User-Guide/index.html
+++ b/docs/site/User-Guide/index.html
@@ -272,7 +272,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    INFO: This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.

    General

diff --git a/docs/site/Workflow-Manager-Architecture/index.html b/docs/site/Workflow-Manager-Architecture/index.html
index aebc26802aed..57e4543dad94 100644
--- a/docs/site/Workflow-Manager-Architecture/index.html
+++ b/docs/site/Workflow-Manager-Architecture/index.html
@@ -279,7 +279,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    INFO: This document describes the Workflow Manager architecture for C++ and Java batch processing. The Python batch processing architecture and C++ stream processing architecture use many of the same elements and concepts.

    Workflow Manager Overview

diff --git a/docs/site/index.html b/docs/site/index.html
index fe1c8e7e7fef..75a3f363d2da 100644
--- a/docs/site/index.html
+++ b/docs/site/index.html
@@ -235,7 +235,7 @@

 NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
-Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.

    +Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.

    Overview

    There are numerous video and image exploitation capabilities available today. The Open Media Processing Framework (OpenMPF) provides a framework for chaining, combining, or replacing individual components for the purpose of experimentation and comparison.

    OpenMPF is a non-proprietary, scalable framework that permits practitioners and researchers to construct video, imagery, and audio exploitation capabilities using the available third-party components. Using OpenMPF, one can extract targeted entities in large-scale data environments, such as face and object detection.

    @@ -400,5 +400,5 @@

    Overview

    diff --git a/docs/site/search/search_index.json b/docs/site/search/search_index.json index d8e5aeb41d33..fa194b7fe908 100644 --- a/docs/site/search/search_index.json +++ b/docs/site/search/search_index.json @@ -2,7 +2,7 @@ "docs": [ { "location": "/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nOverview\n\n\nThere are numerous video and image exploitation capabilities available today. The Open Media Processing Framework (OpenMPF) provides a framework for chaining, combining, or replacing individual components for the purpose of experimentation and comparison.\n\n\nOpenMPF is a non-proprietary, scalable framework that permits practitioners and researchers to construct video, imagery, and audio exploitation capabilities using the available third-party components. Using OpenMPF, one can extract targeted entities in large-scale data environments, such as face and object detection.\n\n\nFor those developing new exploitation capabilities, OpenMPF exposes a set of Application Program Interfaces (APIs) for extending media analytics functionality. The APIs allow integrators to introduce new algorithms capable of detecting new targeted entity types. For example, a backpack detection algorithm could be integrated into an OpenMPF instance. OpenMPF does not restrict the number of algorithms that can operate on a given media file, permitting researchers, practitioners, and developers to explore arbitrarily complex composites of exploitation algorithms.\n\n\nA list of algorithms currently integrated into the OpenMPF as distributed processing components is shown here:\n\n\n\n\n\n\n\n\nOperation\n\n\nObject Type\n\n\nFramework\n\n\n\n\n\n\n\n\n\n\nDetection/Tracking\n\n\nFace\n\n\nLBP-Based OpenCV\n\n\n\n\n\n\nDetection/Tracking\n\n\nMotion\n\n\nMOG w/ STRUCK\n\n\n\n\n\n\nDetection/Tracking\n\n\nMotion\n\n\nSuBSENSE w/ STRUCK\n\n\n\n\n\n\nDetection/Tracking\n\n\nLicense Plate\n\n\nOpenALPR\n\n\n\n\n\n\nDetection\n\n\nSpeech\n\n\nSphinx\n\n\n\n\n\n\nDetection\n\n\nSpeech\n\n\nAzure Cognitive Services Batch Transcription API\n\n\n\n\n\n\nDetection\n\n\nScene\n\n\nOpenCV\n\n\n\n\n\n\nDetection\n\n\nClassification\n\n\nOpenCV DNN (GoogLeNet, Yahoo NSFW, vehicle color)\n\n\n\n\n\n\nDetection/Tracking\n\n\nClassification\n\n\nOpenCV DNN (YOLO)\n\n\n\n\n\n\nDetection/Tracking\n\n\nClassification/Features\n\n\nTensorRT (COCO classes)\n\n\n\n\n\n\nDetection\n\n\nText Region\n\n\nEAST\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nApache Tika\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nTesseract OCR\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nAzure Cognitive Services Computer Vision API (OCR endpoint)\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nAzure Cognitive Services Read API\n\n\n\n\n\n\nDetection\n\n\nForm Structure (with OCR)\n\n\nAzure Cognitive Services Form Recognizer API\n\n\n\n\n\n\nDetection\n\n\nKeywords\n\n\nBoost Regular Expressions\n\n\n\n\n\n\nDetection\n\n\nImage (from document)\n\n\nApache Tika\n\n\n\n\n\n\nTranslation\n\n\nLanguage\n\n\nAzure Cognitive Services Translate API\n\n\n\n\n\n\n\n\nThe OpenMPF exposes data processing and job management web services via a User Interface (UI). These services allow users to upload media, create media processing jobs, determine the status of jobs, and retrieve the artifacts associated with completed jobs. 
The web services give application developers flexibility to use the OpenMPF in their preferred environment and programming language.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nOverview\n\n\nThere are numerous video and image exploitation capabilities available today. The Open Media Processing Framework (OpenMPF) provides a framework for chaining, combining, or replacing individual components for the purpose of experimentation and comparison.\n\n\nOpenMPF is a non-proprietary, scalable framework that permits practitioners and researchers to construct video, imagery, and audio exploitation capabilities using the available third-party components. Using OpenMPF, one can extract targeted entities in large-scale data environments, such as face and object detection.\n\n\nFor those developing new exploitation capabilities, OpenMPF exposes a set of Application Program Interfaces (APIs) for extending media analytics functionality. The APIs allow integrators to introduce new algorithms capable of detecting new targeted entity types. For example, a backpack detection algorithm could be integrated into an OpenMPF instance. OpenMPF does not restrict the number of algorithms that can operate on a given media file, permitting researchers, practitioners, and developers to explore arbitrarily complex composites of exploitation algorithms.\n\n\nA list of algorithms currently integrated into the OpenMPF as distributed processing components is shown here:\n\n\n\n\n\n\n\n\nOperation\n\n\nObject Type\n\n\nFramework\n\n\n\n\n\n\n\n\n\n\nDetection/Tracking\n\n\nFace\n\n\nLBP-Based OpenCV\n\n\n\n\n\n\nDetection/Tracking\n\n\nMotion\n\n\nMOG w/ STRUCK\n\n\n\n\n\n\nDetection/Tracking\n\n\nMotion\n\n\nSuBSENSE w/ STRUCK\n\n\n\n\n\n\nDetection/Tracking\n\n\nLicense Plate\n\n\nOpenALPR\n\n\n\n\n\n\nDetection\n\n\nSpeech\n\n\nSphinx\n\n\n\n\n\n\nDetection\n\n\nSpeech\n\n\nAzure Cognitive Services Batch Transcription API\n\n\n\n\n\n\nDetection\n\n\nScene\n\n\nOpenCV\n\n\n\n\n\n\nDetection\n\n\nClassification\n\n\nOpenCV DNN (GoogLeNet, Yahoo NSFW, vehicle color)\n\n\n\n\n\n\nDetection/Tracking\n\n\nClassification\n\n\nOpenCV DNN (YOLO)\n\n\n\n\n\n\nDetection/Tracking\n\n\nClassification/Features\n\n\nTensorRT (COCO classes)\n\n\n\n\n\n\nDetection\n\n\nText Region\n\n\nEAST\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nApache Tika\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nTesseract OCR\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nAzure Cognitive Services Computer Vision API (OCR endpoint)\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nAzure Cognitive Services Read API\n\n\n\n\n\n\nDetection\n\n\nForm Structure (with OCR)\n\n\nAzure Cognitive Services Form Recognizer API\n\n\n\n\n\n\nDetection\n\n\nKeywords\n\n\nBoost Regular Expressions\n\n\n\n\n\n\nDetection\n\n\nImage (from document)\n\n\nApache Tika\n\n\n\n\n\n\nTranslation\n\n\nLanguage\n\n\nAzure Cognitive Services Translate API\n\n\n\n\n\n\n\n\nThe OpenMPF exposes data processing and job management web services via a User Interface (UI). These services allow users to upload media, create media processing jobs, determine the status of jobs, and retrieve the artifacts associated with completed jobs. 
The web services give application developers flexibility to use the OpenMPF in their preferred environment and programming language.", "title": "Home" }, { @@ -12,7 +12,7 @@ }, { "location": "/Release-Notes/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nOpenMPF 9.0.x\n\n\n9.0.0: May 2024\n\n\n\nDocumentation\n\n\n\n\n\nCreated a new \nQuality Selection Guide\n.\n\n\n\n\nQuality Selection\n\n\n\n\n\nCan now specify a \nQUALITY_SELECTION_PROPERTY\n and \nQUALITY_SELECTION_THRESHOLD\n for choosing exemplars, artifacts,\n and controlling feed-forward behavior.\n\n\nThe following old job properties and old system properties are no longer supported. The tables show the new properties\n that should be used instead:\n\n\n\n\n\n\n\n\n\n\nOld Job Property\n\n\nNew Job Properties\n\n\n\n\n\n\n\n\n\n\nCONFIDENCE_THRESHOLD\n\n\nQUALITY_SELECTION_PROPERTY\nQUALITY_SELECTION_THRESHOLD\n\n\n\n\n\n\nARTIFACT_EXTRACTION_POLICY_TOP_CONFIDENCE_COUNT\n\n\nARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT\n\n\n\n\n\n\nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nOld System Property\n\n\nNew System Properties\n\n\n\n\n\n\n\n\n\n\ndetection.confidence.threshold\n\n\ndetection.quality.selection.prop\ndetection.quality.selection.threshold\n\n\n\n\n\n\ndetection.artifact.extraction.policy.top.confidence.count\n\n\ndetection.artifact.extraction.policy.top.quality.count\n\n\n\n\n\n\n\n\n\n\nBy default, \nQUALITY_SELECTION_PROPERTY\n is set to the value of \ndetection.quality.selection.prop\n system property,\n which, by default, is \nCONFIDENCE\n. In most cases this preserves the previous behavior.\n\n\nBy default, \nQUALITY_SELECTION_THRESHOLD\n is set to the value of \ndetection.quality.selection.threshold\n system\n property, which, by default, is \n-Infinity\n. This setting disables the threshold. Previously, the default value of\n \ndetection.confidence.threshold\n was -1, which disabled the threshold for most components.\n\n\nComponents that previously used \nCONFIDENCE_THRESHOLD\n now have \nQUALITY_SELECTION_PROPERTY=CONFIDENCE\n. Also,\n \nQUALITY_SELECTION_THRESHOLD\n is set to the previous value of \nCONFIDENCE_THRESHOLD\n. For example, see \nthis\n commit\n\n for changes made to the OcvYoloDetection component.\n\n\nEXEMPLAR_POLICY\n is now set to \nQUALITY\n by default. This setting results in choosing the detection within each track\n with the maximum quality according to the \nQUALITY_SELECTION_PROPERTY\n. Previously, the selection was always made\n based on highest detection confidence.\n\n\nSimilarly, the new \nFEED_FORWARD_TOP_QUALITY_COUNT\n and \nARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT\n properties use\n \nQUALITY_SELECTION_PROPERTY\n and \nQUALITY_SELECTION_THRESHOLD\n.\n\n\nRefer to the \nQuality Selection Guide\n for details.\n\n\n\n\nTransformer Tagging Component\n\n\n\n\n\nThis component uses a user-specified corpus JSON file to match known phrases against each sentence in the input text\n data.\n\n\nThe input text sentences that generate match scores above the threshold are called \"trigger sentences\". 
These\n sentences are grouped by \"tag\" based on which entry in the corpus they matched against.\n\n\nThe underlying \nall-mpnet-base-v2 model\n was trained\n on a variety of text data in order to understand the commonalities in phrasing, subject, and context.\n\n\nRefer to the \nREADME\n\n for details.\n\n\n\n\nKeyword Tagging Component Output\n\n\n\n\n\nUpdated the Keyword Tagging Component to generate output in the same format as the Transformer Tagging Component. For\n example, the output properties used to take the form \n TRIGGER WORDS\n and \n TRIGGER WORDS OFFSET\n:\n\n\n\n\nTEXT TRIGGER WORDS\nTEXT TRIGGER WORDS OFFSET\nTRANSLATION TRIGGER WORDS\nTRANSLATION TRIGGER WORDS OFFSET\n\n\n\n\n\nNow the output properties take the form \n TRIGGER WORDS\n and \n TRIGGER WORDS OFFSET\n:\n\n\n\n\nTEXT TRAVEL TRIGGER WORDS\nTEXT TRAVEL TRIGGER WORDS OFFSET\nTRANSLATION TRAVEL TRIGGER WORDS\nTRANSLATION TRAVEL TRIGGER WORDS OFFSET\n\n\n\n\n\nNotice that in the above example the new output properties include the word \nTRAVEL\n. If trigger words are detected\n for other tags, such as \nFINANCIAL\n and \nVEHICLE\n, those words will be used in separate \nTRIGGER WORDS\n and\n \nTRIGGER WORDS OFFSET\n output properties.\n\n\nThis change enables the job consumer to determine which trigger words are associated with each entry in the \nTAGS\n\n output property.\n\n\nRefer to the \"Outputs\" section of the\n \nREADME\n for details.\n\n\n\n\nReporting Component Processing Time\n\n\n\n\n\nThe JSON output object contains a new section for reporting component processing time in milliseconds. For example:\n\n\n\n\n\"timing\": {\n \"processingTime\": 1514,\n \"actions\": [\n {\n \"name\": \"OCV YOLO VEHICLE DETECTION ACTION\",\n \"processingTime\": 1431\n },\n {\n \"name\": \"TENSORFLOW VEHICLE COLOR DETECTION (WITH FF REGION) ACTION\",\n \"processingTime\": 83\n }\n ]\n},\n\n\n\n\n\nThis does not include the time sub-jobs spent waiting in queues, or processing time by the Workflow Manager, such as\n the time to perform media inspection.\n\n\nAlso, the above JSON is reported in the TiesDB job record within the \ndataObject\n field.\n\n\n\n\nNLP Text Splitter Utility\n\n\n\n\n\nThe new NLP Text Splitter utility uses spaCy or \nWhere's the Point (WtP)\n\n models for determining how to break up text into sentences.\n\n\nSupports both CPU processing and optional GPU processing.\n\n\nUpdated the Azure Translation Component to use this utility to ensure that translation requests are within the 50,000\n character limit.\n\n\nRefer to the\n \nREADME\n\n for details.\n\n\n\n\nCLIP Component Video Support\n\n\n\n\n\nThe CLIP Component now supports processing videos in addition to the previous ability to process images. Specify the\n batch size using \nDETECTION_FRAME_BATCH_SIZE\n.\n\n\nThe component also supports a new, larger, and more accurate \nViT-L/14\n model in addition to the previous \nViT-B/32\n\n model. 
Both models are supported via the optional Triton server as well as within the component itself for non-Triton\n deployments.\n\n\nRefer to the\n \nREADME\n\n for performance metrics.\n\n\nThe \nNUMBER_OF_TEMPLATES\n property has been renamed to \nTEMPLATE_TYPE\n and now accepts one of the following values:\n \nopenai_1\n, \nopenai_7\n, \nopenai_80\n.\n\n\n\n\nImport Root Certificates for Components\n\n\n\n\n\nCan now specify a \nMPF_CA_CERTS\n environment variable for component Docker services to import root certificates.\n\n\nMay be useful when components need to communicate with external web services.\n\n\nRefer to the \nREADME\n for details.\n\n\n\n\nDocker Secrets for Environment Variables\n\n\n\n\n\nCan now use Docker secrets for environment variables in the Docker compose file.\n\n\nThis prevents exposing information as plain text in \ndocker-compose.yml\n.\n\n\nMay be useful for environment variables like:\n\n\nWorkflow Manager username and password: \nWFM_USER\n and \nWFM_PASSWORD\n\n\nKeystore password when enabling Workflow Manager HTTPS: \nKEYSTORE_PASSWORD\n\n\nAzure credentials: \nMPF_PROP_ACS_URL\n and \nMPF_PROP_ACS_SUBSCRIPTION_KEY\n\n\n\n\n\n\nRefer to the\n \nREADME\n\n for details.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#1692\n] Create a TransformerTagging component\n\n\n[\n#1718\n] Support a \nQUALITY_SELECTION_PROP\n to specify how the WFM should choose an exemplar\n\n\n[\n#1754\n] Report amount of time components spent executing a job\n\n\n[\n#1756\n] Support \nMPF_CA_CERTS\n for components\n\n\n[\n#1771\n] Azure Translation: Identify character limits. Split text using NLP Text Splitter.\n\n\n[\n#1798\n] Add NLP Text Splitter to Python Component SDK\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1694\n] Update CLIP component to support videos\n\n\n[\n#1706\n] Update KeywordTagging to work with TransformerTagging\n\n\n[\n#1745\n] Support using docker secrets for environment variables in \ndocker-compose.yml\n\n\n[\n#1769\n] Upgrade to proto3 and clean up \n.proto\n files\n\n\n[\n#1774\n] Update how TransformerTagging tokenizes sentences\n\n\n[\n#1785\n] Upgrade to OpenCV 4.9\n\n\n[\n#1786\n] Modify the behavior of Markup when \nCONFIDENCE\n is the bounding box label to be displayed\n\n\n[\n#1797\n] Further update Azure Translation and STT language maps\n\n\n[\n#1803\n] Upgrade Postgres client used by Workflow Manager\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1781\n] Markup boxes are not drawn when animation is disabled and there are gaps in a track\n\n\n[\n#1799\n] Keyword Tagging removes newlines so character offsets don't line up with original text\n\n\n\n\nOpenMPF 8.0.x\n\n\n8.0.4: May 2024\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1805\n] Workflow Manager incorrectly detects whether certain videos\n are constant or variable frame rate\n\n\n\n\n8.0.3: April 2024\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1788\n] Azure Speech and Translation: Update supported language\n mappings\n\n\n\n\n8.0.2: March 2024\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nREST API\n with new \n[GET] /rest/queues\n and \n[GET] /rest/queues/{name}\n endpoints.\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1776\n] Add REST endpoint for retrieving the ActiveMQ message counts\n for each queue\n\n\n\n\n8.0.1: March 2024\n\n\n\nUpdates\n\n\n\n\n\n[\n#1768\n] Add Option to Merge Text Sections in TikaTextDetection\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1763\n] Media inspections fails when ffprobe does not specify a\n stream \"codec_type\"\n\n\n\n\n8.0.0: December 2023\n\n\n\nDocumentation\n\n\n\n\n\nCreated a new \nOpenID Connect Guide\n.\n\n\nUpdated the \nAdmin 
Guide\n and \nUser Guide\n to remove\n \n/workflow-manager\n from the Workflow Manager base URL. The Admin Guide includes a section for the new Hawtio web\n console.\n\n\nUpdated the \nREST API\n to use path parameters for pipelines, tasks, actions, and algorithms\n endpoints.\n\n\nUpdated the \nComponent Descriptor Reference\n with \nalgorithm.trackType\n.\n\n\nUpdated the \nC++ Batch Component API\n, \nPython Batch Component\n API\n, and \nJava Batch Component API\n to\n remove the ability to get the detection type since track type is now specified in \ndescriptor.json\n.\n\n\nCreated a new \nTrigger Guide\n.\n\n\nCreated a new \nRoll Up Guide\n.\n\n\n\n\nOpenID-Connect (OIDC) Authentication\n\n\n\n\n\nThe Workflow Manager can now optionally use an OpenID Connect (OIDC) provider to handle authentication for users of\n the web UI and clients of the REST API. The URI for the OIDC provider is specified using the \nOIDC_ISSUER_URI\n\n environment variable.\n\n\nWhen enabled, OIDC is used to authenticate components when they register with the Workflow Manager.\n\n\nWhen \nCALLBACK_USE_OIDC\n is set to \ntrue\n, the Workflow Manager will send a token in job request callbacks.\n\n\nWhen \nTIES_DB_USE_OIDC\n is set to \ntrue\n, the Workflow Manager will send a token when posting to a TiesDb server.\n\n\nWhen OIDC is not enabled, the Workflow Manager uses basic authentication with usernames and passwords, as in previous\n versions of OpenMPF.\n\n\nRefer to the \nOpenID Connect Guide\n for more information on the various OIDC\n environment variables and a Keycloak example.\n\n\n\n\nEmbedded ActiveMQ Broker and Hawtio\n\n\n\n\n\nActiveMQ is now part of the Workflow Manager Spring Boot web application and is no longer run as a separate Docker\n service. This enables ActiveMQ to integrate with Spring Security so it can be protected by the Workflow Manager's OIDC\n support.\n\n\nThe Workflow Manager is the sender or recipient of all ActiveMQ messages, so embedding ActiveMQ in the Workflow\n Manager prevents a network hop on all messages.\n\n\nThe ActiveMQ management page has been replaced by \nHawtio\n, which is more feature rich and can be\n used to monitor the state of the ActiveMQ queues used for communication between the Workflow Manager and the\n components. The Hawtio web console can be accessed by selecting \"Hawtio\" from the \"Configuration\" dropdown menu in the\n top menu bar of the web UI.\n\n\nImportantly, the base URL for the Workflow Manager is now http://localhost:8080 instead of\n http://localhost:8080/workflow-manager. \n/workflow-manager\n is no longer part of the path. This change was made to\n enable Hawtio integration.\n\n\n\n\nREST API Updates\n\n\n\n\n\nThe following changes have been made to the REST endpoints to address a limitation with Swagger (OpenAPI). 
These\n changes enable the REST endpoints to properly show up in the Swagger page, which is accessed by selecting \"REST API\"\n from the \"Configuration\" dropdown menu in the top menu bar of the web UI.\n\n\n\n\n\n\n\n\n\n\nOld REST Endpoint\n\n\nNew REST Endpoint\n\n\n\n\n\n\n\n\n\n\n[GET] /rest/pipelines?name={name}\n\n\n[GET] /rest/pipelines/{name}\n\n\n\n\n\n\n[GET] /rest/tasks?name={name}\n\n\n[GET] /rest/tasks/{name}\n\n\n\n\n\n\n[GET] /rest/actions?name={name}\n\n\n[GET] /rest/actions/{name}\n\n\n\n\n\n\n[GET] /rest/algorithms?name={name}\n\n\n[GET] /rest/algorithms/{name}\n\n\n\n\n\n\n[DELETE] /rest/pipelines?name={name}\n\n\n[DELETE] /rest/pipelines/{name}\n\n\n\n\n\n\n[DELETE] /rest/tasks?name={name}\n\n\n[DELETE] /rest/tasks/{name}\n\n\n\n\n\n\n[DELETE] /rest/actions?name={name}\n\n\n[DELETE] /rest/actions/{name}\n\n\n\n\n\n\n\n\n\n\nIn general, the name is now specified as part of the URL path instead of as a URL parameter.\n\n\n/\n and \n;\n characters are no longer allowed in these names.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nEach component's \ndescriptor.json\n now requires an \nalgorithm.trackType\n field. This is used by the Workflow Manager\n to determine the kind of tracks that may be generated by the component (e.g. \nFACE\n, \nTEXT\n, \nCLASS\n, etc.). This is\n now used in place of the component API calls that were used to get the detection type. \n\n\n\n\nComponent API Updates\n\n\n\n\n\nThe following changes were made since the track type is now part of each component's \ndescriptor.json\n:\n\n\nRemoved \nGetDetectionType()\n from the CPP Component API.\n\n\nRemoved \ndetection_type\n from the Python Component API.\n\n\nRemoved \ngetDetectionType()\n from the Java Component API.\n\n\n\n\n\n\n\n\nChanges to JSON Output Object\n\n\n\n\n\nNew JSON output objects use \naction\n instead of \nsource\n in the track type group. Also, \nsource\n is removed from each track.\n\n\nConsider this example of the old JSON output:\n\n\n\n\n\"output\": {\n \"FACE\": [\n {\n \"source\": \"+#MOG MOTION DETECTION (WITH AUTO-ORIENTATION) PREPROCESSOR ACTION#OCV FACE DETECTION (WITH AUTO-ORIENTATION) ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"4bcba9b95b92a5115b7da1097fcffa962480d0b4424a656772bef12161d775c1\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"source\": \"+#MOG MOTION DETECTION (WITH AUTO-ORIENTATION) PREPROCESSOR ACTION#OCV FACE DETECTION (WITH AUTO-ORIENTATION) ACTION\",\n \"confidence\": 8.799637,\n ...\n\n\n\n\n\nThe corresponding new JSON output is:\n\n\n\n\n\"output\": {\n \"FACE\": [\n {\n \"action\": \"OCV FACE DETECTION (WITH AUTO-ORIENTATION) ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"4bcba9b95b92a5115b7da1097fcffa962480d0b4424a656772bef12161d775c1\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"confidence\": 8.799637,\n ...\n\n\n\nTrigger Support\n\n\n\n\n\nA \nTRIGGER\n property can now be added to any action in a pipeline. It will only be used if \nFEED_FORWARD_TYPE\n is\n provided and set to something other than \nNONE\n. The \nTRIGGER\n property is used to conditionally control whether the\n Workflow Manager executes that action. Each feed-forward track that does not satisfy the trigger is passed along to the next stage of the\n pipeline. This results in skipping untriggered actions.\n\n\nThe value of \nTRIGGER\n takes the form \n<property>=<value>[;<value>...]\n. 
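A minimal sketch of how such a trigger could be evaluated against a feed-forward track's properties (the function and track dictionary are illustrative, not Workflow Manager internals):\n\n\ndef trigger_matches(trigger, track_properties):\n    # e.g. trigger = 'CLASSIFICATION=car;truck;bus'\n    prop, _, values = trigger.partition('=')\n    return track_properties.get(prop) in values.split(';')\n\ntrigger_matches('CLASSIFICATION=car;truck;bus', {'CLASSIFICATION': 'truck'})  # True\n\n\n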
For example, if the value is\n \nCLASSIFICATION=car\n then the Workflow Manager would only execute the associated action using feed-forward tracks from\n the previous stage in the pipeline if those tracks have the \nCLASSIFICATION\n track property with a value of \ncar\n.\n This could be useful to skip a license plate detection action. To enable the action to trigger on more than just \ncar\n\n tracks you can provide a list of valid values. For example, \nCLASSIFICATION=car;truck;bus\n.\n\n\nThe \nTrigger Guide\n goes into more detail and provides an example of a pipeline with\n multiple speech-to-text stages. \nTRIGGER\n is used to select which speech-to-text algorithm is executed based on the\n detected language in the media.\n\n\n\n\nRoll Up Support\n\n\n\n\n\nThe Workflow Manager can be configured to replace the values of track and detection properties\n after receiving tracks and detections from a component. For example, the \nCLASSIFICATION\n property\n may be set to \"car\", \"bus\", and \"truck\". Those can be rolled up into \"vehicle\".\n\n\nTo use this feature, set the \nROLL_UP_FILE\n property to the path of a JSON file that matches\n the format of this example:\n\n\n\n\n[\n {\n \"propertyToProcess\": \"CLASSIFICATION\",\n \"originalPropertyCopy\": \"ORIGINAL CLASSIFICATION\",\n \"groups\": [\n {\n \"rollUp\": \"vehicle\",\n \"members\": [\n \"truck\",\n \"car\",\n \"bus\"\n ]\n }\n ]\n }\n]\n\n\n\n\n\nRefer to the \nRoll Up Guide\n for an explanation and more details.\n\n\n\n\nChanged All \"whitelist\" References to \"allow list\"\n\n\n\n\n\nIn an effort to be more culturally sensitive, all references to \"whitelist\" have been removed or renamed to \"allow\n list\".\n\n\nThe \nwhitelist.\n prefix has been removed from the entries in the \nmediaType.properties\n file. For example,\n \nwhitelist.image/gif=VIDEO\n is now \nimage/gif=VIDEO\n.\n\n\nThe OcvDnnDetection component \nFEED_FORWARD_WHITELIST_FILE\n property has been renamed to\n \nFEED_FORWARD_ALLOW_LIST_FILE\n.\n\n\nThe OcvYoloDetection component \nCLASS_WHITELIST_FILE\n property has been renamed to \nCLASS_ALLOW_LIST_FILE\n.\n\n\n\n\nArgos Translation Component\n\n\n\n\n\nThis new component utilizes \nArgos Translate\n to translate input\n text from a given source language to English. It can be used in a feed-forward pipeline to process tracks with\n language and/or script identifiers from an upstream stage.\n\n\nRefer to the \nREADME\n for\n details.\n\n\n\n\nWhisper Speech-to-Text and Translation Component\n\n\n\n\n\nThis new component utilizes \nOpenAI Whisper\n to perform language detection,\n speech-to-text transcription, or speech translation.\n\n\nIf multiple languages are spoken in a single piece of media, language detection will detect only one of them.\n\n\nNote that Whisper is not designed to return a transcription in the source language when performing translation, so we\n implemented the component to perform an additional transcribe call when configured to perform translation.\n\n\nRefer to the \nREADME\n\n for details.\n\n\n\n\nContrastive Language\u2013Image Pre-training (CLIP) Component\n\n\n\n\n\nThis new component utilizes \nCLIP\n to classify images using the 80 COCO classes, 1000\n ImageNet classes, or a list of user-provided classes. It can run on a CPU or GPU, and can make calls to an NVIDIA\n Triton inference server.\n\n\nClassification is performed by taking the class names and filling in one or more text prompts. For example, \"a photo\n of {}\", where \"{}\" can be \"dog\" or \"cat\". 
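A rough sketch of this prompt-based scoring (assuming OpenAI's open-source \nclip\n package; the class list, image path, and prompt are illustrative):\n\n\nimport clip\nimport torch\nfrom PIL import Image\n\nmodel, preprocess = clip.load('ViT-B/32')\nclasses = ['dog', 'cat']\nprompts = clip.tokenize([f'a photo of {c}' for c in classes])\nimage = preprocess(Image.open('example.jpg')).unsqueeze(0)\n\nwith torch.no_grad():\n    image_emb = model.encode_image(image)\n    text_embs = model.encode_text(prompts)\n\n# Cosine similarity between the image embedding and each class prompt embedding\nimage_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)\ntext_embs = text_embs / text_embs.norm(dim=-1, keepdim=True)\nscores = (image_emb @ text_embs.T).squeeze(0)\nprint(dict(zip(classes, scores.tolist())))\n\n\n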
An embedding is generated using the text prompt(s) for each class and\n compared against the image embedding to get a match score. Optionally, users can provide a list of their own text\n prompts.\n\n\nOpenAI trained the CLIP model using a wide variety of images and their respective captions from the Internet. This may\n make it suitable for a wide variety of classification tasks without further training (known as zero-shot\n classification). For example, a user could make up a list of classes for arbitrary objects like \"walrus\", \"paperclip\",\n \"pizza\", etc., and use the default text prompts.\n\n\nIt is also possible to use CLIP to classify concepts like scenes and sentiment. For example, using a text prompt of \"a\n {} scene\" where the classes are \"safe\", \"violent\", and \"dangerous\".\n\n\nOptionally, the CLIP component can return the image embedding as the track \nFEATURE\n. For example, this can be used\n for search and retrieval tasks by comparing it to other embeddings enrolled in a database.\n\n\nRefer to the \nREADME\n for\n details.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#1547\n] Create Argos translation component\n\n\n[\n#1574\n] Update the WFM to support an optional \nTRIGGER\n property on any action\n\n\n[\n#1598\n] Create a Whisper component for speech-to-text and translation\n\n\n[\n#1644\n] Create CLIP component for processing images\n\n\n[\n#1704\n] Update Workflow Manager to authenticate users and REST clients using OIDC\n\n\n[\n#1730\n] Update Workflow Manager to optionally use OIDC when sending callbacks and posting to TiesDb\n\n\n[\n#1733\n] Update Workflow Manager to use an embedded ActiveMQ broker\n\n\n[\n#1793\n] Add Roll Up support to Workflow Manager\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#799\n] Avoid unnecessary serialization between Camel routes\n\n\n[\n#949\n] Change \n/pipelines?name=MYPIPELINE\n REST endpoint to \n/pipelines/MYPIPELINE\n\n\n[\n#1643\n] Remove \nLONG_SPEAKER_ID\n and instead only use \nSPEAKER_ID\n\n\n[\n#1645\n] Refactor camel code\n\n\n[\n#1705\n] Change all references to \"whitelist\" to \"allow list\" and \"blacklist\" to \"block list\"\n\n\n[\n#1759\n] Disable markup animation by default\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1642\n] \nInProgressBatchJobsService.setProcessedAction\n is now called when a previous task produces no tracks\n\n\n[\n#1755\n] The Workflow Manager logs page does not properly handle multi-byte characters\n\n\n\n\nOpenMPF 7.2.x\n\n\n7.2.6: January 2024\n\n\n\nDocumentation\n\n\n\n\n\nCreated a new \nHealth Check Guide\n.\n\n\n\n\nHealth Check Support\n\n\n\n\n\nThe C++ and Python component executors can be configured to run health checks on components prior to running jobs.\n Health checks are configured using environment variables:\n\n\nHEALTH_CHECK\n: When set to \"ENABLED\", the component executor will run health checks.\n\n\nHEALTH_CHECK_TIMEOUT\n: When set to a positive integer, specifies the minimum number of seconds between health\n checks. When absent or set to 0, a health check will run before every job.\n\n\nHEALTH_CHECK_RETRY_MAX_ATTEMPTS\n: When set to a positive integer, specifies the number of consecutive health\n check failures that will cause the component service to exit. When absent or set to 0, the component service will\n never exit because of a failed health check.\n\n\n\n\n\n\nAlso, an INI file must be provided at \n$MPF_HOME/plugins/<componentName>/health/health-check.ini\n. 
For example:\n\n\n\n\nmedia=$MPF_HOME/plugins/OcvFaceDetection/health/meds_faces_image.png\nmin_num_tracks=2\nmedia_type=IMAGE\n\n[job_properties]\nJOB PROP1=VALUE1\nJOB PROP2=VALUE2\n\n[media_properties]\nMEDIA PROP=MEDIA VALUE\n\n\n\n\n\nRefer to the \nHealth Check Guide\n for an explanation and more details.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#1731\n] Implement health checks for C++ and Python components\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1727\n] Update ffmpeg to 6.1\n\n\n\n\n7.2.5: November 2023\n\n\n\nUpdates\n\n\n\n\n\n[\n#1715\n] Upgrade ActiveMQ to 5.17.6\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1711\n] When selecting detections with the highest confidence,\n Workflow Manager should consistently handle detections with equal confidence\n\n\n\n\n7.2.4: September 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1707\n] Fix bug where TiesDB check status reports\n \nNO_TIES_DB_URL_IN_JOB\n instead of \nMEDIA_MIME_TYPES_ABSENT\n\n\n\n\n7.2.3: June 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1697\n] Prevent OcvYoloDetection component from deadlocking on\n strange frame sizes when using Triton\n\n\n\n\n7.2.2: June 2023\n\n\n\nUpdates\n\n\n\n\n\n[\n#1693\n] Add property to enable/disable SAS in AzureSpeech\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1695\n] Fix memory leak in KeywordTagging component\n\n\n\n\n7.2.1: June 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1678\n] Fix bug where ffmpeg hangs when processing some kinds of\n unsupported/corrupted media\n\n\n\n\n7.2.0: May 2023\n\n\n\nDocumentation\n\n\n\n\n\nCreated a new \nTiesDb Guide\n.\n\n\nUpdated the \nComponent Descriptor Reference\n with \noutputChangedCounter\n.\n\n\nUpdated the \nREST API\n with a new \n[POST] /rest/jobs/tiesdbrepost\n endpoint.\n\n\nUpdated the REST API \n[POST] /rest/jobs\n response with \ntiesDbCheckStatus\n and \noutputObjectUri\n.\n\n\n\n\nTiesDb Re-Post\n\n\n\n\n\nAdded a new \n[POST] /rest/jobs/tiesdbrepost\n endpoint that accepts an array of job ids as an input and will attempt to\n re-post the job assertions (records) to TiesDb for each one. \n\n\nAdded a \"TiesDb\" column to the Job Status page. If there is a problem posting a record to the TiesDb server the column\n will contain an \"ERROR\" button. Clicking on it will provide a description of the error and a button that can be used\n to re-post the associated job records.\n\n\n\n\nTiesDb Checking\n\n\n\n\n\nIf the \nTIES_DB_URL\n job property or \nties.db.url\n system property is set when submitting a job creation request, \n then the Workflow Manager will attempt to check TiesDb for existing job results before running the job again.\n\n\nThe Workflow Manager will attempt to use the most-recently-created job results, preferring jobs that completed without\n errors or warnings, and preferring jobs that completed with warnings over completed with errors.\n\n\nTo prevent this check, set \nSKIP_TIES_DB_CHECK=true\n. That will force the job to run and attempt to post the new\n job results to TiesDb.\n\n\nWhen using TiesDb, we strongly recommend providing both the \nMEDIA_HASH\n and \nMIME_TYPE\n in the \nmedia.metadata\n map\n in the job request. This will enable the Workflow Manager to skip media inspection. When using S3 object storage, this\n means that the Workflow Manager will not need to download the media before checking TiesDb for existing job records.\n\n\nThe \n[POST] /rest/jobs\n response now contains a \ntiesDbCheckStatus\n and \noutputObjectUri\n field. 
\ntiesDbCheckStatus\n\n will be set to one of the following values:\n\n\nNOT_REQUESTED\n\n\nNO_TIES_DB_URL_IN_JOB\n\n\nMEDIA_HASHES_ABSENT\n\n\nMEDIA_MIME_TYPES_ABSENT\n\n\nNO_MATCH\n\n\nFOUND_MATCH\n\n\n\n\n\n\nWhen there is a \nFOUND_MATCH\n, the \noutputObjectUri\n will be set to the URI of the old TiesDb record if S3 copy is\n not enabled.\n\n\nBy default, the \nties.db.s3.copy.enabled\n system property is set to \ntrue\n. This means that the Workflow Manager will\n attempt to copy all of the artifacts, markup, and derivative media associated with the job in TiesDb from the S3\n locations associated with the old job to the new S3 location specified in the new job. A new JSON output object will\n be generated. To disable this behavior set the system property, or \nTIES_DB_S3_COPY_ENABLED\n, to \nfalse\n. Then the\n Workflow Manager will simply provide a link to the old JSON as the result of the new job.\n\n\nIf there is a problem copying between S3 locations, the \"TiesDb\" column on the Job Status page will show a\n \"COPY ERROR\" button. Clicking on it will provide a description of the error.\n\n\n\n\nTiesDb Linked Media\n\n\n\n\n\nAdded support for \nLINKED_MEDIA_HASH\n in the \nmedia.properties\n section of the job creation request. When specified,\n the value of \nLINKED_MEDIA_HASH\n will be used instead of the actual media hash when creating a record in TiesDb,\n and also when looking for existing records in TiesDb.\n\n\nThis feature can be used to submit a transcoded (or thumbnail) version of an image to process instead of the source\n image. For example, the source image may be in a format not supported by OpenMPF. In this case, the value of\n \nLINKED_MEDIA_HASH\n can be set to the source image, but the rest of the job creation request would specify\n the \nmedia.mediaUri\n and \nmedia.metadata\n for the transcoded version of that image.\n\n\n\n\nOutput Changed Counter\n\n\n\n\n\nAdded the \noutput.changed.counter\n system property to the Workflow Manager and \noutputChangedCounter\n field to each\n component's \ndescriptor.json\n. These values are used when calculating the hash for a job when its record is posted to\n TiesDb, and also when checking TiesDb for existing records when a new job is submitted.\n\n\nIf the Workflow Manager is updated for any reason that should invalidate pre-existing job results, such as a\n change to the fields in the JSON output object, or significant improvements to track merging, for example, then the\n value of \noutput.changed.counter\n should be incremented by one. This will ensure that records in TiesDb will not be\n used so that all future jobs will need to be (re)run at least once until the counter is incremented again.\n\n\nThe same is true for each component. If a component is updated for any reason that should invalidate\n pre-existing job results, such as changes to input or output properties, or substantial improvements to the algorithm,\n then the value of \noutputChangedCounter\n should be incremented by one.\n\n\n\n\nChanges to JSON Output Object\n\n\n\n\n\nNew JSON output objects will include \ntiesDbSourceJobId\n and \ntiesDbSourceMediaPath\n when the Workflow Manager can use\n previous job results stored in TiesDB. 
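As a hedged sketch of how a job producer might opt into this check when creating a job (assuming the Python \nrequests\n package; the pipeline name, URLs, and credentials are placeholders):\n\n\nimport requests\n\njob = {\n    'pipelineName': 'OCV FACE DETECTION PIPELINE',  # placeholder pipeline\n    'jobProperties': {'TIES_DB_URL': 'https://tiesdb.example.com'},\n    'media': [{\n        'mediaUri': 'https://example.com/video.mp4',\n        # Providing both values lets the Workflow Manager skip media inspection.\n        'metadata': {'MEDIA_HASH': '<sha256 of the media>', 'MIME_TYPE': 'video/mp4'},\n    }],\n}\nresp = requests.post('http://localhost:8080/rest/jobs', json=job,\n                     auth=('<user>', '<password>')).json()\nprint(resp['tiesDbCheckStatus'])    # e.g. FOUND_MATCH\nprint(resp.get('outputObjectUri'))  # URI of reusable results, when present\n\n\n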
Note that the Workflow Manager will not generate a new JSON output object\n unless \nS3_RESULTS_BUCKET\n is set to a valid value, S3 access and secret keys are provided, and\n \nTIES_DB_S3_COPY_ENABLED=true\n.\n\n\n\n\nffprobe for Media Inspection\n\n\n\n\n\nThe Workflow Manager media inspection behavior now uses \nffprobe\n with \n-print_format json\n to return more precise\n \nFPS\n values for the \nmedia.mediaMetadata\n in the JSON output object. For example, the previous version of the\n Workflow Manager would return \n29.97\n, where the new version will return \n29.97002997002997\n. In multi-hour-long\n videos this can prevent cases where the last few frames were being ignored.\n\n\nThe previous version of the Workflow Manager was using both \nffmpeg\n and OpenCV to determine the number of frames in\n a video. We removed the OpenCV frame counter in this version because the \nffprobe\n approach is more accurate.\n The \nffprobe\n command replaces the old \nffmpeg\n command. \n\n\n\n\nWeb User Interface\n\n\n\n\n\nUpdated the Job Status page to be more efficient. Searching a database of hundreds of thousands of jobs takes a long\n time. By limiting the search to one page of results at a time the UI is more responsive.\n\n\nRemoved timeout and bootout. The user session will no longer automatically end due to time out, or due to the same\n user logging in from a different host or browser. These behaviors were deemed too disruptive by end users.\n\n\nUpdated the Job Status page to include a \"TiesDb\" column that reports TiesDb status, such as when posting records\n to TiesDb and when retrieving existing records.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#1438\n] Create a REST endpoint that will attempt to re-post to TiesDb\n\n\n[\n#1613\n] Check TiesDb before running a job\n\n\n[\n#1650\n] Create TiesDb records for thumbnail jobs under the parent media\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1342\n] Use ffprobe to get FPS during media inspection\n\n\n[\n#1564\n] Use ffprobe's JSON output instead of regexes during media inspection\n\n\n[\n#1601\n] Update the Workflow Manager jobs table to be more efficient\n\n\n[\n#1611\n] Remove Workflow Manager timeout and bootout behavior\n\n\n\n\nOpenMPF 7.1.x\n\n\n7.1.12: March 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1667\n] Handle Webp files with extra data at the end that cause components to crash\n\n\n\n\n7.1.10: March 2023\n\n\n\nUpdates\n\n\n\n\n\n[\n#1662\n] Monitor StorageBackend\n\n\n\n\n7.1.9: February 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1675\n] Prevent upgrade of cudnn in yolo server dockerfile\n\n\n\n\n7.1.8: February 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1649\n] Install specific version of libcudnn8 in Docker build\n\n\n\n\n7.1.7: February 2023\n\n\n\nUpdates\n\n\n\n\n\n[\n#1674\n] Update \nSPEAKER_ID\n logic, set \nLONG_SPEAKER_ID=0\n\n\n\n\n7.1.5: January 2023\n\n\n\nFeatures\n\n\n\n\n\n[\n#1542\n] Update Azure Speech Detection component to select transcription language based on feed-forward track\n\n\n[\n#1543\n] Update audio transcoder to accept subsegments\n\n\n[\n#1605\n] Update Azure Translation to use detected language from upstream\n\n\n\n\n7.1.1: December 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1634\n] Update version numbers to 7.1\n\n\n\n\n7.1.0: December 2022\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the Object Storage Guide with \nS3_UPLOAD_OBJECT_KEY_PREFIX\n.\n\n\nUpdated the Markup Guide with \nMARKUP_TEXT_LABEL_MAX_LENGTH\n.\n\n\n\n\nExemplar Selection Policy\n\n\n\n\n\nThe policy for selecting the exemplar detection for each track can now be set 
using the \nEXEMPLAR_POLICY\n job property\n with following values:\n\n\nCONFIDENCE\n: Select the detection with the maximum confidence. If some confidences are the same, select the\n detection with the lower frame number. This is the default setting.\n\n\nFIRST\n: Select the detection with the lowest frame number\n\n\nLAST\n: Select the detection with the highest frame number\n\n\nMIDDLE\n: Select the detection with the frame number closest to the middle frame of the track, preferring the\n detection with the lower frame number if there is an even number of frames\n\n\n\n\n\n\n\n\nAutomatic Rotation and Horizontal Flip Enabled by Default\n\n\n\n\n\nIt is no longer necessary to explicitly set \nAUTO_ROTATE\n and \nAUTO_FLIP\n to true since that is now the default value.\n\n\nThese properties affect all video and image components that use the MPFImageReader and MPFVideoCapture tools. When\n true, if the image has EXIF data, or there is metadata associated with a video that ffmpeg understands, the tools will\n use that information to properly orient the frames before returning the frames to the component for processing.\n\n\n\n\nSupport S3 Object Storage Key Prefix\n\n\n\n\n\nSet the \nS3_UPLOAD_OBJECT_KEY_PREFIX\n job property or \ns3.upload.object.key.prefix\n system property to add a prefix to\n object keys when the Workflow Manager uploads objects to the S3 object store. This affects the JSON output object,\n artifacts, markup files, and derivative media.\n\n\nSpecifically, the Workflow Manager will upload objects to\n \n///\n.\n\n\nFor example, if you wish to add \"work/\" to the object key, then set \nS3_UPLOAD_OBJECT_KEY_PREFIX=work/\n.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#1526\n] Allow markup to display more than 10 characters in the text\n part of the label\n\n\n[\n#1527\n] Enable the Workflow Manager to select the middle detection\n as the exemplar\n\n\n[\n#1566\n] Make \nAUTO_ROTATE\n and \nAUTO_FLIP\n true by default\n\n\n[\n#1569\n] Modify C++ and Python component executor to automatically\n add the job name to log messages\n\n\n[\n#1621\n] Make S3 object keys used for upload configurable\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1602\n] Update Workflow Manager to use Spring Boot\n\n\n[\n#1631\n] Update byte-buddy, Mockito, and Hibernate versions to\n resolve build issue. 
Most notably, update Hibernate to 5.6.14.\n\n\n[\n#1632\n] Update ActiveMQ to 5.17.3\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1581\n] Don't change track start and end frame when\n \nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n is disabled\n\n\n[\n#1595\n] Work around how Ubuntu only recognizes certificate files\n that end in .crt\n\n\n[\n#1610\n] Prevent premature pipeline creation when using web UI\n\n\n[\n#1612\n] At startup, prevent Workflow Manager from consuming from\n queues before purging them\n\n\n\n\nOpenMPF 7.0.x\n\n\n7.0.3: September 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1561\n] Fix logging for Python components when running through CLI\n runner\n\n\n[\n#1583\n] Can now properly view media while job is in progress\n\n\n[\n#1587\n] Fix bugs in amq_detection_component's use of select\n\n\n\n\n7.0.2: August 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1562\n] Fix bug where an ffmpeg change prevented detecting video\n rotation\n\n\n\n\n7.0.0: July 2022\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the Development Environment Guide by replacing steps for CentOS 7 with Ubuntu 20.04.\n\n\nAdded the Derivative Media Guide.\n\n\nUpdated the Batch Component APIs with revised error codes.\n\n\nUpdated the Python Batch Component API and Python base Docker image README with instructions for\n using \npyproject.toml\n and \nsetup.cfg\n.\n\n\nUpdated the Admin Guide and User Guide with images that show the new TiesDb and Callback columns in the job status UI.\n\n\nUpdated the REST API with the \npipelineDefinition\n, \nframeRanges\n, and \ntimeRanges\n fields now supported by the\n \n[POST] /rest/jobs\n endpoint.\n\n\nUpdated the OcvYoloDetection component README with information on using the NVIDIA Triton inference server.\n\n\nUpdated the Markup Guide with \nMARKUP_ANIMATION_ENABLED\n and \nMARKUP_LABELS_TRACK_INDEX_ENABLED\n.\n\n\nUpdated the Contributor Guide with new steps for generating documentation.\n\n\n\n\nTransition from CentOS 7 to Ubuntu 20.04\n\n\n\n\n\nAll the Docker images that previously used CentOS 7 as a base now use Ubuntu 20.04.\n\n\nWe decided not to use CentOS 8, which is a version of CentOS Stream, due to concerns about stability.\n\n\nAlso, Ubuntu is a very common OS within the AI and ML space, and has significant community support.\n\n\n\n\nUse Job Id that Enables Load Balancing\n\n\n\n\n\nThe Workflow Manager can now optionally accept job ids of the form \n<hostname>-<id>\n through\n the REST endpoints, where \n<id>\n is the same as the shorter id used in previous releases. The\n \n<hostname>-\n prefix enables better tracking and separation of jobs run across multiple\n Workflow Manager instances in a cluster.\n\n\nThe prefix can be set in the \ndocker-compose.yml\n file by assigning \n{{.Node.Hostname}}\n to the \nNODE_HOSTNAME\n\n environment variable for the Workflow Manager service, or hard-coding \nNODE_HOSTNAME\n to the desired hostname.\n\n\nThe shorter version of the id can still be used in REST requests, but the longer id will always be returned by the\n Workflow Manager when responding to those requests.\n\n\nThe shorter id will always be used internally by the Workflow Manager, meaning the job status web UI and log messages\n will all use the shorter job id. \n\n\n\n\nSupport for Derivative Media\n\n\n\n\n\nThe TikaImageDetection component now returns \nMEDIA\n tracks instead of \nIMAGE\n tracks when extracting images from\n documents, such as PDFs, Word documents, and PowerPoint slides. 
The document is considered the \"source\", or \"parent\",\n media, and the images are considered the \"derivative\", or \"child\", media.\n\n\nActions can now be configured with \nSOURCE_MEDIA_ONLY=true\n or \nDERIVATIVE_MEDIA_ONLY=true\n, which will result in only\n performing the action on that kind of media. Feed forward can still be used to pass track information from one stage\n to another. The tracks will skip the stages (actions) that don't apply.\n\n\nThis enables complex pipelines like one that extracts text from a PDF using TikaTextDetection, OCRs embedded images\n using EastTextDetection and TesseractOCRTextDetection, and runs all of the \nTEXT\n tracks through KeywordTagging.\n\n\nAdded the following pipelines to the TikaImageDetection component:\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR AND KEYWORD TAGGING PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR (WITH EAST REGIONS) AND KEYWORD TAGGING PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR (WITH EAST REGIONS) AND KEYWORD TAGGING AND MARKUP PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA OCV FACE PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA OCV FACE AND MARKUP PIPELINE\n\n\n\n\n\n\n\n\nReport when Job Callbacks and TiesDb POSTs Fail\n\n\n\n\n\nThe job status UI displays two new columns, one that indicates the status of posting to TiesDB, and one that indicates\n the status of posting the job callback to the job producer.\n\n\nAdditionally, the \n[GET] /rest/jobs/{id}\n endpoint now includes a \ntiesDbStatus\n and \ncallbackStatus\n field.\n\n\nNote that, by design, the JSON output itself does not contain these statuses.\n\n\n\n\nAllow Pipelines to be Specified in a Job Request\n\n\n\n\n\nOptionally, the \npipelineDefinition\n field can be provided instead of the \npipelineName\n field when using the\n \n[POST] /rest/jobs\n endpoint in order to specify a pipeline on the fly for that specific job run. It will not be saved\n for later reuse.\n\n\nThe format of the pipeline definition is similar to that in a \ndescriptor.json\n file, with separate sections for\n defining \ntasks\n and \nactions\n. Pre-existing tasks and actions known to the Workflow Manager can be specified in the\n definition. They do not need to be defined again.\n\n\nThis feature is a convenient alternative to creating persistent definitions using the \n[POST] /rest/pipelines\n,\n \n[POST] /rest/tasks\n, and \n[POST] /rest/actions\n endpoints. For example, this feature could be used to quickly add or\n remove a motion preprocessing stage from a pipeline.\n\n\n\n\nAllow User-Specified Segment Boundaries\n\n\n\n\n\nOptionally, multiple \nframeRanges\n and/or \ntimeRanges\n fields can be provided when using the \n[POST] /rest/jobs\n\n endpoint in order to manually specify segment boundaries. These values will override the normal segmenting behavior of\n the Workflow Manager.\n\n\nNote that overlapping ranges will be combined and large ranges may still be split up according to the value of\n \nTARGET_SEGMENT_LENGTH\n and \nVFR_TARGET_SEGMENT_LENGTH\n.\n\n\nNote that \nframeRanges\n is specified using the frame number and \ntimeRanges\n is specified in milliseconds.\n\n\n\n\nAdd Triton Inference Server support to YOLO component\n\n\n\n\n\nThe OcvYoloDetection component now supports the ability to send requests to an NVIDIA Triton Inference Server by\n setting \nENABLE_TRITON=true\n. 
If set to false, the component will process jobs using OpenCV DNN on the local host\n running the Docker service, as per normal.\n\n\nBy default \nTRITON_SERVER=ocv-yolo-detection-server:8001\n, which\n corresponds to the \nocv-yolo-detection-server\n entry in your \ndocker-compose.yml\n file. Refer to the example entry\n within \ndocker-compose.components.yml\n\n . That entry uses a pre-built and pre-configured version of the Triton server.\n\n\nThe Triton server runs the YOLOv4 model within the TensorRT framework, which performs a warmup operation when the\n server starts up to determine which optimizations to enable for the available GPU hardware. \n*.engine\n files are\n generated within the \nyolo_engine_file\n Docker volume for later reuse.\n\n\nTo further improve inferencing speed, shared memory can be configured between the \nocv-yolo-detection\n client service and the\n \nocv-yolo-detection-server\n service if they are running on the same host. Set \nTRITON_USE_SHM=true\n and configure the\n server with a \n/dev/shm:/dev/shm\n Docker volume.\n\n\nDepending on the available GPU hardware, the Triton server can achieve speeds that are 5x faster than OpenCV DNN with\n tracking enabled, no shared memory, and nearly 9x faster with tracking disabled, with shared memory. Our tests used a\n single RTX 2080 GPU.\n\n\n\n\nRemoved Unused and Redundant Error Codes\n\n\n\n\n\nThe error codes shown on the left were redundant and replaced with the corresponding error codes on the right:\n\n\n\n\n\n\n\n\n\n\nOld Error Code\n\n\nNew Error Code\n\n\n\n\n\n\n\n\n\n\nMPF_IMAGE_READ_ERROR\n\n\nMPF_COULD_NOT_READ_MEDIA\n\n\n\n\n\n\nMPF_BOUNDING_BOX_SIZE_ERROR\n\n\nMPF_BAD_FRAME_SIZE\n\n\n\n\n\n\nMPF_JOB_PROPERTY_IS_NOT_INT\n\n\nMPF_INVALID_PROPERTY\n\n\n\n\n\n\nMPF_JOB_PROPERTY_IS_NOT_FLOAT\n\n\nMPF_INVALID_PROPERTY\n\n\n\n\n\n\nMPF_INVALID_FRAME_INTERVAL\n\n\nMPF_INVALID_PROPERTY\n\n\n\n\n\n\nMPF_DETECTION_TRACKING_FAILED\n\n\nMPF_OTHER_DETECTION_ERROR_TYPE\n\n\n\n\n\n\n\n\nAlso, the following error codes are no longer being used and have been removed:\n\n\n\n\nMPF_UNRECOGNIZED_DATA_TYPE\n\n\nAll media types can now be processed since we support the \nUNKNOWN\n (a.k.a. \"generic\")\n media type\n\n\n\n\n\n\nMPF_INVALID_DATAFILE_URI\n\n\nThe Workflow Manager will reject a job with an invalid media URI before it gets to a\n component\n\n\n\n\n\n\nMPF_INVALID_START_FRAME\n\n\nMPF_INVALID_STOP_FRAME\n\n\nMPF_INVALID_ROTATION\n\n\n\n\nMarkup Improvements\n\n\n\n\n\nBy default, the Markup component draws bounding boxes to fill in the gaps between detections in each track by\n interpolating the box size and position. This can now be disabled by setting the job property\n \nMARKUP_ANIMATION_ENABLED=false\n, or the system property \nmarkup.video.animation.enabled=false\n.\n Disabling this feature can be useful to prevent floating boxes from cluttering the marked-up frames.\n\n\nThe Markup component will now start each bounding box label with a track index like \n[0]\n that can be used to\n correlate the box with the track in the JSON output object. The JSON output now contains an \nindex\n field for every\n track, relative to each piece of media, that is simply an integer that starts at 0 and counts upward. 
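Since the same index appears in both places, correlating a labeled box with its track is straightforward; a small sketch (field names follow the JSON output examples above; the file name is illustrative):\n\n\nimport json\n\nwith open('detection.json') as f:\n    out = json.load(f)\n\n# Map each '[i]' box label in the marked-up video back to its track\nfor group in out['output'].get('FACE', []):\n    for track in group['tracks']:\n        print(track['index'], track['type'], track['confidence'])\n\n\n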
This can be\n disabled by setting the job property \nMARKUP_LABELS_TRACK_INDEX_ENABLED=false\n, or the system property\n \nmarkup.labels.track.index.enabled=false\n.\n\n\n\n\nChanges to JSON Output Object\n\n\n\n\n\nComponents that generate \nMEDIA\n tracks will result in new derivative \nmedia\n entries in the JSON output file. This\n means it's possible to provide a single piece of media as an input and have more than one \nmedia\n entry in the JSON\n output. The output will always include the original media.\n\n\nEach \nmedia\n entry in the JSON output now contains a \nparentMediaId\n in addition to the \nmediaId\n. The \nparentMediaId\n\n for original source media will always be set to -1; otherwise, for derivative media, the \nparentMediaId\n is set to the\n \nmediaId\n of the source media from which the child media was derived.\n\n\nEach \nmedia\n entry also contains a new \nframeRanges\n and \ntimeRanges\n collection.\n\n\nThe JSON output file also contains a new \nindex\n field for every track, relative to each piece of media.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#792\n] Perform detection on images extracted from PDFs\n\n\n[\n#1283\n] Add user-specified segment boundaries\n\n\n[\n#1374\n] Transition from CentOS 7 to Ubuntu 20.04\n\n\n[\n#1396\n] Report when job callbacks and TiesDb POSTs fail\n\n\n[\n#1398\n] Add Triton Inference Server support to YOLO component\n\n\n[\n#1428\n] Allow pipelines to be specified in a job request\n\n\n[\n#1454\n] Transition from Clair scans to Trivy scans\n\n\n[\n#1485\n] Use \npyproject.toml\n and \nsetup.cfg\n instead of \nsetup.py\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#803\n] Update Tika Image Detection to generate one track per piece of extracted media\n\n\n[\n#808\n] Update Tika Text Detection component to not use leading zeros for \nPAGE_NUM\n\n\n[\n#1105\n] Remove dependency on QT from C++ SDK\n\n\n[\n#1282\n] Use job id that enables load balancing\n\n\n[\n#1303\n] Update Tika Image Detection to return \nMEDIA\n tracks\n\n\n[\n#1319\n] Review existing error codes and remove unused or redundant error codes\n\n\n[\n#1384\n] Update Apache Tika to 2.4.1 for TikaImageDetection and TikaTextDetection Components\n\n\n[\n#1436\n] CLI Runner should initialize a component once when handling multiple jobs\n\n\n[\n#1465\n] Remove YoloV3 support from OcvYoloDetection component\n\n\n[\n#1513\n] Update to Spring 5.3.18\n\n\n[\n#1528\n] CLI runner should also sort by startOffsetTime\n\n\n[\n#1540\n] Upgrade to Java 17\n\n\n[\n#1549\n] Allow markup animation to be disabled\n\n\n[\n#1550\n] Add track index to markup\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1372\n] Tika Image Detection no longer misses images in PowerPoint and Word documents\n\n\n[\n#1449\n] Simon data is now refreshed when clicking the Processes tab\n\n\n[\n#1495\n] Fix bug where invalid CSRF token found for \n/workflow-manager/login\n\n\n\n\nOpenMPF 6.3.x\n\n\n6.3.14: May 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1530\n] Fix S3 code memory leak\n\n\n\n\n6.3.12: April 2022\n\n\n\nUpdates\n\n\n\n\n\n[\n#1519\n] Upgrade to OpenCV 4.5.5\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1520\n] S3 code now retries on most 400 errors\n\n\n\n\n6.3.11: April 2022\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the Object Storage Guide with \nS3_SESSION_TOKEN\n, \nS3_USE_VIRTUAL_HOST\n, \nS3_HOST\n, and \nS3_REGION\n.\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1496\n] Update S3 client code\n\n\n[\n#1514\n] Update Tomcat to 8.5.78\n\n\n\n\n6.3.10: March 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1486\n] Fix bug where \nMOVING\n was being added to immutable map 
twice\n\n\n[\n#1498\n] Can now provide media metadata when frameTimeInfo is missing\n\n\n[\n#1501\n] MPFVideoCapture now properly reads frames from videos with rotation metadata\n\n\n[\n#1502\n] Detections with \nHORIZONTAL_FLIP\n will no longer result in illformed detections and incorrectly padded regions\n\n\n[\n#1503\n] Videos with rotation metadata will no longer result in corrupt markup\n\n\n\n\n6.3.8: January 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1469\n] \nTENSORFLOW VEHICLE COLOR DETECTION\n pipelines no longer refer to YOLO tasks that no longer exist\n\n\n\n\n6.3.7: January 2022\n\n\n\nUpdates\n\n\n\n\n\n[\n#1466\n] Upgrade log4j to 2.17.1\n\n\n\n\n6.3.6: December 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1457\n] Upgrade log4j to 2.16.0\n\n\n\n\n6.3.5: November 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1451\n] Make concurrent callbacks configurable\n\n\n\n\n6.3.4: November 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1441\n] Modify AdminStatisticsController so that it doesn't hold all jobs in memory at once\n\n\n\n\n6.3.3: October 2021\n\n\n\nFeatures\n\n\n\n\n\n[\n#1425\n] Make protobuf size limit configurable\n\n\n\n\n6.3.2: October 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1420\n] Sphinx component no longer omits audio at end of video files\n\n\n[\n#1422\n] Media inspection now correctly calculates milliseconds from ffmpeg duration\n\n\n\n\n6.3.1: September 2021\n\n\n\nFeatures\n\n\n\n\n\n[\n#1404\n] Improve OcvDnnDetection vehicle color detection\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1251\n] Add version to JSON output object\n\n\n[\n#1272\n] Update Keyword Tagging to work on multiple inputs\n\n\n[\n#1350\n] Retire old components to the graveyard: DlibFaceDetection, DarknetDetection, and OcvPersonDetection\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1010\n] \nmpf.output.objects.enabled\n now behaves as expected\n\n\n[\n#1271\n] Azure speech component no longer omits audio at end of video files\n\n\n[\n#1389\n] NLP text correction component now properly reads the value of \nFULL_TEXT_CORRECTION_OUTPUT\n\n\n[\n#1403\n] Corrected README to state that the Azure Speech Component doesn't support v2 of the API\n\n\n[\n#1406\n] Speech detections in videos are no longer dropped if using keyword tagging\n\n\n[\n#1411\n] Exception no longer occurs when adding \nSHRUNK_TO_NOTHING=TRUE\n to an immutable map in multiple pipeline stages\n\n\n[\n#1413\n] Speech detections in videos are no longer dropped if using translation\n\n\n\n\n6.3.0: September 2021\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the API documents, Development Environment Guide, Node Guide, Install Guide, User Guide, Admin Guide, and\n others to clarify the difference between Docker and non-Docker behaviors.\n\n\nTransformed Packaging and Registering a Component document into Component Descriptor Reference.\n\n\nSplit Media Segmentation Guide from User Guide.\n\n\nUpdated and renamed the Workflow Manager document to Workflow Manager Architecture.\n\n\nUpdated the various Docker guides to clarify the difference between building Docker images from scratch versus\n building them using pre-built base images on Docker Hub, emphasizing the latter.\n\n\nUpdated the Contributor Guide to document the hotfix pull request process.\n\n\n\n\nTiesDb Integration\n\n\n\n\n\nTiesDb is a PostgreSQL DB with a RESTful API that stores media metadata. The metadata entries are queried using the\n hash (sha256, md5) of the media file. TIES stands\n for \nTriage Import Export Schema\n. TiesDb is deployed and managed externally to\n OpenMPF. 
For more information please contact us.\n\n\nWhen a job completes, OpenMPF can post assertions to media entries that exist in TiesDb. In general, one assertion is\n generated for each algorithm run on a piece of media. It contains the job status, algorithm name, detection\n type (\nFACE\n, \nTEXT\n, \nMOTION\n, etc.), and number of tracks generated, as well as a link to the full JSON output\n object.\n\n\nEach assertion serves as a lasting record so that job producers may first check TiesDb to see if an algorithm was run\n on a piece of media before submitting the same job to OpenMPF again.\n\n\nTo enable TiesDb support, set the \nTIES_DB_URL\n job property or \nties.db.url\n system property to\n the \n<scheme>://<host>:<port>\n part of the URL. The Workflow Manager will append\n the \n/api/db/supplementals?sha256Hash=<media hash>\n part. Here is an example of a TiesDb POST:\n\n\n\n\n{\n \"dataObject\": {\n \"sha256OutputHash\": \"1f8f2a8b2f5178765dd4a2e952f97f5037c290ee8d011cd7e92fb8f57bc75f17\",\n \"outputType\": \"FACE\",\n \"algorithm\": \"FACECV\",\n \"processDate\": \"2021-09-09T21:37:30.516-04:00\",\n \"pipeline\": \"OCV FACE DETECTION PIPELINE\",\n \"outputUri\": \"file:///home/mpf/git/openmpf-projects/openmpf/trunk/install/share/output-objects/1284/detection.json\",\n \"jobStatus\": \"COMPLETE\",\n \"jobId\": 1284,\n \"systemVersion\": \"6.3\",\n \"trackCount\": 1,\n \"systemHostname\": \"openmpf-master\"\n },\n \"system\": \"OpenMPF\",\n \"securityTag\": \"UNCLASSIFIED\",\n \"informationType\": \"OpenMPF FACE\",\n \"assertionId\": \"4874829f666d79881f7803207c7359dc781b97d2c68b471136bf7235a397c5cd\"\n}\n\n\n\nNatural Language Processing (NLP) Text Correction Component\n\n\n\n\n\nThis component utilizes the \nCyHunspell\n library, which is a Python\n port of the \nHunspell\n spell-checking library, to perform post-processing\n correction of OCR text. In general, it's intended to be used in a pipeline after a component like\n TesseractOCRTextDetection that generates \nTEXT\n tracks. These tracks are then fed-forward into NlpTextCorrection,\n which will add a \nCORRECTED TEXT\n property to the existing tracks.\n The \nTESSERACT OCR TEXT DETECTION WITH NLP TEXT CORRECTION PIPELINE\n performs this behavior. The component can also\n run on its own to process plain text files. Refer to\n the \nREADME\n for details.\n\n\n\n\nAzure Cognitive Services (ACS) Read Component\n\n\n\n\n\nThis component utilizes\n the \nAzure Cognitive Services Read Detection REST endpoint\n\n to extract formatted text from documents (PDFs), images, and videos. 
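As a hedged illustration of the general request flow behind such a component (the endpoint version, header, and polling pattern reflect the publicly documented Azure Read REST API; the resource name, key, and document URL are placeholders, not taken from the component):\n\n\nimport time\nimport requests\n\nendpoint = 'https://<resource>.cognitiveservices.azure.com'\nheaders = {'Ocp-Apim-Subscription-Key': '<key>'}\n\n# Submit the document, then poll the operation URL returned by the service\nresp = requests.post(endpoint + '/vision/v3.2/read/analyze', headers=headers,\n                     json={'url': 'https://example.com/page.pdf'})\nop_url = resp.headers['Operation-Location']\n\nresult = requests.get(op_url, headers=headers).json()\nwhile result['status'] not in ('succeeded', 'failed'):\n    time.sleep(1)\n    result = requests.get(op_url, headers=headers).json()\n\n\n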
Refer to\n the \nREADME\n for\n details.\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1151\n] Now supports \nIN_PROGRESS_WITH_WARNINGS\n status\n\n\n[\n#1234\n] Now sorts JSON output object media by media id\n\n\n[\n#1341\n] Added job id to all batch-job-specific Workflow Manager log\n messages\n\n\n[\n#1349\n] Improved reporting and recording job status\n\n\n[\n#1353\n] Updated the Workflow Manager to remove and warn about\n zero-size detections\n\n\n[\n#1382\n] Updated Tika version to 1.27 for TikaImageDetection and\n TikaTextDetection components\n\n\n[\n#1387\n] Markup can now be configured in a\n component's \ndescriptor.json\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1080\n] Batch jobs no longer prematurely set to 100% completion\n during artifact extraction\n\n\n[\n#1106\n] When a job ends in \nERROR\n or \nCANCELLED_BY_SHUTDOWN\n the\n job status UI now shows an End Date\n\n\n[\n#1158\n] JSON output object URI no longer changes when callback fails\n\n\n[\n#1317\n] TikaTextDetection no longer generates first PDF track\n at \nPAGE_NUM\n 2\n\n\n[\n#1337\n] Now using \nMPF_BAD_FRAME_SIZE\n instead\n of \nMPF_DETECTION_FAILED\n for OpenCV empty/resize exception\n\n\n[\n#1359\n] Image detection tracks no longer\n have \nendOffsetFrameInclusive\n set to 1\n\n\n[\n#1373\n] When uploading large files through the Workflow Manager web\n UI, now more than the first 865032704 bytes get written\n\n\n[\n#1379\n] TikaImageDetection component now avoids conflicts by no\n longer using the same path when extracting images for jobs with multiple pieces of media\n\n\n[\n#1386\n] FeedForwardFrameCropper in the Python SDK now handles\n negative coordinates properly\n\n\n[\n#1391\n] If a job is configured to upload markup and markup fails,\n the job no longer gets stuck\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1372\n] TikaImageDetection misses images in PowerPoint and Word\n documents\n\n\n[\n#1389\n] NlpTextCorrection does not properly read the value\n of \nFULL_TEXT_CORRECTION_OUTPUT\n\n\n\n\nOpenMPF 6.2.x\n\n\n6.2.5: July 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1367\n] Enable cross-origin resource sharing on Workflow Manager\n\n\n\n\n6.2.4: June 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1356\n] AzureSpeech now properly reports when media is missing audio stream\n\n\n[\n#1357\n] AzureSpeech now handles case where speaker id is not present\n\n\n\n\n6.2.2: June 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1333\n] Combine media name and job id into one WFM log line\n\n\n[\n#1336\n] Remove duplicate \"Setting status of job to COMPLETE\" Workflow Manager log line and other improvements\n\n\n[\n#1338\n] Update OpenCV DNN Detection component to optionally use feed-forward confidence values\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1237\n] Fixed jQuery DataTables bug: \"int parameter 'draw' is present but cannot be translated into a null value\"\n\n\n[\n#1254\n] Jobs table no longer flickers when polling is enabled and the search box is used\n\n\n[\n#1308\n] Prevent OCV YOLO Tracking from generating zero-sized detections\n\n\n[\n#1313\n] Fix JSON output object timestamps for variable frame rate videos\n\n\n\n\n6.2.1: May 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1330\n] Return error codes for \nmodels_ini_parser.py\n exceptions\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1331\n] Decoding certain heic images no longer causes Workflow Manager to segfault\n\n\n\n\n6.2.0: May 2021\n\n\n\nTesseract OCR Text Detection Component Support for Videos\n\n\n\n\n\nThe component can now process videos in addition to images and PDFs. 
Each video frame is processed sequentially.\n The \nMAX_PARALLEL_SCRIPT_THREADS\n property determines how many threads to use to process each frame, one thread per\n language or script.\n\n\nNote that for videos without much text, it may be faster to disable threading by\n setting \nMAX_PARALLEL_SCRIPT_THREADS=1\n. This will allow the component to reuse TessAPI instances instead of creating\n new ones for every frame. Please refer to the Known Issues section.\n\n\nResolved issues: \n#1285\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1086\n] Added support for \nCOULD_NOT_OPEN_MEDIA\n\n and \nCOULD_NOT_READ_MEDIA\n error types\n\n\n[\n#1159\n] Split \nIssueCodes.REMOTE_STORAGE\n\n into \nREMOTE_STORAGE_DOWNLOAD\n and \nREMOTE_STORAGE_UPLOAD\n\n\n[\n#1250\n] Modified \n/rest/jobs/{id}\n to include the job's media\n\n\n[\n#1312\n] Created \nNETWORK_ERROR\n error code for when a component\n can't connect to an external server. Updated Python HTTP retry code to return \nNETWORK_ERROR\n. This affects the Azure\n components.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1008\n] Use global TessAPI instances with parallel processing\n\n\n\n\nOpenMPF 6.1.x\n\n\n6.1.6: May 2021\n\n\n\nHandle Variable Frame Rate Videos\n\n\n\n\n\nThe Workflow Manager will attempt to detect if a video is constant frame rate (CFR) or variable frame rate (VFR)\n during media inspection. If no determination can be made, it will default to VFR behavior. If CFR, the JSON output\n object will have a \nHAS_CONSTANT_FRAME_RATE=true\n property in the \nmediaMetadata\n field.\n\n\nWhen \nMPFVideoCapture\n handles a CFR video it will use OpenCV to set the frame position, unless the position is within\n 16 frames of the current position, in which case it will iteratively use OpenCV \ngrab()\n to advance to the desired\n frame.\n\n\nWhen \nMPFVideoCapture\n handles a VFR video it will always iteratively use OpenCV \ngrab()\n to advance to the desired\n frame because setting the frame position directly has been shown to not work correctly on VFR videos.\n\n\nWhen a video is split into multiple segments, \nMPFVideoCapture\n must iteratively use \ngrab()\n to advance from frame 0\n to the start of the segment. This introduces performance overhead. To mitigate this we recommend using larger video\n segments than those used for CFR videos.\n\n\nIn addition to the existing \nTARGET_SEGMENT_LENGTH\n and \nMIN_SEGMENT_LENGTH\n job\n properties (\ndetection.segment.target.length\n and \ndetection.segment.minimum.length\n system properties) for CFR\n videos, the Workflow Manager now supports the \nVFR_TARGET_SEGMENT_LENGTH\n and \nVFR_MIN_SEGMENT_LENGTH\n job\n properties (\ndetection.vfr.segment.target.length\n and \ndetection.vfr.segment.minimum.length\n system properties) for\n VFR videos.\n\n\nNote that the timestamps associated with tracks and detections in a VFR video may be wrong. Please refer to the Known\n Issues section.\n\n\nResolved issues: \n#1307\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1287\n] Updated Tika Text Detection Component to break up large\n chunks of text. The component now generates tracks with both a \nPAGE_NUM\n property and \nSECTION_NUM\n property. Please\n refer to\n the \nREADME\n.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1313\n] Incorrect JSON output object timestamps for variable frame\n rate videos\n\n\n[\n#1317\n] Tika Text Detection component generates first PDF track\n at \nPAGE_NUM\n 2\n\n\n\n\n6.1.5: April 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1300\n] Parallelized S3 artifact upload. 
Use\n the \ndetection.artifact.extraction.parallel.upload.count\n system property to configure the number of parallel uploads.\n\n\n\n\n6.1.4: April 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1299\n] Improved artifact extraction performance when there is no\n rotation or flip\n\n\n\n\n6.1.3: April 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1295\n] Improved artifact extraction and markup JNI memory\n utilization\n\n\n[\n#1297\n] Limited Workflow Manager IO threads to a reasonable number\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1296\n] Fixed ActiveMQ job priorities\n\n\n\n\n6.1.2: April 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1294\n] Limited ffmpeg threads to a reasonable number\n\n\n\n\n6.1.1: April 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1292\n] Don't skip artifact extraction for failed media\n\n\n\n\n6.1.0: April 2021\n\n\n\nOpenMPF Command Line Runner\n\n\n\n\n\nThe Command Line Runner allows users to run jobs with a single component without the Workflow Manager.\n\n\nIt outputs results in a JSON structure that is a subset of the regular OpenMPF output.\n\n\nIt only supports C++ and Python components.\n\n\nSee the\n \nREADME\n\n for more information.\n\n\n\n\nC++ Batch Component API\n\n\n\n\n\nComponent code should no longer configure Log4CXX. The component executor now handles configuring Log4CXX. Component\n code should call \nlog4cxx::Logger::getLogger(\"<componentName>\")\n\n to get access to the logger. Calls to \nlog4cxx::xml::DOMConfigurator::configure(logconfig_file);\n\n should be removed.\n\n\n\n\nPython Batch Component API \n\n\n\n\n\nComponent code should no longer configure logging. The component executor now handles configuring logging. Calls\n to \nmpf.configure_logging\n should be replaced with\n \nlogging.getLogger('<componentName>')\n.\n\n\n\n\nDocker Component Base Images\n\n\n\n\n\n\n\nIn order to support running a component through the CLI runner, C++ component developers should set\n the \nLD_LIBRARY_PATH\n environment variable in the final stage of their Dockerfiles. It should generally be set\n like: \nENV LD_LIBRARY_PATH $PLUGINS_DIR/<componentName>/lib\n.\n\n\n\n\n\n\nBecause of the logging changes mentioned above, components no longer need to set the\n \nCOMPONENT_LOG_NAME\n environment variable in their Dockerfiles.\n\n\n\n\n\n\nAdded the\n \nopenmpf_python_executor_ssb\n base image\n\n . It can be used instead of \nopenmpf_python_component_build\n and \nopenmpf_python_executor\n to simplify Dockerfiles for\n Python components that are pure Python and have no build time dependencies.\n\n\n\n\n\n\nLabel Moving vs. Non-Moving Tracks\n\n\n\n\n\nThe Workflow Manager can now identify whether a track is moving or non-moving. This is determined by calculating the\n average bounding box for a track by averaging the size and position of all the detections in the track. Then, for each\n detection in the track, the intersection over union (IoU) is calculated between that detection and the average\n detection. If the IoU for at least \nMOVING_TRACK_MIN_DETECTIONS\n number of detections is less than or equal to\n \nMOVING_TRACK_MAX_IOU\n, then the track is considered a moving track.\n\n\nAdded the following Workflow Manager job properties. 
Added the following Workflow Manager job properties. These can be set for any video job:\n\n\nMOVING_TRACK_LABELS_ENABLED\n: When set to true, attempt to label tracks as either moving or non-moving objects.\n Each track will have a \nMOVING\n property set to \nTRUE\n or \nFALSE\n.\n\n\nMOVING_TRACKS_ONLY\n: When set to true, remove any tracks that were marked as not moving.\n\n\nMOVING_TRACK_MAX_IOU\n: The maximum IoU overlap between detection bounding boxes and the average per-track\n bounding box for objects to be considered moving. Value is expected to be between 0 and 1. Note that the lower\n the IoU, the more likely the object is moving.\n\n\nMOVING_TRACK_MIN_DETECTIONS\n: The minimum number of moving detections for a track to be labeled as moving.\n\n\n\n\n\n\n\n\nMarkup Improvements\n\n\n\n\n\nUsers can now watch videos directly in the OpenMPF web UI within the media pop-up dialog for each job. Most modern web\n browsers support videos encoded in VP9 and H.264. If a video cannot be played, users have the option to download it\n and play it using a stand-alone media player.\n\n\nTo set the markup encoder use \nMARKUP_VIDEO_ENCODER\n. The default encoder has changed from \nmjpeg\n to \nvp9\n. As a\n result, it will take longer to generate marked up videos, but they will be higher quality and can be viewed in the web\n UI.\n\n\nEach bounding box in the marked up media is now labeled. By default, the label shows the track-level \nCLASSIFICATION\n\n and associated confidence value. The information shown in the label can be changed by\n setting \nMARKUP_LABELS_TEXT_PROP_TO_SHOW\n and \nMARKUP_LABELS_NUMERIC_PROP_TO_SHOW\n. To show information for each\n individual detection, rather than the entire track, set \nMARKUP_LABELS_FROM_DETECTIONS=TRUE\n.\n\n\nExemplar detections in video tracks include a star icon in their label.\n\n\nOptionally, set \nMARKUP_VIDEO_MOVING_OBJECT_ICONS_ENABLED=TRUE\n to show icons that represent whether the track is moving or\n non-moving.\n\n\nOptionally, set \nMARKUP_VIDEO_BOX_SOURCE_ICONS_ENABLED=TRUE\n to show icons that represent the source of the detection.\n For example, whether the box is the result of an algorithm detection, gap fill performed during tracking, or Workflow Manager\n animation.\n\n\nEach frame of a marked-up video now has a frame number in the upper right corner.\n\n\nPlease refer to the \nMarkup Guide\n for the complete set of markup properties, icon definitions, and\n encoder considerations.\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1181\n] Updated the Tesseract OCR Text Detection component from\n Tesseract version 4.0.0 to 4.1.1\n\n\n[\n#1232\n] Updated the Azure Speech Detection component from Azure\n Batch Transcription version 2.0 to 3.0\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1187\n] EXIF orientation is now preserved during markup and artifact\n extraction\n\n\n[\n#1257\n] Updated \nOUTPUT_LAST_TASK_ONLY\n to work on all media types\n\n\n\n\nOpenMPF 6.0.x\n\n\n6.0.11: March 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1284\n] Updated the Azure Translation component to count emoji as 2\n characters\n\n\n\n\n6.0.10: March 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1270\n] The Azure Cognitive Services components now retry HTTP\n requests\n\n\n\n\n6.0.9: March 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1273\n] Setting \nTRANSLATION\n to the empty string no longer prevents\n Keyword Tagging\n\n\n\n\n6.0.6: March 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1265\n] Updated the Tika Text Detection component to handle\n spreadsheets\n\n\n[\n#1268\n] Updated the Tika Text Detection component to remove metadata\n\n\n\n\n6.0.5: February 2021\n\n\n\n
Bug Fixes\n\n\n\n\n\n[\n#1266\n] The Azure Translation component now handles the final\n segment correctly when guessing sentence breaks\n\n\n\n\n6.0.4: February 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1264\n] Updated the Azure Translation component to handle large\n amounts of text\n\n\n[\n#1269\n] AzureTranslation no longer tries to translate text that is\n already in the \nTO_LANGUAGE\n\n\n\n\n6.0.3: February 2021\n\n\n\nOpenCV YOLO Detection Component\n\n\n\n\n\nThis new component utilizes the OpenCV Deep Neural Networks (DNN) framework to detect and classify objects in images\n and videos using Darknet YOLOv4 models trained on the COCO dataset. It supports both CPU and GPU modes of operation.\n Tracking is performed using a combination of intersection over union, pixel difference after Fast Fourier transform\n (FFT) phase correlation, Kalman filtering, and OpenCV MOSSE tracking. Refer to\n the \nREADME\n for details.\n\n\n\n\n6.0.2: January 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1249\n] FFmpeg no longer reports different frame counts for the same\n piece of media\n\n\n\n\n6.0.1: December 2020\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1238\n] The JSON output object is now generated when remote media\n cannot be downloaded.\n\n\n\n\n6.0.0: December 2020\n\n\n\nUpgrade to OpenCV 4.5.0\n\n\n\n\n\nUpdated core framework and components from OpenCV 3.4.7 to OpenCV 4.5.0.\n\n\nOpenCV is now built with CUDA support, including cuDNN (CUDA Deep Neural Network library) and cuBLAS (CUDA Basic\n Linear Algebra Subroutines library). All C++ components that use the base C++ builder and executor Docker images have\n CUDA support built in, giving developers the option to make use of it.\n\n\nAdded GPU support to the OcvDnnDetection component.\n\n\n\n\nAzure Cognitive Services (ACS) Translation Component\n\n\n\n\n\nThis new component utilizes\n the \nAzure Cognitive Services Translator REST endpoint\n\n to translate text from one language (locale) to another. Generally, it's intended to operate on feed-forward tracks\n that contain detections with \nTEXT\n and \nTRANSCRIPT\n properties. It can also operate on plain text file inputs. Refer\n to the \nREADME\n for\n details.\n\n\n\n\nInteroperability Package\n\n\n\n\n\nAdded an \nalgorithm\n field to the element that describes a collection of tracks generated by an action in the JSON output\n object. For example:\n\n\n\n\n\"output\": {\n \"FACE\": [{\n \"source\": \"+#MOG MOTION DETECTION PREPROCESSOR ACTION#OCV FACE DETECTION ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [{ ... }],\n ...\n },\n\n\n\nMerge Tasks in JSON Output Object\n\n\n\n\n\nThe output of two tasks in the JSON output object can be merged by setting the \nOUTPUT_MERGE_WITH_PREVIOUS_TASK\n\n property to true. This is a Workflow Manager property and can be set on any task in any pipeline, although it has no\n effect when set on the first task or the Markup task.\n\n\nWhen the output of two tasks is merged, the tracks for the previous task will not be shown in the JSON output object,\n and no artifacts are generated for it. The task will be listed under \nTRACKS MERGED\n, if it's not already listed\n under \nTRACKS SUPPRESSED\n due to the \nmpf.output.objects.last.task.only\n system property setting,\n or \nOUTPUT_LAST_TASK_ONLY\n property. 
The tracks associated with the second task will inherit the detection type and\n algorithm of the previous task.\n\n\nFor example, the \nTESSERACT OCR TEXT DETECTION WITH KEYWORD TAGGING PIPELINE\n is defined as\n the \nTESSERACT OCR TEXT DETECTION TASK\n followed by the \nKEYWORD TAGGING (WITH FF REGION) TASK\n. The second task\n sets \nOUTPUT_MERGE_WITH_PREVIOUS_TASK\n to true. The resulting JSON output object contains one set of keyword-tagged\n OCR tracks that have the \nTEXT\n detection type and \nTESSERACTOCR\n algorithm (both inherited from\n the \nTESSERACT OCR TEXT DETECTION TASK\n):\n\n\n\n\n\"output\": {\n \"TRACKS MERGED\": [{\n \"source\": \"+#TESSERACT OCR TEXT DETECTION ACTION\",\n \"algorithm\": \"TESSERACTOCR\"\n }],\n \"TEXT\": [{\n \"source\": \"+#TESSERACT OCR TEXT DETECTION ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION\",\n \"algorithm\": \"TESSERACTOCR\",\n \"tracks\": [{\n \"type\": \"TEXT\",\n \"trackProperties\": {\n \"TAGS\": \"ANIMAL\",\n \"TEXT\": \"The quick brown fox\",\n \"TEXT_LANGUAGE\": \"script/Latin\",\n \"TRIGGER_WORDS\": \"fox\",\n \"TRIGGER_WORDS_OFFSET\": \"16-18\"\n ...\n\n\n\n\n\nNote that you can use the \nOUTPUT_MERGE_WITH_PREVIOUS_TASK\n setting on multiple tasks. For example, if you set it as a\n job property it will be applied to all tasks (with the exception of Markup - in which case the task before Markup is\n used), so you will only get the output of the last task in the pipeline. The last task will inherit the detection type\n and algorithm of the first task in the pipeline.\n\n\n\n\nTesseract Custom Dictionaries\n\n\n\n\n\nThe Tesseract component Docker image now contains an \n/opt/mpf/tessdata_model_updater\n binary that you can use to\n update \n*.traineddata\n models with a custom dictionary, as well as extract files from existing models. Refer to\n the \nDICTIONARIES\n\n guide to learn how to use the tool.\n\n\nIn general, legacy \n*.traineddata\n models are more influenced by words in their dictionary than more modern\n LSTM \n*.traineddata\n models. Also, refer to the known issue below.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1243\n] Unpacking a \n*.traineddata\n model, for example, in order to\n modify its dictionary, and then repacking it may result in dropping some of the words present in the original\n dictionary file. This may be due to some kind of compression or filtering. It's unknown what effect this has on OCR\n results.\n\n\n\n\nOpenMPF 5.1.x\n\n\n5.1.3: December 2020\n\n\n\nSetting Properties as Docker Environment Variables\n\n\n\n\n\nAny property that can be set as a job property can now be set as a Docker environment variable by prefixing it\n with \nMPF_PROP_\n. For example, setting the \nMPF_PROP_TRTIS_SERVER\n environment variable in the \ntrtis-detection\n\n service in your \ndocker-compose.yml\n file will have the same effect as setting the \nTRTIS_SERVER\n job property.\n\n\nProperties set in this way will take precedence over all other property types (job, algorithm, media, etc). It is not\n possible to change the value of properties set via environment variables at runtime and therefore they should only be\n used to specify properties that will not change throughout the entire lifetime of the service.\n\n\n\n\nUpdates\n\n\n\n\n\nThe \nmpf.output.objects.censored.properties\n system property can be used to prevent properties from being shown in\n JSON output objects. 
The value for these properties will appear as \n\n.\n\n\nThe Azure Speech Detection component now retries without diarization when diarization is not supported by the selected\n locale.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1230\n] The Azure Speech Detection component now uses a UUID for the\n recording id associated with a piece of media in order to prevent deleting a piece of media while it's in use.\n\n\n\n\n5.1.1: December 2020\n\n\n\nUpdates\n\n\n\n\n\nOnly generate \nFRAME_COUNT\n warning when the frame difference is > 1. This can be configured using\n the \nwarn.frame.count.diff\n system property.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1209\n] The Keyword Tagging component now generates video tracks in\n the JSON output object.\n\n\n[\n#1212\n] The Keyword Tagging component now preserves the detection\n bounding box and confidence.\n\n\n\n\n5.1.0: November 2020\n\n\n\nMedia Inspection Improvements\n\n\n\n\n\nThe Workflow Manager will now handle video files that don't have a video stream as an \nAUDIO\n type, and handle video\n files that don't have a video or audio stream as an \nUNKNOWN\n type. The JSON output object contains a\n new \nmedia.mediaType\n field that will be set to \nVIDEO\n, \nAUDIO\n, \nIMAGE\n, or \nUNKNOWN\n.\n\n\nThe Workflow Manager now configures Tika\n with \ncustom MIME type support\n\n . Currently, this enables the detection of \nvideo/vnd.dlna.mpeg-tts\n and \nimage/jxr\n MIME types.\n\n\nIf the Workflow Manager cannot use Tika to determine the media MIME type then it will fall back to using the\n Linux \nfile\n command with\n a \ncustom magicfile\n\n .\n\n\nOpenMPF now supports Apple-optimized PNGs and HEIC images. Refer to the Bug Fixes section below.\n\n\n\n\nEAST Text Region Detection Component Improvements\n\n\n\n\n\nThe \nTEMPORARY_PADDING\n property has been separated into \nTEMPORARY_PADDING_X\n and \nTEMPORARY_PADDING_Y\n so that X and\n Y padding can be configured independently.\n\n\nThe \nMERGE_MIN_OVERLAP\n property has been renamed to \nMERGE_OVERLAP_THRESHOLD\n so that setting it to a value of 0 will\n merge all regions that touch, regardless of how small the amount of overlap.\n\n\nRefer to\n the \nREADME\n\n for details.\n\n\n\n\nMPFVideoCapture and MPFImageReader Tool Improvements\n\n\n\n\n\nThese tools now support a \nROTATION_FILL_COLOR\n property for setting the fill color for pixels near the corners and\n edges of frames when performing non-orthogonal rotations. Previously, the color was hardcoded to \nBLACK\n. That is\n still the default setting for most components. Now the color can be set to \nWHITE\n, which is the default setting for\n the Tesseract component.\n\n\nThese tools now support a \nROTATION_THRESHOLD\n property for adjusting the threshold at which the frame transformer\n performs rotation. Previously, the value was hardcoded to 0.1 degrees. That is still the default value. Rotation is\n not performed on any \nROTATION\n value less than that threshold. The motivation is that some algorithms detect small\n rotations (for example, on structured text) when there is no rotation. In such cases rotating the frame results in\n fewer detections.\n\n\nOpenMPF now uses FFmpeg when counting video frames. Refer to the Bug Fixes section below.\n\n\n\n\nAzure Cognitive Services (ACS) Form Detection Component\n\n\n\n\n\nThis new component utilizes\n the \nAzure Cognitive Services Form Detection REST endpoint\n\n to extract formatted text from documents (PDFs) and images. 
Refer to\n the \nREADME\n for\n details.\n\n\nThis component is capable of performing detections using a specified ACS endpoint URL. For example, different\n endpoints support receipt detection, business card detection, layout analysis, and custom models trained\n with or without labeled data.\n\n\nThis component may output the following detection properties depending on the endpoint, model, and media being\n processed: \nTEXT\n, \nTABLE_CSV_OUTPUT\n, \nKEY_VALUE_PAIRS_JSON\n, and \nDOCUMENT_JSON_FIELDS\n.\n\n\n\n\nKeyword Tagging Component\n\n\n\n\n\nThis new component performs the same keyword tagging behavior that was previously part of the Tesseract component, but\n does so on feed-forward tracks that generate detections with \nTEXT\n and \nTRANSCRIPT\n properties. Refer to\n the \nREADME\n for details.\n\n\nIn addition to the Tesseract component, keyword tagging behavior has been removed from the Tika Text component and ACS\n OCR component.\n\n\nExample pipelines have been added to the following components which make use of a final Keyword Tagging component\n stage:\n\n\nTesseract\n\n\nTika Text\n\n\nACS OCR\n\n\nSphinx\n\n\nACS Speech\n\n\n\n\n\n\n\n\nOptionally Skip Media Inspection\n\n\n\n\n\nThe Workflow Manager will skip media inspection if all of the required media metadata is provided in the job request.\n The \nMEDIA_HASH\n and \nMIME_TYPE\n fields are always required. Depending on the media data type, other fields may be\n required or optional:\n\n\nImages\n\n\nRequired: \nFRAME_WIDTH\n, \nFRAME_HEIGHT\n\n\nOptional: \nHORIZONTAL_FLIP\n, \nROTATION\n\n\n\n\n\n\nVideos\n\n\nRequired: \nFRAME_WIDTH\n, \nFRAME_HEIGHT\n, \nFRAME_COUNT\n, \nFPS\n, \nDURATION\n\n\nOptional: \nHORIZONTAL_FLIP\n, \nROTATION\n\n\n\n\n\n\nAudio files\n\n\nRequired: \nDURATION\n\n\n\n\n\n\n\n\n\n\n\n\nUpdates\n\n\n\n\n\nUpdate OpenMPF Python SDK exception handling for Python 3. Now instead of raising an \nEnvironmentError\n, which has\n been deprecated in Python 3, the SDK will raise an \nmpf.DetectionError\n or allow the underlying exception to be\n thrown.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1028\n] OpenMPF can now properly handle Apple-optimized PNGs, which\n have a non-standard data chunk named CgBI before the IHDR chunk. The Workflow Manager\n uses \npngdefry\n to convert the image into a standard PNG for processing. Before\n this fix, Tika would throw an error when trying to determine the MIME type of the Apple-optimized PNG.\n\n\n[\n#1130\n] OpenMPF can now properly handle HEIC images. The Workflow\n Manager uses \nlibheif\n to convert the image into a standard PNG for processing.\n Before this fix, the HEIC image was sometimes falsely identified as a video and the Workflow Manager would fail to\n count the number of frames.\n\n\n[\n#1171\n] The MIME type in the JSON output object is no longer null\n when there is a frame counting exception.\n\n\n[\n#1192\n] When processing videos, the frame count is now obtained from\n both OpenCV and FFmpeg. The lower of the two is used. If they don't match, a \nFRAME_COUNT\n warning is generated.\n Before this fix, on some videos OpenCV would return frame counts that were orders of magnitude higher than the frames that\n could actually be read. This resulted in failing to process many video segments with a \nBAD_FRAME_SIZE\n error.
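The cross-check itself is straightforward. Below is an illustrative Python sketch, not OpenMPF's media inspection code: it asks OpenCV for its (sometimes inflated) frame count and has `ffprobe` decode the stream to count readable frames, then keeps the lower value. The exact `ffprobe` flags shown are an assumption and may vary between FFmpeg versions.

```python
import subprocess

import cv2  # opencv-python


def count_frames(video_path):
    """Return (frame_count, counts_match) using the lower of the two counts."""
    opencv_count = int(cv2.VideoCapture(video_path).get(cv2.CAP_PROP_FRAME_COUNT))
    # -count_frames forces ffprobe to decode the stream: slow, but it reports
    # the number of frames that can actually be read.
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0", "-count_frames",
         "-show_entries", "stream=nb_read_frames", "-of", "csv=p=0", video_path],
        capture_output=True, text=True, check=True)
    ffmpeg_count = int(result.stdout.strip())
    return min(opencv_count, ffmpeg_count), opencv_count == ffmpeg_count
```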
OpenMPF 5.0.x\n\n\n5.0.9: October 2020\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1200\n] The MPFVideoCapture and MPFImageReader tools now properly\n handle cropping to frame regions when the region coordinates fall outside of the frame boundary. There was a bug that\n would result in an OpenCV error. Note that the bug only occurred when cropping was not performed with rotation or\n flipping.\n\n\n\n\n5.0.8: October 2020\n\n\n\nUpdates\n\n\n\n\n\nThe Tesseract component now supports a \nTESSDATA_MODELS_SUBDIRECTORY\n property. The component will look for tessdata\n files in \n/\n. This allows users to easily switch between \ntessdata\n\n , \ntessdata_best\n, and \ntessdata_fast\n subdirectories.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1199\n] Added missing synchronized to InProgressBatchJobsService,\n which was resulting in some jobs staying \nIN_PROGRESS\n indefinitely.\n\n\n\n\n5.0.7: September 2020\n\n\n\nTensorRT Inference Server (TRTIS) Object Detection Component\n\n\n\n\n\nThis new component detects objects in images and videos by making use of\n an \nNVIDIA TensorRT Inference Server\n (TRTIS), and calculates features that can later be used by other systems to recognize the same object in other media.\n We provide support for running the server as a separate service during a Docker deployment, but an external server\n instance can be used instead.\n\n\nBy default, the ip_irv2_coco model is supported and will optionally classify detected objects\n using \nCOCO labels\n\n . Additionally, features can be generated for whole frames, automatically-detected object regions, and user-specified\n regions. Refer to the \nREADME\n\n .\n\n\n\n\n5.0.6: August 2020\n\n\n\nEnable OcvDnnDetection to Annotate Feed-forward Detections\n\n\n\n\n\nThe OcvDnnDetection component can now be configured to operate only on certain feed-forward detections and annotate\n them with supplementary information. For example, the following pipeline can be configured to generate detections that\n have both \nCLASSIFICATION\n and \nCOLOR\n detection properties:\n\n\n\n\nDarknetDetection (person + vehicle) --> OcvDnnDetection (vehicle color)\n\n\n\n\n\nFor example:\n\n\n\n\n \"detectionProperties\": {\n \"CLASSIFICATION\": \"car\",\n \"CLASSIFICATION CONFIDENCE LIST\": \"0.397336\",\n \"CLASSIFICATION LIST\": \"car\",\n \"COLOR\": \"blue\",\n \"COLOR CONFIDENCE LIST\": \"0.93507; 0.055744\",\n \"COLOR LIST\": \"blue; gray\"\n }\n\n\n\n\n\nThe OcvDnnDetection component now supports the following properties:\n\n\nCLASSIFICATION_TYPE\n: Set this value to change the \nCLASSIFICATION*\n part of each output property name to\n something else. For example, setting it to \nCOLOR\n will generate \nCOLOR\n, \nCOLOR LIST\n,\n and \nCOLOR CONFIDENCE LIST\n. When handling feed-forward detections, the pre-existing \nCLASSIFICATION*\n properties\n will be carried over and the \nCOLOR*\n properties will be added to the detection.\n\n\nFEED_FORWARD_WHITELIST_FILE\n: When \nFEED_FORWARD_TYPE\n is provided and not set to \nNONE\n, only feed-forward\n detections with class names contained in the specified file will be processed. For example, a file with only\n \"car\" in it will result in performing the exclude behavior (below) for all feed-forward detections that do not have\n a \nCLASSIFICATION\n of \"car\".\n\n\nFEED_FORWARD_EXCLUDE_BEHAVIOR\n: Specifies what to do when excluding detections not specified in\n the \nFEED_FORWARD_WHITELIST_FILE\n. Acceptable values are:\n\n\nPASS_THROUGH\n: Return the excluded detections, without modification, along with any annotated detections.\n\n\nDROP\n: Don't return the excluded detections. Only return annotated detections.
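The interaction between the whitelist and the exclude behavior can be summarized with a short sketch. This is illustrative Python pseudologic, not the component's C++ implementation; detections are assumed to be dicts with a `CLASSIFICATION` property, and `annotate` stands in for running the second model on a feed-forward detection.

```python
def process_feed_forward(detections, whitelist, exclude_behavior="PASS_THROUGH",
                         annotate=lambda detection: detection):
    """Annotate whitelisted detections; handle the rest per exclude_behavior."""
    results = []
    for detection in detections:
        if detection.get("CLASSIFICATION") in whitelist:
            results.append(annotate(detection))  # e.g., add COLOR* properties
        elif exclude_behavior == "PASS_THROUGH":
            results.append(detection)  # returned unmodified
        # With DROP, excluded detections are simply not returned.
    return results


# A FEED_FORWARD_WHITELIST_FILE containing only "car" behaves like:
results = process_feed_forward(
    [{"CLASSIFICATION": "car"}, {"CLASSIFICATION": "person"}],
    whitelist={"car"}, exclude_behavior="DROP")
```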
Updates\n\n\n\n\n\nMake interop package work with Java 8 to better support external job producers and consumers.\n\n\n\n\n5.0.5: August 2020\n\n\n\nUpdates\n\n\n\n\n\nConfigure Camel not to auto-acknowledge messages. Users can now see the number of pending messages in the ActiveMQ\n management console for queues consumed by the Workflow Manager.\n\n\nImprove Tesseract OSD fallback behavior. This prevents selecting the OSD rotation from the fallback pass without also\n selecting the OSD script from the fallback pass.\n\n\n\n\n5.0.4: August 2020\n\n\n\nUpdates\n\n\n\n\n\nRetry job callbacks when they fail. The Workflow Manager now supports the \nhttp.callback.timeout.ms\n\n and \nhttp.callback.retries\n system properties.\n\n\nDrop \"duplicate paged in from cursor\" DLQ messages.\n\n\n\n\n5.0.3: July 2020\n\n\n\nUpdates\n\n\n\n\n\nUpdate ActiveMQ to 5.16.0.\n\n\n\n\n5.0.2: July 2020\n\n\n\nUpdates\n\n\n\n\n\nDisable video segmentation for ACS Speech Detection to prevent issues when generating speaker ids.\n\n\n\n\n5.0.1: July 2020\n\n\n\nUpdates\n\n\n\n\n\nUpdated Tesseract component with \nMAX_PIXELS\n setting to prevent processing large images.\n\n\n\n\n5.0.0: June 2020\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the openmpf-docker repo \nREADME\n\n and \nSWARM\n guides to describe the new build process,\n which now includes automatically copying the openmpf repo source code into the openmpf-build image instead of using\n various bind mounts, and building all of the component base builder and executor images.\n\n\nUpdated the openmpf-docker repo \nREADME\n with the\n following sections:\n\n\nHow\n to \nUse Kibana for Log Viewing and Aggregation\n\n\nHow\n to \nRestrict Media Types that a Component Can Process\n\n\nHow\n to \nImport Root Certificates for Additional Certificate Authorities\n\n\n\n\n\n\nUpdated the \nCONTRIBUTING\n guide for Docker\n deployment with information on the new build process and component base builder and executor images.\n\n\nUpdated the \nInstall Guide\n with a pointer to the \"Quick Start\" section on DockerHub.\n\n\nUpdated the \nREST API\n with the new endpoints for getting, deleting, and creating actions, tasks, and\n pipelines, as well as a change to the \n[GET] /rest/info\n endpoint.\n\n\nUpdated the \nC++ Batch Component API\n to describe changes to the \nGetDetection()\n calls,\n which now return a collection of detections or tracks instead of an error code, and to describe improvements to\n exception handling.\n\n\nUpdated the \nC++ Batch Component API\n\n , \nPython Batch Component API\n,\n and \nJava Batch Component API\n with \nMIME_TYPE\n, \nFRAME_WIDTH\n, and \nFRAME_HEIGHT\n media\n properties.\n\n\nUpdated the \nPython Batch Component API\n with information on Python 3 and the\n simplification of using a \ndict\n for some of the data members.\n\n\n\n\nJSON Output Object\n\n\n\n\n\nRenamed \nstages\n to \ntasks\n for clarity and consistency with the rest of the code.\n\n\nThe \nmedia\n element no longer contains a \nmessage\n field.\n\n\nEach \ndetectionProcessingError\n element now contains a \ncode\n field.\n\n\nErrors and warnings are now grouped by \nmediaId\n and summarized using a \ndetails\n element that contains a \nsource\n\n , \ncode\n, and \nmessage\n field. 
Refer\n to \nthis comment\n for an example of the JSON\n structure. Note that errors and warnings generated by the Workflow Manager do not have a \nmediaId\n.\n\n\nWhen an error or warning occurs in multiple frames of a video for a single piece of media it will be represented\n in one \ndetails\n element and the \nmessage\n will list the frame ranges.\n\n\n\n\n\n\n\n\nInteroperability Package\n\n\n\n\n\nRenamed \nJsonStage.java\n to \nJsonTask.java\n.\n\n\nRemoved \nJsonJobRequest.java\n.\n\n\nModified \nJsonDetectionProcessingError.java\n by removing the \nstartOffset\n and \nstopOffset\n fields and adding the\n following new fields: \nstartOffsetFrame\n, \nstopOffsetFrame\n, \nstartOffsetTime\n, \nstopOffsetTime\n, and \ncode\n.\n\n\nUpdated \nJsonMediaOutputObject.java\n by removing the \nmessage\n field.\n\n\nAdded \nJsonMediaIssue.java\n and \nJsonIssueDetails.java\n.\n\n\n\n\nPersistent Database\n\n\n\n\n\nThe \ninput_object\n column in the \njob_request\n table has been renamed to \njob\n and the content now contains a\n serialized form of \nBatchJob.java\n instead of \nJsonJobRequest.java\n.\n\n\n\n\nC++ Batch Component API\n\n\n\n\n\nThe \nGetDetection()\n calls now return a collection instead of an error code:\n\n\nstd::vector<MPFImageLocation> GetDetections(const MPFImageJob &job)\n\n\nstd::vector<MPFVideoTrack> GetDetections(const MPFVideoJob &job)\n\n\nstd::vector<MPFAudioTrack> GetDetections(const MPFAudioJob &job)\n\n\nstd::vector<MPFGenericTrack> GetDetections(const MPFGenericJob &job)\n\n\n\n\n\n\nMPFDetectionException\n can now be constructed with a \nwhat\n parameter representing a descriptive error message:\n\n\nMPFDetectionException(MPFDetectionError error_code, const std::string &what = \"\")\n\n\nMPFDetectionException(const std::string &what)\n\n\n\n\n\n\n\n\nPython Batch Component API\n\n\n\n\n\nSimplified the \ndetection_properties\n and \nframe_locations\n data members to use a Python \ndict\n instead of a custom\n data type.\n\n\n\n\nFull Docker Conversion\n\n\n\n\n\nEach component is now encapsulated in its own Docker image which self-registers with the Workflow Manager at runtime.\n This deconflicts component dependencies, and allows for greater flexibility when deciding which components to deploy\n at runtime.\n\n\nThe Node Manager image has been removed. For Docker deployments, component services should be managed using Docker\n tools external to OpenMPF.\n\n\nIn Docker deployments, streaming job REST endpoints are disabled, the Nodes web page is no longer available, component\n tar.gz packages cannot be registered through the Component Registration web page, and the \nmpf\n command line script\n can now only be run on the Workflow Manager container to modify user settings. The preexisting features are now\n reserved for non-Docker deployments and development environments.\n\n\nThe OpenMPF Docker stack can optionally be deployed with \nKibana\n (which depends on\n Elasticsearch and Filebeat) for viewing log files. Refer to the\n openmpf-docker \nREADME\n\n .\n\n\n\n\nDocker Component Base Images\n\n\n\n\n\nA base builder image and executor image are provided for\n C++ (\nREADME\n),\n Python (\nREADME\n), and\n Java (\nREADME\n) component\n development. Component developers can also refer to the Dockerfile in the source code for each component as a reference\n for how to make use of the base images.\n\n\n\n\nRestrict Media Types that a Component Can Process\n\n\n\n\n\nEach component service now supports an optional \nRESTRICT_MEDIA_TYPES\n Docker environment variable that specifies the\n types of media that service will process. 
For example, \nRESTRICT_MEDIA_TYPES: VIDEO,IMAGE\n will process both videos\n and images, while \nRESTRICT_MEDIA_TYPES: IMAGE\n will only process images. If not specified, the service will process\n all of the media types it natively supports. For example, this feature can be used to ensure that some services are\n always available to process images while others are processing long videos.\n\n\n\n\nImport Additional Root Certificates into the Workflow Manager\n\n\n\n\n\nAdditional root certificates can be imported into the Workflow Manager at runtime by adding an entry\n for \nMPF_CA_CERTS\n to the workflow-manager service's environment variables in \ndocker-compose.core.yml\n\n . \nMPF_CA_CERTS\n must contain a colon-delimited list of absolute file paths. Of note, a root certificate may be used\n to trust the identity of a remote object storage server.\n\n\n\n\nDockerHub\n\n\n\n\n\nPushed prebuilt OpenMPF Docker images to \nDockerHub\n. Refer to the \"Quick Start\"\n section of the OpenMPF Workflow Manager\n image \ndocumentation\n.\n\n\n\n\nVersion Updates\n\n\n\n\n\nUpdated from Oracle Java 8 to OpenJDK 11, which required updating to Tomcat 8.5.41. We now\n use \nCargo\n to run integration tests.\n\n\nUpdated OpenCV from 3.0.0 to 3.4.7 to update Deep Neural Networks (DNN) support.\n\n\nUpdated Python from 2.7 to 3.8.2.\n\n\n\n\nFFmpeg\n\n\n\n\n\nWe are no longer building separate audio and video encoders and decoders for FFmpeg. Instead, we are using the\n built-in decoders that come with FFmpeg by default. This simplifies the build process and redistribution via Docker\n images.\n\n\n\n\nArtifact Extraction\n\n\n\n\n\nThe \nARTIFACT_EXTRACTION_POLICY\n property can now be assigned a value of \nNONE\n, \nVISUAL_TYPES_ONLY\n, \nALL_TYPES\n,\n or \nALL_DETECTIONS\n.\n\n\nWith the \nVISUAL_TYPES_ONLY\n or \nALL_TYPES\n policy, artifacts will be extracted according to\n the \nARTIFACT_EXTRACTION_POLICY*\n properties. With the \nNONE\n and \nALL_DETECTIONS\n policies, those settings are\n ignored.\n\n\nNote that previously \nNONE\n, \nVISUAL_EXEMPLARS_ONLY\n, \nEXEMPLARS_ONLY\n, \nALL_VISUAL_DETECTIONS\n,\n and \nALL_DETECTIONS\n were supported.\n\n\n\n\n\n\nThe following \nARTIFACT_EXTRACTION_POLICY*\n properties are now supported:\n\n\nARTIFACT_EXTRACTION_POLICY_EXEMPLAR_FRAME_PLUS\n: Extract the exemplar frame from the track, plus this many frames\n before and after the exemplar.\n\n\nARTIFACT_EXTRACTION_POLICY_FIRST_FRAME\n: If true, extract the first frame from the track.\n\n\nARTIFACT_EXTRACTION_POLICY_MIDDLE_FRAME\n: If true, extract the frame with a detection that is closest to the\n middle frame from the track.\n\n\nARTIFACT_EXTRACTION_POLICY_LAST_FRAME\n: If true, extract the last frame from the track.\n\n\nARTIFACT_EXTRACTION_POLICY_TOP_CONFIDENCE_COUNT\n: Sort the detections in a track by confidence and then extract\n this many detections, starting with those which have the highest confidence.\n\n\nARTIFACT_EXTRACTION_POLICY_CROPPING\n: If true, an artifact will be extracted for each detection in each frame\n that is selected according to the other \nARTIFACT_EXTRACTION_POLICY*\n properties. The extracted artifact will be\n cropped to the width and height of the detection bounding box, and the artifact will be rotated according to the\n detection \nROTATION\n property. If false, the artifact extraction behavior is unchanged from the previous release:\n the entire frame will be extracted without any rotation.
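Taken together, these properties describe a frame-selection step that runs once per track. The sketch below is an illustration of how they can combine, not the Workflow Manager's implementation; a track is assumed to be a list of dicts with `frame` and `confidence` keys, and clamping to segment boundaries is omitted.

```python
def select_artifact_frames(track, exemplar_index, policy):
    """Collect frame numbers to extract, per the ARTIFACT_EXTRACTION_POLICY_*
    values in `policy` (keyed here by the property name suffixes)."""
    frames = set()
    exemplar_frame = track[exemplar_index]["frame"]
    plus = policy.get("EXEMPLAR_FRAME_PLUS", 0)
    frames.update(range(exemplar_frame - plus, exemplar_frame + plus + 1))
    if policy.get("FIRST_FRAME"):
        frames.add(track[0]["frame"])
    if policy.get("LAST_FRAME"):
        frames.add(track[-1]["frame"])
    if policy.get("MIDDLE_FRAME"):
        # Pick the detection closest to the middle frame of the track.
        middle = (track[0]["frame"] + track[-1]["frame"]) / 2
        closest = min(track, key=lambda d: abs(d["frame"] - middle))
        frames.add(closest["frame"])
    top_count = policy.get("TOP_CONFIDENCE_COUNT", 0)
    by_confidence = sorted(track, key=lambda d: d["confidence"], reverse=True)
    frames.update(d["frame"] for d in by_confidence[:top_count])
    return sorted(frames)
```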
For clarity, \nOUTPUT_EXEMPLARS_ONLY\n has been renamed to \nOUTPUT_ARTIFACTS_AND_EXEMPLARS_ONLY\n. Extracted artifacts\n will always be reported in the JSON output object.\n\n\nThe \nmpf.output.objects.exemplars.only\n system property has been renamed\n to \nmpf.output.objects.artifacts.and.exemplars.only\n. It works the same as before with the exception that if an\n artifact is extracted for a detection then that detection will always be represented in the JSON output object,\n whether it's an exemplar or not.\n\n\nThe \nmpf.output.objects.last.stage.only\n system property has been renamed to \nmpf.output.objects.last.task.only\n. It\n works the same as before with the exception that when set to true artifact extraction is skipped for all tasks but the\n last task.\n\n\n\n\nREST Endpoints\n\n\n\n\n\nModified \n[GET] /rest/info\n. Now returns output like \n{\"version\": \"4.1.0\", \"dockerEnabled\": true}\n.\n\n\nAdded the following REST endpoints for getting, removing, and creating actions, tasks, and pipelines. Refer to\n the \nREST API\n for more information:\n\n\n[GET] /rest/actions\n, \n[GET] /rest/tasks\n, \n[GET] /rest/pipelines\n\n\n[DELETE] /rest/actions\n, \n[DELETE] /rest/tasks\n, \n[DELETE] /rest/pipelines\n\n\n[POST] /rest/actions\n , \n[POST] /rest/tasks\n, \n[POST] /rest/pipelines\n\n\n\n\n\n\nAll of the endpoints above are new with the exception of \n[GET] /rest/pipelines\n. The endpoint has changed since the\n last version of OpenMPF. Some fields in the response JSON have been removed and renamed. Also, it now returns a\n collection of tasks for each pipeline. Refer to the REST API.\n\n\n[GET] /rest/algorithms\n can be used to get information about algorithms. Note that algorithms are tied to registered\n components, so to remove an algorithm you must unregister the associated component. To add an algorithm, start the\n associated component's Docker container so it self-registers with the Workflow Manager.\n\n\n\n\nIncomplete Actions, Tasks, and Pipelines\n\n\n\n\n\nThe previous version of OpenMPF would generate an error when attempting to register a component that included actions,\n tasks, or pipelines that depend on algorithms, actions, or tasks that are not yet registered with the Workflow\n Manager. This required components to be registered in a specific order. Also, unregistering a component required\n first unregistering the components that depend on it. These dependency checks are no longer enforced.\n\n\nIn general, the Workflow Manager now appropriately handles incomplete actions, tasks, and pipelines by checking if all\n of the elements are defined before executing a job, and then preserving that information in memory until the job is\n complete. This allows components to be registered and removed in an arbitrary order without affecting the state of\n other components, actions, tasks, or pipelines. This also allows actions and tasks to be removed using the new REST\n endpoints and then re-added at a later time while still preserving the elements that depend on them.\n\n\nNote that unregistering a component while a job is running will cause it to stall. Please ensure that no jobs are\n using a component before unregistering it.\n\n\n\n\nPython Arbitrary Rotation\n\n\n\n\n\nThe Python MPFVideoCapture and MPFImageReader tools now support \nROTATION\n values other than 0, 90, 180, and 270\n degrees. Users can now specify a clockwise \nROTATION\n job property in the range [0, 360). Values outside that range\n will be normalized to that range. Floating point values are accepted. This is similar to the existing support\n for \nC++ arbitrary rotation\n.
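The normalization itself amounts to a floating-point modulo, as in this one-line sketch of the assumed behavior:

```python
def normalize_rotation(rotation_degrees):
    """Map any clockwise rotation in degrees into the range [0, 360)."""
    return rotation_degrees % 360.0  # e.g., -90 -> 270.0, and 450.5 -> 90.5
```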
OpenCV Deep Neural Networks (DNN) Detection Component\n\n\n\n\n\nThis new component replaces the old CaffeDetection component. It supports the same GoogLeNet and Yahoo Not Suitable\n For Work (NSFW) models as the old component, but removes support for the Rezafuad vehicle color detection model in\n favor of a custom TensorFlow vehicle color detection model. In our tests, the new model has proven to be more\n generalizable and to provide more accurate results on never-before-seen test data. Refer to\n the \nREADME\n.\n\n\n\n\nAzure Cognitive Services (ACS) Speech Detection Component\n\n\n\n\n\nThis new component utilizes\n the \nAzure Cognitive Services Batch Transcription REST endpoint\n\n to transcribe speech from audio and video files. Refer to\n the \nREADME\n.\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nText tagging has been simplified to only support regular expression searches. Whole keyword searches are a subset of\n regular expression searches, and are therefore still supported. Also, the \ntext-tags.json\n file format has been\n updated to allow for specifying case-sensitive regular expression searches.\n\n\nAdditionally, the \nTRIGGER_WORDS\n and \nTRIGGER_WORDS_OFFSET\n detection properties are now supported, which list the\n OCR'd words that resulted in adding a \nTAG\n to the detection, and the character offset of those words within the\n OCR'd \nTEXT\n, respectively.\n\n\nKey changes to tagging output and \ntext-tags.json\n format are outlined below. Refer to\n the \nREADME\n\n for more information:\n\n\nRegex patterns should now be entered in the format \n{\"pattern\": \"regex_pattern\"}\n. Users can add and toggle\n the \n\"caseSensitive\"\n regex flag for each pattern.\n\n\nFor example: \n{\"pattern\": \"(\\\\b)bus(\\\\b)\", \"caseSensitive\": true}\n enables case-sensitive regex pattern\n matching.\n\n\nBy default, each regex pattern, including those in the legacy format, will be case-insensitive.\n\n\n\n\n\n\nAs part of the text tagging update, the \nTAGS\n outputs are now separated by semicolons \n;\n rather than commas \n,\n\n to be consistent with the delimiters for \nTRIGGER_WORDS\n and \nTRIGGER_WORDS_OFFSET\n output patterns.\n\n\nBecause semicolons can be part of the trigger word itself, those semicolons will be encapsulated in brackets.\n\n\nFor example, \ndetected trigger with a ;\n in the OCR'd \nTEXT\n is reported\n as \nTRIGGER_WORDS=detected trigger with a [;]; some other trigger\n.\n\n\n\n\n\n\nCommas are now used to group each set of \nTRIGGER_WORDS_OFFSET\n with its respective \nTRIGGER_WORDS\n output.\n Both \nTAGS\n and \nTRIGGER_WORDS\n are separated by semicolons only.\n\n\nFor example: \nTRIGGER_WORDS=trigger1; trigger2\n, \nTRIGGER_WORDS_OFFSET=0-5, 6-10; 12-15\n, means\n that \ntrigger1\n occurs twice in the text at the index ranges 0-5 and 6-10, and \ntrigger2\n occurs at index\n range 12-15.\n\n\n\n\n\n\n\n\n\n\nRegex tagging now follows the C++ ECMAScript format (\n see \nexamples here\n) after resolving JSON string conversion\n for regex tags.\n\n\nAs a result, the regex patterns \n\\b\n and \n\\p\n in the text tagging file must now be written as \n\\\\b\n and \n\\\\p\n,\n respectively, to match the format of other regex character patterns (ex. \n\\\\d\n, \n\\\\w\n, \n\\\\s\n, etc.).
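Putting the delimiter rules above together: a consumer of `TRIGGER_WORDS` and `TRIGGER_WORDS_OFFSET` has to honor semicolons between triggers (with `[;]` as the escaped form) and commas grouping the offsets of a single trigger. The following is a minimal parsing sketch under those assumptions, not an official OpenMPF utility; duplicate trigger words are ignored for brevity.

```python
import re


def parse_triggers(trigger_words, trigger_words_offset):
    """Pair each trigger word with its list of index ranges.

    >>> parse_triggers("trigger1; trigger2", "0-5, 6-10; 12-15")
    {'trigger1': ['0-5', '6-10'], 'trigger2': ['12-15']}
    """
    # Split on semicolons, but not on the bracket-escaped form "[;]".
    words = [w.strip().replace("[;]", ";")
             for w in re.split(r"(?<!\[);", trigger_words)]
    # Offsets use ";" between triggers and "," within a single trigger.
    offsets = [[r.strip() for r in group.split(",")]
               for group in trigger_words_offset.split(";")]
    return dict(zip(words, offsets))
```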
The \nMAX_PARALLEL_SCRIPT_THREADS\n and \nMAX_PARALLEL_PAGE_THREADS\n properties are now supported. When processing\n images, the first property is used to determine how many threads to run in parallel. Each thread performs OCR using a\n different language or script model. When processing PDFs, the second property is used to determine how many threads to\n run in parallel. Each thread performs OCR on a different page of the PDF.\n\n\nThe \nENABLE_OSD_FALLBACK\n property is now supported. If enabled, an additional round of OSD is performed when the\n first round fails to generate script predictions that are above the OSD score and confidence thresholds. In the second\n pass, the component will run OSD on multiple copies of the input text image to get an improved prediction score,\n and the \nOSD_FALLBACK_OCCURRED\n detection property will be set to true.\n\n\nIf any OSD-detected models are missing, the new \nMISSING_LANGUAGE_MODELS\n detection property will list the missing\n models.\n\n\n\n\nTika Text Detection Component\n\n\n\n\n\nThe Tika text detection component now supports text tagging in the same way as the Tesseract component. Refer to\n the \nREADME\n.\n\n\n\n\nOther Improvements\n\n\n\n\n\nSimplified component \ndescriptor.json\n files by moving the specification of common properties, such\n as \nCONFIDENCE_THRESHOLD\n, \nFRAME_INTERVAL\n, \nMIN_SEGMENT_LENGTH\n, etc., to a single \nworkflow-properties.json\n file.\n Now when the Workflow Manager is updated to support new features, the component \ndescriptor.json\n file will not need\n to be updated.\n\n\nUpdated the Sphinx component to return \nTRANSCRIPT\n instead of \nTRANSCRIPTION\n, which is grammatically correct.\n\n\nWhitespace is now trimmed from property names when jobs are submitted via the REST API.\n\n\nThe Darknet Docker image now includes the YOLOv3 model weights.\n\n\nThe C++ and Python ModelsIniParser now allows users to specify optional fields.\n\n\nWhen a job completion callback fails, but otherwise the job is successful, the final state of the job will\n be \nCOMPLETE_WITH_WARNINGS\n.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#772\n] Can now create a custom pipeline with long action names using\n the Pipelines 2 UI.\n\n\n[\n#812\n] Now properly setting the start and stop index for elements in\n the \ndetectionProcessingErrors\n collection in the JSON output object. Errors reported for each job segment will now\n appear in the collection.\n\n\n[\n#941\n] Tesseract component no longer segfaults when handling corrupt\n media.\n\n\n[\n#1005\n] Fixed a bug that caused a NullPointerException when\n attempting to get output object JSON via REST before a job completes.\n\n\n[\n#1035\n] The search bar in the Job Status UI can once again be used\n to search for job id.\n\n\n[\n#1104\n] Fixed C++/Python component executor memory leaks.\n\n\n[\n#1108\n] Fixed a bug when handling frames and detections that are\n horizontally flipped. 
This affected both markup and feed-forward behaviors.\n\n\n[\n#1119\n] Fixed Tesseract component memory leaks and uninitialized\n read issues.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1028\n] Media inspection fails to handle Apple-optimized PNGs with\n the CgBI data chunk before the IHDR chunk.\n\n\n[\n#1109\n] We made the search bar in the Job Status UI more efficient\n by shifting it to a database query, but in doing so introduced a bug where the search operates on UTC time instead of\n local system time.\n\n\n[\n#1010\n] \nmpf.output.objects.enabled\n does not behave as expected for\n batch jobs. A user would expect it to control whether the JSON output object is generated, but it's generated\n regardless of that setting.\n\n\n[\n#1032\n] Jobs fail on corrupt QuickTime videos. For these videos, the\n OpenCV-reported frame count is more than twice the actual frame count.\n\n\n[\n#1106\n] When a job ends in ERROR the job status UI does not show an\n End Date.\n\n\n\n\nOpenMPF 4.1.x\n\n\n4.1.14: June 2020\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1120\n] The node-manager Docker image now correctly installs CUDA\n libraries so that GPU-enabled components on that image can run on the GPU.\n\n\n[\n#1064\n] Fixed memory leaks in the Darknet component for various\n network types, and when using GPU resources. This bug covers everything not addressed\n by \n#1062\n.\n\n\n\n\n4.1.13: June 2020\n\n\n\nUpdates\n\n\n\n\n\nUpdated the OpenCV build and media inspection process to properly handle webp images.\n\n\n\n\n4.1.12: May 2020\n\n\n\nUpdates\n\n\n\n\n\nUpdated JDK from \njdk-8u181-linux-x64.rpm\n to \njdk-8u251-linux-x64.rpm\n.\n\n\n\n\n4.1.11: May 2020\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nAdded \nINVALID_MIN_IMAGE_SIZE\n job property to filter out images with extremely low width or height.\n\n\nUpdated image rescaling behavior to account for image dimension limits.\n\n\nFixed handling of \nnullptr\n returns from Tesseract API OCR calls.\n\n\n\n\n4.1.8: May 2020\n\n\n\nAzure Cognitive Services (ACS) OCR Component\n\n\n\n\n\nThis new component utilizes\n the \nACS OCR REST endpoint\n\n to extract text from images and videos. Refer to\n the \nREADME\n.\n\n\n\n\n4.1.6: April 2020\n\n\n\nUpdates\n\n\n\n\n\nNow silently discarding ActiveMQ DLQ \"Suppressing duplicate delivery on connection\" messages in addition to \"duplicate\n from store\" messages.\n\n\n\n\n4.1.5: March 2020\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1062\n] Fixed a memory leak in the Darknet component that occurred\n when running jobs on CPU resources with the Tiny YOLO model.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1064\n] The Darknet component has memory leaks for various network\n types, and potentially when using GPU resources. This bug covers everything not addressed\n by \n#1062\n.\n\n\n\n\n4.1.4: March 2020\n\n\n\nUpdates\n\n\n\n\n\nUpdated from Hibernate 5.0.8 to 5.4.12 to support schema-based multitenancy. This allows multiple instances of OpenMPF\n to use the same PostgreSQL database as long as each instance connects to the database as a separate user, and the\n database is configured appropriately. This also required updating Tomcat from 7.0.72 to 7.0.76.\n\n\n\n\nJSON Output Object\n\n\n\n\n\nUpdated the Workflow Manager to include an \noutputobjecturi\n in GET callbacks, and \noutputObjectUri\n in POST\n callbacks, when jobs complete. 
This URI specifies a file path, or path on the object storage server, depending on\n where the JSON output object is located.\n\n\n\n\nInteroperability Package\n\n\n\n\n\nUpdated \nJsonCallbackBody.java\n to contain an \noutputObjectUri\n field.\n\n\n\n\n4.1.3: February 2020\n\n\n\nFeatures\n\n\n\n\n\nAdded support for \nDETECTION_PADDING_X\n and \nDETECTION_PADDING_Y\n optional job properties. The value can be a\n percentage or whole-number pixel value. When positive, each detection region in each track will be expanded. When\n negative, the region will shrink. If the detection region is shrunk to nothing, the shrunk dimension(s) will be set to\n a value of 1 pixel and the \nSHRUNK_TO_NOTHING\n detection property will be set to true.\n\n\nAdded support for \nDISTANCE_CONFIDENCE_WEIGHT_FACTOR\n and \nSIZE_CONFIDENCE_WEIGHT_FACTOR\n SuBSENSE algorithm\n properties. Increasing the value of the first property will generate detection confidence values that favor being\n closer to the center frame of a track. Increasing the value of the second property will generate detection confidence\n values that favor large detection regions.\n\n\n\n\n4.1.1: January 2020\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1016\n] Fixed a bug that caused a deadlock situation when the media\n inspection process failed quickly when processing many jobs using a pipeline with more than one stage.\n\n\n\n\n4.1.0: July 2019\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nC++ Batch Component API\n to describe the \nROTATION\n\n detection property. See the \nC++ Arbitrary Rotation\n section below.\n\n\nUpdated the \nREST API\n with new component registration REST endpoints. See\n the \nComponent Registration REST Endpoints\n section below.\n\n\nAdded a \nREADME\n for\n the EAST text region detection component. See\n the \nEAST Text Region Detection Component\n section below.\n\n\nUpdated the Tesseract OCR text detection\n component \nREADME\n\n . See the \nTesseract OCR Text Detection Component\n section below.\n\n\nUpdated the openmpf-docker repo \nREADME\n\n and \nSWARM\n guide to describe the new streamlined\n approach to using \ndocker-compose config\n. See the \nDocker Deployment\n section below.\n\n\nFixed the description of \nMIN_SEGMENT_LENGTH\n and associated examples in\n the \nUser Guide\n for\n issue \n#891\n.\n\n\nUpdated the \nJava Batch Component API\n with information on how to use Log4j2.\n Related to resolving issue \n#855\n.\n\n\nUpdated the \nInstall Guide\n to point to the\n Docker \nREADME\n.\n\n\nTransformed the Build Guide into a \nDevelopment Environment Guide\n.\n\n\n\n\n\n\nC++ Arbitrary Rotation\n\n\n\n\n\nThe C++ MPFVideoCapture and MPFImageReader tools now support \nROTATION\n values other than 0, 90, 180, and 270 degrees.\n Users can now specify a clockwise \nROTATION\n job property in the range [0, 360). Values outside that range will be\n normalized to that range. Floating point values are accepted.\n\n\nWhen using those tools to read frame data, they will automatically correct for rotation so that the returned frame is\n horizontally oriented toward the normal 3 o'clock position.\n\n\nWhen \nFEED_FORWARD_TYPE=REGION\n, these tools will look for a \nROTATION\n detection property in the feed-forward\n detections and automatically correct for rotation. 
For example, a detection property of \nROTATION=90\n represents\n that the region is rotated 90 degrees counter clockwise, and therefore must be rotated 90 degrees clockwise to\n correct for it.\n\n\nWhen \nFEED_FORWARD_TYPE=SUPERSET_REGION\n, these tools will properly account for the \nROTATION\n detection property\n associated with each feed-forward detection when calculating the bounding box that encapsulates all of those\n regions.\n\n\nWhen \nFEED_FORWARD_TYPE=FRAME\n, these tools will rotate the frame according to the \nROTATION\n job property. It's\n important to note that for rotations other than 0, 90, 180, and 270 degrees the rotated frame dimensions will be\n larger than the original frame dimensions. This is because the frame needs to be expanded to encapsulate the\n entirety of the original rotated frame region. Black pixels are used to fill the empty space near the edges of the\n original frame.\n\n\n\n\n\n\nThe Markup component now places a colored dot at the upper-left corner of each detection region so that users can\n determine the rotation of the region relative to the entire frame.\n\n\n\n\n\n\nComponent Registration REST Endpoints\n\n\n\n\n\nAdded a \n[POST] /rest/components/registerUnmanaged\n endpoint so that components running as separate Docker containers\n can self-register with the Workflow Manager.\n\n\nSince these components are not managed by the Node Manager, they are considered unmanaged OpenMPF components.\n These components are not displayed in Nodes web UI and are tagged as unmanaged in the Component Registration web\n UI where they can only be removed.\n\n\nNote that components uploaded to the Component Registration web UI as .tar.gz files are considered managed\n components.\n\n\n\n\n\n\nAdded a \n[DELETE] /rest/components/{componentName}\n endpoint that can be used to remove managed and unmanaged\n components.\n\n\n\n\nPython Component Executor Docker Image\n\n\n\n\n\nComponent developers can now use a Python component executor Docker image to write a Python component for OpenMPF that\n can be encapsulated within a Docker container. This isolates the build and execution environment from the rest of\n OpenMPF. For more information, see\n the \nREADME\n.\n\n\nComponents developed with this image are not managed by the Node Manager; rather, they self-register with the Workflow\n Manager and their lifetime is determined by their own Docker container.\n\n\n\n\n\n\nDocker Deployment\n\n\n\n\n\nStreamlined single-host \ndocker-compose up\n deployments and multi-host \ndocker stack deploy\n swarm deployments. Now\n users are instructed to create a single \ndocker-compose.yml\n file for both types of deployments.\n\n\nRemoved the \ndocker-generate-compose-files.sh\n script in favor of allowing users the flexibility of combining\n multiple \ndocker-compose.*.yml\n files together using \ndocker-compose config\n. See\n the \nGenerate docker-compose.yml\n\n section of the README.\n\n\nComponents based on the Python component executor Docker image can now be defined and configured directly\n in \ndocker-compose.yml\n.\n\n\nOpenMPF Docker images now make use of Docker labels.\n\n\n\n\n\n\nEAST Text Region Detection Component\n\n\n\n\n\nThis new component uses the Efficient and Accurate Scene Text (EAST) detection model to detect text regions in images\n and videos. It reports their location, angle of rotation, and text type (\nSTRUCTURED\n or \nUNSTRUCTURED\n), and supports\n a variety of settings to control the behavior of merging text regions into larger regions. 
It does not perform OCR on\n the text or track detections across video frames. Thus, each video track is at most one detection long. For more\n information, see\n the \nREADME\n.\n\n\nOptionally, this component can be built as a Docker image using the Python component executor Docker image, allowing\n it to exist apart from the Node Manager image.\n\n\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nUpdated to support reading tessdata \n*.traineddata\n files at a specified \nMODELS_DIR_PATH\n. This allows users to\n install new \n*.traineddata\n files post deployment.\n\n\nUpdated to optionally perform Tesseract Orientation and Script Detection (OSD). When enabled, the component will\n attempt to use the orientation results of OSD to automatically rotate the image, as well as perform OCR using the\n scripts detected by OSD.\n\n\nUpdated to optionally rotate a feed-forward text region 180 degrees to account for upside-down text.\n\n\nNow supports the following preprocessing properties for both structured and unstructured text:\n\n\nText sharpening\n\n\nText rescaling\n\n\nOtsu image thresholding\n\n\nAdaptive thresholding\n\n\nHistogram equalization\n\n\nAdaptive histogram equalization (also known as Contrast Limited Adaptive Histogram Equalization (CLAHE))\n\n\n\n\n\n\nWill use the \nTEXT_TYPE\n detection property in feed-forward regions provided by the EAST component to determine which\n preprocessing steps to perform.\n\n\nFor more information on these new features, see\n the \nREADME\n.\n\n\nRemoved gibberish and string filters since they only worked on English text.\n\n\n\n\nActiveMQ Profiles\n\n\n\n\n\nThe ActiveMQ Docker image now supports custom profiles. The container selects an \nactivemq.xml\n and \nenv\n file to use\n at runtime based on the value of the \nACTIVE_MQ_PROFILE\n environment variable. Among others, these files contain\n configuration settings for Java heap space and component queue memory limits.\n\n\nThis release only supports a \ndefault\n profile setting, as defined by \nactivemq-default.xml\n and \nenv.default\n;\n however, developers are free to add other \nactivemq-.xml\n and \nenv.\n files to the ActiveMQ Docker\n image to suit their needs.\n\n\n\n\nDisabled ActiveMQ Prefetch\n\n\n\n\n\nDisabled ActiveMQ prefetching on all component queues. 
Previously, a prefetch value of one was resulting in situations\n where one component service could be dispatched two sub-jobs, thereby starving other available component services\n which could process one of those sub-jobs in parallel.\n\n\n\n\nSearch Region Percentages\n\n\n\n\n\nIn addition to using exact pixel values, users can now use percentages for the following properties when specifying\n search regions for C++ and Python components:\n\n\nSEARCH_REGION_TOP_LEFT_X_DETECTION\n\n\nSEARCH_REGION_TOP_LEFT_Y_DETECTION\n\n\nSEARCH_REGION_BOTTOM_RIGHT_X_DETECTION\n\n\nSEARCH_REGION_BOTTOM_RIGHT_Y_DETECTION\n\n\n\n\n\n\nFor example, setting \nSEARCH_REGION_TOP_LEFT_X_DETECTION=50%\n will result in components only processing the right half\n of an image or video.\n\n\nOptionally, users can specify exact pixel values of some of these properties and percentages for others.\n\n\n\n\nOther Improvements\n\n\n\n\n\nIncreased the number of ActiveMQ maxConcurrentConsumers for the \nMPF.COMPLETED_DETECTIONS\n queue from 30 to 60.\n\n\nThe Create Job web UI now only displays the content of the \n$MPF_HOME/share/remote-media\n directory instead of all\n of \n$MPF_HOME/share\n, which prevents the Workflow Manager from indexing generated JSON output files, artifacts, and\n markup. Indexing the latter resulted in Java heap space issues for large scale production systems. This is a\n mitigation for issue \n#897\n.\n\n\nThe Job Status web UI now makes proper use of pagination in SQL/Hibernate through the Workflow Manager to avoid\n retrieving the entire jobs table, which was inefficient.\n\n\nThe Workflow Manager will now silently discard all duplicate messages in the ActiveMQ Dead Letter Queue (DLQ),\n regardless of destination. Previously, only messages destined for component sub-job request queues were discarded.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#891\n] Fixed a bug where the Workflow Manager media segmenter\n generated short segments that were minimally \nMIN_SEGMENT_LENGTH+1\n in size instead of \nMIN_SEGMENT_LENGTH\n.\n\n\n[\n#745\n] In environments where thousands of jobs are processed, users\n have observed that, on occasion, pending sub-job messages in ActiveMQ queues are not processed until a new job is\n created. This seems to have been resolved by disabling ActiveMQ prefetch behavior on component queues.\n\n\n[\n#855\n] A logback circular reference suppressed exception no longer\n throws a StackOverflowError. This was resolved by transitioning the Workflow Manager and Java components from the\n Logback framework to Log4j2.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#897\n] OpenMPF will attempt to index files located\n in \n$MPF_HOME/share\n as soon as the webapp is started by Tomcat. This is so that those files can be listed in a\n directory tree in the Create Job web UI. The main problem is that once a file gets indexed it's never removed from the\n cache, even if the file is manually deleted, resulting in a memory leak.\n\n\n\n\nLate Additions: November 2019\n\n\n\n\n\nUser names, roles, and passwords can now be set by using an optional \nuser.properties\n file. This allows\n administrators to override the default OpenMPF users that come preconfigured, which may be a security risk. 
Refer to\n the \"Configure Users\" section of the\n openmpf-docker \nREADME\n for\n more information.\n\n\n\n\nLate Additions: December 2019\n\n\n\n\n\nTransitioned from using a mySQL persistent database to PostgreSQL to support users that use an external PostgreSQL\n database in the cloud.\n\n\nUpdated the EAST component to support a \nTEMPORARY_PADDING\n and \nFINAL_PADDING\n property. The first property\n determines how much padding is added to detections during the non-maximum suppression or merging step. This padding is\n effectively removed from the final detections. The second property is used to control the final amount of padding on\n the output regions. Refer to\n the \nREADME\n.\n\n\n\n\nOpenMPF 4.0.x\n\n\n4.0.0: February 2019\n\n\n\nDocumentation\n\n\n\n\n\nAdded an \nObject Storage Guide\n with information on how to configure OpenMPF to work\n with a custom NGINX object storage server, and how to run jobs that use an S3 object storage server. Note that the\n system properties for the custom NGINX object storage server have changed since the last release.\n\n\n\n\nUpgrade to Tesseract 4.0\n\n\n\n\n\nBoth the Tesseract OCR Text Detection Component and OpenALPR License Plate Detection Components have been updated to\n use the new version of Tesseract.\n\n\nAdditionally, Leptonica has been upgraded from 1.72 to 1.75.\n\n\n\n\nDocker Deployment\n\n\n\n\n\nThe Docker images now use the yum package manager to install ImageMagick6 from a public RPM repository instead of\n downloading the RPMs directly from imagemagick.org. This resolves an issue with the OpenMPF Docker build where RPMs\n on \nimagemagick.org\n were no longer available.\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nUpdated to allow the user to set a \nTESSERACT_OEM\n property in order to select an OCR engine mode (OEM).\n\n\n\"script/Latin\" can now be specified as the \nTESSERACT_LANGUAGE\n. When selected, Tesseract will select all Latin\n characters, which can be from different Latin languages.\n\n\n\n\nCeph S3 Object Storage\n\n\n\n\n\nAdded support for downloading files from, and uploading files to, an S3 object storage server. The following job\n properties can be provided: \nS3_ACCESS_KEY\n, \nS3_SECRET_KEY\n, \nS3_RESULTS_BUCKET\n, \nS3_UPLOAD_ONLY\n.\n\n\nAt this time, only support for Ceph object storage has been tested. However, the Workflow Manager uses the AWS SDK for\n Java to communicate with the object store, so it is possible that other S3-compatible storage solutions may work as\n well.\n\n\n\n\nISO-8601 Timestamps\n\n\n\n\n\nAll timestamps in the JSON output object, and streaming video callbacks, are now in the ISO-8601 format (e.g. \"\n 2018-12-19T12:12:59.995-05:00\"). This new format includes the time zone, which makes it possible to compare timestamps\n generated between systems in different time zones.\n\n\nThis change does not affect the track and detection start and stop offset times, which are still reported in\n milliseconds since the start of the video.\n\n\n\n\nReduced Redis Usage\n\n\n\n\n\nThe Workflow Manager has been refactored to reduce usage of the Redis in-memory database. In general, Redis is not\n necessary for storing job information and only resulted in introducing potential delays in accessing that data over\n the network stack.\n\n\nNow, only track and detection data is stored in Redis for batch jobs. This reduces the amount of memory the Workflow\n Manager requires of the Java Virtual Machine. 
Compared to the other job information, track and detection data can\n potentially be relatively much larger. In the future, we plan to store frame data in Redis for streaming jobs as well.\n\n\n\n\nCaffe Vehicle Color Estimation\n\n\n\n\n\nThe Caffe\n Component \nmodels.ini\n\n file has been updated with a \"vehicle_color\" section with links for downloading\n the \nReza Fuad Rachmadi's Vehicle Color Recognition Using Convolutional Neural Network\n\n model files.\n\n\nThe following pipelines have been added. These require the above model files to be placed\n in \n$MPF_HOME/share/models/CaffeDetection\n:\n\n\nCAFFE REZAFUAD VEHICLE COLOR DETECTION PIPELINE\n\n\nCAFFE REZAFUAD VEHICLE COLOR DETECTION (WITH FF REGION FROM TINY YOLO VEHICLE DETECTOR) PIPELINE\n\n\nCAFFE REZAFUAD VEHICLE COLOR DETECTION (WITH FF REGION FROM YOLO VEHICLE DETECTOR) PIPELINE\n\n\n\n\n\n\n\n\nTrack Merging and Minimum Track Length\n\n\n\n\n\nThe following system properties now have \"video\" in their names:\n\n\ndetection.video.track.merging.enabled\n\n\ndetection.video.track.min.gap\n\n\ndetection.video.track.min.length\n\n\ndetection.video.track.overlap.threshold\n\n\n\n\n\n\nThe above properties can be overridden by the following job properties, respectively. These have not been renamed\n since the last release:\n\n\nMERGE_TRACKS\n\n\nMIN_GAP_BETWEEN_TRACKS\n\n\nMIN_TRACK_LENGTH\n\n\nMIN_OVERLAP\n\n\n\n\n\n\nThese system and job properties now only apply to video media. This resolves an issue where users had\n set \ndetection.track.min.length=5\n, which resulted in dropping all image media tracks. By design, each image track can\n only contain a single detection.\n\n\n\n\nBug Fixes\n\n\n\n\n\nFixed a bug where the Docker entrypoint scripts appended properties to the end\n of \n$MPF_HOME/share/config/mpf-custom.properties\n every time the Docker deployment was restarted, resulting in entries\n like \ndetection.segment.target.length=5000,5000,5000\n.\n\n\nUpgrading to Tesseract 4 fixes a bug where, when specifying \nTESSERACT_LANGUAGE\n, if one of the languages is Arabic,\n then Arabic must be specified last. Arabic can now be specified first, for example: \nara+eng\n.\n\n\nFixed a bug where the minimum track length property was being applied to image tracks. Now it's only applied to video\n tracks.\n\n\nFixed a bug where ImageMagick6 installation failed while building Docker images.\n\n\n\n\nOpenMPF 3.0.x\n\n\n3.0.0: December 2018\n\n\n\n\n\nNOTE:\n The \nBuild Guide\n and \nInstall Guide\n are outdated. The old process for manually configuring a Build VM, using it to build an OpenMPF package, and installing that package, is deprecated in favor of Docker containers. Please refer to the openmpf-docker \nREADME\n.\n\n\nNOTE:\n Do not attempt to register or unregister a component through the Nodes UI in a Docker deployment. It may appear to succeed, but the changes will not affect the child Node Manager containers, only the Workflow Manager container. Also, do not attempt to use the \nmpf\n command line tools in a Docker deployment.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded a \nREADME\n\n , \nSWARM\n guide,\n and \nCONTRIBUTING\n guide for Docker deployment.\n\n\nUpdated the \nUser Guide\n with information on how track\n properties and track confidence are handled when merging tracks.\n\n\nAdded README files for new components. 
Refer to the component sections below.\n\n\n\n\nDocker Support\n\n\n\n\n\nOpenMPF can now be built and distributed as 5 Docker images: openmpf_workflow_manager, openmpf_node_manager,\n openmpf_active_mq, mysql_database, and redis.\n\n\nThese images can be deployed on a single host using \ndocker-compose up\n.\n\n\nThey can also be deployed across multiple hosts in a Docker swarm cluster using \ndocker stack deploy\n.\n\n\nGPU support is enabled through the NVIDIA Docker runtime.\n\n\nBoth HTTP and HTTPS deployments are supported.\n\n\n\n\n\n\nJSON Output Object\n\n\n\n\n\nAdded a \ntrackProperties\n field at the track level that works in much the same way as the \ndetectionProperties\n field\n at the detection level. Both are maps that contain zero or more key-value pairs. The component APIs have always\n supported the ability to return track-level properties, but they were never represented in the JSON output object,\n until now.\n\n\nSimilarly, added a track \nconfidence\n field. The component APIs always supported setting it, but the value was never\n used in the JSON output object, until now.\n\n\nAdded \njobErrors\n and\njobWarnings\n fields. The \njobErrors\n field will mention that there are items\n in \ndetectionProcessingErrors\n fields.\n\n\nThe \noffset\n, \nstartOffset\n, and \nstopOffset\n fields have been removed in favor of the existing \noffsetFrame\n\n , \nstartOffsetFrame\n, and \nstopOffsetFrame\n fields, respectively. They were redundant and deprecated.\n\n\nAdded a \nmpf.output.objects.exemplars.only\n system property, and \nOUTPUT_EXEMPLARS_ONLY\n job property, that can be set\n to reduce the size of the JSON output object by only recording the track exemplars instead of all of the detections in\n each track.\n\n\nAdded a \nmpf.output.objects.last.stage.only\n system property, and \nOUTPUT_LAST_STAGE_ONLY\n job property, that can be\n set to reduce the size of the JSON output object by only recording the detections for the last non-markup stage of a\n pipeline.\n\n\n\n\nDarknet Component\n\n\n\n\n\nThe Darknet component can now support processing streaming video.\n\n\nIn batch mode, video frames are prefetched, decoded, and stored in a buffer using a separate thread from the one that\n performs the detection. The size of the prefetch buffer can be configured by setting \nFRAME_QUEUE_CAPACITY\n.\n\n\nThe Darknet component can now perform basic tracking and generate video tracks with multiple detections. Both the\n default detection mode and preprocessor detection mode are supported.\n\n\nThe Darknet component has been updated to support the full and tiny YOLOv3 models. The YOLOv2 models are no longer\n supported.\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nThis new component extracts text found in an image and reports it as a single-detection track.\n\n\nPDF documents can also be processed with one track detection per page.\n\n\nUsers may set the language of each track using the \nTESSERACT_LANGUAGE\n property as well as adjust other image\n preprocessing properties for text extraction.\n\n\nRefer to\n the \nREADME\n.\n\n\n\n\nOpenCV Scene Change Detection Component\n\n\n\n\n\nThis new component detects and segments a given video by scenes. 
Each scene change is detected using histogram\n comparison, edge comparison, brightness (fade outs), and overall hue/saturation/value differences between adjacent\n frames.\n\n\nUsers can toggle each type of scene change detection technique as well as threshold properties for each detection\n method.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nTika Text Detection Component\n\n\n\n\n\nThis new component extracts text contained in documents and performs language detection. 71 languages and most\n document formats (.txt, .pptx, .docx, .doc, .pdf, etc.) are supported.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nTika Image Detection Component\n\n\n\n\n\nThis new component extracts images embedded in document formats (.pdf, .ppt, .doc) and stores them on disk in a\n specified directory.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nTrack-Level Properties and Confidence\n\n\n\n\n\nRefer to the addition of track-level properties and confidence in the \nJSON Output Object\n\n section.\n\n\nComponents have been updated to return meaningful track-level properties. Caffe and Darknet include \nCLASSIFICATION\n,\n OALPR includes the exemplar \nTEXT\n, and Sphinx includes the \nTRANSCRIPTION\n.\n\n\nThe Workflow Manager will now populate the track-level confidence. It is the same as the exemplar confidence, which is\n the max of all of the track detections.\n\n\n\n\nCustom NGINX HTTP Object Storage\n\n\n\n\n\nAdded \nhttp.object.storage.*\n system properties for configuring an optional custom NGINX object storage server on\n which to store generated detection artifacts, JSON output objects, and markup files.\n\n\nWhen a file cannot be uploaded to the server, the Workflow Manager will fall back to storing it in \n$MPF_HOME/share\n,\n which is the default behavior when an object storage server is not specified.\n\n\nIf and when a failure occurs, the JSON output object will contain a descriptive message in the \njobWarnings\n field,\n and, if appropriate, the \nmarkupResult.message\n field. If the job completes without other issues, the final status\n will be \nCOMPLETE_WITH_WARNINGS\n.\n\n\nThe NGINX storage server runs custom server-side code which we can make available upon request. In the future, we plan\n to support more common storage server solutions, such as Amazon S3.\n\n\n\n\n\n\nActiveMQ\n\n\n\n\n\nThe \nMPF_OUTPUT\n queue is no longer supported and has been removed. Job producers can specify a callback URL when\n creating a job so that they are alerted when the job is complete. Users observed heap space issues with ActiveMQ after\n running thousands of jobs without consuming messages from the \nMPF_OUTPUT\n queue.\n\n\nThe Workflow Manager will now silently discard duplicate sub-job request messages in the ActiveMQ Dead Letter Queue (\n DLQ). This fixes a bug where the Workflow Manager would prematurely terminate jobs corresponding to the duplicate\n messages. It's assumed that ActiveMQ will only place a duplicate message in the DLQ if the original message, or\n another duplicate, can be delivered.\n\n\n\n\nNode Auto-Configuration\n\n\n\n\n\nAdded the \nnode.auto.config.enabled\n, \nnode.auto.unconfig.enabled\n, and \nnode.auto.config.num.services.per.component\n\n system properties for automatically managing the configuration of services when nodes join and leave the OpenMPF\n cluster.\n\n\nDocker will assign a hostname with a randomly-generated id to containers in a swarm deployment. 
The above properties\n allow the Workflow Manager to automatically discover and configure services on child Node Manager components, which is\n convenient since the hostname of those containers cannot be known in advance, and new containers with new hostnames\n are created when the swarm is restarted.\n\n\n\n\nJob Status Web UI\n\n\n\n\n\nAdded the \nweb.broadcast.job.status.enabled\n and \nweb.job.polling.interval\n system properties that can be used to\n configure if the Workflow Manager automatically broadcasts updates to the Job Status web UI. By default, the\n broadcasts are enabled.\n\n\nIn a production environment that processes hundreds of jobs or more at the same time, this behavior can result in\n overloading the web UI, causing it to slow down and freeze up. To prevent this, set \nweb.broadcast.job.status.enabled\n\n to \nfalse\n. If \nweb.job.polling.interval\n is set to a non-zero value, the web UI will poll for updates at that\n interval (specified in milliseconds).\n\n\nTo disable broadcasts and polling, set \nweb.broadcast.job.status.enabled\n to \nfalse\n and \nweb.job.polling.interval\n to\n a zero or negative value. Users will then need to manually refresh the Job Status web page using their web browser.\n\n\n\n\nOther Improvements\n\n\n\n\n\nNow using variable-length text fields in the mySQL database for string data that may exceed 255 characters.\n\n\nUpdated the MPFImageReader tool to use OpenCV video capture behind the scenes to support reading data from HTTP URLs.\n\n\nPython components can now include pre-built wheel files in the plugin package.\n\n\nWe now use a \nJenkinsfile\n Groovy script for our\n Jenkins build process. This allows us to use revision control for our continuous integration process and share that\n process with the open source community.\n\n\nAdded \nremote.media.download.retries\n and \nremote.media.download.sleep\n system properties that can be used to\n configure how the Workflow Manager will attempt to retry downloading remote media if it encounters a problem.\n\n\nArtifact extraction now uses MPFVideoCapture, which employs various fallback strategies for extracting frames in cases\n where a video is not well-formed or corrupted. For components that use MPFVideoCapture, this enables better\n consistency between the frames they process and the artifacts that are later extracted.\n\n\n\n\nBug Fixes\n\n\n\n\n\nJobs now properly end in \nERROR\n if an invalid media URL is provided or there is a problem accessing remote media.\n\n\nJobs now end in \nCOMPLETE_WITH_ERRORS\n when a detection splitter error occurs due to missing system properties.\n\n\nComponents can now include their own version of the Google Protobuf library. It will not conflict with the version\n used by the rest of OpenMPF.\n\n\nThe Java component executor now sets the proper job id in the job name instead of using the ActiveMQ message request\n id.\n\n\nThe Java component executor now sets the run directory using \nsetRunDirectory()\n.\n\n\nActions can now be properly added using an \"extras\" component. 
An extras component only includes a \ndescriptor.json\n\n file and declares Actions, Tasks, and Pipelines using other component algorithms.\n\n\nRefer to the items listed in the \nActiveMQ\n section.\n\n\nRefer to the addition of track-level properties and confidence in the \nJSON Output Object\n\n section.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#745\n] In environments where thousands of jobs are processed, users\n have observed that, on occasion, pending sub-job messages in ActiveMQ queues are not processed until a new job is\n created. The reason is currently unknown.\n\n\n[\n#544\n] Image artifacts retain some permissions from source files\n available on the local host. This can result in some of the image artifacts having executable permissions.\n\n\n[\n#604\n] The Sphinx component cannot be unregistered\n because \n$MPF_HOME/plugins/SphinxSpeechDetection/lib\n is owned by root on a deployment machine.\n\n\n[\n#623\n] The Nodes UI does not work correctly\n when \n[POST] /rest/nodes/config\n is used at the same time. This is because the UI's state is not automatically updated\n to reflect changes made through the REST endpoint.\n\n\n[\n#783\n] The Tesseract OCR Text Detection Component has\n a \nknown issue\n because it uses Tesseract 3. If a combination\n of languages is specified using \nTESSERACT_LANGUAGE\n, and one of the languages is Arabic, then Arabic must be\n specified last. For example, for English and Arabic, \neng+ara\n will work, but \nara+eng\n will not.\n\n\n[\n#784\n] Sometimes services do not start on OpenMPF nodes, and those\n services cannot be started through the Nodes web UI. This is not a Docker-specific problem, but it has been observed\n in a Docker swarm deployment when auto-configuration is enabled. The workaround is to restart the Docker swarm\n deployment, or remove the entire node in the Nodes UI and add it again.\n\n\n\n\nOpenMPF 2.1.x\n\n\n2.1.0: June 2018\n\n\n\n\n\nNOTE:\n If building this release on a machine used to build a previous version of OpenMPF, then please run \nsudo pip install --upgrade pip\n to update to at least pip 10.0.1. If not, the OpenMPF build script will fail to properly download .whl files for Python modules.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded the \nPython Batch Component API\n.\n\n\nAdded the \nNode Guide\n.\n\n\nAdded the \nGPU Support Guide\n.\n\n\nUpdated the \nInstall Guide\n with an \"(Optional) Install the NVIDIA CUDA Toolkit\" section.\n\n\nRenamed Admin Manual to Admin Guide for consistency.\n\n\n\n\nPython Batch Component API\n\n\n\n\n\nDevelopers can now write batch components in Python using the mpf_component_api module.\n\n\nDependencies can be specified in a setup.py file. OpenMPF will automatically download the .whl files using pip at\n build time.\n\n\nWhen deployed, a virtualenv is created for the Python component so that it runs in a sandbox isolated from the rest of\n the system.\n\n\nOpenMPF ImageReader and VideoCapture tools are provided in the mpf_component_util module.\n\n\nExample Python components are provided for reference.\n\n\n\n\nSpare Nodes\n\n\n\n\n\nSpare nodes can join and leave an OpenMPF cluster while the Workflow Manager is running. You can create a spare node\n by cloning an existing OpenMPF child node. Refer to the \nNode Guide\n.\n\n\nNote that changes made using the Component Registration web page only affect core nodes, not spare nodes. 
Core nodes\n are those configured during the OpenMPF installation process.\n\n\nAdded \nmpf list-nodes\n command to list the core nodes and available spare nodes.\n\n\nOpenMPF now uses the JGroups FILE_PING protocol for peer discovery instead of TCPPING. This means that the list of\n OpenMPF nodes no longer needs to be fully specified when the Workflow Manager starts. Instead, the Workflow Manager,\n and Node Manager process on each node, use the files in \n$MPF_HOME/share/nodes\n to determine which nodes are currently\n available.\n\n\nUpdated JGroups from 3.6.4. to 4.0.11.\n\n\nThe environment variables specified in \n/etc/profile.d/mpf.sh\n have been simplified. Of note, \nALL_MPF_NODES\n has been\n replaced by \nCORE_MPF_NODES\n.\n\n\n\n\nDefault Detection System Properties\n\n\n\n\n\nThe detection properties that specify the default values when creating new jobs can now be updated at runtime without\n restarting the Workflow Manager. Changing these properties will only have an effect on new jobs, not jobs that are\n currently running.\n\n\nThese default detection system properties are separated from the general system properties in the Properties web page.\n The latter still require the Workflow Manager to be restarted for changes to take effect.\n\n\nThe Apache Commons Configuration library is now used to read and write properties files. When defining a property\n value using an environment variable in the Properties web page, or \n$MPF_HOME/config/mpf-custom.properties\n, be sure\n to prepend the variable name with \nenv:\n. For example:\n\n\n\n\ndetection.models.dir.path=${env:MPF_HOME}/models/\n\n\n\n\n\nAlternatively, you can define system properties using other system properties:\n\n\n\n\ndetection.models.dir.path=${mpf.share.path}/models/\n\n\n\nAdaptive Frame Interval\n\n\n\n\n\nThe \nFRAME_RATE_CAP\n property can be used to set a threshold on the maximum number of frames to process within one\n second of the native video time. This property takes precedence over the user-provided / pipeline-provided value\n for \nFRAME_INTERVAL\n. When the \nFRAME_RATE_CAP\n property is specified, an internal frame interval value is calculated\n as follows:\n\n\n\n\ncalcFrameInterval = max(1, floor(mediaNativeFPS / frameRateCapProp));\n\n\n\n\n\nFRAME_RATE_CAP\n may be disabled by setting it <= 0. \nFRAME_INTERVAL\n can be disabled in the same way.\n\n\nIf \nFRAME_RATE_CAP\n is disabled, then \nFRAME_INTERVAL\n will be used instead.\n\n\nIf both \nFRAME_RATE_CAP\n and \nFRAME_INTERVAL\n are disabled, then a value of 1 will be used for \nFRAME_INTERVAL\n.\n\n\n\n\nDarknet Component\n\n\n\n\n\nThis release includes a component that uses the \nDarknet neural network framework\n to\n perform detection and classification of objects using trained models.\n\n\nPipelines for the Tiny YOLO and YOLOv2 models are provided. Due to its large size, the YOLOv2 weights file must be\n downloaded separately and placed in \n$MPF_HOME/share/models/DarknetDetection\n in order to use the YOLOv2 pipelines.\n Refer to \nDarknetDetection/plugin-files/models/models.ini\n for more information.\n\n\nThis component supports a preprocessor mode and default mode of operation. 
If preprocessor mode is enabled, and\n multiple Darknet detections in a frame share the same classification, then those are merged into a single detection\n where the region corresponds to the superset region that encapsulates all of the original detections, and the\n confidence value is the probability that at least one of the original detections is a true positive. If disabled,\n multiple Darknet detections in a frame are not merged together.\n\n\nDetections are not tracked across frames. One track is generated per detection.\n\n\nThis component supports an optional \nCLASS_WHITELIST_FILE\n property. When provided, only detections with class names\n listed in the file will be generated.\n\n\nThis component can be compiled with GPU support if the NVIDIA CUDA Toolkit is installed on the build machine. Refer to\n the \nGPU Support Guide\n. If the toolkit is not found, then the component will compile with CPU\n support only.\n\n\nTo run on a GPU, set the \nCUDA_DEVICE_ID\n job property, or set the detection.cuda.device.id system property, >= 0.\n\n\nWhen \nCUDA_DEVICE_ID\n >= 0, you can set the \nFALLBACK_TO_CPU_WHEN_GPU_PROBLEM\n job property, or the\n detection.use.cpu.when.gpu.problem system property, to \nTRUE\n if you want to run the component logic on the CPU\n instead of the GPU when a GPU problem is detected.\n\n\n\n\nModels Directory\n\n\n\n\n\nThe\n$MPF_HOME/share/models\n directory is now used by the Darknet and Caffe components to store model files and\n associated files, such as classification names files, weights files, etc. This allows users to more easily add model\n files post-deployment. Instead of copying the model files to \n$MPF_HOME/plugins//models\n directory on\n each node in the OpenMPF cluster, they only need to copy them to the shared directory once.\n\n\nTo add new models to the Darknet and Caffe component, add an entry to the\n respective \n/plugin-files/models/models.ini\n file.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nPython components are packaged with their respective dependencies as .whl files. This can be automated by providing a\n setup.py file. An example OpenCV Python component is provided that demonstrates how the component is packaged and\n deployed with the opencv-python module. When deployed, a virtualenv is created for the component with the .whl files\n installed in it.\n\n\nWhen deploying OpenMPF, \nLD_LIBRARY_PATH\n is no longer set system-wide. Refer to Known Issues.\n\n\n\n\nWeb User Interface\n\n\n\n\n\nUpdated the Nodes page to distinguish between core nodes and spare nodes, and to show when a node is online or\n offline.\n\n\nUpdated the Component Registration page to list the core nodes as a reminder that changes will not affect spare nodes.\n\n\nUpdated the Properties page to separate the default detection properties from the general system properties.\n\n\n\n\nBug Fixes\n\n\n\n\n\nCustom Action, task, and pipeline names can now contain \"(\" and \")\" characters again.\n\n\nDetection location elements for audio tracks and generic tracks in a JSON output object will now have a y value of \n0\n\n instead of \n1\n.\n\n\nStreaming health report and summary report timestamps have been corrected to represent hours in the 0-23 range instead\n of 1-24.\n\n\nSingle-frame .gif files are now segmented properly and no longer result in a NullPointerException.\n\n\nLD_LIBRARY_PATH\n is now set at the process level for Tomcat, the Node Manager, and component services, instead of at\n the system level in \n/etc/profile.d/mpf.sh\n. 
Also, deployments no longer create \n/etc/ld.so.conf.d/mpf.conf\n. This\n better isolates OpenMPF from the rest of the system and prevents issues, such as being unable to use SSH, when system\n libraries are not compatible with OpenMPF libraries. The latter situation may occur when running \nyum update\n on the\n system, which can make OpenMPF unusable until a new deployment package with compatible libraries is installed.\n\n\nThe Workflow Manager will no longer generate an \"Error retrieving the SingleJobInfo model\" line in the log if someone\n is viewing the Job Status page when a job submitted through the REST API is in progress.\n\n\n\n\nKnown Issues\n\n\n\n\n\nWhen multiple component services of the same type on the same node log to the same file at the same time, sometimes\n log lines will not be captured in the log file. The logging frameworks (log4j and log4cxx) do not support that usage.\n This problem happens more frequently on systems running many component services at the same time.\n\n\nThe following exception was observed:\n\n\n\n\ncom.google.protobuf.InvalidProtocolBufferException: Message missing required fields: data_uri\n\n\n\n\n\n\nFurther debugging is necessary to determine the reason why that message was missing that field. The situation is not easily reproducible. It may occur when ActiveMQ and / or the system is under heavy load and sends duplicate messages in attempt to ensure message delivery. Some of those messages seem to end up in the dead letter queue (DLQ). For now, we've improved the way we handle messages in the DLQ. If OpenMPF can process a message successfully, the job is marked as \nCOMPLETED_WITH_ERRORS\n, and the message is moved from \nActiveMQ.DLQ\n to \nMPF.DLQ_PROCESSED_MESSAGES\n. If OpenMPF cannot process a message successfully, it is moved from \nActiveMQ.DLQ to MPF.DLQ_INVALID_MESSAGES\n.\n\n\n\n\n\n\nThe \nmpf stop\n command will stop the Workflow Manager, which will in turn send commands to all of the available nodes\n to stop all running component services. If a service is processing a sub-job when the quit command is received, that\n service process will not terminate until that sub-job is completely processed. Thus, the service may put a sub-job\n response on the ActiveMQ response queue after the Workflow Manager has terminated. That will not cause a problem\n because the queues are flushed the next time the Workflow Manager starts; however, there will be a problem if the\n service finishes processing the sub-job after the Workflow Manager is restarted. At that time, the Workflow Manager\n will have no knowledge of the old job and will in turn generate warnings in the log about how the job id is \"not known\n to the system\" and/or \"not found as a batch or a streaming job\". These can be safely ignored. Often, if these messages\n appear in the log, then C++ services were running after stopping the Workflow Manager. To address this, you may wish\n to run \nsudo killall amq_detection_component\n after running \nmpf stop\n.\n\n\n\n\nOpenMPF 2.0.x\n\n\n2.0.0: February 2018\n\n\n\n\n\nNOTE:\n Components built for previous releases of OpenMPF are not compatible with OpenMPF 2.0.0 due to Batch Component API changes to support generic detections, and changes made to the format of the \ndescriptor.json\n file to support stream processing.\n\n\nNOTE:\n This release contains basic support for processing video streams. Currently, the only way to make use of that functionality is through the REST API. 
Streaming jobs and services cannot be created or monitored through the web UI. Only the SuBSENSE component has been updated to support streaming. Only single-stage pipelines are supported at this time.\n\n\n\n\nDocumentation\n\n\n\n\n\nUpdated documents to distinguish the batch component APIs from the streaming component API.\n\n\nAdded the \nC++ Streaming Component API\n.\n\n\nUpdated the \nC++ Batch Component API\n to describe support for generic detections.\n\n\nUpdated the \nREST API\n with endpoints for streaming jobs.\n\n\n\n\nSupport for Generic Detections\n\n\n\n\n\nC++ and Java components can now declare support for the \nUNKNOWN\n data type. The respective batch APIs have been\n updated with a function that will enable a component to process an \nMPFGenericJob\n, which represents a piece of media\n that is not a video, image, or audio file.\n\n\nNote that these API changes make OpenMPF R2.0.0 incompatible with components built for previous releases of OpenMPF.\n Specifically, the new component executor will not be able to load the component logic library.\n\n\n\n\nC++ Batch Component API\n\n\n\n\n\nAdded the following function to support generic detections:\n\n\nMPFDetectionError GetDetections(const MPFGenericJob &job, vector<MPFGenericTrack> &tracks)\n\n\n\n\n\n\n\n\nJava Batch Component API\n\n\n\n\n\nAdded the following method to support generic detections:\n\n\nList<MPFGenericTrack> getDetections(MPFGenericJob job)\n
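As a hedged sketch of how a C++ batch component might implement the new generic-detection function (a hypothetical MyComponent class; it assumes the component SDK headers are included and that MPFGenericTrack carries a confidence value and a detection_properties map):\n\n\n// Illustrative only; not taken from a real component.\nMPFDetectionError MyComponent::GetDetections(const MPFGenericJob &job,\n        std::vector<MPFGenericTrack> &tracks) {\n    MPFGenericTrack track(0.75f);  // hypothetical confidence value\n    track.detection_properties[\"EXAMPLE\"] = \"example value\";  // one track for the whole file\n    tracks.push_back(track);\n    return MPF_DETECTION_SUCCESS;\n}\n\n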
\"FACE\").\n\n\nvoid BeginSegment(const VideoSegmentInfo &segment_info)\n: Indicates the beginning of a new video segment.\n\n\nbool ProcessFrame(const cv::Mat &frame, int frame_number)\n: Processes a single frame for the current video\n segment.\n\n\nvector EndSegment()\n: Indicates the end of the current video segment.\n\n\n\n\n\n\nUpdated the C++ Hello World component to support streaming jobs.\n\n\n\n\nC++ Streaming Component Executor\n\n\n\n\n\nDeveloped the C++ Streaming Component Executor to load a streaming component logic library, read frames from a video\n stream, and exercise the component logic through the C++ Streaming Component API.\n\n\nWhen the C++ Streaming Component Executor cannot read a frame from the stream, it will sleep for at least 1\n millisecond, doubling the amount of sleep time per attempt until it reaches the \nstallTimeout\n value specified when\n the job was created. While stalled, the job status will be \nSTALLED\n. After the timeout is exceeded, the job will\n be \nTERMINATED\n.\n\n\nThe C++ Streaming Component Executor supports \nFRAME_INTERVAL\n, as well as rotation, horizontal flipping, and\n cropping (region of interest) properties. Does not support \nUSE_KEY_FRAMES\n.\n\n\n\n\nInteroperability Package\n\n\n\n\n\nAdded the following Java classes to the interoperability package to simplify third party integration:\n\n\nJsonHealthReportCollection\n: Represents the JSON content of a health report callback. Contains one or\n more \nJsonHealthReport\n objects.\n\n\nJsonSegmentSummaryReport\n: Represents the JSON content of a summary report callback. Content is similar to the\n JSON output object used for batch processing.\n\n\n\n\n\n\n\n\nSuBSENSE Component\n\n\n\n\n\nThe SuBSENSE component now supports both batch processing and stream processing.\n\n\nEach video segment will be processed independently of the rest. In other words, tracks will be generated on a\n segment-by-segment basis and tracks will not carry over between segments.\n\n\nNote that the last frame in the previous segment will be used to determine if there is motion in the first frame of\n the next segment.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nUpdated \ndescriptor.json\n fields to allow components to support batch and/or streaming jobs. Components that use the\n old \ndescriptor.json\n file format cannot be registered through the web UI.\n\n\nBatch component logic and streaming component logic are compiled into separate libraries.\n\n\nThe mySQL \nstreaming_job_request\n table has been updated with the following fields, which are used to populate the\n JSON health reports:\n\n\nstatus_detail\n: (Optional) A user-friendly description of the current job status.\n\n\nactivity_frame_id\n: The frame id associated with the last job activity. Activity is defined as the start of a new\n track for the current segment.\n\n\nactivity_timestamp\n: The timestamp associated with the last job activity.\n\n\n\n\n\n\n\n\nWeb User Interface\n\n\n\n\n\nAdded column names to the table that appears when the user clicks in the Media button associated with a job on the Job\n Status page. Now descriptive comments are provided when table cells are empty.\n\n\n\n\nBug Fixes\n\n\n\n\n\nUpgraded Tika to 1.17 to resolve an issue with improper indentation in a Python file (rotation.py) that resulted in\n generating at least one error message per image processed. 
\n\n\nInteroperability Package\n\n\n\n\n\nAdded the following Java classes to the interoperability package to simplify third party integration:\n\n\nJsonHealthReportCollection\n: Represents the JSON content of a health report callback. Contains one or\n more \nJsonHealthReport\n objects.\n\n\nJsonSegmentSummaryReport\n: Represents the JSON content of a summary report callback. Content is similar to the\n JSON output object used for batch processing.\n\n\n\n\n\n\n\n\nSuBSENSE Component\n\n\n\n\n\nThe SuBSENSE component now supports both batch processing and stream processing.\n\n\nEach video segment will be processed independently of the rest. In other words, tracks will be generated on a\n segment-by-segment basis and tracks will not carry over between segments.\n\n\nNote that the last frame in the previous segment will be used to determine if there is motion in the first frame of\n the next segment.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nUpdated \ndescriptor.json\n fields to allow components to support batch and/or streaming jobs. Components that use the\n old \ndescriptor.json\n file format cannot be registered through the web UI.\n\n\nBatch component logic and streaming component logic are compiled into separate libraries.\n\n\nThe mySQL \nstreaming_job_request\n table has been updated with the following fields, which are used to populate the\n JSON health reports:\n\n\nstatus_detail\n: (Optional) A user-friendly description of the current job status.\n\n\nactivity_frame_id\n: The frame id associated with the last job activity. Activity is defined as the start of a new\n track for the current segment.\n\n\nactivity_timestamp\n: The timestamp associated with the last job activity.\n\n\n\n\n\n\n\n\nWeb User Interface\n\n\n\n\n\nAdded column names to the table that appears when the user clicks on the Media button associated with a job on the Job\n Status page. Now descriptive comments are provided when table cells are empty.\n\n\n\n\nBug Fixes\n\n\n\n\n\nUpgraded Tika to 1.17 to resolve an issue with improper indentation in a Python file (rotation.py) that resulted in\n generating at least one error message per image processed. When processing a large number of images, this would\n generate many error messages, causing the Automatic Bug Reporting Tool daemon (abrtd) process to run at 100% CPU. Once\n in that state, that process would stay there, essentially wasting one CPU core. This caused some of the Jenkins\n virtual machines we used for testing to become unresponsive.\n\n\n\n\nKnown Issues\n\n\n\n\n\n\n\nOpenCV 3.3.0 \ncv::imread()\n does not properly decode some TIFF images that have EXIF orientation metadata. It can\n handle images that are flipped horizontally, but not vertically. It also has issues with rotated images. Since most\n components rely on that function to read image data, those components may silently fail to generate detections for\n those kinds of images.\n\n\n\n\n\n\nUsing single quotes, apostrophes, or double quotes in the name of an algorithm, action, task, or pipeline configured\n on an existing OpenMPF system will result in a failure to perform an OpenMPF upgrade on that system. Specifically, the\n step where pre-existing custom actions, tasks, and pipelines are carried over to the upgraded version of OpenMPF will\n fail. Please do not use those special characters while naming those elements. If this has been done already, then\n those elements should be manually renamed in the XML files prior to an upgrade attempt.\n\n\n\n\n\n\nOpenMPF uses OpenCV, which uses FFmpeg, to connect to video streams. If a proxy and/or firewall prevents the network\n connection from succeeding, then OpenCV, or the underlying FFmpeg library, will segfault. This causes the C++\n Streaming Component Executor process to fail. In turn, the job status will be set to \nERROR\n with a status detail\n message of \"Unexpected error. See logs for details\". In this case, the logs will not contain any useful information.\n You can identify a segfault by the following line in the node-manager log:\n\n\n\n\n\n\n2018-02-15 16:01:21,814 INFO [pool-3-thread-4] o.m.m.nms.streaming.StreamingProcess - Process: Component exited with exit code 139\n\n\n\n\n\nTo determine if FFmpeg can connect to the stream or not, run \nffmpeg -i <stream URI>\n in a terminal window. Here's an example when it's successful:\n\n\n\n\n[mpf@localhost bin]$ ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov\nffmpeg version n3.3.3-1-ge51e07c Copyright (c) 2000-2017 the FFmpeg developers\n built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)\n configuration: --prefix=/apps/install --extra-cflags=-I/apps/install/include --extra-ldflags=-L/apps/install/lib --bindir=/apps/install/bin --enable-gpl --enable-nonfree --enable-libtheora --enable-libfreetype --enable-libmp3lame --enable-libvorbis --enable-libx264 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3 --enable-shared --disable-libsoxr --enable-avresample\n libavutil 55. 58.100 / 55. 58.100\n libavcodec 57. 89.100 / 57. 89.100\n libavformat 57. 71.100 / 57. 71.100\n libavdevice 57. 6.100 / 57. 6.100\n libavfilter 6. 82.100 / 6. 82.100\n libavresample 3. 5. 0 / 3. 5. 0\n libswscale 4. 6.100 / 4. 6.100\n libswresample 2. 7.100 / 2. 7.100\n libpostproc 54. 5.100 / 54. 
5.100\n[rtsp @ 0x1924240] UDP timeout, retrying with TCP\nInput #0, rtsp, from 'rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov':\n Metadata:\n title : BigBuckBunny_115k.mov\n Duration: 00:09:56.48, start: 0.000000, bitrate: N/A\n Stream #0:0: Audio: aac (LC), 12000 Hz, stereo, fltp\n Stream #0:1: Video: h264 (Constrained Baseline), yuv420p(progressive), 240x160, 24 fps, 24 tbr, 90k tbn, 48 tbc\nAt least one output file must be specified\n\n\n\n\n\nHere's an example when it's not successful, so there may be network issues:\n\n\n\n\n[mpf@localhost bin]$ ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov\nffmpeg version n3.3.3-1-ge51e07c Copyright (c) 2000-2017 the FFmpeg developers\n built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)\n configuration: --prefix=/apps/install --extra-cflags=-I/apps/install/include --extra-ldflags=-L/apps/install/lib --bindir=/apps/install/bin --enable-gpl --enable-nonfree --enable-libtheora --enable-libfreetype --enable-libmp3lame --enable-libvorbis --enable-libx264 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3 --enable-shared --disable-libsoxr --enable-avresample\n libavutil 55. 58.100 / 55. 58.100\n libavcodec 57. 89.100 / 57. 89.100\n libavformat 57. 71.100 / 57. 71.100\n libavdevice 57. 6.100 / 57. 6.100\n libavfilter 6. 82.100 / 6. 82.100\n libavresample 3. 5. 0 / 3. 5. 0\n libswscale 4. 6.100 / 4. 6.100\n libswresample 2. 7.100 / 2. 7.100\n libpostproc 54. 5.100 / 54. 5.100\n[tcp @ 0x171c300] Connection to tcp://184.72.239.149:554?timeout=0 failed: Invalid argument\nrtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov: Invalid argument\n\n\n\n\n\nTika 1.17 does not come pre-packaged with support for some embedded image formats in PDF files, possibly to avoid\n patent issues. OpenMPF does not handle embedded images in PDFs, so that's not a problem. Tika will print out the\n following warnings, which can be safely ignored:\n\n\n\n\nJan 22, 2018 11:02:15 AM org.apache.tika.config.InitializableProblemHandler$3 handleInitializableProblem\nWARNING: JBIG2ImageReader not loaded. jbig2 files will be ignored\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\nTIFFImageWriter not loaded. tiff files will not be processed\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\nJ2KImageReader not loaded. JPEG2000 files will not be processed.\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\n\n\n\n\nOpenMPF 1.0.x\n\n\n1.0.0: October 2017\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nBuild Guide\n with instructions for installing the latest JDK,\n latest JRE, FFmpeg 3.3.3, new codecs, and OpenCV 3.3.\n\n\nAdded an \nAcknowledgements\n section that provides information on third party dependencies\n leveraged by the OpenMPF.\n\n\nAdded a \nFeed Forward Guide\n that explains feed forward processing and how to use it.\n\n\nAdded missing requirements checklist content to\n the \nInstall Guide\n.\n\n\nUpdated the README at the top level of each of the primary repositories to help with user navigation and provide\n general information.\n\n\n\n\nUpgrade to FFmpeg 3.3.3 and OpenCV 3.3\n\n\n\n\n\nUpdated core framework from FFmpeg 2.6.3 to FFmpeg 3.3.3.\n\n\nAdded the following FFmpeg codecs: x256, VP9, AAC, Opus, Speex.\n\n\nUpdated core framework and components from OpenCV 3.2 to OpenCV 3.3. 
No longer building with opencv_contrib.\n\n\n\n\nFeed Forward Behavior\n\n\n\n\n\nUpdated the Workflow Manager (WFM) and all video components to optionally perform feed forward processing for batch\n jobs. This allows tracks to be passed forward from one pipeline stage to the next. Components in the next stage will\n only process the frames associated with the detections in those tracks. This differs from the default segmenting\n behavior, which does not preserve detection regions or track information between stages.\n\n\nTo enable this behavior, the optional \nFEED_FORWARD_TYPE\n property must be set to \nFRAME\n, \nSUPERSET_REGION\n,\n or \nREGION\n. If set to \nFRAME\n then the components in the next stage will process the whole frame region associated\n with each detection in the track passed forward. If set to \nSUPERSET_REGION\n then the components in the next stage\n will determine the bounding box that encapsulates all of the detection regions in the track, and only process the\n pixel data within that superset region. If set to \nREGION\n then the components in the next stage will process the\n region associated with each detection in the track passed forward, which may vary in size and position from frame to\n frame.\n\n\nThe optional \nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n property can be set to a number to limit the number of detections\n passed forward in a track. For example, if set to \"5\", then only the top 5 detections in the track will be passed\n forward and processed by the next stage. The top detections are defined as those with the highest confidence values,\n or if the confidence values are the same, those with the lowest frame index.\n\n\nNote that setting the feed forward properties has no effect on the first pipeline stage because there is no prior\n stage that can pass tracks to it.\n\n\n\n\nCaffe Component\n\n\n\n\n\nUpdated the Caffe component to process images in the BGR color space instead of the RGB color space. This addresses a\n bug found in OpenCV. Refer to the Bug Fixes section below.\n\n\nAdded support for processing videos.\n\n\nAdded support for an optional \nACTIVATION_LAYER_LIST\n property. For each network layer specified in the list,\n the \ndetectionProperties\n map in the JSON output object will contain one entry. The value is an encoded string of the\n JSON representation of an OpenCV matrix of the activation values for that layer. The activation values are obtained\n after the Caffe network has processed the frame data.\n\n\nAdded support for an optional \nSPECTRAL_HASH_FILE_LIST\n property. For each JSON file specified in the list,\n the \ndetectionProperties\n map in the JSON output object will contain one entry. The value is a string of 0's and 1's\n representing the spectral hash calculated using the information in the spectral hash JSON file. The spectral hash is\n calculated using activation values after the Caffe network has processed the frame data.\n\n\nAdded a pipeline to showcase the above two features for the GoogLeNet Caffe model.\n\n\nRemoved the \nTRANSPOSE\n property from the Caffe component since it was not necessary.\n\n\nAdded red, green, and blue mean subtraction values to the GoogLeNet pipeline.\n\n\n\n\nUse Key Frames\n\n\n\n\n\nAdded support for an optional \nUSE_KEY_FRAMES\n property to each video component. When true the component will only\n look at key frames (I-frames) from the input video. Can be used in conjunction with \nFRAME_INTERVAL\n. 
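As a minimal sketch of how the two properties compose (a hypothetical helper; in practice the MPFVideoCapture tool described below takes care of this for components):\n\n\n#include <vector>\n\n// Select every frameInterval-th key frame for processing (illustrative only;\n// keyFrameIndices would come from the media's I-frame positions).\nstd::vector<int> SelectKeyFrames(const std::vector<int> &keyFrameIndices, int frameInterval) {\n    if (frameInterval < 1) {\n        frameInterval = 1;  // values <= 0 disable the interval\n    }\n    std::vector<int> selected;\n    for (int i = 0; i < (int) keyFrameIndices.size(); i += frameInterval) {\n        selected.push_back(keyFrameIndices[i]);\n    }\n    return selected;\n}\n\n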
For example,\n when \nUSE_KEY_FRAMES\n is true, and \nFRAME_INTERVAL\n is set to \"2\", then every other key frame will be processed.\n\n\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\n\n\n\nUpdated the MPFVideoCapture and MPFImageReader tools to handle feed forward properties.\n\n\nUpdated the MPFVideoCapture tool to handle \nFRAME_INTERVAL\n and \nUSE_KEY_FRAMES\n properties.\n\n\nUpdated all existing components to leverage these tools as much as possible.\n\n\nWe encourage component developers to use these tools to automatically take care of common frame grabbing and frame\n manipulation behaviors, and not to reinvent the wheel.\n\n\n\n\nDead Letter Queue\n\n\n\n\n\nIf for some reason a sub-job request that should have gone to a component ends up on the ActiveMQ Dead Letter Queue (\n DLQ), then the WFM will now process that failed request so that the job can complete. The ActiveMQ management page\n will now show that \nActiveMQ.DLQ\n has 1 consumer. It will also show unconsumed messages\n in \nMPF.PROCESSED_DLQ_MESSAGES\n. Those are left for auditing purposes. The \"Message Detail\" for these shows the string\n representation of the original job request protobuf message.\n\n\n\n\nUpgrade Path\n\n\n\n\n\nRemoved the Release 0.8 to Release 0.9 upgrade path in the deployment scripts.\n\n\nAdded support for a Release 0.9 to Release 1.0.0 upgrade path, and a Release 0.10.0 to Release 1.0.0 upgrade path.\n\n\n\n\nMarkup\n\n\n\n\n\nBounding boxes are now drawn along the interpolated path between detection regions whenever there are one or more\n frames in a track which do not have detections associated with them.\n\n\nFor each track, the color of the bounding box is now a randomly selected hue in the HSV color space. The colors are\n evenly distributed using the golden ratio.\n\n\n\n\nBug Fixes\n\n\n\n\n\nFixed a \nbug in OpenCV\n where the Caffe example code was processing\n images in the RGB color space instead of the BGR color space. Updated the OpenMPF Caffe component accordingly.\n\n\nFixed a bug in the OpenCV person detection component that caused bounding boxes to be too large for detections near\n the edge of a frame.\n\n\nResubmitting jobs now properly carries over configured job properties.\n\n\nFixed a bug in the build order of the OpenMPF project so that test modules that the WFM depends on are built before\n the WFM itself.\n\n\nThe Markup component draws bounding boxes between detections when a \nFRAME_INTERVAL\n is specified. This is so that the\n bounding box in the marked-up video appears in every frame. Fixed a bug where the bounding boxes drawn on\n non-detection frames appeared to stand still rather than move along the interpolated path between detection regions.\n\n\nFixed a bug on the OALPR license plate detection component where it was not properly handling the \nSEARCH_REGION_*\n\n properties.\n\n\nSupport for the \nMIN_GAP_BETWEEN_SEGMENTS\n property was not implemented properly. When the gap between two segments is\n less than this property value then the segments should be merged; otherwise, the segments should remain separate. In\n some cases, the exact opposite was happening. This bug has been fixed.\n\n\n\n\nKnown Issues\n\n\n\n\n\nBecause of the number of additional ActiveMQ messages involved, enabling feed forward for low resolution video may\n take longer than the non-feed-forward behavior.\n\n\n\n\nOpenMPF 0.x.x\n\n\n0.10.0: July 2017\n\n\n\n\n\nWARNING:\n There is no longer a \nDEFAULT CAFFE ACTION\n, \nDEFAULT CAFFE TASK\n, or \nDEFAULT CAFFE PIPELINE\n. 
There is now a \nCAFFE GOOGLENET DETECTION PIPELINE\n and \nCAFFE YAHOO NSFW DETECTION PIPELINE\n, which each have a respective action and task.\n\n\nNOTE:\n MPFImageReader has been re-enabled in this version of OpenMPF since we upgraded to OpenCV 3.2, which addressed the known issues with \nimread()\n, auto-orientation, and jpeg files in OpenCV 3.1.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded a \nContributor Guide\n that provides guidelines for contributing to the OpenMPF\n codebase.\n\n\nUpdated the \nJava Batch Component API\n with links to the example Java components.\n\n\nUpdated the \nBuild Guide\n with instructions for OpenCV 3.2.\n\n\n\n\nUpgrade to OpenCV 3.2\n\n\n\n\n\nUpdated core framework and components from OpenCV 3.1 to OpenCV 3.2.\n\n\n\n\nSupport for Animated gifs\n\n\n\n\n\nAll gifs are now treated as videos. Each gif will be handled as an MPFVideoJob.\n\n\nUnanimated gifs are treated as 1-frame videos.\n\n\nThe WFM Media Inspector now populates the \nmedia_properties\n map with a \nFRAME_COUNT\n entry (in addition to\n the \nDURATION\n and \nFPS\n entries).\n\n\n\n\nCaffe Component\n\n\n\n\n\nAdded support for the Yahoo Not Suitable for Work (NSFW) Caffe model for explicit material detection.\n\n\nUpdated the Caffe component to support the OpenCV 3.2 Deep Neural Network (DNN) module.\n\n\n\n\nFuture Support for Streaming Video\n\n\n\n\n\nNOTE:\n At this time, OpenMPF does not support streaming video. This section details what's being / has been done so far to prepare for that feature.\n\n\n\n\n\n\nThe codebase is being updated / refactored to support both the current \"batch\" job functionality and new \"streaming\"\n job functionality.\n\n\nbatch job: complete video files are written to disk before they are processed\n\n\nstreaming job: video frames are read from a streaming endpoint (such as RTSP) and processed in near real time\n\n\n\n\n\n\nThe REST API is being updated with endpoints for streaming jobs:\n\n\n[POST] /rest/streaming/jobs\n: Creates and submits a streaming job\n\n\n[POST] /rest/streaming/jobs/{id}/cancel\n: Cancels a streaming job\n\n\n[GET] /rest/streaming/jobs/{id}\n: Gets information about a streaming job\n\n\n\n\n\n\nThe Redis and mySQL databases are being updated to support streaming video jobs.\n\n\nA batch job will never have the same id as a streaming job. The integer ids will always be unique.\n\n\n\n\n\n\n\n\nBug Fixes\n\n\n\n\n\nThe MOG and SuBSENSE component services could segfault and terminate if the \nUSE_MOTION_TRACKING\n property was set to\n \u201c1\u201d and a detection was found close to the edge of the frame. Specifically, this would only happen if the video had a\n width and/or height dimension that was not an exact power of two.\n\n\nThe reason was that the code downsamples each frame by a power of two and rounds the value of the width and\n height up to the nearest integer. Later on when upscaling detection rectangles back to a size that\u2019s relative to\n the original image, the resized rectangle sometimes extended beyond the bounds of the original frame.
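One way to express the guard implied by this fix (a sketch using OpenCV's rectangle intersection, not the component's actual code):\n\n\n#include <opencv2/core.hpp>\n\n// Clamp an upscaled detection rectangle to the original frame bounds so it\n// can never extend past the frame edges (illustrative only).\ncv::Rect ClampToFrame(const cv::Rect &upscaled, const cv::Size &frameSize) {\n    return upscaled & cv::Rect(0, 0, frameSize.width, frameSize.height);\n}\n\n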
\n\n\nKnown Issues\n\n\n\n\n\nIf a job is submitted through the REST API, and a user is logged into the web UI and looking at the job status page,\n the WFM may generate \"Error retrieving the SingleJobInfo model for the job with id\" messages.\n\n\nThis is because the job status is only added to the HTTP session object if the job is submitted through the web\n UI. When the UI queries the job status it inspects this object.\n\n\nThis message does not appear if job status is obtained using the \n[GET] /rest/jobs/{id}\n endpoint.\n\n\n\n\n\n\nThe \n[GET] /rest/jobs/stats\n endpoint aggregates information about all of the jobs ever run on the system. If\n thousands of jobs have been run, this call could take minutes to complete. The code should be improved to execute a\n direct mySQL query.
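A hypothetical query of that kind (the job_request table is mentioned elsewhere in these notes, but the status column and the aggregation shown are assumptions for illustration):\n\n\nSELECT status, COUNT(*) AS job_count\nFROM job_request\nGROUP BY status;\n\n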
\n\n\n0.9.0: April 2017\n\n\n\n\n\nWARNING:\n MPFImageReader has been disabled in this version of OpenMPF. Component developers should use MPFVideoCapture instead. This affects components developed against previous versions of OpenMPF and components developed against this version of OpenMPF. Please refer to the Known Issues section for more information.\n\n\nWARNING:\n The OALPR Text Detection Component has been renamed to OALPR \nLicense Plate\n Text Detection Component. This affects the name of the component package and the name of the actions, tasks, and pipelines. When upgrading from R0.8 to R0.9, if the old OALPR Text Detection Component is installed in R0.8 then you will be prompted to install it again at the end of the upgrade path script. We recommend declining this prompt because the old component will conflict with the new component.\n\n\nWARNING:\n Action, task, and pipeline names that started with \nMOTION DETECTION PREPROCESSOR\n have been renamed \nMOG MOTION DETECTION PREPROCESSOR\n. Similarly, \nWITH MOTION PREPROCESSOR\n has changed to \nWITH MOG MOTION PREPROCESSOR\n.\n\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nREST API\n to reflect job properties, algorithm-specific properties, and\n media-specific properties.\n\n\nStreamlined the \nC++ Batch Component API\n document for clarity and simplicity.\n\n\nCompleted the \nJava Batch Component API\n document.\n\n\nUpdated the \nAdmin Guide\n and \nUser Guide\n to reflect web UI changes.\n\n\nUpdated the \nBuild Guide\n with instructions for GitHub repositories.\n\n\n\n\nWorkflow Manager\n\n\n\n\n\nAdded support for job properties, which will override pre-defined pipeline properties.\n\n\nAdded support for algorithm-specific properties, which will apply to a single stage of the pipeline and will override\n job properties and pre-defined pipeline properties.\n\n\nAdded support for media-specific properties, which will apply to a single piece of media and will override job\n properties, algorithm-specific properties, and pre-defined pipeline properties.\n\n\nComponents can now be automatically registered and installed when the web application starts in Tomcat.\n\n\n\n\nWeb User Interface\n\n\n\n\n\nThe \"Close All\" button on pop-up notifications now dismisses all notifications from the queue, not just the visible\n ones.\n\n\nJob completion notifications now only appear for jobs created during the current login session instead of all jobs.\n\n\nThe \nROTATION\n, \nHORIZONTAL_FLIP\n, and \nSEARCH_REGION_*\n properties can be set using the web interface when creating a\n job. Once files are selected for a job, these properties can be set individually or by groups of files.\n\n\nThe Node and Process Status page has been merged into the Node Configuration page for simplicity and ease of use.\n\n\nThe Media Markup results page has been merged into the Job Status page for simplicity and ease of use.\n\n\nThe File Manager UI has been improved to handle large numbers of files and symbolic links.\n\n\nThe side navigation menu is now replaced by a top navigation bar.\n\n\n\n\nREST API\n\n\n\n\n\nAdded an optional jobProperties object to the \n/rest/jobs/\n request which contains String key-value pairs which\n override the pipeline's pre-configured job properties.\n\n\nAdded an optional algorithmProperties object to the \n/rest/jobs/\n request which can be used to configure properties\n for specific algorithms in the pipeline. These properties override the pipeline's pre-configured job properties. They\n also override the values in the jobProperties object.\n\n\nUpdated the \n/rest/jobs/\n request to add more detail to media, replacing a list of mediaUri Strings with a list of\n media objects, each of which contains a mediaUri and an optional mediaProperties map. The mediaProperties map can be\n used to configure properties for the specific piece of media. These properties override the pipeline's pre-configured\n job properties, values in the jobProperties object, and values in the algorithmProperties object.\n\n\nStreamlined the actions, tasks, and pipelines endpoints that are used by the web UI.\n\n\n\n\nFlipping, Rotation, and Region of Interest\n\n\n\n\n\nThe \nROTATION\n, \nHORIZONTAL_FLIP\n, and \nSEARCH_REGION_*\n properties will no longer appear in the detectionProperties\n map in the JSON detection output object. When applied to an algorithm these properties now appear in the\n pipeline.stages.actions.properties element. When applied to a piece of media these properties will now appear in\n the media.mediaProperties element.\n\n\nThe OpenMPF now supports multiple regions of interest in a single media file. Each region will produce tracks\n separately, and the tracks for each region will be listed in the JSON output as if from a separate media file.\n\n\n\n\nComponent API\n\n\n\n\n\nJava Batch Component API is functionally complete for third-party development, with the exception of Component Adapter\n and frame transformation utilities classes.\n\n\nRe-architected the Java Batch Component API to use a more traditional Java method structure of returning track lists\n and throwing exceptions (rather than modifying input track lists and returning statuses), and encapsulating job\n properties into MPFJob objects:\n\n\nList<MPFVideoTrack> getDetections(MPFVideoJob job) throws MPFComponentDetectionError\n\n\nList<MPFAudioTrack> getDetections(MPFAudioJob job) throws MPFComponentDetectionError\n\n\nList<MPFImageLocation> getDetections(MPFImageJob job) throws MPFComponentDetectionError\n\n\n\n\n\n\nCreated examples for the Java Batch Component API.\n\n\nReorganized the Java and C++ component source code to enable component development without the OpenMPF core, which\n will simplify component development and streamline the code base.\n\n\n\n\nJSON Output Objects\n\n\n\n\n\nThe JSON output object for the job now contains a jobProperties map which contains all properties defined for the job\n in the job request. For example, if the job request specifies a \nCONFIDENCE_THRESHOLD\n of 5 then the jobProperties map\n in the output will also list a \nCONFIDENCE_THRESHOLD\n of 5.\n\n\nThe JSON output object for the job now contains an algorithmProperties element which contains all algorithm-specific\n properties defined for the job in the job request. For example, if the job request specifies a \nFRAME_INTERVAL\n of 2\n for FACECV then the algorithmProperties element in the output will contain an entry for \"FACECV\" and that entry will\n list a \nFRAME_INTERVAL\n of 2.\n\n\nEach JSON media output object now contains a mediaProperties map which contains all media-specific properties defined\n by the job request. For example, if the job request specifies a \nROTATION\n of 90 degrees for a single piece of media\n then the mediaProperties map for that piece of media will list a \nROTATION\n of 90.\n\n\nThe content of JSON output objects is now organized by detection type (e.g. MOTION, FACE, PERSON, TEXT, etc.) rather\n than action type.
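The examples above might appear in an output object roughly as follows (an illustrative fragment only; the real schema contains many more fields):\n\n\n{\n  \"jobProperties\": { \"CONFIDENCE_THRESHOLD\": \"5\" },\n  \"algorithmProperties\": { \"FACECV\": { \"FRAME_INTERVAL\": \"2\" } },\n  \"media\": [ {\n    \"mediaUri\": \"file:///example.jpg\",\n    \"mediaProperties\": { \"ROTATION\": \"90\" }\n  } ]\n}\n\n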
\n\n\nCaffe Component\n\n\n\n\n\nAdded support for flip, rotation, and cropping to regions of interest.\n\n\nAdded support for returning multiple classifications per detection based on user-defined settings. The classification\n list is in order of decreasing confidence value.\n\n\n\n\nNew Pipelines\n\n\n\n\n\nNew SuBSENSE motion preprocessor pipelines have been added to components that perform detection on video.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nActions.xml\n, \nAlgorithms.xml\n, \nnodeManagerConfig.xml\n, \nnodeServicesPalette.json\n, \nPipelines.xml\n, and \nTasks.xml\n\n are no longer stored within the Workflow Manager WAR file. They are now stored under \n$MPF_HOME/data\n. This makes it\n easier to upgrade the Workflow Manager and makes it easier for users to access these files.\n\n\nEach component can now be optionally installed and registered during deployment. Components not registered are set to\n the \nUPLOADED\n state. They can then be removed or registered through the Component Registration page.\n\n\nJava components are now packaged as tar.gz files instead of RPMs, bringing them into alignment with C++ components.\n\n\nOpenMPF R0.9 can be installed over OpenMPF R0.8. The deployment scripts will determine that an upgrade should take\n place.\n\n\nAfter the upgrade, user-defined actions, tasks, and pipelines will have \"CUSTOM\" prepended to their name.\n\n\nThe job_request table in the mySQL database will have a new \"output_object_version\" column. This column will\n have \"1.0\" for jobs created using OpenMPF R0.8 and \"2.0\" for jobs created using OpenMPF R0.9. The JSON output\n object schema has changed between these versions.\n\n\n\n\n\n\nReorganized source code repositories so that component SDKs can be downloaded separately from the OpenMPF core and so\n that components are grouped by license and maturity. Build scripts have been created to streamline and simplify the\n build process across the various repositories.\n\n\n\n\nUpgrade to OpenCV 3.1\n\n\n\n\n\nThe OpenMPF software has been ported to use OpenCV 3.1, including all of the C++ detection components and the markup\n component. For the OpenALPR license plate detection component, the versions of the openalpr, tesseract, and leptonica\n libraries were also upgraded to openalpr-2.3.0, tesseract-3.0.4, and leptonica-1.7.2. 
For the SuBSENSE motion\n component, the version of the SuBSENSE library was upgraded to use the code found at this\n location: \nhttps://bitbucket.org/pierre_luc_st_charles/subsense/src\n.\n\n\n\n\nBug Fixes\n\n\n\n\n\nMOG motion detection always detected motion in frame 0 of a video. Because motion can only be detected between two\n adjacent frames, frame 1 is now the first frame in which motion can be detected.\n\n\nMOG motion detection never detected motion in the first frame of a video segment (other than the first video segment\n because of the frame 0 bug described above). Now, motion is detected using the first frame before the start of a\n segment, rather than the first frame of the segment.\n\n\nThe above bugs were also present in SuBSENSE motion detection and have been fixed.\n\n\nSuBSENSE motion detection generated tracks where the frame numbers were off by one. Corrected the frame index logic.\n\n\nVery large video files caused an out of memory error in the system during Workflow Manager media inspection.\n\n\nA job would fail when processing images with an invalid metadata tag for the camera flash setting.\n\n\nUsers were permitted to select invalid file types using the File Manager UI.\n\n\n\n\nKnown Issues\n\n\n\n\n\nMPFImageReader does not work reliably with the current release version of OpenCV 3.1\n: In OpenCV 3.1, new\n functionality was introduced to interpret EXIF information when reading jpeg files.\n\n\nThere are two issues with this new functionality that impact our ability to use the OpenCV \nimread()\n function with\n MPFImageReader:\n\n\nFirst, because of a bug in the OpenCV code, reading a jpeg file that contains exif information could cause it to\n hang. (See \nhttps://github.com/opencv/opencv/issues/6665\n.)\n\n\nSecond, it is not possible to tell the \nimread()\nfunction to ignore the EXIF data, so the image it returns is\n automatically rotated. (See \nhttps://github.com/opencv/opencv/issues/6348\n.) This results in the MPFImageReader\n applying a second rotation to the image due to the EXIF information.\n\n\n\n\n\n\nTo address these issues, we developed the following workarounds:\n\n\nCreated a version of the MPFVideoCapture that works with an MPFImageJob. The new MPFVideoCapture can pull frames\n from both video files and images. MPFVideoCapture leverages cv::VideoCapture, which does not have the two issues\n described above.\n\n\nDisabled the use of MPFImageReader to prevent new users from trying to develop code leveraging this previous\n functionality.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nOpenMPF 9.0.x\n\n\n9.0.0: May 2024\n\n\n\nDocumentation\n\n\n\n\n\nCreated a new \nQuality Selection Guide\n.\n\n\n\n\nQuality Selection\n\n\n\n\n\nCan now specify a \nQUALITY_SELECTION_PROPERTY\n and \nQUALITY_SELECTION_THRESHOLD\n for choosing exemplars, artifacts,\n and controlling feed-forward behavior.\n\n\nThe following old job properties and old system properties are no longer supported. 
The tables show the new properties\n that should be used instead:\n\n\n\n\n\n\n\n\n\n\nOld Job Property\n\n\nNew Job Properties\n\n\n\n\n\n\n\n\n\n\nCONFIDENCE_THRESHOLD\n\n\nQUALITY_SELECTION_PROPERTY\nQUALITY_SELECTION_THRESHOLD\n\n\n\n\n\n\nARTIFACT_EXTRACTION_POLICY_TOP_CONFIDENCE_COUNT\n\n\nARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT\n\n\n\n\n\n\nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nOld System Property\n\n\nNew System Properties\n\n\n\n\n\n\n\n\n\n\ndetection.confidence.threshold\n\n\ndetection.quality.selection.prop\ndetection.quality.selection.threshold\n\n\n\n\n\n\ndetection.artifact.extraction.policy.top.confidence.count\n\n\ndetection.artifact.extraction.policy.top.quality.count\n\n\n\n\n\n\n\n\n\n\nBy default, \nQUALITY_SELECTION_PROPERTY\n is set to the value of the \ndetection.quality.selection.prop\n system property,\n which, by default, is \nCONFIDENCE\n. In most cases this preserves the previous behavior.\n\n\nBy default, \nQUALITY_SELECTION_THRESHOLD\n is set to the value of the \ndetection.quality.selection.threshold\n system\n property, which, by default, is \n-Infinity\n. This setting disables the threshold. Previously, the default value of\n \ndetection.confidence.threshold\n was -1, which disabled the threshold for most components.\n\n\nComponents that previously used \nCONFIDENCE_THRESHOLD\n now have \nQUALITY_SELECTION_PROPERTY=CONFIDENCE\n. Also,\n \nQUALITY_SELECTION_THRESHOLD\n is set to the previous value of \nCONFIDENCE_THRESHOLD\n. For example, see \nthis\n commit\n\n for changes made to the OcvYoloDetection component.\n\n\nEXEMPLAR_POLICY\n is now set to \nQUALITY\n by default. This setting results in choosing the detection within each track\n with the maximum quality according to the \nQUALITY_SELECTION_PROPERTY\n. Previously, the selection was always made\n based on highest detection confidence.\n\n\nSimilarly, the new \nFEED_FORWARD_TOP_QUALITY_COUNT\n and \nARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT\n properties use\n \nQUALITY_SELECTION_PROPERTY\n and \nQUALITY_SELECTION_THRESHOLD\n.\n\n\nRefer to the \nQuality Selection Guide\n for details.\n\n\n\n\nTransformer Tagging Component\n\n\n\n\n\nThis component uses a user-specified corpus JSON file to match known phrases against each sentence in the input text\n data.\n\n\nThe input text sentences that generate match scores above the threshold are called \"trigger sentences\". These\n sentences are grouped by \"tag\" based on which entry in the corpus they matched against.\n\n\nThe underlying \nall-mpnet-base-v2 model\n was trained\n on a variety of text data in order to understand the commonalities in phrasing, subject, and context.\n\n\nRefer to the \nREADME\n\n for details.\n\n\n\n\nKeyword Tagging Component Output\n\n\n\n\n\nUpdated the Keyword Tagging Component to generate output in the same format as the Transformer Tagging Component. For\n example, the output properties used to take the form \n<TEXT TYPE> TRIGGER WORDS\n and \n<TEXT TYPE> TRIGGER WORDS OFFSET\n:\n\n\n\n\nTEXT TRIGGER WORDS\nTEXT TRIGGER WORDS OFFSET\nTRANSLATION TRIGGER WORDS\nTRANSLATION TRIGGER WORDS OFFSET\n\n\n\n\n\nNow the output properties take the form \n<TEXT TYPE> <TAG> TRIGGER WORDS\n and \n<TEXT TYPE> <TAG> TRIGGER WORDS OFFSET\n:\n\n\n\n\nTEXT TRAVEL TRIGGER WORDS\nTEXT TRAVEL TRIGGER WORDS OFFSET\nTRANSLATION TRAVEL TRIGGER WORDS\nTRANSLATION TRAVEL TRIGGER WORDS OFFSET\n\n\n\n\n\nNotice that in the above example the new output properties include the word \nTRAVEL\n. The sketch below shows one way a job consumer can parse these per-tag properties. 
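A minimal sketch, assuming the track's detection properties arrive as a flat string map and that trigger words are semicolon-delimited (the delimiter and sample values are assumptions here):

```python
import re

# Illustrative detection properties in the new per-tag format.
detection_properties = {
    "TAGS": "TRAVEL; FINANCIAL",
    "TEXT TRAVEL TRIGGER WORDS": "airport; passport",
    "TEXT TRAVEL TRIGGER WORDS OFFSET": "10-16; 42-49",
    "TEXT FINANCIAL TRIGGER WORDS": "invoice",
    "TEXT FINANCIAL TRIGGER WORDS OFFSET": "88-94",
}

# Matches <TEXT TYPE> <TAG> TRIGGER WORDS, where the text type is
# TEXT or TRANSLATION; the OFFSET variants deliberately do not match.
NAME_PATTERN = re.compile(r"^(TEXT|TRANSLATION) (.+) TRIGGER WORDS$")

triggers_by_tag = {}
for name, value in detection_properties.items():
    match = NAME_PATTERN.match(name)
    if match:
        text_type, tag = match.groups()
        triggers_by_tag.setdefault(tag, {})[text_type] = value.split("; ")

print(triggers_by_tag)
# {'TRAVEL': {'TEXT': ['airport', 'passport']}, 'FINANCIAL': {'TEXT': ['invoice']}}
```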
If trigger words are detected\n for other tags, such as \nFINANCIAL\n and \nVEHICLE\n, those words will be used in separate \nTRIGGER WORDS\n and\n \nTRIGGER WORDS OFFSET\n output properties.\n\n\nThis change enables the job consumer to determine which trigger words are associated with each entry in the \nTAGS\n\n output property.\n\n\nRefer to the \"Outputs\" section of the\n \nREADME\n for details.\n\n\n\n\nReporting Component Processing Time\n\n\n\n\n\nThe JSON output object contains a new section for reporting component processing time in milliseconds. For example:\n\n\n\n\n\"timing\": {\n \"processingTime\": 1514,\n \"actions\": [\n {\n \"name\": \"OCV YOLO VEHICLE DETECTION ACTION\",\n \"processingTime\": 1431\n },\n {\n \"name\": \"TENSORFLOW VEHICLE COLOR DETECTION (WITH FF REGION) ACTION\",\n \"processingTime\": 83\n }\n ]\n},\n\n\n\n\n\nThis does not include the time sub-jobs spent waiting in queues, or processing time by the Workflow Manager, such as\n the time to perform media inspection.\n\n\nAlso, the above JSON is reported in the TiesDB job record within the \ndataObject\n field.\n\n\n\n\nNLP Text Splitter Utility\n\n\n\n\n\nThe new NLP Text Splitter utility uses spaCy or \nWhere's the Point (WtP)\n\n models for determining how to break up text into sentences.\n\n\nSupports both CPU processing and optional GPU processing.\n\n\nUpdated the Azure Translation Component to use this utility to ensure that translation requests are within the 50,000\n character limit.\n\n\nRefer to the\n \nREADME\n\n for details.\n\n\n\n\nCLIP Component Video Support\n\n\n\n\n\nThe CLIP Component now supports processing videos in addition to the previous ability to process images. Specify the\n batch size using \nDETECTION_FRAME_BATCH_SIZE\n.\n\n\nThe component also supports a new, larger, and more accurate \nViT-L/14\n model in addition to the previous \nViT-B/32\n\n model. Both models are supported via the optional Triton server as well as within the component itself for non-Triton\n deployments.\n\n\nRefer to the\n \nREADME\n\n for performance metrics.\n\n\nThe \nNUMBER_OF_TEMPLATES\n property has been renamed to \nTEMPLATE_TYPE\n and now accepts one of the following values:\n \nopenai_1\n, \nopenai_7\n, \nopenai_80\n.\n\n\n\n\nImport Root Certificates for Components\n\n\n\n\n\nCan now specify a \nMPF_CA_CERTS\n environment variable for component Docker services to import root certificates.\n\n\nMay be useful when components need to communicate with external web services.\n\n\nRefer to the \nREADME\n for details.\n\n\n\n\nDocker Secrets for Environment Variables\n\n\n\n\n\nCan now use Docker secrets for environment variables in the Docker compose file.\n\n\nThis prevents exposing information as plain text in \ndocker-compose.yml\n.\n\n\nMay be useful for environment variables like:\n\n\nWorkflow Manager username and password: \nWFM_USER\n and \nWFM_PASSWORD\n\n\nKeystore password when enabling Workflow Manager HTTPS: \nKEYSTORE_PASSWORD\n\n\nAzure credentials: \nMPF_PROP_ACS_URL\n and \nMPF_PROP_ACS_SUBSCRIPTION_KEY\n\n\n\n\n\n\nRefer to the\n \nREADME\n\n for details.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#1692\n] Create a TransformerTagging component\n\n\n[\n#1718\n] Support a \nQUALITY_SELECTION_PROP\n to specify how the WFM should choose an exemplar\n\n\n[\n#1754\n] Report amount of time components spent executing a job\n\n\n[\n#1756\n] Support \nMPF_CA_CERTS\n for components\n\n\n[\n#1771\n] Azure Translation: Identify character limits. 
Split text using NLP Text Splitter.\n\n\n[\n#1798\n] Add NLP Text Splitter to Python Component SDK\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1694\n] Update CLIP component to support videos\n\n\n[\n#1706\n] Update KeywordTagging to work with TransformerTagging\n\n\n[\n#1745\n] Support using docker secrets for environment variables in \ndocker-compose.yml\n\n\n[\n#1769\n] Upgrade to proto3 and clean up \n.proto\n files\n\n\n[\n#1774\n] Update how TransformerTagging tokenizes sentences\n\n\n[\n#1785\n] Upgrade to OpenCV 4.9\n\n\n[\n#1786\n] Modify the behavior of Markup when \nCONFIDENCE\n is the bounding box label to be displayed\n\n\n[\n#1797\n] Further update Azure Translation and STT language maps\n\n\n[\n#1803\n] Upgrade Postgres client used by Workflow Manager\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1781\n] Markup boxes are not drawn when animation is disabled and there are gaps in a track\n\n\n[\n#1799\n] Keyword Tagging removes newlines so character offsets don't line up with original text\n\n\n\n\nOpenMPF 8.0.x\n\n\n8.0.4: May 2024\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1805\n] Workflow Manager incorrectly detects whether certain videos\n are constant or variable frame rate\n\n\n\n\n8.0.3: April 2024\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1788\n] Azure Speech and Translation: Update supported language\n mappings\n\n\n\n\n8.0.2: March 2024\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nREST API\n with new \n[GET] /rest/queues\n and \n[GET] /rest/queues/{name}\n endpoints.\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1776\n] Add REST endpoint for retrieving the ActiveMQ message counts\n for each queue\n\n\n\n\n8.0.1: March 2024\n\n\n\nUpdates\n\n\n\n\n\n[\n#1768\n] Add Option to Merge Text Sections in TikaTextDetection\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1763\n] Media inspections fails when ffprobe does not specify a\n stream \"codec_type\"\n\n\n\n\n8.0.0: December 2023\n\n\n\nDocumentation\n\n\n\n\n\nCreated a new \nOpenID Connect Guide\n.\n\n\nUpdated the \nAdmin Guide\n and \nUser Guide\n to remove\n \n/workflow-manager\n from the Workflow Manager base URL. The Admin Guide includes a section for the new Hawtio web\n console.\n\n\nUpdated the \nREST API\n to use path parameters for pipelines, tasks, actions, and algorithms\n endpoints.\n\n\nUpdated the \nComponent Descriptor Reference\n with \nalgorithm.trackType\n.\n\n\nUpdated the \nC++ Batch Component API\n, \nPython Batch Component\n API\n, and \nJava Batch Component API\n to\n remove the ability to get the detection type since track type is now specified in \ndescriptor.json\n.\n\n\nCreated a new \nTrigger Guide\n.\n\n\nCreated a new \nRoll Up Guide\n.\n\n\n\n\nOpenID-Connect (OIDC) Authentication\n\n\n\n\n\nThe Workflow Manager can now optionally use an OpenID Connect (OIDC) provider to handle authentication for users of\n the web UI and clients of the REST API. 
The URI for the OIDC provider is specified using the \nOIDC_ISSUER_URI\n\n environment variable.\n\n\nWhen enabled, OIDC is used to authenticate components when they register with the Workflow Manager.\n\n\nWhen \nCALLBACK_USE_OIDC\n is set to \ntrue\n, the Workflow Manager will send a token in job request callbacks.\n\n\nWhen \nTIES_DB_USE_OIDC\n is set to \ntrue\n, the Workflow Manager will send a token when posting to a TiesDb server.\n\n\nWhen OIDC is not enabled, the Workflow Manager uses basic authentication with usernames and passwords, as in previous\n versions of OpenMPF.\n\n\nRefer to the \nOpenID Connect Guide\n for more information on the various OIDC\n environment variables and a Keycloak example.\n\n\n\n\nEmbedded ActiveMQ Broker and Hawtio\n\n\n\n\n\nActiveMQ is now part of the Workflow Manager Spring Boot web application and is no longer run as a separate Docker\n service. This enables ActiveMQ to integrate with Spring Security so it can be protected by the Workflow Manager's OIDC\n support.\n\n\nThe Workflow Manager is the sender or recipient of all ActiveMQ messages, so embedding ActiveMQ in the Workflow\n Manager prevents a network hop on all messages.\n\n\nThe ActiveMQ management page has been replaced by \nHawtio\n, which is more feature rich and can be\n used to monitor the state of the ActiveMQ queues used for communication between the Workflow Manager and the\n components. The Hawtio web console can be accessed by selecting \"Hawtio\" from the \"Configuration\" dropdown menu in the\n top menu bar of the web UI.\n\n\nImportantly, the base URL for the Workflow Manager is now http://localhost:8080 instead of\n http://localhost:8080/workflow-manager. \n/workflow-manager\n is no longer part of the path. This change was made to\n enable Hawtio integration.\n\n\n\n\nREST API Updates\n\n\n\n\n\nThe following changes have been made to the REST endpoints to address a limitation with Swagger (OpenAPI). These\n changes enable the REST endpoints to properly show up in the Swagger page, which is accessed by selecting \"REST API\"\n from the \"Configuration\" dropdown menu in the top menu bar of the web UI.\n\n\n\n\n\n\n\n\n\n\nOld REST Endpoint\n\n\nNew REST Endpoint\n\n\n\n\n\n\n\n\n\n\n[GET] /rest/pipelines?name={name}\n\n\n[GET] /rest/pipelines/{name}\n\n\n\n\n\n\n[GET] /rest/tasks?name={name}\n\n\n[GET] /rest/tasks/{name}\n\n\n\n\n\n\n[GET] /rest/actions?name={name}\n\n\n[GET] /rest/actions/{name}\n\n\n\n\n\n\n[GET] /rest/algorithms?name={name}\n\n\n[GET] /rest/algorithms/{name}\n\n\n\n\n\n\n[DELETE] /rest/pipelines?name={name}\n\n\n[DELETE] /rest/pipelines/{name}\n\n\n\n\n\n\n[DELETE] /rest/tasks?name={name}\n\n\n[DELETE] /rest/tasks/{name}\n\n\n\n\n\n\n[DELETE] /rest/actions?name={name}\n\n\n[DELETE] /rest/actions/{name}\n\n\n\n\n\n\n\n\n\n\nIn general, the name is now specified as part of the URL path instead of as a URL parameter.\n\n\n/\n and \n;\n characters are no longer allowed in these names.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nEach component's \ndescriptor.json\n now requires an \nalgorithm.trackType\n field. This is used by the Workflow Manager\n to determine the kind of tracks that may be generated by the component (e.g. \nFACE\n, \nTEXT\n, \nCLASS\n, etc.). This is\n now used in place of the component API calls that were used to get the detection type. 
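For reference, a sketch of where the new field sits in a component's descriptor.json, shown here as a Python dict so it can be dumped to JSON; the component and algorithm names are invented for illustration, and only the algorithm.trackType key comes from this release note.

```python
import json

# Illustrative fragment of a component's descriptor.json.
descriptor_fragment = {
    "componentName": "ExampleFaceDetection",  # hypothetical component
    "algorithm": {
        "name": "EXAMPLEFACE",  # hypothetical algorithm name
        # Required as of this release: tells the Workflow Manager what kind
        # of tracks the component may generate (e.g. FACE, TEXT, CLASS).
        "trackType": "FACE",
    },
}

print(json.dumps(descriptor_fragment, indent=2))
```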
\n\n\n\n\nComponent API Updates\n\n\n\n\n\nThe following changes were made since the track type is now part of each component's \ndescriptor.json\n:\n\n\nRemoved \nGetDetectionType()\n from the CPP Component API.\n\n\nRemoved \ndetection_type\n from the Python Component API.\n\n\nRemoved \ngetDetectionType()\n from the Java Component API.\n\n\n\n\n\n\n\n\nChanges to JSON Output Object\n\n\n\n\n\nNew JSON output objects use \naction\n instead of \nsource\n in the track type group. Also, \nsource\n is removed from each track.\n\n\nConsider this example of the old JSON output:\n\n\n\n\n\"output\": {\n \"FACE\": [\n {\n \"source\": \"+#MOG MOTION DETECTION (WITH AUTO-ORIENTATION) PREPROCESSOR ACTION#OCV FACE DETECTION (WITH AUTO-ORIENTATION) ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"4bcba9b95b92a5115b7da1097fcffa962480d0b4424a656772bef12161d775c1\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"source\": \"+#MOG MOTION DETECTION (WITH AUTO-ORIENTATION) PREPROCESSOR ACTION#OCV FACE DETECTION (WITH AUTO-ORIENTATION) ACTION\",\n \"confidence\": 8.799637,\n ...\n\n\n\n\n\nThe corresponding new JSON output is:\n\n\n\n\n\"output\": {\n \"FACE\": [\n {\n \"action\": \"OCV FACE DETECTION (WITH AUTO-ORIENTATION) ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"4bcba9b95b92a5115b7da1097fcffa962480d0b4424a656772bef12161d775c1\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"confidence\": 8.799637,\n ...\n\n\n\nTrigger Support\n\n\n\n\n\nA \nTRIGGER\n property can now be added to any action in a pipeline. It will only be used if \nFEED_FORWARD_TYPE\n is\n provided and set to something other than \nNONE\n. The \nTRIGGER\n property is used to conditionally control if the\n Workflow Manager executes that action. Each feed-forward track that is not executed is passed to the next stage of the\n pipeline. This results in skipping untriggered actions.\n\n\nThe value of \nTRIGGER\n takes the form \n=[;...]\n. For example, if the value is\n \nCLASSIFICATION=car\n then the Workflow Manager would only execute the associated action using feed-forward tracks from\n the previous stage in the pipeline if those tracks have the \nCLASSIFICATION\n track property with a value of \ncar\n.\n This could be useful to skip a license plate detection action. To enable the action to trigger on more than just \ncar\n\n tracks you can provide a list of valid values. For example, \nCLASSIFICATION=car;truck;bus\n.\n\n\nThe \nTrigger Guide\n goes into more detail and provides an example of a pipeline with\n multiple speech-to-text stages. \nTRIGGER\n is used to select which speech-to-text algorithm is executed based on the\n detected language in the media.\n\n\n\n\nRoll Up Support\n\n\n\n\n\nThe Workflow Manager can be configured to replace the values of track and detection properties\n after receiving tracks and detections from a component. For example, the \nCLASSIFICATION\n property\n may be set to \"car\", \"bus\", and \"truck\". 
Those can be rolled up into \"vehicle\".\n\n\nTo use this feature, set the \nROLL_UP_FILE\n property to the path of a JSON file that matches\n the format of this example:\n\n\n\n\n[\n {\n \"propertyToProcess\": \"CLASSIFICATION\",\n \"originalPropertyCopy\": \"ORIGINAL CLASSIFICATION\",\n \"groups\": [\n {\n \"rollUp\": \"vehicle\",\n \"members\": [\n \"truck\",\n \"car\",\n \"bus\"\n ]\n }\n ]\n }\n]\n\n\n\n\n\nRefer to the \nRoll Up Guide\n for an explanation and more details.\n\n\n\n\nChanged All \"whitelist\" References to \"allow list\"\n\n\n\n\n\nIn an effort to be more culturally sensitive, all references to \"whitelist\" have been removed or renamed to \"allow\n list\".\n\n\nThe \nwhitelist.\n prefix has been removed from the entries in the \nmediaType.properties\n file. For example,\n \nwhitelist.image/gif=VIDEO\n is now \nimage/gif=VIDEO\n.\n\n\nThe OcvDnnDetection component \nFEED_FORWARD_WHITELIST_FILE\n property has been renamed to\n \nFEED_FORWARD_ALLOW_LIST_FILE\n.\n\n\nThe OcvYoloDetection component \nCLASS_WHITELIST_FILE\n property has been renamed to \nCLASS_ALLOW_LIST_FILE\n.\n\n\n\n\nArgos Translation Component\n\n\n\n\n\nThis new component utilizes \nArgos Translate\n to translate input\n text from a given source language to English. It can be used in a feed-forward pipeline to process tracks with\n language and/or script identifiers from an upstream stage.\n\n\nRefer to the \nREADME\n for\n details.\n\n\n\n\nWhisper Speech-to-Text and Translation Component\n\n\n\n\n\nThis new component utilizes \nOpenAI Whisper\n to perform language detection,\n speech-to-text transcription, or speech translation.\n\n\nIf multiple languages are spoken in a single piece of media, language detection will detect only one of them.\n\n\nNote that Whisper is not designed to return a transcription in the source language when performing translation, so we\n implemented the component to perform an additional transcribe call when configured to perform translation.\n\n\nRefer to the \nREADME\n\n for details.\n\n\n\n\nContrastive Language\u2013Image Pre-training (CLIP) Component\n\n\n\n\n\nThis new component utilizes \nCLIP\n to classify images using the 80 COCO classes, 1000\n ImageNet classes, or a list of user-provided classes. It can run on a CPU or GPU, and can make calls to an NVIDIA\n Triton inference server.\n\n\nClassification is performed by taking the class names and filling in one or more text prompts. For example, \"a photo\n of {}\", where \"{}\" can be \"dog\" or \"cat\". An embedding is generated using the text prompt(s) for each class and\n compared against the image embedding to get a match score. Optionally, users can provide a list of their own text\n prompts.\n\n\nOpenAI trained the CLIP model using a wide variety of images and their respective captions from the Internet. This may\n make it suitable for a wide variety of classification tasks without further training (known as zero-shot\n classification). For example, a user could make up a list of classes for arbitrary objects like \"walrus\", \"paperclip\",\n \"pizza\", etc., and use the default text prompts.\n\n\nIt is also possible to use CLIP to classify concepts like scenes and sentiment. For example, using a text prompt of \"a\n {} scene\" where the classes are \"safe\", \"violent\", and \"dangerous\".\n\n\nOptionally, the CLIP component can return the image embedding as the track \nFEATURE\n. 
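To make that embedding comparison concrete, what follows is a minimal cosine-similarity sketch; the toy vectors and the assumption that the FEATURE value can be decoded into a list of floats are both illustrative.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional embeddings; real CLIP embeddings are much longer.
query_feature = [0.1, 0.9, 0.2, 0.4]  # decoded from a track FEATURE
enrolled = {
    "image_a": [0.1, 0.8, 0.3, 0.4],
    "image_b": [0.9, 0.1, 0.0, 0.2],
}

best_match = max(enrolled,
                 key=lambda name: cosine_similarity(query_feature, enrolled[name]))
print(best_match)  # image_a
```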
For example, this can be used\n for search and retrieval tasks by comparing it to other embeddings enrolled in a database.\n\n\nRefer to the \nREADME\n for\n details.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#1547\n] Create Argos translation component\n\n\n[\n#1574\n] Update the WFM to support an optional \nTRIGGER\n property on any action\n\n\n[\n#1598\n] Create a Whisper component for speech-to-text and translation\n\n\n[\n#1644\n] Create CLIP component for processing images\n\n\n[\n#1704\n] Update Workflow Manager to authenticate users and REST clients using OIDC\n\n\n[\n#1730\n] Update Workflow Manager to optionally use OIDC when sending callbacks and posting to TiesDb\n\n\n[\n#1733\n] Update Workflow Manager to use an embedded ActiveMQ broker\n\n\n[\n#1793\n] Add Roll Up support to Workflow Manager\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#799\n] Avoid unnecessary serialization between Camel routes\n\n\n[\n#949\n] Change \n/pipelines?name=MYPIPELINE\n REST endpoint to \n/pipelines/MYPIPELINE\n\n\n[\n#1643\n] Remove \nLONG_SPEAKER_ID\n and instead only use \nSPEAKER_ID\n\n\n[\n#1645\n] Refactor camel code\n\n\n[\n#1705\n] Change all references to \"whitelist\" to \"allow list\" and \"blacklist\" to \"block list\"\n\n\n[\n#1759\n] Disable markup animation by default\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1642\n] \nInProgressBatchJobsService.setProcessedAction\n is now called when a previous task produces no tracks\n\n\n[\n#1755\n] The Workflow Manager logs page does not properly handle multi-byte characters\n\n\n\n\nOpenMPF 7.2.x\n\n\n7.2.6: January 2024\n\n\n\nDocumentation\n\n\n\n\n\nCreated a new \nHealth Check Guide\n.\n\n\n\n\nHealth Check Support\n\n\n\n\n\nThe C++ and Python component executors can be configured to run health checks on components prior to running jobs.\n Health checks are configured using environment variables:\n\n\nHEALTH_CHECK\n: When set to \"ENABLED\", the component executor will run health checks.\n\n\nHEALTH_CHECK_TIMEOUT\n: When set to a positive integer, specifies the minimum number of seconds between health\n checks. When absent or set to 0, a health check will run before every job.\n\n\nHEALTH_CHECK_RETRY_MAX_ATTEMPTS\n: When set to a positive integer, specifies the number of consecutive health\n check failures that will cause the component service to exit. When absent or set to 0, the component service will\n never exit because of a failed health check.\n\n\n\n\n\n\nAlso, an INI file must be provided at \n$MPF_HOME/plugins/<component name>/health/health-check.ini\n. 
For example:\n\n\n\n\nmedia=$MPF_HOME/plugins/OcvFaceDetection/health/meds_faces_image.png\nmin_num_tracks=2\nmedia_type=IMAGE\n\n[job_properties]\nJOB PROP1=VALUE1\nJOB PROP2=VALUE2\n\n[media_properties]\nMEDIA PROP=MEDIA VALUE\n\n\n\n\n\nRefer to the \nHealth Check Guide\n for an explanation and more details.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#1731\n] Implement health checks for C++ and Python components\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1727\n] Update ffmpeg to 6.1\n\n\n\n\n7.2.5: November 2023\n\n\n\nUpdates\n\n\n\n\n\n[\n#1715\n] Upgrade ActiveMQ to 5.17.6\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1711\n] When selecting detections with the highest confidence,\n Workflow Manager should consistently handle detections with equal confidence\n\n\n\n\n7.2.4: September 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1707\n] Fix bug where TiesDB check status reports\n \nNO_TIES_DB_URL_IN_JOB\n instead of \nMEDIA_MIME_TYPES_ABSENT\n\n\n\n\n7.2.3: June 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1697\n] Prevent OcvYoloDetection component from deadlocking on\n strange frame sizes when using Triton\n\n\n\n\n7.2.2: June 2023\n\n\n\nUpdates\n\n\n\n\n\n[\n#1693\n] Add property to enable/disable SAS in AzureSpeech\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1695\n] Fix memory leak in KeywordTagging component\n\n\n\n\n7.2.1: June 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1678\n] Fix bug where ffmpeg hangs when processing some kinds of\n unsupported/corrupted media\n\n\n\n\n7.2.0: May 2023\n\n\n\nDocumentation\n\n\n\n\n\nCreated a new \nTiesDb Guide\n.\n\n\nUpdated the \nComponent Descriptor Reference\n with \noutputChangedCounter\n.\n\n\nUpdated the \nREST API\n with a new \n[POST] /rest/jobs/tiesdbrepost\n endpoint.\n\n\nUpdated the REST API \n[POST] /rest/jobs\n response with \ntiesDbCheckStatus\n and \noutputObjectUri\n.\n\n\n\n\nTiesDb Re-Post\n\n\n\n\n\nAdded a new \n[POST] /rest/jobs/tiesdbrepost\n endpoint that accepts an array of job ids as an input and will attempt to\n re-post the job assertions (records) to TiesDb for each one. \n\n\nAdded a \"TiesDb\" column to the Job Status page. If there is a problem posting a record to the TiesDb server the column\n will contain an \"ERROR\" button. Clicking on it will provide a description of the error and a button that can be used\n to re-post the associated job records.\n\n\n\n\nTiesDb Checking\n\n\n\n\n\nIf the \nTIES_DB_URL\n job property or \nties.db.url\n system property is set when submitting a job creation request, \n then the Workflow Manager will attempt to check TiesDb for existing job results before running the job again.\n\n\nThe Workflow Manager will attempt to use the most-recently-created job results, preferring jobs that completed without\n errors or warnings, and preferring jobs that completed with warnings over completed with errors.\n\n\nTo prevent this check, set \nSKIP_TIES_DB_CHECK=true\n. That will force the job to run and attempt to post the new\n job results to TiesDb.\n\n\nWhen using TiesDb, we strongly recommend providing both the \nMEDIA_HASH\n and \nMIME_TYPE\n in the \nmedia.metadata\n map\n in the job request. This will enable the Workflow Manager to skip media inspection. When using S3 object storage, this\n means that the Workflow Manager will not need to download the media before checking TiesDb for existing job records.\n\n\nThe \n[POST] /rest/jobs\n response now contains a \ntiesDbCheckStatus\n and \noutputObjectUri\n field. 
\ntiesDbCheckStatus\n\n will be set to one of the following values:\n\n\nNOT_REQUESTED\n\n\nNO_TIES_DB_URL_IN_JOB\n\n\nMEDIA_HASHES_ABSENT\n\n\nMEDIA_MIME_TYPES_ABSENT\n\n\nNO_MATCH\n\n\nFOUND_MATCH\n\n\n\n\n\n\nWhen there is a \nFOUND_MATCH\n, the \noutputObjectUri\n will be set to the URI of the old TiesDb record if S3 copy is\n not enabled.\n\n\nBy default, the \nties.db.s3.copy.enabled\n system property is set to \ntrue\n. This means that the Workflow Manager will\n attempt to copy all of the artifacts, markup, and derivative media associated with the job in TiesDb from the S3\n locations associated with the old job to the new S3 location specified in the new job. A new JSON output object will\n be generated. To disable this behavior set the system property, or \nTIES_DB_S3_COPY_ENABLED\n, to \nfalse\n. Then the\n Workflow Manager will simply provide a link to the old JSON as the result of the new job.\n\n\nIf there is a problem copying between S3 locations, the \"TiesDb\" column of the Job Status page will show a\n \"COPY ERROR\" button. Clicking on it will provide a description of the error.\n\n\n\n\nTiesDb Linked Media\n\n\n\n\n\nAdded support for \nLINKED_MEDIA_HASH\n in the \nmedia.properties\n section of the job creation request. When specified,\n the value of \nLINKED_MEDIA_HASH\n will be used instead of the actual media hash when creating a record in TiesDb,\n and also when looking for existing records in TiesDb.\n\n\nThis feature can be used to submit a transcoded (or thumbnail) version of an image to process instead of the source\n image. For example, the source image may be in a format not supported by OpenMPF. In this case, the value of\n \nLINKED_MEDIA_HASH\n can be set to the hash of the source image, but the rest of the job creation request would specify\n the \nmedia.mediaUri\n and \nmedia.metadata\n for the transcoded version of that image.\n\n\n\n\nOutput Changed Counter\n\n\n\n\n\nAdded the \noutput.changed.counter\n system property to the Workflow Manager and \noutputChangedCounter\n field to each\n component's \ndescriptor.json\n. These values are used when calculating the hash for a job when its record is posted to\n TiesDb, and also when checking TiesDb for existing records when a new job is submitted.\n\n\nIf the Workflow Manager is updated for any reason that should invalidate pre-existing job results, such as a\n change to the fields in the JSON output object, or significant improvements to track merging, for example, then the\n value of \noutput.changed.counter\n should be incremented by one. This ensures that pre-existing records in TiesDb will not be\n used, so all affected jobs will need to be (re)run at least once.\n\n\nThe same is true for each component. If a component is updated for any reason that should invalidate\n pre-existing job results, such as changes to input or output properties, or substantial improvements to the algorithm,\n then the value of \noutputChangedCounter\n should be incremented by one.\n\n\n\n\nChanges to JSON Output Object\n\n\n\n\n\nNew JSON output objects will include \ntiesDbSourceJobId\n and \ntiesDbSourceMediaPath\n when the Workflow Manager can use\n previous job results stored in TiesDb. 
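A job producer might branch on the tiesDbCheckStatus values listed above when the job creation response arrives; a minimal sketch, assuming placeholder host, credentials, pipeline, and media:

```python
import requests

response = requests.post(
    "http://localhost:8080/workflow-manager/rest/jobs",  # placeholder URL
    json={
        "pipelineName": "OCV FACE DETECTION PIPELINE",   # placeholder
        "jobProperties": {"TIES_DB_URL": "https://tiesdb.example.com"},
        "media": [{"mediaUri": "file:///data/example.jpg"}],
    },
    auth=("mpf", "mpfadm"),  # placeholder credentials
)
body = response.json()

status = body.get("tiesDbCheckStatus")
if status == "FOUND_MATCH":
    # Previous results were reused; outputObjectUri points at the old
    # TiesDb record when S3 copy is not enabled.
    print("Reused results:", body.get("outputObjectUri"))
elif status in ("MEDIA_HASHES_ABSENT", "MEDIA_MIME_TYPES_ABSENT"):
    # Supply MEDIA_HASH and MIME_TYPE in media.metadata to enable the check.
    print("TiesDb check could not be performed:", status)
else:
    print("TiesDb check status:", status)
```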
Note that the Workflow Manager will not generate a new JSON output object\n unless \nS3_RESULTS_BUCKET\n is set to a valid value, S3 access and secret keys are provided, and\n \nTIES_DB_S3_COPY_ENABLED=true\n.\n\n\n\n\nffprobe for Media Inspection\n\n\n\n\n\nThe Workflow Manager media inspection behavior now uses \nffprobe\n with \n-print_format json\n to return more precise\n \nFPS\n values for the \nmedia.mediaMetadata\n in the JSON output object. For example, the previous version of the\n Workflow Manager would return \n29.97\n, whereas the new version will return \n29.97002997002997\n. In multi-hour-long\n videos this prevents cases where the last few frames would be ignored.\n\n\nThe previous version of the Workflow Manager used both \nffmpeg\n and OpenCV to determine the number of frames in\n a video. We removed the OpenCV frame counter in this version because the \nffprobe\n approach is more accurate.\n The \nffprobe\n command replaces the old \nffmpeg\n command. \n\n\n\n\nWeb User Interface\n\n\n\n\n\nUpdated the Job Status page to be more efficient. Searching a database of hundreds of thousands of jobs takes a long\n time. By limiting the search to one page of results at a time the UI is more responsive.\n\n\nRemoved timeout and bootout. The user session will no longer automatically end due to time out, or due to the same\n user logging in from a different host or browser. These behaviors were deemed too disruptive by end users.\n\n\nUpdated the Job Status page to include a \"TiesDb\" column that reports TiesDb status, such as when posting records\n to TiesDb and when retrieving existing records.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#1438\n] Create a REST endpoint that will attempt to re-post to TiesDb\n\n\n[\n#1613\n] Check TiesDb before running a job\n\n\n[\n#1650\n] Create TiesDb records for thumbnail jobs under the parent media\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1342\n] Use ffprobe to get FPS during media inspection\n\n\n[\n#1564\n] Use ffprobe's JSON output instead of regexes during media inspection\n\n\n[\n#1601\n] Update the Workflow Manager jobs table to be more efficient\n\n\n[\n#1611\n] Remove Workflow Manager timeout and bootout behavior\n\n\n\n\nOpenMPF 7.1.x\n\n\n7.1.12: March 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1667\n] Handle Webp files with extra data at the end that cause components to crash\n\n\n\n\n7.1.10: March 2023\n\n\n\nUpdates\n\n\n\n\n\n[\n#1662\n] Monitor StorageBackend\n\n\n\n\n7.1.9: February 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1675\n] Prevent upgrade of cudnn in yolo server dockerfile\n\n\n\n\n7.1.8: February 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1649\n] Install specific version of libcudnn8 in Docker build\n\n\n\n\n7.1.7: February 2023\n\n\n\nUpdates\n\n\n\n\n\n[\n#1674\n] Update \nSPEAKER_ID\n logic, set \nLONG_SPEAKER_ID=0\n\n\n\n\n7.1.5: January 2023\n\n\n\nFeatures\n\n\n\n\n\n[\n#1542\n] Update Azure Speech Detection component to select transcription language based on feed-forward track\n\n\n[\n#1543\n] Update audio transcoder to accept subsegments\n\n\n[\n#1605\n] Update Azure Translation to use detected language from upstream\n\n\n\n\n7.1.1: December 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1634\n] Update version numbers to 7.1\n\n\n\n\n7.1.0: December 2022\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the Object Storage Guide with \nS3_UPLOAD_OBJECT_KEY_PREFIX\n.\n\n\nUpdated the Markup Guide with \nMARKUP_TEXT_LABEL_MAX_LENGTH\n.\n\n\n\n\nExemplar Selection Policy\n\n\n\n\n\nThe policy for selecting the exemplar detection for each track can now be set 
using the \nEXEMPLAR_POLICY\n job property\n with the following values:\n\n\nCONFIDENCE\n: Select the detection with the maximum confidence. If some confidences are the same, select the\n detection with the lower frame number. This is the default setting.\n\n\nFIRST\n: Select the detection with the lowest frame number\n\n\nLAST\n: Select the detection with the highest frame number\n\n\nMIDDLE\n: Select the detection with the frame number closest to the middle frame of the track, preferring the\n detection with the lower frame number if there is an even number of frames\n\n\n\n\n\n\n\n\nAutomatic Rotation and Horizontal Flip Enabled by Default\n\n\n\n\n\nIt is no longer necessary to explicitly set \nAUTO_ROTATE\n and \nAUTO_FLIP\n to true since that is now the default value.\n\n\nThese properties affect all video and image components that use the MPFImageReader and MPFVideoCapture tools. When\n true, if the image has EXIF data, or there is metadata associated with a video that ffmpeg understands, the tools will\n use that information to properly orient the frames before returning the frames to the component for processing.\n\n\n\n\nSupport S3 Object Storage Key Prefix\n\n\n\n\n\nSet the \nS3_UPLOAD_OBJECT_KEY_PREFIX\n job property or \ns3.upload.object.key.prefix\n system property to add a prefix to\n object keys when the Workflow Manager uploads objects to the S3 object store. This affects the JSON output object,\n artifacts, markup files, and derivative media.\n\n\nSpecifically, the Workflow Manager will upload objects to\n \n///\n.\n\n\nFor example, if you wish to add \"work/\" to the object key, then set \nS3_UPLOAD_OBJECT_KEY_PREFIX=work/\n.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#1526\n] Allow markup to display more than 10 characters in the text\n part of the label\n\n\n[\n#1527\n] Enable the Workflow Manager to select the middle detection\n as the exemplar\n\n\n[\n#1566\n] Make \nAUTO_ROTATE\n and \nAUTO_FLIP\n true by default\n\n\n[\n#1569\n] Modify C++ and Python component executor to automatically\n add the job name to log messages\n\n\n[\n#1621\n] Make S3 object keys used for upload configurable\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1602\n] Update Workflow Manager to use Spring Boot\n\n\n[\n#1631\n] Update byte-buddy, Mockito, and Hibernate versions to\n resolve build issue. 
Most notably, update Hibernate to 5.6.14.\n\n\n[\n#1632\n] Update ActiveMQ to 5.17.3\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1581\n] Don't change track start and end frame when\n \nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n is disabled\n\n\n[\n#1595\n] Work around how Ubuntu only recognizes certificate files\n that end in .crt\n\n\n[\n#1610\n] Prevent premature pipeline creation when using web UI\n\n\n[\n#1612\n] At startup, prevent Workflow Manager from consuming from\n queues before purging them\n\n\n\n\nOpenMPF 7.0.x\n\n\n7.0.3: September 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1561\n] Fix logging for Python components when running through CLI\n runner\n\n\n[\n#1583\n] Can now properly view media while job is in progress\n\n\n[\n#1587\n] Fix bugs in amq_detection_component's use of select\n\n\n\n\n7.0.2: August 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1562\n] Fix bug where an ffmpeg change prevented detecting video\n rotation\n\n\n\n\n7.0.0: July 2022\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the Development Environment Guide by replacing steps for CentOS 7 with Ubuntu 20.04.\n\n\nAdded the Derivative Media Guide.\n\n\nUpdated the Batch Component APIs with revised error codes.\n\n\nUpdated the Python Batch Component API and Python base Docker image README with instructions for\n using \npyproject.toml\n and \nsetup.cfg\n.\n\n\nUpdated the Admin Guide and User Guide with images that show the new TiesDb and Callback columns in the job status UI.\n\n\nUpdated the REST API with the \npipelineDefinition\n, \nframeRanges\n, and \ntimeRanges\n fields now supported by the\n \n[POST] /rest/jobs\n endpoint.\n\n\nUpdated the OcvYoloDetection component README with information on using the NVIDIA Triton inference server.\n\n\nUpdated the Markup Guide with \nMARKUP_ANIMATION_ENABLED\n and \nMARKUP_LABELS_TRACK_INDEX_ENABLED\n.\n\n\nUpdated the Contributor Guide with new steps for generating documentation.\n\n\n\n\nTransition from CentOS 7 to Ubuntu 20.04\n\n\n\n\n\nAll the Docker images that previously used CentOS 7 as a base now use Ubuntu 20.04.\n\n\nWe decided not to use CentOS 8, which is a version of CentOS Stream, due to concerns about stability.\n\n\nAlso, Ubuntu is a very common OS within the AI and ML space, and has significant community support.\n\n\n\n\nUse Job Id that Enables Load Balancing\n\n\n\n\n\nThe Workflow Manager can now optionally accept job ids of the form \n-\n through\n the REST endpoints, where \n\n is the same as the shorter id used in previous releases. The\n \n-\n prefix enables better tracking and separation of jobs run across multiple\n Workflow Manager instances in a cluster.\n\n\nThe prefix can be set in the \ndocker-compose.yml\n file by assigning \n{{.Node.Hostname}}\n to the \nNODE_HOSTNAME\n\n environment variable for the Workflow Manager service, or hard-coding \nNODE_HOSTNAME\n to the desired hostname.\n\n\nThe shorter version of the id can still be used in REST requests, but the longer id will always be returned by the\n Workflow Manager when responding to those requests.\n\n\nThe shorter id will always be used internally by the Workflow Manager, meaning the job status web UI and log messages\n will all use the shorter job id. \n\n\n\n\nSupport for Derivative Media\n\n\n\n\n\nThe TikaImageDetection component now returns \nMEDIA\n tracks instead of \nIMAGE\n tracks when extracting images from\n documents, such as PDFs, Word documents, and PowerPoint slides. 
The document is considered the \"source\", or \"parent\",\n media, and the images are considered the \"derivative\", or \"child\", media.\n\n\nActions can now be configured with \nSOURCE_MEDIA_ONLY=true\n or \nDERIVATIVE_MEDIA_ONLY=true\n, which will result in only\n performing the action on that kind of media. Feed forward can still be used to pass track information from one stage\n to another. The tracks will skip the stages (actions) that don't apply.\n\n\nThis enables complex pipelines like one that extracts text from a PDF using TikaTextDetection, OCRs embedded images\n using EastTextDetection and TesseractOCRTextDetection, and runs all of the \nTEXT\n tracks through KeywordTagging.\n\n\nAdded the following pipelines to the TikaImageDetection component:\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR AND KEYWORD TAGGING PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR (WITH EAST REGIONS) AND KEYWORD TAGGING PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR (WITH EAST REGIONS) AND KEYWORD TAGGING AND MARKUP PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA OCV FACE PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA OCV FACE AND MARKUP PIPELINE\n\n\n\n\n\n\n\n\nReport when Job Callbacks and TiesDb POSTs Fail\n\n\n\n\n\nThe job status UI displays two new columns, one that indicates the status of posting to TiesDB, and one that indicates\n the status of posting the job callback to the job producer.\n\n\nAdditionally, the \n[GET] /rest/jobs/{id}\n endpoint now includes a \ntiesDbStatus\n and \ncallbackStatus\n field.\n\n\nNote that, by design, the JSON output itself does not contain these statuses.\n\n\n\n\nAllow Pipelines to be Specified in a Job Request\n\n\n\n\n\nOptionally, the \npipelineDefinition\n field can be provided instead of the \npipelineName\n field when using the\n \n[POST] /rest/jobs\n endpoint in order to specify a pipeline on the fly for that specific job run. It will not be saved\n for later reuse.\n\n\nThe format of the pipeline definition is similar to that in a \ndescriptor.json\n file, with separate sections for\n defining \ntasks\n and \nactions\n. Pre-existing tasks and actions known to the Workflow Manager can be specified in the\n definition. They do not need to be defined again.\n\n\nThis feature is a convenient alternative to creating persistent definitions using the \n[POST] /rest/pipelines\n,\n \n[POST] /rest/tasks\n, and \n[POST] /rest/actions\n endpoints. For example, this feature could be used to quickly add or\n remove a motion preprocessing stage from a pipeline.\n\n\n\n\nAllow User-Specified Segment Boundaries\n\n\n\n\n\nOptionally, multiple \nframeRanges\n and/or \ntimeRanges\n fields can be provided when using the \n[POST] /rest/jobs\n\n endpoint in order to manually specify segment boundaries. These values will override the normal segmenting behavior of\n the Workflow Manager.\n\n\nNote that overlapping ranges will be combined and large ranges may still be split up according to the value of\n \nTARGET_SEGMENT_LENGTH\n and \nVFR_TARGET_SEGMENT_LENGTH\n.\n\n\nNote that \nframeRanges\n is specified using the frame number and \ntimeRanges\n is specified in milliseconds.\n\n\n\n\nAdd Triton Inference Server support to YOLO component\n\n\n\n\n\nThe OcvYoloDetection component now supports the ability to send requests to an NVIDIA Triton Inference Server by\n setting \nENABLE_TRITON=true\n. 
If set to false, the component will process jobs using OpenCV DNN on the local host\n running the Docker service, as per normal.\n\n\nBy default \nTRITON_SERVER=ocv-yolo-detection-server:8001\n, which\n corresponds to the \nocv-yolo-detection-server\n entry in your \ndocker-compose.yml\n file. Refer to the example entry\n within \ndocker-compose.components.yml\n\n . That entry uses a pre-built and pre-configured version of the Triton server.\n\n\nThe Triton server runs the YOLOv4 model within the TensorRT framework, which performs a warmup operation when the\n server starts up to determine which optimizations to enable for the available GPU hardware. \n*.engine\n files are\n generated within the \nyolo_engine_file\n Docker volume for later reuse.\n\n\nTo further improve inferencing speed, shared memory can be configured between the \nocv-yolo-detection\n client service and the\n \nocv-yolo-detection-server\n service if they are running on the same host. Set \nTRITON_USE_SHM=true\n and configure the\n server with a \n/dev/shm:/dev/shm\n Docker volume.\n\n\nDepending on the available GPU hardware, the Triton server can achieve speeds that are 5x faster than OpenCV DNN with\n tracking enabled, no shared memory, and nearly 9x faster with tracking disabled, with shared memory. Our tests used a\n single RTX 2080 GPU.\n\n\n\n\nRemoved Unused and Redundant Error Codes\n\n\n\n\n\nThe error codes shown on the left were redundant and replaced with the corresponding error codes on the right:\n\n\n\n\n\n\n\n\n\n\nOld Error Code\n\n\nNew Error Code\n\n\n\n\n\n\n\n\n\n\nMPF_IMAGE_READ_ERROR\n\n\nMPF_COULD_NOT_READ_MEDIA\n\n\n\n\n\n\nMPF_BOUNDING_BOX_SIZE_ERROR\n\n\nMPF_BAD_FRAME_SIZE\n\n\n\n\n\n\nMPF_JOB_PROPERTY_IS_NOT_INT\n\n\nMPF_INVALID_PROPERTY\n\n\n\n\n\n\nMPF_JOB_PROPERTY_IS_NOT_FLOAT\n\n\nMPF_INVALID_PROPERTY\n\n\n\n\n\n\nMPF_INVALID_FRAME_INTERVAL\n\n\nMPF_INVALID_PROPERTY\n\n\n\n\n\n\nMPF_DETECTION_TRACKING_FAILED\n\n\nMPF_OTHER_DETECTION_ERROR_TYPE\n\n\n\n\n\n\n\n\nAlso, the following error codes are no longer being used and have been removed:\n\n\n\n\nMPF_UNRECOGNIZED_DATA_TYPE\n\n\nAll media types can now be processed since we support the \nUNKNOWN\n (a.k.a. \"generic\")\n media type\n\n\n\n\n\n\nMPF_INVALID_DATAFILE_URI\n\n\nThe Workflow Manager will reject a job with an invalid media URI before it gets to a\n component\n\n\n\n\n\n\nMPF_INVALID_START_FRAME\n\n\nMPF_INVALID_STOP_FRAME\n\n\nMPF_INVALID_ROTATION\n\n\n\n\nMarkup Improvements\n\n\n\n\n\nBy default, the Markup component draws bounding boxes to fill in the gaps between detections in each track by\n interpolating the box size and position. This can now be disabled by setting the job property\n \nMARKUP_ANIMATION_ENABLED=false\n, or the system property \nmarkup.video.animation.enabled=false\n.\n Disabling this feature can be useful to prevent floating boxes from cluttering the marked-up frames.\n\n\nThe Markup component will now start each bounding box label with a track index like \n[0]\n that can be used to\n correlate the box with the track in the JSON output object. The JSON output now contains an \nindex\n field for every\n track, relative to each piece of media, that is simply an integer that starts at 0 and counts upward. 
This can be\n disabled by setting the job property \nMARKUP_LABELS_TRACK_INDEX_ENABLED=false\n, or the system property\n \nmarkup.labels.track.index.enabled=false\n.\n\n\n\n\nChanges to JSON Output Object\n\n\n\n\n\nComponents that generate \nMEDIA\n tracks will result in new derivative \nmedia\n entries in the JSON output file. This\n means it's possible to provide a single piece of media as an input and have more than one \nmedia\n entry in the JSON\n output. The output will always include the original media.\n\n\nEach \nmedia\n entry in the JSON output now contains a \nparentMediaId\n in addition to the \nmediaId\n. The \nparentMediaId\n\n for original source media will always be set to -1; otherwise, for derivative media, the \nparentMediaId\n is set to the\n \nmediaId\n of the source media from which the child media was derived.\n\n\nEach \nmedia\n entry also contains a new \nframeRanges\n and \ntimeRanges\n collection.\n\n\nThe JSON output file also contains a new \nindex\n field for every track, relative to each piece of media.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#792\n] Perform detection on images extracted from PDFs\n\n\n[\n#1283\n] Add user-specified segment boundaries\n\n\n[\n#1374\n] Transition from CentOS 7 to Ubuntu 20.04\n\n\n[\n#1396\n] Report when job callbacks and TiesDb POSTs fail\n\n\n[\n#1398\n] Add Triton Inference Server support to YOLO component\n\n\n[\n#1428\n] Allow pipelines to be specified in a job request\n\n\n[\n#1454\n] Transition from Clair scans to Trivy scans\n\n\n[\n#1485\n] Use \npyproject.toml\n and \nsetup.cfg\n instead of \nsetup.py\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#803\n] Update Tika Image Detection to generate one track per piece of extracted media\n\n\n[\n#808\n] Update Tika Text Detection component to not use leading zeros for \nPAGE_NUM\n\n\n[\n#1105\n] Remove dependency on QT from C++ SDK\n\n\n[\n#1282\n] Use job id that enables load balancing\n\n\n[\n#1303\n] Update Tika Image Detection to return \nMEDIA\n tracks\n\n\n[\n#1319\n] Review existing error codes and remove unused or redundant error codes\n\n\n[\n#1384\n] Update Apache Tika to 2.4.1 for TikaImageDetection and TikaTextDetection Components\n\n\n[\n#1436\n] CLI Runner should initialize a component once when handling multiple jobs\n\n\n[\n#1465\n] Remove YoloV3 support from OcvYoloDetection component\n\n\n[\n#1513\n] Update to Spring 5.3.18\n\n\n[\n#1528\n] CLI runner should also sort by startOffsetTime\n\n\n[\n#1540\n] Upgrade to Java 17\n\n\n[\n#1549\n] Allow markup animation to be disabled\n\n\n[\n#1550\n] Add track index to markup\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1372\n] Tika Image Detection no longer misses images in PowerPoint and Word documents\n\n\n[\n#1449\n] Simon data is now refreshed when clicking the Processes tab\n\n\n[\n#1495\n] Fix bug where invalid CSRF token found for \n/workflow-manager/login\n\n\n\n\nOpenMPF 6.3.x\n\n\n6.3.14: May 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1530\n] Fix S3 code memory leak\n\n\n\n\n6.3.12: April 2022\n\n\n\nUpdates\n\n\n\n\n\n[\n#1519\n] Upgrade to OpenCV 4.5.5\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1520\n] S3 code now retries on most 400 errors\n\n\n\n\n6.3.11: April 2022\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the Object Storage Guide with \nS3_SESSION_TOKEN\n, \nS3_USE_VIRTUAL_HOST\n, \nS3_HOST\n, and \nS3_REGION\n.\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1496\n] Update S3 client code\n\n\n[\n#1514\n] Update Tomcat to 8.5.78\n\n\n\n\n6.3.10: March 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1486\n] Fix bug where \nMOVING\n was being added to immutable map 
twice\n\n\n[\n#1498\n] Can now provide media metadata when frameTimeInfo is missing\n\n\n[\n#1501\n] MPFVideoCapture now properly reads frames from videos with rotation metadata\n\n\n[\n#1502\n] Detections with \nHORIZONTAL_FLIP\n will no longer result in illformed detections and incorrectly padded regions\n\n\n[\n#1503\n] Videos with rotation metadata will no longer result in corrupt markup\n\n\n\n\n6.3.8: January 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1469\n] \nTENSORFLOW VEHICLE COLOR DETECTION\n pipelines no longer refer to YOLO tasks that no longer exist\n\n\n\n\n6.3.7: January 2022\n\n\n\nUpdates\n\n\n\n\n\n[\n#1466\n] Upgrade log4j to 2.17.1\n\n\n\n\n6.3.6: December 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1457\n] Upgrade log4j to 2.16.0\n\n\n\n\n6.3.5: November 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1451\n] Make concurrent callbacks configurable\n\n\n\n\n6.3.4: November 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1441\n] Modify AdminStatisticsController so that it doesn't hold all jobs in memory at once\n\n\n\n\n6.3.3: October 2021\n\n\n\nFeatures\n\n\n\n\n\n[\n#1425\n] Make protobuf size limit configurable\n\n\n\n\n6.3.2: October 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1420\n] Sphinx component no longer omits audio at end of video files\n\n\n[\n#1422\n] Media inspection now correctly calculates milliseconds from ffmpeg duration\n\n\n\n\n6.3.1: September 2021\n\n\n\nFeatures\n\n\n\n\n\n[\n#1404\n] Improve OcvDnnDetection vehicle color detection\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1251\n] Add version to JSON output object\n\n\n[\n#1272\n] Update Keyword Tagging to work on multiple inputs\n\n\n[\n#1350\n] Retire old components to the graveyard: DlibFaceDetection, DarknetDetection, and OcvPersonDetection\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1010\n] \nmpf.output.objects.enabled\n now behaves as expected\n\n\n[\n#1271\n] Azure speech component no longer omits audio at end of video files\n\n\n[\n#1389\n] NLP text correction component now properly reads the value of \nFULL_TEXT_CORRECTION_OUTPUT\n\n\n[\n#1403\n] Corrected README to state that the Azure Speech Component doesn't support v2 of the API\n\n\n[\n#1406\n] Speech detections in videos are no longer dropped if using keyword tagging\n\n\n[\n#1411\n] Exception no longer occurs when adding \nSHRUNK_TO_NOTHING=TRUE\n to an immutable map in multiple pipeline stages\n\n\n[\n#1413\n] Speech detections in videos are no longer dropped if using translation\n\n\n\n\n6.3.0: September 2021\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the API documents, Development Environment Guide, Node Guide, Install Guide, User Guide, Admin Guide, and\n others to clarify the difference between Docker and non-Docker behaviors.\n\n\nTransformed Packaging and Registering a Component document into Component Descriptor Reference.\n\n\nSplit Media Segmentation Guide from User Guide.\n\n\nUpdated and renamed the Workflow Manager document to Workflow Manager Architecture.\n\n\nUpdated the various Docker guides to clarify the difference between building Docker images from scratch versus\n building them using pre-built base images on Docker Hub, emphasizing the latter.\n\n\nUpdated the Contributor Guide to document the hotfix pull request process.\n\n\n\n\nTiesDb Integration\n\n\n\n\n\nTiesDb is a PostgreSQL DB with a RESTful API that stores media metadata. The metadata entries are queried using the\n hash (sha256, md5) of the media file. TIES stands\n for \nTriage Import Export Schema\n. TiesDb is deployed and managed externally to\n OpenMPF. 
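As an illustration of the hash-keyed lookup described above, a sketch that asks TiesDb for the supplementals attached to a media hash, assuming the /api/db/supplementals?sha256Hash= URL layout given later in this section; the host, hash, and response handling are placeholders:

```python
import requests

TIES_DB_URL = "https://tiesdb.example.com"  # placeholder host
media_sha256 = "0" * 64                     # placeholder media hash

# Ask TiesDb for the assertions recorded against this piece of media.
response = requests.get(
    f"{TIES_DB_URL}/api/db/supplementals",
    params={"sha256Hash": media_sha256},
)
response.raise_for_status()
for supplemental in response.json():
    print(supplemental)
```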
For more information please contact us.\n\n\nWhen a job completes, OpenMPF can post assertions to media entries that exist in TiesDb. In general, one assertion is\n generated for each algorithm run on a piece of media. It contains the job status, algorithm name, detection\n type (\nFACE\n, \nTEXT\n, \nMOTION\n, etc.), and number of tracks generated, as well as a link to the full JSON output\n object.\n\n\nEach assertion serves as a lasting record so that job producers may first check TiesDb to see if an algorithm was run\n on a piece of media before submitting the same job to OpenMPF again.\n\n\nTo enable TiesDb support, set the \nTIES_DB_URL\n job property or \nties.db.url\n system property to\n the \n://:\n part of the URL. The Workflow Manager will append\n the \n/api/db/supplementals?sha256Hash=\n part. Here is an example of a TiesDb POST:\n\n\n\n\n{\n \"dataObject\": {\n \"sha256OutputHash\": \"1f8f2a8b2f5178765dd4a2e952f97f5037c290ee8d011cd7e92fb8f57bc75f17\",\n \"outputType\": \"FACE\",\n \"algorithm\": \"FACECV\",\n \"processDate\": \"2021-09-09T21:37:30.516-04:00\",\n \"pipeline\": \"OCV FACE DETECTION PIPELINE\",\n \"outputUri\": \"file:///home/mpf/git/openmpf-projects/openmpf/trunk/install/share/output-objects/1284/detection.json\",\n \"jobStatus\": \"COMPLETE\",\n \"jobId\": 1284,\n \"systemVersion\": \"6.3\",\n \"trackCount\": 1,\n \"systemHostname\": \"openmpf-master\"\n },\n \"system\": \"OpenMPF\",\n \"securityTag\": \"UNCLASSIFIED\",\n \"informationType\": \"OpenMPF FACE\",\n \"assertionId\": \"4874829f666d79881f7803207c7359dc781b97d2c68b471136bf7235a397c5cd\"\n}\n\n\n\nNatural Language Processing (NLP) Text Correction Component\n\n\n\n\n\nThis component utilizes the \nCyHunspell\n library, which is a Python\n port of the \nHunspell\n spell-checking library, to perform post-processing\n correction of OCR text. In general, it's intended to be used in a pipeline after a component like\n TesseractOCRTextDetection that generates \nTEXT\n tracks. These tracks are then fed-forward into NlpTextCorrection,\n which will add a \nCORRECTED TEXT\n property to the existing tracks.\n The \nTESSERACT OCR TEXT DETECTION WITH NLP TEXT CORRECTION PIPELINE\n performs this behavior. The component can also\n run on its own to process plain text files. Refer to\n the \nREADME\n for details.\n\n\n\n\nAzure Cognitive Services (ACS) Read Component\n\n\n\n\n\nThis component utilizes\n the \nAzure Cognitive Services Read Detection REST endpoint\n\n to extract formatted text from documents (PDFs), images, and videos. 
#### Natural Language Processing (NLP) Text Correction Component

This component utilizes the CyHunspell library, which is a Python port of the Hunspell spell-checking library, to perform post-processing correction of OCR text. In general, it's intended to be used in a pipeline after a component like TesseractOCRTextDetection that generates `TEXT` tracks. These tracks are then fed forward into NlpTextCorrection, which adds a `CORRECTED TEXT` property to the existing tracks. The `TESSERACT OCR TEXT DETECTION WITH NLP TEXT CORRECTION PIPELINE` performs this behavior. The component can also run on its own to process plain text files. Refer to the README for details.

#### Azure Cognitive Services (ACS) Read Component

This component utilizes the Azure Cognitive Services Read Detection REST endpoint to extract formatted text from documents (PDFs), images, and videos. Refer to the README for details.

#### Updates

- [#1151] Now supports the `IN_PROGRESS_WITH_WARNINGS` status
- [#1234] Now sorts JSON output object media by media id
- [#1341] Added job id to all batch-job-specific Workflow Manager log messages
- [#1349] Improved reporting and recording of job status
- [#1353] Updated the Workflow Manager to remove and warn about zero-size detections
- [#1382] Updated Tika version to 1.27 for the TikaImageDetection and TikaTextDetection components
- [#1387] Markup can now be configured in a component's `descriptor.json`

#### Bug Fixes

- [#1080] Batch jobs are no longer prematurely set to 100% completion during artifact extraction
- [#1106] When a job ends in `ERROR` or `CANCELLED_BY_SHUTDOWN`, the job status UI now shows an End Date
- [#1158] JSON output object URI no longer changes when a callback fails
- [#1317] TikaTextDetection no longer generates the first PDF track at `PAGE_NUM` 2
- [#1337] Now using `MPF_BAD_FRAME_SIZE` instead of `MPF_DETECTION_FAILED` for the OpenCV empty/resize exception
- [#1359] Image detection tracks no longer have `endOffsetFrameInclusive` set to 1
- [#1373] When uploading large files through the Workflow Manager web UI, more than the first 865032704 bytes now get written
- [#1379] TikaImageDetection component now avoids conflicts by no longer using the same path when extracting images for jobs with multiple pieces of media
- [#1386] FeedForwardFrameCropper in the Python SDK now handles negative coordinates properly
- [#1391] If a job is configured to upload markup and markup fails, the job no longer gets stuck

#### Known Issues

- [#1372] TikaImageDetection misses images in PowerPoint and Word documents
- [#1389] NlpTextCorrection does not properly read the value of `FULL_TEXT_CORRECTION_OUTPUT`

## OpenMPF 6.2.x

### 6.2.5: July 2021

#### Updates

- [#1367] Enable cross-origin resource sharing on the Workflow Manager

### 6.2.4: June 2021

#### Bug Fixes

- [#1356] AzureSpeech now properly reports when media is missing an audio stream
- [#1357] AzureSpeech now handles the case where a speaker id is not present

### 6.2.2: June 2021

#### Updates

- [#1333] Combine media name and job id into one WFM log line
- [#1336] Remove duplicate "Setting status of job to COMPLETE" Workflow Manager log line and other improvements
- [#1338] Update OpenCV DNN Detection component to optionally use feed-forward confidence values

#### Bug Fixes

- [#1237] Fixed jQuery DataTables bug: "int parameter 'draw' is present but cannot be translated into a null value"
- [#1254] Jobs table no longer flickers when polling is enabled and the search box is used
- [#1308] Prevent OCV YOLO Tracking from generating zero-sized detections
- [#1313] Fix JSON output object timestamps for variable frame rate videos

### 6.2.1: May 2021

#### Updates

- [#1330] Return error codes for `models_ini_parser.py` exceptions

#### Bug Fixes

- [#1331] Decoding certain HEIC images no longer causes the Workflow Manager to segfault

### 6.2.0: May 2021

#### Tesseract OCR Text Detection Component Support for Videos

The component can now process videos in addition to images and PDFs.
Each video frame is processed sequentially. The `MAX_PARALLEL_SCRIPT_THREADS` property determines how many threads to use to process each frame, one thread per language or script.

Note that for videos without much text, it may be faster to disable threading by setting `MAX_PARALLEL_SCRIPT_THREADS=1`. This will allow the component to reuse TessAPI instances instead of creating new ones for every frame. Please refer to the Known Issues section.

Resolved issues: #1285

#### Updates

- [#1086] Added support for the `COULD_NOT_OPEN_MEDIA` and `COULD_NOT_READ_MEDIA` error types
- [#1159] Split `IssueCodes.REMOTE_STORAGE` into `REMOTE_STORAGE_DOWNLOAD` and `REMOTE_STORAGE_UPLOAD`
- [#1250] Modified `/rest/jobs/{id}` to include the job's media
- [#1312] Created the `NETWORK_ERROR` error code for when a component can't connect to an external server. Updated the Python HTTP retry code to return `NETWORK_ERROR`. This affects the Azure components.

#### Known Issues

- [#1008] Use global TessAPI instances with parallel processing

## OpenMPF 6.1.x

### 6.1.6: May 2021

#### Handle Variable Frame Rate Videos

The Workflow Manager will attempt to detect whether a video is constant frame rate (CFR) or variable frame rate (VFR) during media inspection. If no determination can be made, it will default to VFR behavior. If CFR, the JSON output object will have a `HAS_CONSTANT_FRAME_RATE=true` property in the `mediaMetadata` field.

When `MPFVideoCapture` handles a CFR video, it will use OpenCV to set the frame position, unless the position is within 16 frames of the current position, in which case it will iteratively use OpenCV `grab()` to advance to the desired frame.

When `MPFVideoCapture` handles a VFR video, it will always iteratively use OpenCV `grab()` to advance to the desired frame because setting the frame position directly has been shown to not work correctly on VFR videos.

When a video is split into multiple segments, `MPFVideoCapture` must iteratively use `grab()` to advance from frame 0 to the start of the segment. This introduces performance overhead. To mitigate this we recommend using larger video segments than those used for CFR videos.

In addition to the existing `TARGET_SEGMENT_LENGTH` and `MIN_SEGMENT_LENGTH` job properties (`detection.segment.target.length` and `detection.segment.minimum.length` system properties) for CFR videos, the Workflow Manager now supports the `VFR_TARGET_SEGMENT_LENGTH` and `VFR_MIN_SEGMENT_LENGTH` job properties (`detection.vfr.segment.target.length` and `detection.vfr.segment.minimum.length` system properties) for VFR videos. A sketch of setting these properties on a job is shown below.

Note that the timestamps associated with tracks and detections in a VFR video may be wrong. Please refer to the Known Issues section.

Resolved issues: #1307
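As an illustration of the properties above, here is a minimal sketch of submitting a job with VFR segment lengths via the Workflow Manager REST API. The host, default development credentials, pipeline name, media URI, and property values are placeholders, and the exact request fields are assumptions based on the job-creation endpoint of this era, not a definitive recipe:

```python
import requests

# Hypothetical Workflow Manager host.
WFM_URL = 'http://localhost:8080/workflow-manager'

job_request = {
    'pipelineName': 'OCV FACE DETECTION PIPELINE',  # any registered pipeline
    'media': [{'mediaUri': 'file:///opt/mpf/share/samples/vfr-video.mp4'}],
    'jobProperties': {
        # Larger segments reduce the overhead of iteratively calling grab()
        # from frame 0 to the start of each segment on VFR video.
        'VFR_TARGET_SEGMENT_LENGTH': '1000',
        'VFR_MIN_SEGMENT_LENGTH': '100'
    }
}

response = requests.post(WFM_URL + '/rest/jobs', json=job_request,
                         auth=('mpf', 'mpf123'))
response.raise_for_status()
print('Created job:', response.json())
```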
#### Updates

- [#1287] Updated the Tika Text Detection component to break up large chunks of text. The component now generates tracks with both a `PAGE_NUM` property and a `SECTION_NUM` property. Please refer to the README.

#### Known Issues

- [#1313] Incorrect JSON output object timestamps for variable frame rate videos
- [#1317] Tika Text Detection component generates the first PDF track at `PAGE_NUM` 2

### 6.1.5: April 2021

#### Updates

- [#1300] Parallelized S3 artifact upload. Use the `detection.artifact.extraction.parallel.upload.count` system property to configure the number of parallel uploads.

### 6.1.4: April 2021

#### Updates

- [#1299] Improved artifact extraction performance when there is no rotation or flip

### 6.1.3: April 2021

#### Updates

- [#1295] Improved artifact extraction and markup JNI memory utilization
- [#1297] Limited Workflow Manager IO threads to a reasonable number

#### Bug Fixes

- [#1296] Fixed ActiveMQ job priorities

### 6.1.2: April 2021

#### Updates

- [#1294] Limited ffmpeg threads to a reasonable number

### 6.1.1: April 2021

#### Bug Fixes

- [#1292] Don't skip artifact extraction for failed media

### 6.1.0: April 2021

#### OpenMPF Command Line Runner

The Command Line Runner allows users to run jobs with a single component without the Workflow Manager. It outputs results in a JSON structure that is a subset of the regular OpenMPF output. It only supports C++ and Python components. See the README for more information.

#### C++ Batch Component API

Component code should no longer configure Log4CXX. The component executor now handles configuring Log4CXX. Component code should call `log4cxx::Logger::getLogger("<component-name>")` to get access to the logger. Calls to `log4cxx::xml::DOMConfigurator::configure(logconfig_file);` should be removed.

#### Python Batch Component API

Component code should no longer configure logging. The component executor now handles configuring logging. Calls to `mpf.configure_logging` should be replaced with `logging.getLogger('<component-name>')`.

#### Docker Component Base Images

- In order to support running a component through the CLI runner, C++ component developers should set the `LD_LIBRARY_PATH` environment variable in the final stage of their Dockerfiles. It should generally be set like: `ENV LD_LIBRARY_PATH $PLUGINS_DIR/<component-name>/lib`.
- Because of the logging changes mentioned above, components no longer need to set the `COMPONENT_LOG_NAME` environment variable in their Dockerfiles.
- Added the `openmpf_python_executor_ssb` base image. It can be used instead of `openmpf_python_component_build` and `openmpf_python_executor` to simplify Dockerfiles for Python components that are pure Python and have no build-time dependencies.

#### Label Moving vs. Non-Moving Tracks

The Workflow Manager can now identify whether a track is moving or non-moving. This is determined by calculating the average bounding box for a track by averaging the size and position of all the detections in the track. Then, for each detection in the track, the intersection over union (IoU) is calculated between that detection and the average detection. If the IoU for at least `MOVING_TRACK_MIN_DETECTIONS` number of detections is less than or equal to `MOVING_TRACK_MAX_IOU`, then the track is considered a moving track. A sketch of this computation follows.
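The following is a minimal sketch of the labeling logic described above, written as standalone Python rather than Workflow Manager code. The `(x, y, width, height)` box representation and the threshold values are assumptions for illustration:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x, y, width, height) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(ax, bx)
    iy = max(ay, by)
    ix2 = min(ax + aw, bx + bw)
    iy2 = min(ay + ah, by + bh)
    intersection = max(0, ix2 - ix) * max(0, iy2 - iy)
    union = aw * ah + bw * bh - intersection
    return intersection / union if union else 0.0

def is_moving_track(detections, max_iou, min_detections):
    """Label a track as moving if enough detections diverge from the average box."""
    n = len(detections)
    # Average the position and size of every detection in the track.
    average_box = tuple(sum(values) / n for values in zip(*detections))
    # Count detections whose overlap with the average box is at or below the threshold.
    diverging = sum(1 for det in detections if iou(det, average_box) <= max_iou)
    return diverging >= min_detections

# A stationary object overlaps its average box heavily, so it is non-moving.
track = [(10, 10, 50, 50), (11, 10, 50, 50), (10, 11, 50, 50)]
print(is_moving_track(track, max_iou=0.5, min_detections=2))  # False
```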
Added the following Workflow Manager job properties. These can be set for any video job:

- `MOVING_TRACK_LABELS_ENABLED`: When set to true, attempt to label tracks as either moving or non-moving objects. Each track will have a `MOVING` property set to `TRUE` or `FALSE`.
- `MOVING_TRACKS_ONLY`: When set to true, remove any tracks that were marked as not moving.
- `MOVING_TRACK_MAX_IOU`: The maximum IoU overlap between detection bounding boxes and the average per-track bounding box for objects to be considered moving. The value is expected to be between 0 and 1. Note that the lower the IoU, the more likely the object is moving.
- `MOVING_TRACK_MIN_DETECTIONS`: The minimum number of moving detections for a track to be labeled as moving.

#### Markup Improvements

Users can now watch videos directly in the OpenMPF web UI within the media pop-up dialog for each job. Most modern web browsers support videos encoded in VP9 and H.264. If a video cannot be played, users have the option to download it and play it using a stand-alone media player.

To set the markup encoder, use `MARKUP_VIDEO_ENCODER`. The default encoder has changed from `mjpeg` to `vp9`. As a result, it will take longer to generate marked-up videos, but they will be higher quality and can be viewed in the web UI.

Each bounding box in the marked-up media is now labeled. By default, the label shows the track-level `CLASSIFICATION` and associated confidence value. The information shown in the label can be changed by setting `MARKUP_LABELS_TEXT_PROP_TO_SHOW` and `MARKUP_LABELS_NUMERIC_PROP_TO_SHOW`. To show information for each individual detection, rather than the entire track, set `MARKUP_LABELS_FROM_DETECTIONS=TRUE`.

Exemplar detections in video tracks include a star icon in their label.

Optionally, set `MARKUP_VIDEO_MOVING_OBJECT_ICONS_ENABLED=TRUE` to show icons that represent whether the track is moving or non-moving.

Optionally, set `MARKUP_VIDEO_BOX_SOURCE_ICONS_ENABLED=TRUE` to show icons that represent the source of the detection; for example, whether the box is the result of an algorithm detection, gap fill performed during tracking, or Workflow Manager animation.

Each frame of a marked-up video now has a frame number in the upper right corner.

Please refer to the Markup Guide for the complete set of markup properties, icon definitions, and encoder considerations.

#### Updates

- [#1181] Updated the Tesseract OCR Text Detection component from Tesseract version 4.0.0 to 4.1.1
- [#1232] Updated the Azure Speech Detection component from Azure Batch Transcription version 2.0 to 3.0

#### Bug Fixes

- [#1187] EXIF orientation is now preserved during markup and artifact extraction
- [#1257] Updated `OUTPUT_LAST_TASK_ONLY` to work on all media types

## OpenMPF 6.0.x

### 6.0.11: March 2021

#### Bug Fixes

- [#1284] Updated the Azure Translation component to count emoji as 2 characters

### 6.0.10: March 2021

#### Updates

- [#1270] The Azure Cognitive Services components now retry HTTP requests

### 6.0.9: March 2021

#### Bug Fixes

- [#1273] Setting `TRANSLATION` to the empty string no longer prevents Keyword Tagging

### 6.0.6: March 2021

#### Bug Fixes

- [#1265] Updated the Tika Text Detection component to handle spreadsheets
- [#1268] Updated the Tika Text Detection component to remove metadata

### 6.0.5: February 2021
#### Bug Fixes

- [#1266] The Azure Translation component now handles the final segment correctly when guessing sentence breaks

### 6.0.4: February 2021

#### Updates

- [#1264] Updated the Azure Translation component to handle large amounts of text
- [#1269] AzureTranslation no longer tries to translate text that is already in the `TO_LANGUAGE`

### 6.0.3: February 2021

#### OpenCV YOLO Detection Component

This new component utilizes the OpenCV Deep Neural Networks (DNN) framework to detect and classify objects in images and videos using Darknet YOLOv4 models trained on the COCO dataset. It supports both CPU and GPU modes of operation. Tracking is performed using a combination of intersection over union, pixel difference after Fast Fourier Transform (FFT) phase correlation, Kalman filtering, and OpenCV MOSSE tracking. Refer to the README for details.

### 6.0.2: January 2021

#### Bug Fixes

- [#1249] FFmpeg no longer reports different frame counts for the same piece of media

### 6.0.1: December 2020

#### Bug Fixes

- [#1238] The JSON output object is now generated when remote media cannot be downloaded.

### 6.0.0: December 2020

#### Upgrade to OpenCV 4.5.0

Updated the core framework and components from OpenCV 3.4.7 to OpenCV 4.5.0.

OpenCV is now built with CUDA support, including cuDNN (CUDA Deep Neural Network library) and cuBLAS (CUDA Basic Linear Algebra Subroutines library). All C++ components that use the base C++ builder and executor Docker images have CUDA support built in, giving developers the option to make use of it.

Added GPU support to the OcvDnnDetection component.

#### Azure Cognitive Services (ACS) Translation Component

This new component utilizes the Azure Cognitive Services Translator REST endpoint to translate text from one language (locale) to another. Generally, it's intended to operate on feed-forward tracks that contain detections with `TEXT` and `TRANSCRIPT` properties. It can also operate on plain text file inputs. Refer to the README for details.

#### Interoperability Package

Added an `algorithm` field to the element that describes a collection of tracks generated by an action in the JSON output object. For example:

```json
"output": {
    "FACE": [{
        "source": "+#MOG MOTION DETECTION PREPROCESSOR ACTION#OCV FACE DETECTION ACTION",
        "algorithm": "FACECV",
        "tracks": [{ ... }],
        ...
    },
```

#### Merge Tasks in JSON Output Object

The output of two tasks in the JSON output object can be merged by setting the `OUTPUT_MERGE_WITH_PREVIOUS_TASK` property to true. This is a Workflow Manager property and can be set on any task in any pipeline, although it has no effect when set on the first task or the Markup task.

When the output of two tasks is merged, the tracks for the previous task will not be shown in the JSON output object, and no artifacts are generated for it. The task will be listed under `TRACKS MERGED`, if it's not already listed under `TRACKS SUPPRESSED` due to the `mpf.output.objects.last.task.only` system property setting or the `OUTPUT_LAST_TASK_ONLY` property.
The tracks associated with the second task will inherit the detection type and algorithm of the previous task.

For example, the `TESSERACT OCR TEXT DETECTION WITH KEYWORD TAGGING PIPELINE` is defined as the `TESSERACT OCR TEXT DETECTION TASK` followed by the `KEYWORD TAGGING (WITH FF REGION) TASK`. The second task sets `OUTPUT_MERGE_WITH_PREVIOUS_TASK` to true. The resulting JSON output object contains one set of keyword-tagged OCR tracks that have the `TEXT` detection type and `TESSERACTOCR` algorithm (both inherited from the `TESSERACT OCR TEXT DETECTION TASK`):

```json
"output": {
    "TRACKS MERGED": [{
        "source": "+#TESSERACT OCR TEXT DETECTION ACTION",
        "algorithm": "TESSERACTOCR"
    }],
    "TEXT": [{
        "source": "+#TESSERACT OCR TEXT DETECTION ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION",
        "algorithm": "TESSERACTOCR",
        "tracks": [{
            "type": "TEXT",
            "trackProperties": {
                "TAGS": "ANIMAL",
                "TEXT": "The quick brown fox",
                "TEXT_LANGUAGE": "script/Latin",
                "TRIGGER_WORDS": "fox",
                "TRIGGER_WORDS_OFFSET": "16-18"
            ...
```

Note that you can use the `OUTPUT_MERGE_WITH_PREVIOUS_TASK` setting on multiple tasks. For example, if you set it as a job property it will be applied to all tasks (with the exception of Markup, in which case the task before Markup is used), so you will only get the output of the last task in the pipeline. The last task will inherit the detection type and algorithm of the first task in the pipeline.

#### Tesseract Custom Dictionaries

The Tesseract component Docker image now contains an `/opt/mpf/tessdata_model_updater` binary that you can use to update `*.traineddata` models with a custom dictionary, as well as extract files from existing models. Refer to the DICTIONARIES guide to learn how to use the tool.

In general, legacy `*.traineddata` models are more influenced by words in their dictionary than more modern LSTM `*.traineddata` models. Also, refer to the known issue below.

#### Known Issues

- [#1243] Unpacking a `*.traineddata` model, for example, in order to modify its dictionary, and then repacking it may result in dropping some of the words present in the original dictionary file. This may be due to some kind of compression or filtering. It's unknown what effect this has on OCR results.

## OpenMPF 5.1.x

### 5.1.3: December 2020

#### Setting Properties as Docker Environment Variables

Any property that can be set as a job property can now be set as a Docker environment variable by prefixing it with `MPF_PROP_`. For example, setting the `MPF_PROP_TRTIS_SERVER` environment variable in the `trtis-detection` service in your `docker-compose.yml` file will have the same effect as setting the `TRTIS_SERVER` job property. A sketch of such an entry is shown below.

Properties set in this way will take precedence over all other property types (job, algorithm, media, etc.). It is not possible to change the value of properties set via environment variables at runtime, and therefore they should only be used to specify properties that will not change throughout the entire lifetime of the service.
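For illustration, a docker-compose service entry using this mechanism might look like the following sketch. The image tag and server address are placeholders, not values from an actual deployment:

```yaml
services:
  trtis-detection:
    image: openmpf_trtis_detection:latest   # placeholder image tag
    environment:
      # Same effect as setting the TRTIS_SERVER job property on every job
      # this service processes; cannot be changed at runtime.
      MPF_PROP_TRTIS_SERVER: trtis-server:8001
```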
#### Updates

- The `mpf.output.objects.censored.properties` system property can be used to prevent properties from being shown in JSON output objects. The values of these properties are replaced with a censored placeholder in the output.
- The Azure Speech Detection component now retries without diarization when diarization is not supported by the selected locale.

#### Bug Fixes

- [#1230] The Azure Speech Detection component now uses a UUID for the recording id associated with a piece of media in order to prevent deleting a piece of media while it's in use.

### 5.1.1: December 2020

#### Updates

- Only generate a `FRAME_COUNT` warning when the frame difference is > 1. This can be configured using the `warn.frame.count.diff` system property.

#### Bug Fixes

- [#1209] The Keyword Tagging component now generates video tracks in the JSON output object.
- [#1212] The Keyword Tagging component now preserves the detection bounding box and confidence.

### 5.1.0: November 2020

#### Media Inspection Improvements

The Workflow Manager will now handle video files that don't have a video stream as an `AUDIO` type, and handle video files that don't have a video or audio stream as an `UNKNOWN` type. The JSON output object contains a new `media.mediaType` field that will be set to `VIDEO`, `AUDIO`, `IMAGE`, or `UNKNOWN`.

The Workflow Manager now configures Tika with custom MIME type support. Currently, this enables the detection of the `video/vnd.dlna.mpeg-tts` and `image/jxr` MIME types.

If the Workflow Manager cannot use Tika to determine the media MIME type, then it will fall back to using the Linux `file` command with a custom magicfile.

OpenMPF now supports Apple-optimized PNGs and HEIC images. Refer to the Bug Fixes section below.

#### EAST Text Region Detection Component Improvements

The `TEMPORARY_PADDING` property has been separated into `TEMPORARY_PADDING_X` and `TEMPORARY_PADDING_Y` so that X and Y padding can be configured independently.

The `MERGE_MIN_OVERLAP` property has been renamed to `MERGE_OVERLAP_THRESHOLD` so that setting it to a value of 0 will merge all regions that touch, regardless of how small the amount of overlap.

Refer to the README for details.

#### MPFVideoCapture and MPFImageReader Tool Improvements

These tools now support a `ROTATION_FILL_COLOR` property for setting the fill color for pixels near the corners and edges of frames when performing non-orthogonal rotations. Previously, the color was hardcoded to `BLACK`. That is still the default setting for most components. Now the color can be set to `WHITE`, which is the default setting for the Tesseract component.

These tools now support a `ROTATION_THRESHOLD` property for adjusting the threshold at which the frame transformer performs rotation. Previously, the value was hardcoded to 0.1 degrees. That is still the default value. Rotation is not performed on any `ROTATION` value less than that threshold. The motivation is that some algorithms detect small rotations (for example, on structured text) when there is no rotation. In such cases, rotating the frame results in fewer detections. A sketch of how a component uses these tools is shown below.

OpenMPF now uses FFmpeg when counting video frames. Refer to the Bug Fixes section below.
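Below is a minimal sketch of a Python component using these tools, assuming the `mpf_component_api`/`mpf_component_util` interfaces of the Python Batch Component API. The property handling itself (reading `ROTATION_FILL_COLOR` and `ROTATION_THRESHOLD` from the combined job properties) happens inside the tools, so the component only constructs the reader; the class and property values here are illustrative:

```python
import mpf_component_api as mpf
import mpf_component_util as mpf_util

class MyTextDetectionComponent:
    detection_type = 'TEXT'

    @staticmethod
    def get_detections_from_image(image_job):
        # ImageReader applies ROTATION, HORIZONTAL_FLIP, region cropping, and
        # the ROTATION_FILL_COLOR / ROTATION_THRESHOLD properties for us.
        image_reader = mpf_util.ImageReader(image_job)
        frame = image_reader.get_image()
        # ... run detection on the corrected frame ...
        location = mpf.ImageLocation(0, 0, frame.shape[1], frame.shape[0], -1,
                                     {'TEXT': 'example'})
        # reverse_transform maps coordinates back into original frame space.
        image_reader.reverse_transform(location)
        return [location]
```

Setting `ROTATION_FILL_COLOR=WHITE` as a job or algorithm property changes the fill used during the reader's rotation step; the component code itself does not change.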
#### Azure Cognitive Services (ACS) Form Detection Component

This new component utilizes the Azure Cognitive Services Form Detection REST endpoint to extract formatted text from documents (PDFs) and images. Refer to the README for details.

This component is capable of performing detections using a specified ACS endpoint URL. For example, different endpoints support receipt detection, business card detection, layout analysis, and support for custom models trained with or without labeled data.

This component may output the following detection properties depending on the endpoint, model, and media being processed: `TEXT`, `TABLE_CSV_OUTPUT`, `KEY_VALUE_PAIRS_JSON`, and `DOCUMENT_JSON_FIELDS`.

#### Keyword Tagging Component

This new component performs the same keyword tagging behavior that was previously part of the Tesseract component, but does so on feed-forward tracks that contain detections with `TEXT` and `TRANSCRIPT` properties. Refer to the README for details.

In addition to the Tesseract component, keyword tagging behavior has been removed from the Tika Text component and the ACS OCR component.

Example pipelines that make use of a final Keyword Tagging component stage have been added to the following components:

- Tesseract
- Tika Text
- ACS OCR
- Sphinx
- ACS Speech

#### Optionally Skip Media Inspection

The Workflow Manager will skip media inspection if all of the required media metadata is provided in the job request. The `MEDIA_HASH` and `MIME_TYPE` fields are always required. Depending on the media data type, other fields may be required or optional, as listed below; a sketch of a job request that provides this metadata follows the list.

- Images
    - Required: `FRAME_WIDTH`, `FRAME_HEIGHT`
    - Optional: `HORIZONTAL_FLIP`, `ROTATION`
- Videos
    - Required: `FRAME_WIDTH`, `FRAME_HEIGHT`, `FRAME_COUNT`, `FPS`, `DURATION`
    - Optional: `HORIZONTAL_FLIP`, `ROTATION`
- Audio files
    - Required: `DURATION`
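The following sketch shows what providing that metadata might look like in a REST job request. The field name used to carry the metadata (`metadata` on each media entry) is an assumption, since these notes do not show the exact request shape; the host, credentials, and values are placeholders:

```python
import requests

WFM_URL = 'http://localhost:8080/workflow-manager'  # hypothetical host

job_request = {
    'pipelineName': 'OCV FACE DETECTION PIPELINE',
    'media': [{
        'mediaUri': 'https://example.com/media/video.mp4',
        # With all required fields present, media inspection is skipped.
        'metadata': {
            'MEDIA_HASH': 'd1a0a315...',   # truncated placeholder sha256
            'MIME_TYPE': 'video/mp4',
            'FRAME_WIDTH': '1280',
            'FRAME_HEIGHT': '720',
            'FRAME_COUNT': '3000',
            'FPS': '29.97',
            'DURATION': '100100'
        }
    }]
}

response = requests.post(WFM_URL + '/rest/jobs', json=job_request,
                         auth=('mpf', 'mpf123'))
response.raise_for_status()
```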
#### Updates

- Updated OpenMPF Python SDK exception handling for Python 3. Now, instead of raising an `EnvironmentError`, which has been deprecated in Python 3, the SDK will raise an `mpf.DetectionError` or allow the underlying exception to be thrown.

#### Bug Fixes

- [#1028] OpenMPF can now properly handle Apple-optimized PNGs, which have a non-standard data chunk named CgBI before the IHDR chunk. The Workflow Manager uses pngdefry to convert the image into a standard PNG for processing. Before this fix, Tika would throw an error when trying to determine the MIME type of the Apple-optimized PNG.
- [#1130] OpenMPF can now properly handle HEIC images. The Workflow Manager uses libheif to convert the image into a standard PNG for processing. Before this fix, the HEIC image was sometimes falsely identified as a video and the Workflow Manager would fail to count the number of frames.
- [#1171] The MIME type in the JSON output object is no longer null when there is a frame counting exception.
- [#1192] When processing videos, the frame count is now obtained from both OpenCV and FFmpeg. The lower of the two is used. If they don't match, a `FRAME_COUNT` warning is generated. Before this fix, on some videos OpenCV would return frame counts that were magnitudes higher than the frames that could actually be read. This resulted in failing to process many video segments with a `BAD_FRAME_SIZE` error.

## OpenMPF 5.0.x

### 5.0.9: October 2020

#### Bug Fixes

- [#1200] The MPFVideoCapture and MPFImageReader tools now properly handle cropping to frame regions when the region coordinates fall outside of the frame boundary. There was a bug that would result in an OpenCV error. Note that the bug only occurred when cropping was not performed with rotation or flipping.

### 5.0.8: October 2020

#### Updates

- The Tesseract component now supports a `TESSDATA_MODELS_SUBDIRECTORY` property. The component will look for tessdata files in `<MODELS_DIR_PATH>/<TESSDATA_MODELS_SUBDIRECTORY>`. This allows users to easily switch between the `tessdata`, `tessdata_best`, and `tessdata_fast` subdirectories.

#### Bug Fixes

- [#1199] Added a missing `synchronized` to InProgressBatchJobsService, which was resulting in some jobs staying `IN_PROGRESS` indefinitely.

### 5.0.7: September 2020

#### TensorRT Inference Server (TRTIS) Object Detection Component

This new component detects objects in images and videos by making use of an NVIDIA TensorRT Inference Server (TRTIS), and calculates features that can later be used by other systems to recognize the same object in other media. We provide support for running the server as a separate service during a Docker deployment, but an external server instance can be used instead.

By default, the ip_irv2_coco model is supported and will optionally classify detected objects using COCO labels. Additionally, features can be generated for whole frames, automatically-detected object regions, and user-specified regions. Refer to the README.

### 5.0.6: August 2020

#### Enable OcvDnnDetection to Annotate Feed-Forward Detections

The OcvDnnDetection component can now be configured to operate only on certain feed-forward detections and annotate them with supplementary information. For example, the following pipeline can be configured to generate detections that have both `CLASSIFICATION` and `COLOR` detection properties:

```
DarknetDetection (person + vehicle) --> OcvDnnDetection (vehicle color)
```

For example:

```json
"detectionProperties": {
    "CLASSIFICATION": "car",
    "CLASSIFICATION CONFIDENCE LIST": "0.397336",
    "CLASSIFICATION LIST": "car",
    "COLOR": "blue",
    "COLOR CONFIDENCE LIST": "0.93507; 0.055744",
    "COLOR LIST": "blue; gray"
}
```

The OcvDnnDetection component now supports the following properties:

- `CLASSIFICATION_TYPE`: Set this value to change the `CLASSIFICATION*` part of each output property name to something else. For example, setting it to `COLOR` will generate `COLOR`, `COLOR LIST`, and `COLOR CONFIDENCE LIST`. When handling feed-forward detections, the pre-existing `CLASSIFICATION*` properties will be carried over and the `COLOR*` properties will be added to the detection.
- `FEED_FORWARD_WHITELIST_FILE`: When `FEED_FORWARD_TYPE` is provided and not set to `NONE`, only feed-forward detections with class names contained in the specified file will be processed. For example, a file with only "car" in it will result in performing the exclude behavior (below) for all feed-forward detections that do not have a `CLASSIFICATION` of "car".
- `FEED_FORWARD_EXCLUDE_BEHAVIOR`: Specifies what to do when excluding detections not specified in the `FEED_FORWARD_WHITELIST_FILE`.
  Acceptable values are:
    - `PASS_THROUGH`: Return the excluded detections, without modification, along with any annotated detections.
    - `DROP`: Don't return the excluded detections. Only return annotated detections.

#### Updates

- Make the interop package work with Java 8 to better support external job producers and consumers.

### 5.0.5: August 2020

#### Updates

- Configure Camel not to auto-acknowledge messages. Users can now see the number of pending messages in the ActiveMQ management console for queues consumed by the Workflow Manager.
- Improve Tesseract OSD fallback behavior. This prevents selecting the OSD rotation from the fallback pass without the OSD script from the fallback pass.

### 5.0.4: August 2020

#### Updates

- Retry job callbacks when they fail. The Workflow Manager now supports the `http.callback.timeout.ms` and `http.callback.retries` system properties.
- Drop "duplicate paged in from cursor" DLQ messages.

### 5.0.3: July 2020

#### Updates

- Update ActiveMQ to 5.16.0.

### 5.0.2: July 2020

#### Updates

- Disable video segmentation for ACS Speech Detection to prevent issues when generating speaker ids.

### 5.0.1: July 2020

#### Updates

- Updated the Tesseract component with a `MAX_PIXELS` setting to prevent processing large images.

### 5.0.0: June 2020

#### Documentation

- Updated the openmpf-docker repo README and SWARM guides to describe the new build process, which now includes automatically copying the openmpf repo source code into the openmpf-build image instead of using various bind mounts, and building all of the component base builder and executor images.
- Updated the openmpf-docker repo README with the following sections:
    - How to Use Kibana for Log Viewing and Aggregation
    - How to Restrict Media Types that a Component Can Process
    - How to Import Root Certificates for Additional Certificate Authorities
- Updated the CONTRIBUTING guide for Docker deployment with information on the new build process and component base builder and executor images.
- Updated the Install Guide with a pointer to the "Quick Start" section on DockerHub.
- Updated the REST API with the new endpoints for getting, deleting, and creating actions, tasks, and pipelines, as well as a change to the `[GET] /rest/info` endpoint.
- Updated the C++ Batch Component API to describe changes to the `GetDetection()` calls, which now return a collection of detections or tracks instead of an error code, and to describe improvements to exception handling.
- Updated the C++ Batch Component API, Python Batch Component API, and Java Batch Component API with the `MIME_TYPE`, `FRAME_WIDTH`, and `FRAME_HEIGHT` media properties.
- Updated the Python Batch Component API with information on Python 3 and the simplification of using a `dict` for some of the data members.

#### JSON Output Object

- Renamed `stages` to `tasks` for clarity and consistency with the rest of the code.
- The `media` element no longer contains a `message` field.
- Each `detectionProcessingError` element now contains a `code` field.
- Errors and warnings are now grouped by `mediaId` and summarized using a `details` element that contains a `source`, `code`, and `message` field.
  Refer to this comment for an example of the JSON structure. Note that errors and warnings generated by the Workflow Manager do not have a `mediaId`.
- When an error or warning occurs in multiple frames of a video for a single piece of media, it will be represented in one `details` element and the `message` will list the frame ranges.

#### Interoperability Package

- Renamed `JsonStage.java` to `JsonTask.java`.
- Removed `JsonJobRequest.java`.
- Modified `JsonDetectionProcessingError.java` by removing the `startOffset` and `stopOffset` fields and adding the following new fields: `startOffsetFrame`, `stopOffsetFrame`, `startOffsetTime`, `stopOffsetTime`, and `code`.
- Updated `JsonMediaOutputObject.java` by removing the `message` field.
- Added `JsonMediaIssue.java` and `JsonIssueDetails.java`.

#### Persistent Database

The `input_object` column in the `job_request` table has been renamed to `job`, and the content now contains a serialized form of `BatchJob.java` instead of `JsonJobRequest.java`.

#### C++ Batch Component API

The `GetDetections()` calls now return a collection instead of an error code:

- `std::vector<MPFImageLocation> GetDetections(const MPFImageJob &job)`
- `std::vector<MPFVideoTrack> GetDetections(const MPFVideoJob &job)`
- `std::vector<MPFAudioTrack> GetDetections(const MPFAudioJob &job)`
- `std::vector<MPFGenericTrack> GetDetections(const MPFGenericJob &job)`

`MPFDetectionException` can now be constructed with a `what` parameter representing a descriptive error message:

- `MPFDetectionException(MPFDetectionError error_code, const std::string &what = "")`
- `MPFDetectionException(const std::string &what)`

#### Python Batch Component API

Simplified the `detection_properties` and `frame_locations` data members to use a Python `dict` instead of a custom data type.

#### Full Docker Conversion

Each component is now encapsulated in its own Docker image which self-registers with the Workflow Manager at runtime. This deconflicts component dependencies, and allows for greater flexibility when deciding which components to deploy at runtime.

The Node Manager image has been removed. For Docker deployments, component services should be managed using Docker tools external to OpenMPF.

In Docker deployments, streaming job REST endpoints are disabled, the Nodes web page is no longer available, component tar.gz packages cannot be registered through the Component Registration web page, and the `mpf` command line script can now only be run on the Workflow Manager container to modify user settings. The preexisting features are now reserved for non-Docker deployments and development environments.

The OpenMPF Docker stack can optionally be deployed with Kibana (which depends on Elasticsearch and Filebeat) for viewing log files. Refer to the openmpf-docker README.

#### Docker Component Base Images

A base builder image and executor image are provided for C++ (README), Python (README), and Java (README) component development. Component developers can also refer to the Dockerfile in the source code for each component as a reference for how to make use of the base images.

#### Restrict Media Types that a Component Can Process

Each component service now supports an optional `RESTRICT_MEDIA_TYPES` Docker environment variable that specifies the types of media that the service will process. For example, `RESTRICT_MEDIA_TYPES: VIDEO,IMAGE` will process both videos and images, while `RESTRICT_MEDIA_TYPES: IMAGE` will only process images. If not specified, the service will process all of the media types it natively supports. For example, this feature can be used to ensure that some services are always available to process images while others are processing long videos. A sketch of a compose entry follows.
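A hypothetical docker-compose entry using this variable; the service name and image tag are placeholders:

```yaml
services:
  ocv-face-detection:
    image: openmpf_ocv_face_detection:latest   # placeholder image tag
    environment:
      # This service instance will only be given image jobs.
      RESTRICT_MEDIA_TYPES: IMAGE
```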
#### Import Additional Root Certificates into the Workflow Manager

Additional root certificates can be imported into the Workflow Manager at runtime by adding an entry for `MPF_CA_CERTS` to the workflow-manager service's environment variables in `docker-compose.core.yml`. `MPF_CA_CERTS` must contain a colon-delimited list of absolute file paths. Of note, a root certificate may be used to trust the identity of a remote object storage server.

#### DockerHub

Pushed prebuilt OpenMPF Docker images to DockerHub. Refer to the "Quick Start" section of the OpenMPF Workflow Manager image documentation.

#### Version Updates

- Updated from Oracle Java 8 to OpenJDK 11, which required updating to Tomcat 8.5.41. We now use Cargo to run integration tests.
- Updated OpenCV from 3.0.0 to 3.4.7 to update Deep Neural Networks (DNN) support.
- Updated Python from 2.7 to 3.8.2.

#### FFmpeg

We are no longer building separate audio and video encoders and decoders for FFmpeg. Instead, we are using the built-in decoders that come with FFmpeg by default. This simplifies the build process and redistribution via Docker images.

#### Artifact Extraction

The `ARTIFACT_EXTRACTION_POLICY` property can now be assigned a value of `NONE`, `VISUAL_TYPES_ONLY`, `ALL_TYPES`, or `ALL_DETECTIONS`.

With the `VISUAL_TYPES_ONLY` or `ALL_TYPES` policy, artifacts will be extracted according to the `ARTIFACT_EXTRACTION_POLICY*` properties. With the `NONE` and `ALL_DETECTIONS` policies, those settings are ignored.

Note that previously `NONE`, `VISUAL_EXEMPLARS_ONLY`, `EXEMPLARS_ONLY`, `ALL_VISUAL_DETECTIONS`, and `ALL_DETECTIONS` were supported.

The following `ARTIFACT_EXTRACTION_POLICY*` properties are now supported:

- `ARTIFACT_EXTRACTION_POLICY_EXEMPLAR_FRAME_PLUS`: Extract the exemplar frame from the track, plus this many frames before and after the exemplar.
- `ARTIFACT_EXTRACTION_POLICY_FIRST_FRAME`: If true, extract the first frame from the track.
- `ARTIFACT_EXTRACTION_POLICY_MIDDLE_FRAME`: If true, extract the frame with a detection that is closest to the middle frame of the track.
- `ARTIFACT_EXTRACTION_POLICY_LAST_FRAME`: If true, extract the last frame from the track.
- `ARTIFACT_EXTRACTION_POLICY_TOP_CONFIDENCE_COUNT`: Sort the detections in a track by confidence and then extract this many detections, starting with those which have the highest confidence.
- `ARTIFACT_EXTRACTION_POLICY_CROPPING`: If true, an artifact will be extracted for each detection in each frame that is selected according to the other `ARTIFACT_EXTRACTION_POLICY*` properties. The extracted artifact will be cropped to the width and height of the detection bounding box, and the artifact will be rotated according to the detection `ROTATION` property.
  If false, the artifact extraction behavior is unchanged from the previous release: the entire frame will be extracted without any rotation.

For clarity, `OUTPUT_EXEMPLARS_ONLY` has been renamed to `OUTPUT_ARTIFACTS_AND_EXEMPLARS_ONLY`. Extracted artifacts will always be reported in the JSON output object.

The `mpf.output.objects.exemplars.only` system property has been renamed to `mpf.output.objects.artifacts.and.exemplars.only`. It works the same as before, with the exception that if an artifact is extracted for a detection then that detection will always be represented in the JSON output object, whether it's an exemplar or not.

The `mpf.output.objects.last.stage.only` system property has been renamed to `mpf.output.objects.last.task.only`. It works the same as before, with the exception that when set to true, artifact extraction is skipped for all tasks but the last task.

#### REST Endpoints

Modified `[GET] /rest/info`. It now returns output like `{"version": "4.1.0", "dockerEnabled": true}`.

Added the following REST endpoints for getting, removing, and creating actions, tasks, and pipelines. Refer to the REST API for more information:

- `[GET] /rest/actions`, `[GET] /rest/tasks`, `[GET] /rest/pipelines`
- `[DELETE] /rest/actions`, `[DELETE] /rest/tasks`, `[DELETE] /rest/pipelines`
- `[POST] /rest/actions`, `[POST] /rest/tasks`, `[POST] /rest/pipelines`

All of the endpoints above are new with the exception of `[GET] /rest/pipelines`. That endpoint has changed since the last version of OpenMPF. Some fields in the response JSON have been removed and renamed. Also, it now returns a collection of tasks for each pipeline. Refer to the REST API.

`[GET] /rest/algorithms` can be used to get information about algorithms. Note that algorithms are tied to registered components, so to remove an algorithm you must unregister the associated component. To add an algorithm, start the associated component's Docker container so it self-registers with the Workflow Manager. A sketch of calling these endpoints is shown below.
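A minimal sketch of querying the endpoints above. The host and the default development credentials are assumptions, as are the field names in the pipeline response; the `[GET] /rest/info` output shape is taken from the note above:

```python
import requests

WFM_URL = 'http://localhost:8080/workflow-manager'  # hypothetical host
AUTH = ('mpf', 'mpf123')  # default development credentials

# Returns output like {"version": "4.1.0", "dockerEnabled": true}.
info = requests.get(WFM_URL + '/rest/info', auth=AUTH).json()
print('Workflow Manager version:', info['version'])

# Each pipeline now includes its collection of tasks.
pipelines = requests.get(WFM_URL + '/rest/pipelines', auth=AUTH).json()
for pipeline in pipelines:
    print(pipeline['name'], '->', pipeline.get('tasks'))
```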
#### Incomplete Actions, Tasks, and Pipelines

The previous version of OpenMPF would generate an error when attempting to register a component that included actions, tasks, or pipelines that depend on algorithms, actions, or tasks that are not yet registered with the Workflow Manager. This required components to be registered in a specific order. Also, when unregistering a component, the components which depend on it had to be unregistered first. These dependency checks are no longer enforced.

In general, the Workflow Manager now appropriately handles incomplete actions, tasks, and pipelines by checking that all of the elements are defined before executing a job, and then preserving that information in memory until the job is complete. This allows components to be registered and removed in an arbitrary order without affecting the state of other components, actions, tasks, or pipelines. This also allows actions and tasks to be removed using the new REST endpoints and then re-added at a later time while still preserving the elements that depend on them.

Note that unregistering a component while a job is running will cause the job to stall. Please ensure that no jobs are using a component before unregistering it.

#### Python Arbitrary Rotation

The Python MPFVideoCapture and MPFImageReader tools now support `ROTATION` values other than 0, 90, 180, and 270 degrees. Users can now specify a clockwise `ROTATION` job property in the range [0, 360). Values outside that range will be normalized to that range. Floating point values are accepted. This is similar to the existing support for C++ arbitrary rotation.

#### OpenCV Deep Neural Networks (DNN) Detection Component

This new component replaces the old CaffeDetection component. It supports the same GoogLeNet and Yahoo Not Suitable For Work (NSFW) models as the old component, but removes support for the Rezafuad vehicle color detection model in favor of a custom TensorFlow vehicle color detection model. In our tests, the new model has proven to be more generalizable and to provide more accurate results on never-before-seen test data. Refer to the README.

#### Azure Cognitive Services (ACS) Speech Detection Component

This new component utilizes the Azure Cognitive Services Batch Transcription REST endpoint to transcribe speech from audio and video files. Refer to the README.

#### Tesseract OCR Text Detection Component

Text tagging has been simplified to only support regular expression searches. Whole keyword searches are a subset of regular expression searches, and are therefore still supported. Also, the `text-tags.json` file format has been updated to allow for specifying case-sensitive regular expression searches.

Additionally, the `TRIGGER_WORDS` and `TRIGGER_WORDS_OFFSET` detection properties are now supported, which list the OCR'd words that resulted in adding a `TAG` to the detection, and the character offsets of those words within the OCR'd `TEXT`, respectively.

Key changes to the tagging output and `text-tags.json` format are outlined below; a sketch of a tag file entry in the new format follows this list. Refer to the README for more information:

- Regex patterns should now be entered in the format `{"pattern": "regex_pattern"}`. Users can add and toggle the `"caseSensitive"` regex flag for each pattern.
    - For example: `{"pattern": "(\\b)bus(\\b)", "caseSensitive": true}` enables case-sensitive regex pattern matching.
    - By default, each regex pattern, including those in the legacy format, will be case-insensitive.
- As part of the text tagging update, the `TAGS` outputs are now separated by semicolons `;` rather than commas `,` to be consistent with the delimiters for the `TRIGGER_WORDS` and `TRIGGER_WORDS_OFFSET` output patterns.
    - Because semicolons can be part of the trigger word itself, those semicolons will be encapsulated in brackets. For example, `detected trigger with a ;` in the OCR'd `TEXT` is reported as `TRIGGER_WORDS=detected trigger with a [;]; some other trigger`.
- Commas are now used to group each set of `TRIGGER_WORDS_OFFSET` with its respective `TRIGGER_WORDS` output. Both `TAGS` and `TRIGGER_WORDS` are separated by semicolons only.
    - For example: `TRIGGER_WORDS=trigger1; trigger2` and `TRIGGER_WORDS_OFFSET=0-5, 6-10; 12-15` mean that `trigger1` occurs twice in the text at the index ranges 0-5 and 6-10, and `trigger2` occurs at index range 12-15.
- Regex tagging now follows the C++ ECMAScript format (see examples here) after resolving JSON string conversion for regex tags.
    - As a result, the regex patterns `\b` and `\p` in the text tagging file must now be written as `\\b` and `\\p`, respectively, to match the format of other regex character patterns (e.g. `\\d`, `\\w`, `\\s`, etc.).
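For illustration, a tag file entry in the new format might look like the sketch below. The enclosing `TAGS_BY_REGEX` key and the `vehicle` tag name are assumptions for the example and are not confirmed by these notes; only the per-pattern object format is taken from the changes above:

```json
{
    "TAGS_BY_REGEX": {
        "vehicle": [
            {"pattern": "(\\b)bus(\\b)", "caseSensitive": true},
            {"pattern": "(\\b)truck(\\b)"}
        ]
    }
}
```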
The `MAX_PARALLEL_SCRIPT_THREADS` and `MAX_PARALLEL_PAGE_THREADS` properties are now supported. When processing images, the first property is used to determine how many threads to run in parallel. Each thread performs OCR using a different language or script model. When processing PDFs, the second property is used to determine how many threads to run in parallel. Each thread performs OCR on a different page of the PDF.

The `ENABLE_OSD_FALLBACK` property is now supported. If enabled, an additional round of OSD is performed when the first round fails to generate script predictions that are above the OSD score and confidence thresholds. In the second pass, the component will run OSD on multiple copies of the input text image to get an improved prediction score, and the `OSD_FALLBACK_OCCURRED` detection property will be set to true.

If any OSD-detected models are missing, the new `MISSING_LANGUAGE_MODELS` detection property will list the missing models.

#### Tika Text Detection Component

The Tika text detection component now supports text tagging in the same way as the Tesseract component. Refer to the README.

#### Other Improvements

- Simplified component `descriptor.json` files by moving the specification of common properties, such as `CONFIDENCE_THRESHOLD`, `FRAME_INTERVAL`, `MIN_SEGMENT_LENGTH`, etc., to a single `workflow-properties.json` file. Now when the Workflow Manager is updated to support new features, the component `descriptor.json` file will not need to be updated.
- Updated the Sphinx component to return `TRANSCRIPT` instead of `TRANSCRIPTION`, which is grammatically correct.
- Whitespace is now trimmed from property names when jobs are submitted via the REST API.
- The Darknet Docker image now includes the YOLOv3 model weights.
- The C++ and Python ModelsIniParser now allows users to specify optional fields.
- When a job completion callback fails, but the job is otherwise successful, the final state of the job will be `COMPLETE_WITH_WARNINGS`.

#### Bug Fixes

- [#772] Can now create a custom pipeline with long action names using the Pipelines 2 UI.
- [#812] Now properly setting the start and stop index for elements in the `detectionProcessingErrors` collection in the JSON output object. Errors reported for each job segment will now appear in the collection.
- [#941] Tesseract component no longer segfaults when handling corrupt media.
- [#1005] Fixed a bug that caused a NullPointerException when attempting to get the output object JSON via REST before a job completes.
- [#1035] The search bar in the Job Status UI can once again be used to search for a job id.
- [#1104] Fixed C++/Python component executor memory leaks.
- [#1108] Fixed a bug when handling frames and detections that are horizontally flipped. This affected both markup and feed-forward behaviors.
- [#1119] Fixed Tesseract component memory leaks and uninitialized read issues.

#### Known Issues

- [#1028] Media inspection fails to handle Apple-optimized PNGs with the CgBI data chunk before the IHDR chunk.
- [#1109] We made the search bar in the Job Status UI more efficient by shifting it to a database query, but in doing so introduced a bug where the search operates on UTC time instead of local system time.
- [#1010] `mpf.output.objects.enabled` does not behave as expected for batch jobs. A user would expect it to control whether the JSON output object is generated, but it's generated regardless of that setting.
- [#1032] Jobs fail on corrupt QuickTime videos. For these videos, the OpenCV-reported frame count is more than twice the actual frame count.
- [#1106] When a job ends in ERROR, the job status UI does not show an End Date.

## OpenMPF 4.1.x

### 4.1.14: June 2020

#### Bug Fixes

- [#1120] The node-manager Docker image now correctly installs CUDA libraries so that GPU-enabled components on that image can run on the GPU.
- [#1064] Fixed memory leaks in the Darknet component for various network types, and when using GPU resources. This bug covers everything not addressed by #1062.

### 4.1.13: June 2020

#### Updates

- Updated the OpenCV build and media inspection process to properly handle webp images.

### 4.1.12: May 2020

#### Updates

- Updated the JDK from `jdk-8u181-linux-x64.rpm` to `jdk-8u251-linux-x64.rpm`.

### 4.1.11: May 2020

#### Tesseract OCR Text Detection Component

- Added an `INVALID_MIN_IMAGE_SIZE` job property to filter out images with extremely low width or height.
- Updated image rescaling behavior to account for image dimension limits.
- Fixed handling of `nullptr` returns from Tesseract API OCR calls.

### 4.1.8: May 2020

#### Azure Cognitive Services (ACS) OCR Component

This new component utilizes the ACS OCR REST endpoint to extract text from images and videos. Refer to the README.

### 4.1.6: April 2020

#### Updates

- Now silently discarding ActiveMQ DLQ "Suppressing duplicate delivery on connection" messages in addition to "duplicate from store" messages.

### 4.1.5: March 2020

#### Bug Fixes

- [#1062] Fixed a memory leak in the Darknet component that occurred when running jobs on CPU resources with the Tiny YOLO model.

#### Known Issues

- [#1064] The Darknet component has memory leaks for various network types, and potentially when using GPU resources. This bug covers everything not addressed by #1062.

### 4.1.4: March 2020

#### Updates

- Updated from Hibernate 5.0.8 to 5.4.12 to support schema-based multitenancy. This allows multiple instances of OpenMPF to use the same PostgreSQL database as long as each instance connects to the database as a separate user, and the database is configured appropriately. This also required updating Tomcat from 7.0.72 to 7.0.76.

#### JSON Output Object

Updated the Workflow Manager to include an `outputobjecturi` in GET callbacks, and an `outputObjectUri` in POST callbacks, when jobs complete.
This URI specifies a file path, or a path on the object storage server, depending on where the JSON output object is located.

#### Interoperability Package

- Updated `JsonCallbackBody.java` to contain an `outputObjectUri` field.

### 4.1.3: February 2020

#### Features

- Added support for the `DETECTION_PADDING_X` and `DETECTION_PADDING_Y` optional job properties. The value can be a percentage or a whole-number pixel value. When positive, each detection region in each track will be expanded. When negative, the region will shrink. If the detection region is shrunk to nothing, the shrunk dimension(s) will be set to a value of 1 pixel and the `SHRUNK_TO_NOTHING` detection property will be set to true. A sketch of these semantics is shown after this list.
- Added support for the `DISTANCE_CONFIDENCE_WEIGHT_FACTOR` and `SIZE_CONFIDENCE_WEIGHT_FACTOR` SuBSENSE algorithm properties. Increasing the value of the first property will generate detection confidence values that favor being closer to the center frame of a track. Increasing the value of the second property will generate detection confidence values that favor large detection regions.
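The following standalone Python sketch illustrates the padding semantics described above for a single dimension; it is an illustration of the stated behavior, not the Workflow Manager's actual implementation:

```python
def pad_dimension(size, padding):
    """Apply a DETECTION_PADDING_* value to one dimension of a detection region.

    `padding` may be a percentage string like '25%' or a whole-number
    pixel value; the region grows (or shrinks) by `padding` on each side.
    """
    if isinstance(padding, str) and padding.endswith('%'):
        delta = int(size * float(padding[:-1]) / 100)
    else:
        delta = int(padding)
    new_size = size + 2 * delta  # padding applies to both sides
    if new_size <= 0:
        # Shrunk to nothing: clamp to 1 pixel; the Workflow Manager also
        # sets the SHRUNK_TO_NOTHING detection property to true.
        return 1, True
    return new_size, False

print(pad_dimension(100, '25%'))  # (150, False)
print(pad_dimension(100, -60))    # (1, True)
```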
Continuing the feed-forward REGION case: a detection property of \nROTATION=90\n represents\n that the region is rotated 90 degrees counter-clockwise, and therefore must be rotated 90 degrees clockwise to\n correct for it.\n\n\nWhen \nFEED_FORWARD_TYPE=SUPERSET_REGION\n, these tools will properly account for the \nROTATION\n detection property\n associated with each feed-forward detection when calculating the bounding box that encapsulates all of those\n regions.\n\n\nWhen \nFEED_FORWARD_TYPE=FRAME\n, these tools will rotate the frame according to the \nROTATION\n job property. It's\n important to note that for rotations other than 0, 90, 180, and 270 degrees the rotated frame dimensions will be\n larger than the original frame dimensions. This is because the frame needs to be expanded to encapsulate the\n entirety of the original rotated frame region. Black pixels are used to fill the empty space near the edges of the\n original frame.\n\n\n\n\n\n\nThe Markup component now places a colored dot at the upper-left corner of each detection region so that users can\n determine the rotation of the region relative to the entire frame.\n\n\n\n\n\n\nComponent Registration REST Endpoints\n\n\n\n\n\nAdded a \n[POST] /rest/components/registerUnmanaged\n endpoint so that components running as separate Docker containers\n can self-register with the Workflow Manager.\n\n\nSince these components are not managed by the Node Manager, they are considered unmanaged OpenMPF components.\n These components are not displayed in the Nodes web UI and are tagged as unmanaged in the Component Registration web\n UI where they can only be removed.\n\n\nNote that components uploaded to the Component Registration web UI as .tar.gz files are considered managed\n components.\n\n\n\n\n\n\nAdded a \n[DELETE] /rest/components/{componentName}\n endpoint that can be used to remove managed and unmanaged\n components.\n\n\n\n\nPython Component Executor Docker Image\n\n\n\n\n\nComponent developers can now use a Python component executor Docker image to write a Python component for OpenMPF that\n can be encapsulated within a Docker container. This isolates the build and execution environment from the rest of\n OpenMPF. For more information, see\n the \nREADME\n.\n\n\nComponents developed with this image are not managed by the Node Manager; rather, they self-register with the Workflow\n Manager and their lifetime is determined by their own Docker container.\n\n\n\n\n\n\nDocker Deployment\n\n\n\n\n\nStreamlined single-host \ndocker-compose up\n deployments and multi-host \ndocker stack deploy\n swarm deployments. Now\n users are instructed to create a single \ndocker-compose.yml\n file for both types of deployments.\n\n\nRemoved the \ndocker-generate-compose-files.sh\n script in favor of allowing users the flexibility of combining\n multiple \ndocker-compose.*.yml\n files together using \ndocker-compose config\n. See\n the \nGenerate docker-compose.yml\n\n section of the README.\n\n\nComponents based on the Python component executor Docker image can now be defined and configured directly\n in \ndocker-compose.yml\n.\n\n\nOpenMPF Docker images now make use of Docker labels.\n\n\n\n\n\n\nEAST Text Region Detection Component\n\n\n\n\n\nThis new component uses the Efficient and Accurate Scene Text (EAST) detection model to detect text regions in images\n and videos. It reports their location, angle of rotation, and text type (\nSTRUCTURED\n or \nUNSTRUCTURED\n), and supports\n a variety of settings to control the behavior of merging text regions into larger regions. 
It does not perform OCR on\n the text or track detections across video frames. Thus, each video track is at most one detection long. For more\n information, see\n the \nREADME\n.\n\n\nOptionally, this component can be built as a Docker image using the Python component executor Docker image, allowing\n it to exist apart from the Node Manager image.\n\n\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nUpdated to support reading tessdata \n*.traineddata\n files at a specified \nMODELS_DIR_PATH\n. This allows users to\n install new \n*.traineddata\n files post-deployment.\n\n\nUpdated to optionally perform Tesseract Orientation and Script Detection (OSD). When enabled, the component will\n attempt to use the orientation results of OSD to automatically rotate the image, as well as perform OCR using the\n scripts detected by OSD.\n\n\nUpdated to optionally rotate a feed-forward text region 180 degrees to account for upside-down text.\n\n\nNow supports the following preprocessing properties for both structured and unstructured text:\n\n\nText sharpening\n\n\nText rescaling\n\n\nOtsu image thresholding\n\n\nAdaptive thresholding\n\n\nHistogram equalization\n\n\nAdaptive histogram equalization (also known as Contrast Limited Adaptive Histogram Equalization (CLAHE))\n\n\n\n\n\n\nWill use the \nTEXT_TYPE\n detection property in feed-forward regions provided by the EAST component to determine which\n preprocessing steps to perform.\n\n\nFor more information on these new features, see\n the \nREADME\n.\n\n\nRemoved gibberish and string filters since they only worked on English text.\n\n\n\n\nActiveMQ Profiles\n\n\n\n\n\nThe ActiveMQ Docker image now supports custom profiles. The container selects an \nactivemq.xml\n and \nenv\n file to use\n at runtime based on the value of the \nACTIVE_MQ_PROFILE\n environment variable. Among others, these files contain\n configuration settings for Java heap space and component queue memory limits.\n\n\nThis release only supports a \ndefault\n profile setting, as defined by \nactivemq-default.xml\n and \nenv.default\n;\n however, developers are free to add other \nactivemq-<profile>.xml\n and \nenv.<profile>\n files to the ActiveMQ Docker\n image to suit their needs.\n\n\n\n\nDisabled ActiveMQ Prefetch\n\n\n\n\n\nDisabled ActiveMQ prefetching on all component queues. 
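These release notes do not reproduce OpenMPF's broker configuration, but as a sketch of the idea: ActiveMQ supports disabling prefetch per consumer through the standard `consumer.prefetchSize` destination option, conceptually similar to the following (the queue name here is hypothetical):

```
queue://MPF.EXAMPLE_COMPONENT_REQUEST?consumer.prefetchSize=0
```

With prefetch disabled, the broker dispatches a sub-job only when a consumer is ready for it, instead of pre-assigning messages to consumers that are already busy.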
Previously, a prefetch value of one resulted in situations\n where one component service could be dispatched two sub-jobs, thereby starving other available component services\n which could process one of those sub-jobs in parallel.\n\n\n\n\nSearch Region Percentages\n\n\n\n\n\nIn addition to using exact pixel values, users can now use percentages for the following properties when specifying\n search regions for C++ and Python components:\n\n\nSEARCH_REGION_TOP_LEFT_X_DETECTION\n\n\nSEARCH_REGION_TOP_LEFT_Y_DETECTION\n\n\nSEARCH_REGION_BOTTOM_RIGHT_X_DETECTION\n\n\nSEARCH_REGION_BOTTOM_RIGHT_Y_DETECTION\n\n\n\n\n\n\nFor example, setting \nSEARCH_REGION_TOP_LEFT_X_DETECTION=50%\n will result in components only processing the right half\n of an image or video.\n\n\nOptionally, users can specify exact pixel values for some of these properties and percentages for others.\n\n\n\n\nOther Improvements\n\n\n\n\n\nIncreased the number of ActiveMQ maxConcurrentConsumers for the \nMPF.COMPLETED_DETECTIONS\n queue from 30 to 60.\n\n\nThe Create Job web UI now only displays the content of the \n$MPF_HOME/share/remote-media\n directory instead of all\n of \n$MPF_HOME/share\n, which prevents the Workflow Manager from indexing generated JSON output files, artifacts, and\n markup. Indexing the latter resulted in Java heap space issues for large-scale production systems. This is a\n mitigation for issue \n#897\n.\n\n\nThe Job Status web UI now makes proper use of pagination in SQL/Hibernate through the Workflow Manager to avoid\n retrieving the entire jobs table, which was inefficient.\n\n\nThe Workflow Manager will now silently discard all duplicate messages in the ActiveMQ Dead Letter Queue (DLQ),\n regardless of destination. Previously, only messages destined for component sub-job request queues were discarded.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#891\n] Fixed a bug where the Workflow Manager media segmenter\n generated short segments that were at minimum \nMIN_SEGMENT_LENGTH+1\n in size instead of \nMIN_SEGMENT_LENGTH\n.\n\n\n[\n#745\n] In environments where thousands of jobs are processed, users\n have observed that, on occasion, pending sub-job messages in ActiveMQ queues are not processed until a new job is\n created. This seems to have been resolved by disabling ActiveMQ prefetch behavior on component queues.\n\n\n[\n#855\n] A logback circular reference suppressed exception no longer\n throws a StackOverflowError. This was resolved by transitioning the Workflow Manager and Java components from the\n Logback framework to Log4j2.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#897\n] OpenMPF will attempt to index files located\n in \n$MPF_HOME/share\n as soon as the webapp is started by Tomcat. This is so that those files can be listed in a\n directory tree in the Create Job web UI. The main problem is that once a file gets indexed it's never removed from the\n cache, even if the file is manually deleted, resulting in a memory leak.\n\n\n\n\nLate Additions: November 2019\n\n\n\n\n\nUser names, roles, and passwords can now be set by using an optional \nuser.properties\n file. This allows\n administrators to override the default OpenMPF users that come preconfigured, which may be a security risk. 
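A hypothetical sketch of such a file is shown below. The entry format here is illustrative only; the actual syntax is documented in the README referenced next:

```properties
# Illustrative only -- see the openmpf-docker README for the real format.
# Each entry is assumed to define a user name, a role, and a password.
jane.doe=admin,CHANGE_THIS_PASSWORD
john.doe=user,CHANGE_THIS_PASSWORD
```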
Refer to\n the "Configure Users" section of the\n openmpf-docker \nREADME\n for\n more information.\n\n\n\n\nLate Additions: December 2019\n\n\n\n\n\nTransitioned from using a mySQL persistent database to PostgreSQL to support users that use an external PostgreSQL\n database in the cloud.\n\n\nUpdated the EAST component to support \nTEMPORARY_PADDING\n and \nFINAL_PADDING\n properties. The first property\n determines how much padding is added to detections during the non-maximum suppression or merging step. This padding is\n effectively removed from the final detections. The second property is used to control the final amount of padding on\n the output regions. Refer to\n the \nREADME\n.\n\n\n\n\nOpenMPF 4.0.x\n\n\n4.0.0: February 2019\n\n\n\nDocumentation\n\n\n\n\n\nAdded an \nObject Storage Guide\n with information on how to configure OpenMPF to work\n with a custom NGINX object storage server, and how to run jobs that use an S3 object storage server. Note that the\n system properties for the custom NGINX object storage server have changed since the last release.\n\n\n\n\nUpgrade to Tesseract 4.0\n\n\n\n\n\nBoth the Tesseract OCR Text Detection Component and OpenALPR License Plate Detection Component have been updated to\n use the new version of Tesseract.\n\n\nAdditionally, Leptonica has been upgraded from 1.72 to 1.75.\n\n\n\n\nDocker Deployment\n\n\n\n\n\nThe Docker images now use the yum package manager to install ImageMagick6 from a public RPM repository instead of\n downloading the RPMs directly from imagemagick.org. This resolves an issue with the OpenMPF Docker build where RPMs\n on \nimagemagick.org\n were no longer available.\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nUpdated to allow the user to set a \nTESSERACT_OEM\n property in order to select an OCR engine mode (OEM).\n\n\n"script/Latin" can now be specified as the \nTESSERACT_LANGUAGE\n. When selected, Tesseract will select all Latin\n characters, which can be from different Latin languages.\n\n\n\n\nCeph S3 Object Storage\n\n\n\n\n\nAdded support for downloading files from, and uploading files to, an S3 object storage server. The following job\n properties can be provided: \nS3_ACCESS_KEY\n, \nS3_SECRET_KEY\n, \nS3_RESULTS_BUCKET\n, \nS3_UPLOAD_ONLY\n.\n\n\nAt this time, only support for Ceph object storage has been tested. However, the Workflow Manager uses the AWS SDK for\n Java to communicate with the object store, so it is possible that other S3-compatible storage solutions may work as\n well.\n\n\n\n\nISO-8601 Timestamps\n\n\n\n\n\nAll timestamps in the JSON output object, and streaming video callbacks, are now in the ISO-8601 format (e.g. "\n 2018-12-19T12:12:59.995-05:00"). This new format includes the time zone, which makes it possible to compare timestamps\n generated between systems in different time zones.\n\n\nThis change does not affect the track and detection start and stop offset times, which are still reported in\n milliseconds since the start of the video.\n\n\n\n\nReduced Redis Usage\n\n\n\n\n\nThe Workflow Manager has been refactored to reduce usage of the Redis in-memory database. In general, Redis is not\n necessary for storing job information and only introduced potential delays in accessing that data over\n the network stack.\n\n\nNow, only track and detection data is stored in Redis for batch jobs. This reduces the amount of memory the Workflow\n Manager requires of the Java Virtual Machine. 
Compared to other job information, track and detection data can\n be much larger. In the future, we plan to store frame data in Redis for streaming jobs as well.\n\n\n\n\nCaffe Vehicle Color Estimation\n\n\n\n\n\nThe Caffe\n Component \nmodels.ini\n\n file has been updated with a "vehicle_color" section with links for downloading\n the \nReza Fuad Rachmadi's Vehicle Color Recognition Using Convolutional Neural Network\n\n model files.\n\n\nThe following pipelines have been added. These require the above model files to be placed\n in \n$MPF_HOME/share/models/CaffeDetection\n:\n\n\nCAFFE REZAFUAD VEHICLE COLOR DETECTION PIPELINE\n\n\nCAFFE REZAFUAD VEHICLE COLOR DETECTION (WITH FF REGION FROM TINY YOLO VEHICLE DETECTOR) PIPELINE\n\n\nCAFFE REZAFUAD VEHICLE COLOR DETECTION (WITH FF REGION FROM YOLO VEHICLE DETECTOR) PIPELINE\n\n\n\n\n\n\n\n\nTrack Merging and Minimum Track Length\n\n\n\n\n\nThe following system properties now have "video" in their names:\n\n\ndetection.video.track.merging.enabled\n\n\ndetection.video.track.min.gap\n\n\ndetection.video.track.min.length\n\n\ndetection.video.track.overlap.threshold\n\n\n\n\n\n\nThe above properties can be overridden by the following job properties, respectively. These have not been renamed\n since the last release:\n\n\nMERGE_TRACKS\n\n\nMIN_GAP_BETWEEN_TRACKS\n\n\nMIN_TRACK_LENGTH\n\n\nMIN_OVERLAP\n\n\n\n\n\n\nThese system and job properties now only apply to video media. This resolves an issue where users had\n set \ndetection.track.min.length=5\n, which resulted in dropping all image media tracks. By design, each image track can\n only contain a single detection.\n\n\n\n\nBug Fixes\n\n\n\n\n\nFixed a bug where the Docker entrypoint scripts appended properties to the end\n of \n$MPF_HOME/share/config/mpf-custom.properties\n every time the Docker deployment was restarted, resulting in entries\n like \ndetection.segment.target.length=5000,5000,5000\n.\n\n\nUpgrading to Tesseract 4 fixes a bug where, when specifying \nTESSERACT_LANGUAGE\n, if one of the languages is Arabic,\n then Arabic must be specified last. Arabic can now be specified first, for example: \nara+eng\n.\n\n\nFixed a bug where the minimum track length property was being applied to image tracks. Now it's only applied to video\n tracks.\n\n\nFixed a bug where ImageMagick6 installation failed while building Docker images.\n\n\n\n\nOpenMPF 3.0.x\n\n\n3.0.0: December 2018\n\n\n\n\n\nNOTE:\n The \nBuild Guide\n and \nInstall Guide\n are outdated. The old process for manually configuring a Build VM, using it to build an OpenMPF package, and installing that package, is deprecated in favor of Docker containers. Please refer to the openmpf-docker \nREADME\n.\n\n\nNOTE:\n Do not attempt to register or unregister a component through the Nodes UI in a Docker deployment. It may appear to succeed, but the changes will not affect the child Node Manager containers, only the Workflow Manager container. Also, do not attempt to use the \nmpf\n command line tools in a Docker deployment.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded a \nREADME\n\n , \nSWARM\n guide,\n and \nCONTRIBUTING\n guide for Docker deployment.\n\n\nUpdated the \nUser Guide\n with information on how track\n properties and track confidence are handled when merging tracks.\n\n\nAdded README files for new components. 
Refer to the component sections below.\n\n\n\n\nDocker Support\n\n\n\n\n\nOpenMPF can now be built and distributed as 5 Docker images: openmpf_workflow_manager, openmpf_node_manager,\n openmpf_active_mq, mysql_database, and redis.\n\n\nThese images can be deployed on a single host using \ndocker-compose up\n.\n\n\nThey can also be deployed across multiple hosts in a Docker swarm cluster using \ndocker stack deploy\n.\n\n\nGPU support is enabled through the NVIDIA Docker runtime.\n\n\nBoth HTTP and HTTPS deployments are supported.\n\n\n\n\n\n\nJSON Output Object\n\n\n\n\n\nAdded a \ntrackProperties\n field at the track level that works in much the same way as the \ndetectionProperties\n field\n at the detection level. Both are maps that contain zero or more key-value pairs. The component APIs have always\n supported the ability to return track-level properties, but they were never represented in the JSON output object,\n until now.\n\n\nSimilarly, added a track \nconfidence\n field. The component APIs always supported setting it, but the value was never\n used in the JSON output object, until now.\n\n\nAdded \njobErrors\n and\njobWarnings\n fields. The \njobErrors\n field will mention that there are items\n in \ndetectionProcessingErrors\n fields.\n\n\nThe \noffset\n, \nstartOffset\n, and \nstopOffset\n fields have been removed in favor of the existing \noffsetFrame\n\n , \nstartOffsetFrame\n, and \nstopOffsetFrame\n fields, respectively. They were redundant and deprecated.\n\n\nAdded a \nmpf.output.objects.exemplars.only\n system property, and \nOUTPUT_EXEMPLARS_ONLY\n job property, that can be set\n to reduce the size of the JSON output object by only recording the track exemplars instead of all of the detections in\n each track.\n\n\nAdded a \nmpf.output.objects.last.stage.only\n system property, and \nOUTPUT_LAST_STAGE_ONLY\n job property, that can be\n set to reduce the size of the JSON output object by only recording the detections for the last non-markup stage of a\n pipeline.\n\n\n\n\nDarknet Component\n\n\n\n\n\nThe Darknet component can now support processing streaming video.\n\n\nIn batch mode, video frames are prefetched, decoded, and stored in a buffer using a separate thread from the one that\n performs the detection. The size of the prefetch buffer can be configured by setting \nFRAME_QUEUE_CAPACITY\n.\n\n\nThe Darknet component can now perform basic tracking and generate video tracks with multiple detections. Both the\n default detection mode and preprocessor detection mode are supported.\n\n\nThe Darknet component has been updated to support the full and tiny YOLOv3 models. The YOLOv2 models are no longer\n supported.\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nThis new component extracts text found in an image and reports it as a single-detection track.\n\n\nPDF documents can also be processed with one track detection per page.\n\n\nUsers may set the language of each track using the \nTESSERACT_LANGUAGE\n property as well as adjust other image\n preprocessing properties for text extraction.\n\n\nRefer to\n the \nREADME\n.\n\n\n\n\nOpenCV Scene Change Detection Component\n\n\n\n\n\nThis new component detects and segments a given video by scenes. 
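The detection techniques are enumerated next. As a minimal sketch of just the histogram-comparison idea, using OpenCV in Python (the threshold is illustrative, not the component's default):

```python
import cv2

def is_scene_change(prev_frame, frame, threshold=0.5):
    # Compare hue/saturation histograms of adjacent frames; a low correlation
    # score suggests a cut between scenes.
    hsv_prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2HSV)
    hsv_curr = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist_prev = cv2.calcHist([hsv_prev], [0, 1], None, [50, 60], [0, 180, 0, 256])
    hist_curr = cv2.calcHist([hsv_curr], [0, 1], None, [50, 60], [0, 180, 0, 256])
    cv2.normalize(hist_prev, hist_prev)
    cv2.normalize(hist_curr, hist_curr)
    correlation = cv2.compareHist(hist_prev, hist_curr, cv2.HISTCMP_CORREL)
    return correlation < threshold
```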
Each scene change is detected using histogram\n comparison, edge comparison, brightness (fade outs), and overall hue/saturation/value differences between adjacent\n frames.\n\n\nUsers can toggle each type of scene change detection technique and adjust threshold properties for each detection\n method.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nTika Text Detection Component\n\n\n\n\n\nThis new component extracts text contained in documents and performs language detection. 71 languages and most\n document formats (.txt, .pptx, .docx, .doc, .pdf, etc.) are supported.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nTika Image Detection Component\n\n\n\n\n\nThis new component extracts images embedded in document formats (.pdf, .ppt, .doc) and stores them on disk in a\n specified directory.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nTrack-Level Properties and Confidence\n\n\n\n\n\nRefer to the addition of track-level properties and confidence in the \nJSON Output Object\n\n section.\n\n\nComponents have been updated to return meaningful track-level properties. Caffe and Darknet include \nCLASSIFICATION\n,\n OALPR includes the exemplar \nTEXT\n, and Sphinx includes the \nTRANSCRIPTION\n.\n\n\nThe Workflow Manager will now populate the track-level confidence. It is the same as the exemplar confidence, which is\n the maximum confidence across all of the track's detections.\n\n\n\n\nCustom NGINX HTTP Object Storage\n\n\n\n\n\nAdded \nhttp.object.storage.*\n system properties for configuring an optional custom NGINX object storage server on\n which to store generated detection artifacts, JSON output objects, and markup files.\n\n\nWhen a file cannot be uploaded to the server, the Workflow Manager will fall back to storing it in \n$MPF_HOME/share\n,\n which is the default behavior when an object storage server is not specified.\n\n\nIf and when a failure occurs, the JSON output object will contain a descriptive message in the \njobWarnings\n field,\n and, if appropriate, the \nmarkupResult.message\n field. If the job completes without other issues, the final status\n will be \nCOMPLETE_WITH_WARNINGS\n.\n\n\nThe NGINX storage server runs custom server-side code which we can make available upon request. In the future, we plan\n to support more common storage server solutions, such as Amazon S3.\n\n\n\n\n\n\nActiveMQ\n\n\n\n\n\nThe \nMPF_OUTPUT\n queue is no longer supported and has been removed. Job producers can specify a callback URL when\n creating a job so that they are alerted when the job is complete. Users observed heap space issues with ActiveMQ after\n running thousands of jobs without consuming messages from the \nMPF_OUTPUT\n queue.\n\n\nThe Workflow Manager will now silently discard duplicate sub-job request messages in the ActiveMQ Dead Letter Queue (\n DLQ). This fixes a bug where the Workflow Manager would prematurely terminate jobs corresponding to the duplicate\n messages. It's assumed that ActiveMQ will only place a duplicate message in the DLQ if the original message, or\n another duplicate, can be delivered.\n\n\n\n\nNode Auto-Configuration\n\n\n\n\n\nAdded the \nnode.auto.config.enabled\n, \nnode.auto.unconfig.enabled\n, and \nnode.auto.config.num.services.per.component\n\n system properties for automatically managing the configuration of services when nodes join and leave the OpenMPF\n cluster.\n\n\nDocker will assign a hostname with a randomly-generated id to containers in a swarm deployment. 
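For example, a deployment might set the following (the values shown are illustrative, not defaults):

```properties
node.auto.config.enabled=true
node.auto.unconfig.enabled=true
node.auto.config.num.services.per.component=2
```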
The above properties\n allow the Workflow Manager to automatically discover and configure services on child Node Manager components, which is\n convenient since the hostname of those containers cannot be known in advance, and new containers with new hostnames\n are created when the swarm is restarted.\n\n\n\n\nJob Status Web UI\n\n\n\n\n\nAdded the \nweb.broadcast.job.status.enabled\n and \nweb.job.polling.interval\n system properties that can be used to\n configure if the Workflow Manager automatically broadcasts updates to the Job Status web UI. By default, the\n broadcasts are enabled.\n\n\nIn a production environment that processes hundreds of jobs or more at the same time, this behavior can result in\n overloading the web UI, causing it to slow down and freeze up. To prevent this, set \nweb.broadcast.job.status.enabled\n\n to \nfalse\n. If \nweb.job.polling.interval\n is set to a non-zero value, the web UI will poll for updates at that\n interval (specified in milliseconds).\n\n\nTo disable broadcasts and polling, set \nweb.broadcast.job.status.enabled\n to \nfalse\n and \nweb.job.polling.interval\n to\n a zero or negative value. Users will then need to manually refresh the Job Status web page using their web browser.\n\n\n\n\nOther Improvements\n\n\n\n\n\nNow using variable-length text fields in the mySQL database for string data that may exceed 255 characters.\n\n\nUpdated the MPFImageReader tool to use OpenCV video capture behind the scenes to support reading data from HTTP URLs.\n\n\nPython components can now include pre-built wheel files in the plugin package.\n\n\nWe now use a \nJenkinsfile\n Groovy script for our\n Jenkins build process. This allows us to use revision control for our continuous integration process and share that\n process with the open source community.\n\n\nAdded \nremote.media.download.retries\n and \nremote.media.download.sleep\n system properties that can be used to\n configure how the Workflow Manager will attempt to retry downloading remote media if it encounters a problem.\n\n\nArtifact extraction now uses MPFVideoCapture, which employs various fallback strategies for extracting frames in cases\n where a video is not well-formed or corrupted. For components that use MPFVideoCapture, this enables better\n consistency between the frames they process and the artifacts that are later extracted.\n\n\n\n\nBug Fixes\n\n\n\n\n\nJobs now properly end in \nERROR\n if an invalid media URL is provided or there is a problem accessing remote media.\n\n\nJobs now end in \nCOMPLETE_WITH_ERRORS\n when a detection splitter error occurs due to missing system properties.\n\n\nComponents can now include their own version of the Google Protobuf library. It will not conflict with the version\n used by the rest of OpenMPF.\n\n\nThe Java component executor now sets the proper job id in the job name instead of using the ActiveMQ message request\n id.\n\n\nThe Java component executor now sets the run directory using \nsetRunDirectory()\n.\n\n\nActions can now be properly added using an \"extras\" component. 
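A hypothetical sketch of an extras descriptor follows. The field names are illustrative; consult the Component Descriptor Reference for the actual schema:

```json
{
  "componentName": "MyExtrasComponent",
  "actions": [
    {
      "name": "MY CUSTOM FACE ACTION",
      "algorithm": "FACECV",
      "properties": [ { "name": "MIN_FACE_SIZE", "value": "64" } ]
    }
  ],
  "tasks": [ { "name": "MY CUSTOM FACE TASK", "actions": [ "MY CUSTOM FACE ACTION" ] } ],
  "pipelines": [ { "name": "MY CUSTOM FACE PIPELINE", "tasks": [ "MY CUSTOM FACE TASK" ] } ]
}
```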
An extras component only includes a \ndescriptor.json\n\n file and declares Actions, Tasks, and Pipelines using other component algorithms.\n\n\nRefer to the items listed in the \nActiveMQ\n section.\n\n\nRefer to the addition of track-level properties and confidence in the \nJSON Output Object\n\n section.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#745\n] In environments where thousands of jobs are processed, users\n have observed that, on occasion, pending sub-job messages in ActiveMQ queues are not processed until a new job is\n created. The reason is currently unknown.\n\n\n[\n#544\n] Image artifacts retain some permissions from source files\n available on the local host. This can result in some of the image artifacts having executable permissions.\n\n\n[\n#604\n] The Sphinx component cannot be unregistered\n because \n$MPF_HOME/plugins/SphinxSpeechDetection/lib\n is owned by root on a deployment machine.\n\n\n[\n#623\n] The Nodes UI does not work correctly\n when \n[POST] /rest/nodes/config\n is used at the same time. This is because the UI's state is not automatically updated\n to reflect changes made through the REST endpoint.\n\n\n[\n#783\n] The Tesseract OCR Text Detection Component has\n a \nknown issue\n because it uses Tesseract 3. If a combination\n of languages is specified using \nTESSERACT_LANGUAGE\n, and one of the languages is Arabic, then Arabic must be\n specified last. For example, for English and Arabic, \neng+ara\n will work, but \nara+eng\n will not.\n\n\n[\n#784\n] Sometimes services do not start on OpenMPF nodes, and those\n services cannot be started through the Nodes web UI. This is not a Docker-specific problem, but it has been observed\n in a Docker swarm deployment when auto-configuration is enabled. The workaround is to restart the Docker swarm\n deployment, or remove the entire node in the Nodes UI and add it again.\n\n\n\n\nOpenMPF 2.1.x\n\n\n2.1.0: June 2018\n\n\n\n\n\nNOTE:\n If building this release on a machine used to build a previous version of OpenMPF, then please run \nsudo pip install --upgrade pip\n to update to at least pip 10.0.1. If not, the OpenMPF build script will fail to properly download .whl files for Python modules.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded the \nPython Batch Component API\n.\n\n\nAdded the \nNode Guide\n.\n\n\nAdded the \nGPU Support Guide\n.\n\n\nUpdated the \nInstall Guide\n with an \"(Optional) Install the NVIDIA CUDA Toolkit\" section.\n\n\nRenamed Admin Manual to Admin Guide for consistency.\n\n\n\n\nPython Batch Component API\n\n\n\n\n\nDevelopers can now write batch components in Python using the mpf_component_api module.\n\n\nDependencies can be specified in a setup.py file. OpenMPF will automatically download the .whl files using pip at\n build time.\n\n\nWhen deployed, a virtualenv is created for the Python component so that it runs in a sandbox isolated from the rest of\n the system.\n\n\nOpenMPF ImageReader and VideoCapture tools are provided in the mpf_component_util module.\n\n\nExample Python components are provided for reference.\n\n\n\n\nSpare Nodes\n\n\n\n\n\nSpare nodes can join and leave an OpenMPF cluster while the Workflow Manager is running. You can create a spare node\n by cloning an existing OpenMPF child node. Refer to the \nNode Guide\n.\n\n\nNote that changes made using the Component Registration web page only affect core nodes, not spare nodes. 
Core nodes\n are those configured during the OpenMPF installation process.\n\n\nAdded the \nmpf list-nodes\n command to list the core nodes and available spare nodes.\n\n\nOpenMPF now uses the JGroups FILE_PING protocol for peer discovery instead of TCPPING. This means that the list of\n OpenMPF nodes no longer needs to be fully specified when the Workflow Manager starts. Instead, the Workflow Manager,\n and Node Manager process on each node, use the files in \n$MPF_HOME/share/nodes\n to determine which nodes are currently\n available.\n\n\nUpdated JGroups from 3.6.4 to 4.0.11.\n\n\nThe environment variables specified in \n/etc/profile.d/mpf.sh\n have been simplified. Of note, \nALL_MPF_NODES\n has been\n replaced by \nCORE_MPF_NODES\n.\n\n\n\n\nDefault Detection System Properties\n\n\n\n\n\nThe detection properties that specify the default values when creating new jobs can now be updated at runtime without\n restarting the Workflow Manager. Changing these properties will only have an effect on new jobs, not jobs that are\n currently running.\n\n\nThese default detection system properties are separated from the general system properties in the Properties web page.\n The latter still require the Workflow Manager to be restarted for changes to take effect.\n\n\nThe Apache Commons Configuration library is now used to read and write properties files. When defining a property\n value using an environment variable in the Properties web page, or \n$MPF_HOME/config/mpf-custom.properties\n, be sure\n to prepend the variable name with \nenv:\n. For example:\n\n\n\n\ndetection.models.dir.path=${env:MPF_HOME}/models/\n\n\n\n\n\nAlternatively, you can define system properties using other system properties:\n\n\n\n\ndetection.models.dir.path=${mpf.share.path}/models/\n\n\n\nAdaptive Frame Interval\n\n\n\n\n\nThe \nFRAME_RATE_CAP\n property can be used to set a threshold on the maximum number of frames to process within one\n second of the native video time. This property takes precedence over the user-provided / pipeline-provided value\n for \nFRAME_INTERVAL\n. When the \nFRAME_RATE_CAP\n property is specified, an internal frame interval value is calculated\n as follows:\n\n\n\n\ncalcFrameInterval = max(1, floor(mediaNativeFPS / frameRateCapProp));\n\n\n\n\n\nFRAME_RATE_CAP\n may be disabled by setting it <= 0. \nFRAME_INTERVAL\n can be disabled in the same way.\n\n\nIf \nFRAME_RATE_CAP\n is disabled, then \nFRAME_INTERVAL\n will be used instead.\n\n\nIf both \nFRAME_RATE_CAP\n and \nFRAME_INTERVAL\n are disabled, then a value of 1 will be used for \nFRAME_INTERVAL\n.\n\n\n\n\nDarknet Component\n\n\n\n\n\nThis release includes a component that uses the \nDarknet neural network framework\n to\n perform detection and classification of objects using trained models.\n\n\nPipelines for the Tiny YOLO and YOLOv2 models are provided. Due to its large size, the YOLOv2 weights file must be\n downloaded separately and placed in \n$MPF_HOME/share/models/DarknetDetection\n in order to use the YOLOv2 pipelines.\n Refer to \nDarknetDetection/plugin-files/models/models.ini\n for more information.\n\n\nThis component supports a preprocessor mode and a default mode of operation. 
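Both modes are described next. As a sketch of the confidence arithmetic used when merging in preprocessor mode, assuming the detections can be treated as independent events (this is not the component's actual code):

```python
def merged_confidence(confidences):
    # Probability that at least one detection is a true positive, i.e.
    # 1 minus the probability that every detection is a false positive.
    prob_all_wrong = 1.0
    for c in confidences:
        prob_all_wrong *= 1.0 - c
    return 1.0 - prob_all_wrong

print(merged_confidence([0.6, 0.5]))  # 0.8
```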
If preprocessor mode is enabled, and\n multiple Darknet detections in a frame share the same classification, then those are merged into a single detection\n where the region corresponds to the superset region that encapsulates all of the original detections, and the\n confidence value is the probability that at least one of the original detections is a true positive. If disabled,\n multiple Darknet detections in a frame are not merged together.\n\n\nDetections are not tracked across frames. One track is generated per detection.\n\n\nThis component supports an optional \nCLASS_WHITELIST_FILE\n property. When provided, only detections with class names\n listed in the file will be generated.\n\n\nThis component can be compiled with GPU support if the NVIDIA CUDA Toolkit is installed on the build machine. Refer to\n the \nGPU Support Guide\n. If the toolkit is not found, then the component will compile with CPU\n support only.\n\n\nTo run on a GPU, set the \nCUDA_DEVICE_ID\n job property, or the detection.cuda.device.id system property, to a value >= 0.\n\n\nWhen \nCUDA_DEVICE_ID\n >= 0, you can set the \nFALLBACK_TO_CPU_WHEN_GPU_PROBLEM\n job property, or the\n detection.use.cpu.when.gpu.problem system property, to \nTRUE\n if you want to run the component logic on the CPU\n instead of the GPU when a GPU problem is detected.\n\n\n\n\nModels Directory\n\n\n\n\n\nThe\n$MPF_HOME/share/models\n directory is now used by the Darknet and Caffe components to store model files and\n associated files, such as classification names files, weights files, etc. This allows users to more easily add model\n files post-deployment. Instead of copying the model files to the \n$MPF_HOME/plugins/<component>/models\n directory on\n each node in the OpenMPF cluster, they only need to copy them to the shared directory once.\n\n\nTo add new models to the Darknet and Caffe components, add an entry to the\n respective \n<component>/plugin-files/models/models.ini\n file.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nPython components are packaged with their respective dependencies as .whl files. This can be automated by providing a\n setup.py file. An example OpenCV Python component is provided that demonstrates how the component is packaged and\n deployed with the opencv-python module. When deployed, a virtualenv is created for the component with the .whl files\n installed in it.\n\n\nWhen deploying OpenMPF, \nLD_LIBRARY_PATH\n is no longer set system-wide. Refer to Known Issues.\n\n\n\n\nWeb User Interface\n\n\n\n\n\nUpdated the Nodes page to distinguish between core nodes and spare nodes, and to show when a node is online or\n offline.\n\n\nUpdated the Component Registration page to list the core nodes as a reminder that changes will not affect spare nodes.\n\n\nUpdated the Properties page to separate the default detection properties from the general system properties.\n\n\n\n\nBug Fixes\n\n\n\n\n\nCustom Action, task, and pipeline names can now contain "(" and ")" characters again.\n\n\nDetection location elements for audio tracks and generic tracks in a JSON output object will now have a y value of \n0\n\n instead of \n1\n.\n\n\nStreaming health report and summary report timestamps have been corrected to represent hours in the 0-23 range instead\n of 1-24.\n\n\nSingle-frame .gif files are now segmented properly and no longer result in a NullPointerException.\n\n\nLD_LIBRARY_PATH\n is now set at the process level for Tomcat, the Node Manager, and component services, instead of at\n the system level in \n/etc/profile.d/mpf.sh\n. 
Also, deployments no longer create \n/etc/ld.so.conf.d/mpf.conf\n. This\n better isolates OpenMPF from the rest of the system and prevents issues, such as being unable to use SSH, when system\n libraries are not compatible with OpenMPF libraries. The latter situation may occur when running \nyum update\n on the\n system, which can make OpenMPF unusable until a new deployment package with compatible libraries is installed.\n\n\nThe Workflow Manager will no longer generate an "Error retrieving the SingleJobInfo model" line in the log if someone\n is viewing the Job Status page when a job submitted through the REST API is in progress.\n\n\n\n\nKnown Issues\n\n\n\n\n\nWhen multiple component services of the same type on the same node log to the same file at the same time, sometimes\n log lines will not be captured in the log file. The logging frameworks (log4j and log4cxx) do not support that usage.\n This problem happens more frequently on systems running many component services at the same time.\n\n\nThe following exception was observed:\n\n\n\n\ncom.google.protobuf.InvalidProtocolBufferException: Message missing required fields: data_uri\n\n\n\n\n\n\nFurther debugging is necessary to determine why that message was missing that field. The situation is not easily reproducible. It may occur when ActiveMQ and / or the system is under heavy load and sends duplicate messages in an attempt to ensure message delivery. Some of those messages seem to end up in the dead letter queue (DLQ). For now, we've improved the way we handle messages in the DLQ. If OpenMPF can process a message successfully, the job is marked as \nCOMPLETED_WITH_ERRORS\n, and the message is moved from \nActiveMQ.DLQ\n to \nMPF.DLQ_PROCESSED_MESSAGES\n. If OpenMPF cannot process a message successfully, it is moved from \nActiveMQ.DLQ\n to \nMPF.DLQ_INVALID_MESSAGES\n.\n\n\n\n\n\n\nThe \nmpf stop\n command will stop the Workflow Manager, which will in turn send commands to all of the available nodes\n to stop all running component services. If a service is processing a sub-job when the quit command is received, that\n service process will not terminate until that sub-job is completely processed. Thus, the service may put a sub-job\n response on the ActiveMQ response queue after the Workflow Manager has terminated. That will not cause a problem\n because the queues are flushed the next time the Workflow Manager starts; however, there will be a problem if the\n service finishes processing the sub-job after the Workflow Manager is restarted. At that time, the Workflow Manager\n will have no knowledge of the old job and will in turn generate warnings in the log about how the job id is "not known\n to the system" and/or "not found as a batch or a streaming job". These can be safely ignored. Often, if these messages\n appear in the log, then C++ services were running after stopping the Workflow Manager. To address this, you may wish\n to run \nsudo killall amq_detection_component\n after running \nmpf stop\n.\n\n\n\n\nOpenMPF 2.0.x\n\n\n2.0.0: February 2018\n\n\n\n\n\nNOTE:\n Components built for previous releases of OpenMPF are not compatible with OpenMPF 2.0.0 due to Batch Component API changes to support generic detections, and changes made to the format of the \ndescriptor.json\n file to support stream processing.\n\n\nNOTE:\n This release contains basic support for processing video streams. Currently, the only way to make use of that functionality is through the REST API. 
Streaming jobs and services cannot be created or monitored through the web UI. Only the SuBSENSE component has been updated to support streaming. Only single-stage pipelines are supported at this time.\n\n\n\n\nDocumentation\n\n\n\n\n\nUpdated documents to distinguish the batch component APIs from the streaming component API.\n\n\nAdded the \nC++ Streaming Component API\n.\n\n\nUpdated the \nC++ Batch Component API\n to describe support for generic detections.\n\n\nUpdated the \nREST API\n with endpoints for streaming jobs.\n\n\n\n\nSupport for Generic Detections\n\n\n\n\n\nC++ and Java components can now declare support for the \nUNKNOWN\n data type. The respective batch APIs have been\n updated with a function that will enable a component to process an \nMPFGenericJob\n, which represents a piece of media\n that is not a video, image, or audio file.\n\n\nNote that these API changes make OpenMPF R2.0.0 incompatible with components built for previous releases of OpenMPF.\n Specifically, the new component executor will not be able to load the component logic library.\n\n\n\n\nC++ Batch Component API\n\n\n\n\n\nAdded the following function to support generic detections:\n\n\nMPFDetectionError GetDetections(const MPFGenericJob &job, vector<MPFGenericTrack> &tracks)\n\n\n\n\n\n\n\n\nJava Batch Component API\n\n\n\n\n\nAdded the following method to support generic detections:\n\n\nList<MPFGenericTrack> getDetections(MPFGenericJob job)\n\n\n\n\n\n\n\n\nStreaming REST API\n\n\n\n\n\nAdded the following REST endpoints for streaming jobs:\n\n\n[GET] /rest/streaming/jobs\n: Returns a list of streaming job ids.\n\n\n[POST] /rest/streaming/jobs\n: Creates and submits a streaming job. Users can register for health report and\n summary report callbacks.\n\n\n[GET] /rest/streaming/jobs/{id}\n: Gets information about a streaming job.\n\n\n[POST] /rest/streaming/jobs/{id}/cancel\n: Cancels a streaming job.\n\n\n\n\n\n\n\n\nWorkflow Manager\n\n\n\n\n\nUpdated to support generic detections.\n\n\nUpdated Redis to store information about streaming jobs.\n\n\nAdded controllers for streaming job REST endpoints.\n\n\nAdded ability to generate health reports and segment summary reports for streaming jobs.\n\n\nImproved code flow between the Workflow Manager and master Node Manager to support streaming jobs.\n\n\nAdded ActiveMQ queues to enable the C++ Streaming Component Executor to send reports and job status to the Workflow\n Manager.\n\n\n\n\nNode Manager\n\n\n\n\n\nUpdated the master Node Manager and child Node Managers to spawn component services on demand to handle streaming\n jobs, cancel those jobs, and to monitor the status of those processes.\n\n\nUsing .ini files to represent streaming job properties and enable better communication between a child Node Manager\n and C++ Streaming Component Executor.\n\n\n\n\nC++ Streaming Component API\n\n\n\n\n\nDeveloped the C++ Streaming Component API with the following functions:\n\n\nMPFStreamingDetectionComponent(const MPFStreamingVideoJob &job)\n: Constructor that takes a streaming video job.\n\n\nstring GetDetectionType()\n: Returns the type of detection (e.g. 
"FACE").\n\n\nvoid BeginSegment(const VideoSegmentInfo &segment_info)\n: Indicates the beginning of a new video segment.\n\n\nbool ProcessFrame(const cv::Mat &frame, int frame_number)\n: Processes a single frame for the current video\n segment.\n\n\nvector<MPFVideoTrack> EndSegment()\n: Indicates the end of the current video segment.\n\n\n\n\n\n\nUpdated the C++ Hello World component to support streaming jobs.\n\n\n\n\nC++ Streaming Component Executor\n\n\n\n\n\nDeveloped the C++ Streaming Component Executor to load a streaming component logic library, read frames from a video\n stream, and exercise the component logic through the C++ Streaming Component API.\n\n\nWhen the C++ Streaming Component Executor cannot read a frame from the stream, it will sleep for at least 1\n millisecond, doubling the amount of sleep time per attempt until it reaches the \nstallTimeout\n value specified when\n the job was created. While stalled, the job status will be \nSTALLED\n. After the timeout is exceeded, the job will\n be \nTERMINATED\n.\n\n\nThe C++ Streaming Component Executor supports \nFRAME_INTERVAL\n, as well as rotation, horizontal flipping, and\n cropping (region of interest) properties. Does not support \nUSE_KEY_FRAMES\n.\n\n\n\n\nInteroperability Package\n\n\n\n\n\nAdded the following Java classes to the interoperability package to simplify third-party integration:\n\n\nJsonHealthReportCollection\n: Represents the JSON content of a health report callback. Contains one or\n more \nJsonHealthReport\n objects.\n\n\nJsonSegmentSummaryReport\n: Represents the JSON content of a summary report callback. Content is similar to the\n JSON output object used for batch processing.\n\n\n\n\n\n\n\n\nSuBSENSE Component\n\n\n\n\n\nThe SuBSENSE component now supports both batch processing and stream processing.\n\n\nEach video segment will be processed independently of the rest. In other words, tracks will be generated on a\n segment-by-segment basis and tracks will not carry over between segments.\n\n\nNote that the last frame in the previous segment will be used to determine if there is motion in the first frame of\n the next segment.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nUpdated \ndescriptor.json\n fields to allow components to support batch and/or streaming jobs. Components that use the\n old \ndescriptor.json\n file format cannot be registered through the web UI.\n\n\nBatch component logic and streaming component logic are compiled into separate libraries.\n\n\nThe mySQL \nstreaming_job_request\n table has been updated with the following fields, which are used to populate the\n JSON health reports:\n\n\nstatus_detail\n: (Optional) A user-friendly description of the current job status.\n\n\nactivity_frame_id\n: The frame id associated with the last job activity. Activity is defined as the start of a new\n track for the current segment.\n\n\nactivity_timestamp\n: The timestamp associated with the last job activity.\n\n\n\n\n\n\n\n\nWeb User Interface\n\n\n\n\n\nAdded column names to the table that appears when the user clicks on the Media button associated with a job on the Job\n Status page. Descriptive comments are now provided when table cells are empty.\n\n\n\n\nBug Fixes\n\n\n\n\n\nUpgraded Tika to 1.17 to resolve an issue with improper indentation in a Python file (rotation.py) that resulted in\n generating at least one error message per image processed. 
When processing a large number of images, this would\n generate many error messages, causing the Automatic Bug Reporting Tool daemon (abrtd) process to run at 100% CPU. Once\n in that state, that process would stay there, essentially wasting one CPU core. This caused some of the Jenkins\n virtual machines we used for testing to become unresponsive.\n\n\n\n\nKnown Issues\n\n\n\n\n\n\n\nOpenCV 3.3.0 \ncv::imread()\n does not properly decode some TIFF images that have EXIF orientation metadata. It can\n handle images that are flipped horizontally, but not vertically. It also has issues with rotated images. Since most\n components rely on that function to read image data, those components may silently fail to generate detections for\n those kinds of images.\n\n\n\n\n\n\nUsing single quotes, apostrophes, or double quotes in the name of an algorithm, action, task, or pipeline configured\n on an existing OpenMPF system will result in a failure to perform an OpenMPF upgrade on that system. Specifically, the\n step where pre-existing custom actions, tasks, and pipelines are carried over to the upgraded version of OpenMPF will\n fail. Please do not use those special characters while naming those elements. If this has been done already, then\n those elements should be manually renamed in the XML files prior to an upgrade attempt.\n\n\n\n\n\n\nOpenMPF uses OpenCV, which uses FFmpeg, to connect to video streams. If a proxy and/or firewall prevents the network\n connection from succeeding, then OpenCV, or the underlying FFmpeg library, will segfault. This causes the C++\n Streaming Component Executor process to fail. In turn, the job status will be set to \nERROR\n with a status detail\n message of "Unexpected error. See logs for details". In this case, the logs will not contain any useful information.\n You can identify a segfault by the following line in the node-manager log:\n\n\n\n\n\n\n2018-02-15 16:01:21,814 INFO [pool-3-thread-4] o.m.m.nms.streaming.StreamingProcess - Process: Component exited with exit code 139\u00a0\n\n\n\n\n\nTo determine whether FFmpeg can connect to the stream, run \nffmpeg -i <stream URI>\n in a terminal window. Here's an example when it's successful:\n\n\n\n\n[mpf@localhost bin]$ ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov\nffmpeg version n3.3.3-1-ge51e07c Copyright (c) 2000-2017 the FFmpeg developers\n built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)\n configuration: --prefix=/apps/install --extra-cflags=-I/apps/install/include --extra-ldflags=-L/apps/install/lib --bindir=/apps/install/bin --enable-gpl --enable-nonfree --enable-libtheora --enable-libfreetype --enable-libmp3lame --enable-libvorbis --enable-libx264 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3 --enable-shared --disable-libsoxr --enable-avresample\n libavutil 55. 58.100 / 55. 58.100\n libavcodec 57. 89.100 / 57. 89.100\n libavformat 57. 71.100 / 57. 71.100\n libavdevice 57. 6.100 / 57. 6.100\n libavfilter 6. 82.100 / 6. 82.100\n libavresample 3. 5. 0 / 3. 5. 0\n libswscale 4. 6.100 / 4. 6.100\n libswresample 2. 7.100 / 2. 7.100\n libpostproc 54. 5.100 / 54. 
5.100\n[rtsp @ 0x1924240] UDP timeout, retrying with TCP\nInput #0, rtsp, from 'rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov':\n Metadata:\n title : BigBuckBunny_115k.mov\n Duration: 00:09:56.48, start: 0.000000, bitrate: N/A\n Stream #0:0: Audio: aac (LC), 12000 Hz, stereo, fltp\n Stream #0:1: Video: h264 (Constrained Baseline), yuv420p(progressive), 240x160, 24 fps, 24 tbr, 90k tbn, 48 tbc\nAt least one output file must be specified\n\n\n\n\n\nHere's an example where it's not successful, which may indicate network issues:\n\n\n\n\n[mpf@localhost bin]$ ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov\nffmpeg version n3.3.3-1-ge51e07c Copyright (c) 2000-2017 the FFmpeg developers\n built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)\n configuration: --prefix=/apps/install --extra-cflags=-I/apps/install/include --extra-ldflags=-L/apps/install/lib --bindir=/apps/install/bin --enable-gpl --enable-nonfree --enable-libtheora --enable-libfreetype --enable-libmp3lame --enable-libvorbis --enable-libx264 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3 --enable-shared --disable-libsoxr --enable-avresample\n libavutil 55. 58.100 / 55. 58.100\n libavcodec 57. 89.100 / 57. 89.100\n libavformat 57. 71.100 / 57. 71.100\n libavdevice 57. 6.100 / 57. 6.100\n libavfilter 6. 82.100 / 6. 82.100\n libavresample 3. 5. 0 / 3. 5. 0\n libswscale 4. 6.100 / 4. 6.100\n libswresample 2. 7.100 / 2. 7.100\n libpostproc 54. 5.100 / 54. 5.100\n[tcp @ 0x171c300] Connection to tcp://184.72.239.149:554?timeout=0 failed: Invalid argument\nrtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov: Invalid argument\n\n\n\n\n\nTika 1.17 does not come pre-packaged with support for some embedded image formats in PDF files, possibly to avoid\n patent issues. OpenMPF does not handle embedded images in PDFs, so that's not a problem. Tika will print out the\n following warnings, which can be safely ignored:\n\n\n\n\nJan 22, 2018 11:02:15 AM org.apache.tika.config.InitializableProblemHandler$3 handleInitializableProblem\nWARNING: JBIG2ImageReader not loaded. jbig2 files will be ignored\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\nTIFFImageWriter not loaded. tiff files will not be processed\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\nJ2KImageReader not loaded. JPEG2000 files will not be processed.\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\n\n\n\n\nOpenMPF 1.0.x\n\n\n1.0.0: October 2017\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nBuild Guide\n with instructions for installing the latest JDK,\n latest JRE, FFmpeg 3.3.3, new codecs, and OpenCV 3.3.\n\n\nAdded an \nAcknowledgements\n section that provides information on third party dependencies\n leveraged by OpenMPF.\n\n\nAdded a \nFeed Forward Guide\n that explains feed forward processing and how to use it.\n\n\nAdded missing requirements checklist content to\n the \nInstall Guide\n.\n\n\nUpdated the README at the top level of each of the primary repositories to help with user navigation and provide\n general information.\n\n\n\n\nUpgrade to FFmpeg 3.3.3 and OpenCV 3.3\n\n\n\n\n\nUpdated core framework from FFmpeg 2.6.3 to FFmpeg 3.3.3.\n\n\nAdded the following FFmpeg codecs: x265, VP9, AAC, Opus, Speex.\n\n\nUpdated core framework and components from OpenCV 3.2 to OpenCV 3.3. 
No longer building with opencv_contrib.\n\n\n\n\nFeed Forward Behavior\n\n\n\n\n\nUpdated the Workflow Manager (WFM) and all video components to optionally perform feed forward processing for batch\n jobs. This allows tracks to be passed forward from one pipeline stage to the next. Components in the next stage will\n only process the frames associated with the detections in those tracks. This differs from the default segmenting\n behavior, which does not preserve detection regions or track information between stages.\n\n\nTo enable this behavior, the optional \nFEED_FORWARD_TYPE\n property must be set to \nFRAME\n, \nSUPERSET_REGION\n,\n or \nREGION\n. If set to \nFRAME\n then the components in the next stage will process the whole frame region associated\n with each detection in the track passed forward. If set to \nSUPERSET_REGION\n then the components in the next stage\n will determine the bounding box that encapsulates all of the detection regions in the track, and only process the\n pixel data within that superset region. If set to \nREGION\n then the components in the next stage will process the\n region associated with each detection in the track passed forward, which may vary in size and position from frame to\n frame.\n\n\nThe optional \nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n property can be set to a number to limit the number of detections\n passed forward in a track. For example, if set to \"5\", then only the top 5 detections in the track will be passed\n forward and processed by the next stage. The top detections are defined as those with the highest confidence values,\n or if the confidence values are the same, those with the lowest frame index.\n\n\nNote that setting the feed forward properties has no effect on the first pipeline stage because there is no prior\n stage that can pass tracks to it.\n\n\n\n\nCaffe Component\n\n\n\n\n\nUpdated the Caffe component to process images in the BGR color space instead of the RGB color space. This addresses a\n bug found in OpenCV. Refer to the Bug Fixes section below.\n\n\nAdded support for processing videos.\n\n\nAdded support for an optional \nACTIVATION_LAYER_LIST\n property. For each network layer specified in the list,\n the \ndetectionProperties\n map in the JSON output object will contain one entry. The value is an encoded string of the\n JSON representation of an OpenCV matrix of the activation values for that layer. The activation values are obtained\n after the Caffe network has processed the frame data.\n\n\nAdded support for an optional \nSPECTRAL_HASH_FILE_LIST\n property. For each JSON file specified in the list,\n the \ndetectionProperties\n map in the JSON output object will contain one entry. The value is a string of 0's and 1's\n representing the spectral hash calculated using the information in the spectral hash JSON file. The spectral hash is\n calculated using activation values after the Caffe network has processed the frame data.\n\n\nAdded a pipeline to showcase the above two features for the GoogLeNet Caffe model.\n\n\nRemoved the \nTRANSPOSE\n property from the Caffe component since it was not necessary.\n\n\nAdded red, green, and blue mean subtraction values to the GoogLeNet pipeline.\n\n\n\n\nUse Key Frames\n\n\n\n\n\nAdded support for an optional \nUSE_KEY_FRAMES\n property to each video component. When true the component will only\n look at key frames (I-frames) from the input video. Can be used in conjunction with \nFRAME_INTERVAL\n. 
For example,\n when \nUSE_KEY_FRAMES\n is true, and \nFRAME_INTERVAL\n is set to \"2\", then every other key frame will be processed.\n\n\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\n\n\n\nUpdated the MPFVideoCapture and MPFImageReader tools to handle feed forward properties.\n\n\nUpdated the MPFVideoCapture tool to handle \nFRAME_INTERVAL\n and \nUSE_KEY_FRAMES\n properties.\n\n\nUpdated all existing components to leverage these tools as much as possible.\n\n\nWe encourage component developers to use these tools to automatically take care of common frame grabbing and frame\n manipulation behaviors, and not to reinvent the wheel.\n\n\n\n\nDead Letter Queue\n\n\n\n\n\nIf for some reason a sub-job request that should have gone to a component ends up on the ActiveMQ Dead Letter Queue (\n DLQ), then the WFM will now process that failed request so that the job can complete. The ActiveMQ management page\n will now show that \nActiveMQ.DLQ\n has 1 consumer. It will also show unconsumed messages\n in \nMPF.PROCESSED_DLQ_MESSAGES\n. Those are left for auditing purposes. The \"Message Detail\" for these shows the string\n representation of the original job request protobuf message.\n\n\n\n\nUpgrade Path\n\n\n\n\n\nRemoved the Release 0.8 to Release 0.9 upgrade path in the deployment scripts.\n\n\nAdded support for a Release 0.9 to Release 1.0.0 upgrade path, and a Release 0.10.0 to Release 1.0.0 upgrade path.\n\n\n\n\nMarkup\n\n\n\n\n\nBounding boxes are now drawn along the interpolated path between detection regions whenever there are one or more\n frames in a track which do not have detections associated with them.\n\n\nFor each track, the color of the bounding box is now a randomly selected hue in the HSV color space. The colors are\n evenly distributed using the golden ratio.\n\n\n\n\nBug Fixes\n\n\n\n\n\nFixed a \nbug in OpenCV\n where the Caffe example code was processing\n images in the RGB color space instead of the BGR color space. Updated the OpenMPF Caffe component accordingly.\n\n\nFixed a bug in the OpenCV person detection component that caused bounding boxes to be too large for detections near\n the edge of a frame.\n\n\nResubmitting jobs now properly carries over configured job properties.\n\n\nFixed a bug in the build order of the OpenMPF project so that test modules that the WFM depends on are built before\n the WFM itself.\n\n\nThe Markup component draws bounding boxes between detections when a \nFRAME_INTERVAL\n is specified. This is so that the\n bounding box in the marked-up video appears in every frame. Fixed a bug where the bounding boxes drawn on\n non-detection frames appeared to stand still rather than move along the interpolated path between detection regions.\n\n\nFixed a bug on the OALPR license plate detection component where it was not properly handling the \nSEARCH_REGION_*\n\n properties.\n\n\nSupport for the \nMIN_GAP_BETWEEN_SEGMENTS\n property was not implemented properly. When the gap between two segments is\n less than this property value then the segments should be merged; otherwise, the segments should remain separate. In\n some cases, the exact opposite was happening. This bug has been fixed.\n\n\n\n\nKnown Issues\n\n\n\n\n\nBecause of the number of additional ActiveMQ messages involved, enabling feed forward for low resolution video may\n take longer than the non-feed-forward behavior.\n\n\n\n\nOpenMPF 0.x.x\n\n\n0.10.0: July 2017\n\n\n\n\n\nWARNING:\n There is no longer a \nDEFAULT CAFFE ACTION\n, \nDEFAULT CAFFE TASK\n, or \nDEFAULT CAFFE PIPELINE\n. 
There is now a \nCAFFE GOOGLENET DETECTION PIPELINE\n and \nCAFFE YAHOO NSFW DETECTION PIPELINE\n, which each have a respective action and task.\n\n\nNOTE:\n MPFImageReader has been re-enabled in this version of OpenMPF since we upgraded to OpenCV 3.2, which addressed the known issues with \nimread()\n, auto-orientation, and jpeg files in OpenCV 3.1.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded a \nContributor Guide\n that provides guidelines for contributing to the OpenMPF\n codebase.\n\n\nUpdated the \nJava Batch Component API\n with links to the example Java components.\n\n\nUpdated the \nBuild Guide\n with instructions for OpenCV 3.2.\n\n\n\n\nUpgrade to OpenCV 3.2\n\n\n\n\n\nUpdated core framework and components from OpenCV 3.1 to OpenCV 3.2.\n\n\n\n\nSupport for Animated gifs\n\n\n\n\n\nAll gifs are now treated as videos. Each gif will be handled as an MPFVideoJob.\n\n\nUnanimated gifs are treated as 1-frame videos.\n\n\nThe WFM Media Inspector now populates the \nmedia_properties\n map with a \nFRAME_COUNT\n entry (in addition to\n the \nDURATION\n and \nFPS\n entries).\n\n\n\n\nCaffe Component\n\n\n\n\n\nAdded support for the Yahoo Not Suitable for Work (NSFW) Caffe model for explicit material detection.\n\n\nUpdated the Caffe component to support the OpenCV 3.2 Deep Neural Network (DNN) module.\n\n\n\n\nFuture Support for Streaming Video\n\n\n\n\n\nNOTE:\n At this time, OpenMPF does not support streaming video. This section details what's being / has been done so far to prepare for that feature.\n\n\n\n\n\n\nThe codebase is being updated / refactored to support both the current \"batch\" job functionality and new \"streaming\"\n job functionality.\n\n\nbatch job: complete video files are written to disk before they are processed\n\n\nstreaming job: video frames are read from a streaming endpoint (such as RTSP) and processed in near real time\n\n\n\n\n\n\nThe REST API is being updated with endpoints for streaming jobs:\n\n\n[POST] /rest/streaming/jobs\n: Creates and submits a streaming job\n\n\n[POST] /rest/streaming/jobs/{id}/cancel\n: Cancels a streaming job\n\n\n[GET] /rest/streaming/jobs/{id}\n: Gets information about a streaming job\n\n\n\n\n\n\nThe Redis and mySQL databases are being updated to support streaming video jobs.\n\n\nA batch job will never have the same id as a streaming job. The integer ids will always be unique.\n\n\n\n\n\n\n\n\nBug Fixes\n\n\n\n\n\nThe MOG and SuBSENSE component services could segfault and terminate if the \nUSE_MOTION_TRACKING\n property was set to\n \u201c1\u201d and a detection was found close to the edge of the frame. Specifically, this would only happen if the video had a\n width and/or height dimension that was not an exact power of two.\n\n\nThis was because the code downsamples each frame by a power of two and rounds the value of the width and\n height up to the nearest integer. Later on when upscaling detection rectangles back to a size that\u2019s relative to\n the original image, the resized rectangle sometimes extended beyond the bounds of the original frame.\n\n\n\n\n\n\n\n\nKnown Issues\n\n\n\n\n\nIf a job is submitted through the REST API, and a user is logged into the web UI and looking at the job status page,\n the WFM may generate \"Error retrieving the SingleJobInfo model for the job with id\" messages.\n\n\nThis is because the job status is only added to the HTTP session object if the job is submitted through the web\n UI. 
When the UI queries the job status it inspects this object.\n\n\nThis message does not appear if job status is obtained using the \n[GET] /rest/jobs/{id}\n endpoint.\n\n\n\n\n\n\nThe \n[GET] /rest/jobs/stats\n endpoint aggregates information about all of the jobs ever run on the system. If\n thousands of jobs have been run, this call could take minutes to complete. The code should be improved to execute a\n direct mySQL query.\n\n\n\n\n0.9.0: April 2017\n\n\n\n\n\nWARNING:\n MPFImageReader has been disabled in this version of OpenMPF. Component developers should use MPFVideoCapture instead. This affects components developed against previous versions of OpenMPF and components developed against this version of OpenMPF. Please refer to the Known Issues section for more information.\n\n\nWARNING:\n The OALPR Text Detection Component has been renamed to OALPR \nLicense Plate\n Text Detection Component. This affects the name of the component package and the name of the actions, tasks, and pipelines. When upgrading from R0.8 to R0.9, if the old OALPR Text Detection Component is installed in R0.8 then you will be prompted to install it again at the end of the upgrade path script. We recommend declining this prompt because the old component will conflict with the new component.\n\n\nWARNING:\n Action, task, and pipeline names that started with \nMOTION DETECTION PREPROCESSOR\n have been renamed \nMOG MOTION DETECTION PREPROCESSOR\n. Similarly, \nWITH MOTION PREPROCESSOR\n has changed to \nWITH MOG MOTION PREPROCESSOR\n.\n\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nREST API\n to reflect job properties, algorithm-specific properties, and\n media-specific properties.\n\n\nStreamlined the \nC++ Batch Component API\n document for clarity and simplicity.\n\n\nCompleted the \nJava Batch Component API\n document.\n\n\nUpdated the \nAdmin Guide\n and \nUser Guide\n to reflect web UI changes.\n\n\nUpdated the \nBuild Guide\n with instructions for GitHub repositories.\n\n\n\n\nWorkflow Manager\n\n\n\n\n\nAdded support for job properties, which will override pre-defined pipeline properties.\n\n\nAdded support for algorithm-specific properties, which will apply to a single stage of the pipeline and will override\n job properties and pre-defined pipeline properties.\n\n\nAdded support for media-specific properties, which will apply to a single piece of media and will override job\n properties, algorithm-specific properties, and pre-defined pipeline properties.\n\n\nComponents can now be automatically registered and installed when the web application starts in Tomcat.\n\n\n\n\nWeb User Interface\n\n\n\n\n\nThe \"Close All\" button on pop-up notifications now dismisses all notifications from the queue, not just the visible\n ones.\n\n\nJob completion notifications now only appear for jobs created during the current login session instead of all jobs.\n\n\nThe \nROTATION\n, \nHORIZONTAL_FLIP\n, and \nSEARCH_REGION_*\n properties can be set using the web interface when creating a\n job. 
Once files are selected for a job, these properties can be set individually or by groups of files.\n\n\nThe Node and Process Status page has been merged into the Node Configuration page for simplicity and ease of use.\n\n\nThe Media Markup results page has been merged into the Job Status page for simplicity and ease of use.\n\n\nThe File Manager UI has been improved to handle large numbers of files and symbolic links.\n\n\nThe side navigation menu is now replaced by a top navigation bar.\n\n\n\n\nREST API\n\n\n\n\n\nAdded an optional jobProperties object to the \n/rest/jobs/\n request, which contains String key-value pairs that\n override the pipeline's pre-configured job properties.\n\n\nAdded an optional algorithmProperties object to the \n/rest/jobs/\n request which can be used to configure properties\n for specific algorithms in the pipeline. These properties override the pipeline's pre-configured job properties. They\n also override the values in the jobProperties object.\n\n\nUpdated the \n/rest/jobs/\n request to add more detail to media, replacing a list of mediaUri Strings with a list of\n media objects, each of which contains a mediaUri and an optional mediaProperties map. The mediaProperties map can be\n used to configure properties for the specific piece of media. These properties override the pipeline's pre-configured\n job properties, values in the jobProperties object, and values in the algorithmProperties object.\n\n\nStreamlined the actions, tasks, and pipelines endpoints that are used by the web UI.\n\n\n\n\nFlipping, Rotation, and Region of Interest\n\n\n\n\n\nThe \nROTATION\n, \nHORIZONTAL_FLIP\n, and \nSEARCH_REGION_*\n properties will no longer appear in the detectionProperties\n map in the JSON detection output object. When applied to an algorithm these properties now appear in the\n pipeline.stages.actions.properties element. When applied to a piece of media these properties will now appear in the\n media.mediaProperties element.\n\n\nThe OpenMPF now supports multiple regions of interest in a single media file. Each region will produce tracks\n separately, and the tracks for each region will be listed in the JSON output as if from a separate media file.\n\n\n\n\nComponent API\n\n\n\n\n\nJava Batch Component API is functionally complete for third-party development, with the exception of Component Adapter\n and frame transformation utilities classes.\n\n\nRe-architected the Java Batch Component API to use a more traditional Java method structure of returning track lists\n and throwing exceptions (rather than modifying input track lists and returning statuses), and encapsulating job\n properties into MPFJob objects:\n\n\nList getDetections(MPFVideoJob job) throws MPFComponentDetectionError\n\n\nList getDetections(MPFAudioJob job) throws MPFComponentDetectionError\n\n\nList getDetections(MPFImageJob job) throws MPFComponentDetectionError\n\n\n\n\n\n\nCreated examples for the Java Batch Component API.\n\n\nReorganized the Java and C++ component source code to enable component development without the OpenMPF core, which\n will simplify component development and streamline the code base.\n\n\n\n\nJSON Output Objects\n\n\n\n\n\nThe JSON output object for the job now contains a jobProperties map which contains all properties defined for the job\n in the job request. 
For example, if the job request specifies a \nCONFIDENCE_THRESHOLD\n of 5 then the jobProperties map\n in the output will also list a \nCONFIDENCE_THRESHOLD\n of 5.\n\n\nThe JSON output object for the job now contains an algorithmProperties element which contains all algorithm-specific\n properties defined for the job in the job request. For example, if the job request specifies a \nFRAME_INTERVAL\n of 2\n for FACECV then the algorithmProperties element in the output will contain an entry for \"FACECV\" and that entry will\n list a \nFRAME_INTERVAL\n of 2.\n\n\nEach JSON media output object now contains a mediaProperties map which contains all media-specific properties defined\n by the job request. For example, if the job request specifies a \nROTATION\n of 90 degrees for a single piece of media\n then the mediaProperties map for that piece of media will list a \nROTATION\n of 90.\n\n\nThe content of JSON output objects is now organized by detection type (e.g. MOTION, FACE, PERSON, TEXT, etc.) rather\n than action type.\n\n\n\n\nCaffe Component\n\n\n\n\n\nAdded support for flip, rotation, and cropping to regions of interest.\n\n\nAdded support for returning multiple classifications per detection based on user-defined settings. The classification\n list is in order of decreasing confidence value.\n\n\n\n\nNew Pipelines\n\n\n\n\n\nNew SuBSENSE motion preprocessor pipelines have been added to components that perform detection on video.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nActions.xml\n, \nAlgorithms.xml\n, \nnodeManagerConfig.xml\n, \nnodeServicesPalette.json\n, \nPipelines.xml\n, and \nTasks.xml\n\n are no longer stored within the Workflow Manager WAR file. They are now stored under \n$MPF_HOME/data\n. This makes it\n easier to upgrade the Workflow Manager and makes it easier for users to access these files.\n\n\nEach component can now be optionally installed and registered during deployment. Components not registered are set to\n the \nUPLOADED\n state. They can then be removed or registered through the Component Registration page.\n\n\nJava components are now packaged as tar.gz files instead of RPMs, bringing them into alignment with C++ components.\n\n\nOpenMPF R0.9 can be installed over OpenMPF R0.8. The deployment scripts will determine that an upgrade should take\n place.\n\n\nAfter the upgrade, user-defined actions, tasks, and pipelines will have \"CUSTOM\" prepended to their name.\n\n\nThe job_request table in the mySQL database will have a new \"output_object_version\" column. This column will\n have \"1.0\" for jobs created using OpenMPF R0.8 and \"2.0\" for jobs created using OpenMPF R0.9. The JSON output\n object schema has changed between these versions.\n\n\n\n\n\n\nReorganized source code repositories so that component SDKs can be downloaded separately from the OpenMPF core and so\n that components are grouped by license and maturity. Build scripts have been created to streamline and simplify the\n build process across the various repositories.\n\n\n\n\nUpgrade to OpenCV 3.1\n\n\n\n\n\nThe OpenMPF software has been ported to use OpenCV 3.1, including all of the C++ detection components and the markup\n component. For the OpenALPR license plate detection component, the versions of the openalpr, tesseract, and leptonica\n libraries were also upgraded to openalpr-2.3.0, tesseract-3.0.4, and leptonica-1.7.2. 
For the SuBSENSE motion\n component, the version of the SuBSENSE library was upgraded to use the code found at this\n location: \nhttps://bitbucket.org/pierre_luc_st_charles/subsense/src\n.\n\n\n\n\nBug Fixes\n\n\n\n\n\nMOG motion detection always detected motion in frame 0 of a video. Because motion can only be detected between two\n adjacent frames, frame 1 is now the first frame in which motion can be detected.\n\n\nMOG motion detection never detected motion in the first frame of a video segment (other than the first video segment\n because of the frame 0 bug described above). Now, motion is detected using the first frame before the start of a\n segment, rather than the first frame of the segment.\n\n\nThe above bugs were also present in SuBSENSE motion detection and have been fixed.\n\n\nSuBSENSE motion detection generated tracks where the frame numbers were off by one. Corrected the frame index logic.\n\n\nVery large video files caused an out of memory error in the system during Workflow Manager media inspection.\n\n\nA job would fail when processing images with an invalid metadata tag for the camera flash setting.\n\n\nUsers were permitted to select invalid file types using the File Manager UI.\n\n\n\n\nKnown Issues\n\n\n\n\n\nMPFImageReader does not work reliably with the current release version of OpenCV 3.1\n: In OpenCV 3.1, new\n functionality was introduced to interpret EXIF information when reading jpeg files.\n\n\nThere are two issues with this new functionality that impact our ability to use the OpenCV \nimread()\n function with\n MPFImageReader:\n\n\nFirst, because of a bug in the OpenCV code, reading a jpeg file that contains EXIF information could cause the read to\n hang. (See \nhttps://github.com/opencv/opencv/issues/6665\n.)\n\n\nSecond, it is not possible to tell the \nimread()\n function to ignore the EXIF data, so the image it returns is\n automatically rotated. (See \nhttps://github.com/opencv/opencv/issues/6348\n.) This results in the MPFImageReader\n applying a second rotation to the image due to the EXIF information.\n\n\n\n\n\n\nTo address these issues, we developed the following workarounds:\n\n\nCreated a version of the MPFVideoCapture that works with an MPFImageJob. The new MPFVideoCapture can pull frames\n from both video files and images. MPFVideoCapture leverages cv::VideoCapture, which does not have the two issues\n described above.\n\n\nDisabled the use of MPFImageReader to prevent new users from trying to develop code leveraging this previous\n functionality.", "title": "Release Notes" }, { @@ -107,7 +107,7 @@ }, { "location": "/License-And-Distribution/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. 
All Rights Reserved.\n\n\nLicense Considerations\n\n\nWe are not lawyers and provide this information to the best of our ability in an attempt to honor all licensing\nagreements and clarify the potential responsibilities of OpenMPF users.\n\n\nOpen Source\n\n\nThe Open Media Processing Framework (OpenMPF) source code is publicly available on \nGitHub\n.\nBy distributing the OpenMPF software as raw source code the development team is able to keep most of the software clean\nfrom copyleft and patent issues so that it can be published under a more open\n\nApache 2.0\n license and freely distributed to interested parties.\n\n\n\n\nIMPORTANT:\n It is the responsibility of the end users who build the OpenMPF software to abide by all of the\nnon-commercial and re-distribution restrictions imposed by the dependencies that the OpenMPF software uses. Building\nOpenMPF and linking in these dependencies at build time or run time may result in creating a derivative work under the\nterms of the GNU General Public License. Refer to \nAcknowledgements\n for more information\nabout these dependencies.\n\n\n\n\n\nDocker Distribution\n\n\nThe OpenMPF Docker images are released under \nGPLv2\n, unless\notherwise stated.\n\n\nffmpeg-devel Integration\n\n\nThe software in the Workflow Manager image, and most C++ component images, is dynamically linked with a version of\nOpenCV that is in turn linked with a version of ffmpeg-devel built with\n\n--enable-gpl --enable-nonfree --enable-libx264 --enable-libx265\n.\nDistribution of software that includes the latter two encoders must be released under GPLv2 and\ncannot be used commercially without obtaining the appropriate licenses from \nx264 LLC / CoreCodec\n or\n\nMulticoreWare\n. See \nhere\n for more information.\n\n\nNote that the OpenMPF core is built with, but does not require, the x264 or x265 encoders. In some cases, such as when\ngenerating video markup, users have the option to use x264, or an alternative encoder such as vp9 or mjpeg.\n\n\nUsage Royalties\n\n\nx264 and x256 Encoders\n\n\nIf someone uses a component that makes use of the x264 or x256 encoders in FFmpeg for commercial applications, then that\nperson should obtain the appropriate licenses from \nx264 LLC / CoreCodec\n or\n\nMulticoreWare\n, respectively.\n\n\n\"h264\" and \"hevc\" Decoders\n\n\nFFmpeg comes bundled with its own native \"h264\" and \"hevc\" decoders, which OpenMPF may use depending on the media types\nprovided when creating jobs. Although released under LGPL, use of these decoders for commercial applications may still\nrequire the payment of royalties to patent holders. The FFmpeg group states on their \nLegal\npage\n:\n\n\n\n\nQ: Does FFmpeg use patented algorithms?\n\n\nA: We do not know, we are not lawyers so we are not qualified to answer this. Also we have never read patents to\nimplement any part of FFmpeg, so even if we were qualified we could not answer it as we do not know what is patented.\n\n\nThere have been cases where companies have used FFmpeg in their products. These companies found out that once you\nstart trying to make money from patented technologies, the owners of the patents will come after their licensing fees.\nNotably, MPEG LA is vigilant and diligent about collecting for MPEG-related technologies.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. 
All Rights Reserved.\n\n\nLicense Considerations\n\n\nWe are not lawyers and provide this information to the best of our ability in an attempt to honor all licensing\nagreements and clarify the potential responsibilities of OpenMPF users.\n\n\nOpen Source\n\n\nThe Open Media Processing Framework (OpenMPF) source code is publicly available on \nGitHub\n.\nBy distributing the OpenMPF software as raw source code the development team is able to keep most of the software clean\nfrom copyleft and patent issues so that it can be published under a more open\n\nApache 2.0\n license and freely distributed to interested parties.\n\n\n\n\nIMPORTANT:\n It is the responsibility of the end users who build the OpenMPF software to abide by all of the\nnon-commercial and re-distribution restrictions imposed by the dependencies that the OpenMPF software uses. Building\nOpenMPF and linking in these dependencies at build time or run time may result in creating a derivative work under the\nterms of the GNU General Public License. Refer to \nAcknowledgements\n for more information\nabout these dependencies.\n\n\n\n\n\nDocker Distribution\n\n\nThe OpenMPF Docker images are released under \nGPLv2\n, unless\notherwise stated.\n\n\nffmpeg-devel Integration\n\n\nThe software in the Workflow Manager image, and most C++ component images, is dynamically linked with a version of\nOpenCV that is in turn linked with a version of ffmpeg-devel built with\n\n--enable-gpl --enable-nonfree --enable-libx264 --enable-libx265\n.\nDistribution of software that includes the latter two encoders must be released under GPLv2 and\ncannot be used commercially without obtaining the appropriate licenses from \nx264 LLC / CoreCodec\n or\n\nMulticoreWare\n. See \nhere\n for more information.\n\n\nNote that the OpenMPF core is built with, but does not require, the x264 or x265 encoders. In some cases, such as when\ngenerating video markup, users have the option to use x264, or an alternative encoder such as vp9 or mjpeg.\n\n\nUsage Royalties\n\n\nx264 and x265 Encoders\n\n\nIf someone uses a component that makes use of the x264 or x265 encoders in FFmpeg for commercial applications, then that\nperson should obtain the appropriate licenses from \nx264 LLC / CoreCodec\n or\n\nMulticoreWare\n, respectively.\n\n\n\"h264\" and \"hevc\" Decoders\n\n\nFFmpeg comes bundled with its own native \"h264\" and \"hevc\" decoders, which OpenMPF may use depending on the media types\nprovided when creating jobs. Although released under LGPL, use of these decoders for commercial applications may still\nrequire the payment of royalties to patent holders. The FFmpeg group states on their \nLegal\npage\n:\n\n\n\n\nQ: Does FFmpeg use patented algorithms?\n\n\nA: We do not know, we are not lawyers so we are not qualified to answer this. Also we have never read patents to\nimplement any part of FFmpeg, so even if we were qualified we could not answer it as we do not know what is patented.\n\n\nThere have been cases where companies have used FFmpeg in their products. These companies found out that once you\nstart trying to make money from patented technologies, the owners of the patents will come after their licensing fees.\nNotably, MPEG LA is vigilant and diligent about collecting for MPEG-related technologies.", "title": "License and Distribution" }, { @@ -152,7 +152,7 @@ }, { "location": "/Install-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. 
Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nDocker\n\n\nOpenMPF is installed using the Docker container platform.\n\n\nTo use prebuilt Docker images, refer to the \"Quick Start\" section of the documentation for the OpenMPF Workflow Manager image on \nDockerHub\n.\n\n\nFor more information, including how to setup Docker, and build and deploy OpenMPF Docker images, refer to the openmpf-docker \nREADME\n.\n\n\nAdditionally, if you would like to install OpenMPF across multiple physical or virtual machines, then refer to the openmpf-docker \nSwarm Deployment Guide\n. \n\n\nPlease review the page on \nLicense and Distribution\n.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nDocker\n\n\nOpenMPF is installed using the Docker container platform.\n\n\nTo use prebuilt Docker images, refer to the \"Quick Start\" section of the documentation for the OpenMPF Workflow Manager image on \nDockerHub\n.\n\n\nFor more information, including how to set up Docker, and build and deploy OpenMPF Docker images, refer to the openmpf-docker \nREADME\n.\n\n\nAdditionally, if you would like to install OpenMPF across multiple physical or virtual machines, then refer to the openmpf-docker \nSwarm Deployment Guide\n. \n\n\nPlease review the page on \nLicense and Distribution\n.", "title": "Install Guide" }, { @@ -162,7 +162,7 @@ }, { "location": "/Admin-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nWARNING:\n Please refer to the \nUser Configuration\n section for changing the default user passwords.\n\n\n\nINFO:\n This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.\n\n\n\nWeb UI\n\n\nThe login procedure, as well as all of the pages accessible through the Workflow Manager sidebar, are the same for admin and non-admin users. Refer to the \nUser Guide\n for more information. The default account for an admin user has the username \"admin\" and password \"mpfadm\".\n\n\nWe highly recommend changing the default username and password settings for any environment which is exposed on a network, especially production environments. The default settings are public knowledge, which could be a security risk. Please refer to the \nUser Configuration\n section below.\n\n\nThis document will cover the additional functionality permitted to admin users through the Admin Console pages.\n\n\nDashboard\n\n\nThe landing page for an admin user is the Job Status page:\n\n\n\n\nThe Job Status page displays a summary of the status for all jobs run by any user in the past. The current status and progress of any running job can be monitored from this view, which is updated automatically.\n\n\nProperties Settings\n\n\nThis page allows an admin user to view and edit various OpenMPF properties:\n\n\n\n\nAn admin user can click inside of the \"Value\" field for any of the properties and type a new value. 
Doing so will change the color of the property to orange and display an orange icon to the right of the property name.\n\n\nNote that if the admin user types in the original value of the property, or clicks the \"Reset\" button, then it will return back to the normal coloration.\n\n\nWARNING:\n Changing the value of these properties can prevent the Workflow Manager from running after the web server is restarted. Also, no validation checks are performed on the user-provided values. Proceed with caution!\n\n\n\nAt the bottom of the properties table is the \"Save Properties\" button. The number of modified properties is shown in parentheses. Clicking the button will make the necessary changes to the properties file on the file system, but the changes will not take effect until the Workflow Manager is restarted. The saved properties will be colored blue and a blue icon will be displayed to the right of the property name. Additionally, a notification will appear at the top of the page alerting all system users that a restart is required:\n\n\n\n\nHawtio\n\n\nThe \nHawtio\n web console can be accessed by selecting \"Hawtio\" from the\n\"Configuration\" dropdown menu in the top menu bar. Hawtio exposes various management information\nand settings. It can be used to monitor the state of the ActiveMQ queues used for communication\nbetween the Workflow Manager and the components.\n\n\nUser Configuration\n\n\nEvery time the Workflow Manager starts it will attempt to create accounts for the users listed in the \nuser.properties\n file. At runtime this file is extracted to \n$MPF_HOME/config\n on the machine running the Workflow Manager. For every user listed in that file, the Workflow Manager will create that user account if a user with the same name doesn't already exists in the SQL database. By default, that file contains two entries, one for the \"admin\" user with the \"mpfadm\" password, and one for a non-admin \"mpf\" user with the \"mpf123\" password.\n\n\nWe highly recommend modifying the \nuser.properties\n file with your own user entries before attempting to start the Workflow Manager for the first time. This will ensure that the default user accounts are not created.\n\n\nThe official way to deploy OpenMPF is to use the Docker container platform. If you are using Docker, please follow the instructions in the openmpf-docker \nREADME\n that explain how to use a \ndocker secret\n for your custom \nuser.properties\n file.\n\n\n(Optional) Configure HTTPS\n\n\nThe official way to deploy OpenMPF is to use the Docker container platform.\nIf you are using Docker, please follow the instructions in the openmpf-docker\n\nREADME\n\nthat explain how to configure HTTPS.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nWARNING:\n Please refer to the \nUser Configuration\n section for changing the default user passwords.\n\n\n\nINFO:\n This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.\n\n\n\nWeb UI\n\n\nThe login procedure, as well as all of the pages accessible through the Workflow Manager sidebar, are the same for admin and non-admin users. Refer to the \nUser Guide\n for more information. 
The default account for an admin user has the username \"admin\" and password \"mpfadm\".\n\n\nWe highly recommend changing the default username and password settings for any environment which is exposed on a network, especially production environments. The default settings are public knowledge, which could be a security risk. Please refer to the \nUser Configuration\n section below.\n\n\nThis document will cover the additional functionality permitted to admin users through the Admin Console pages.\n\n\nDashboard\n\n\nThe landing page for an admin user is the Job Status page:\n\n\n\n\nThe Job Status page displays a summary of the status for all jobs run by any user in the past. The current status and progress of any running job can be monitored from this view, which is updated automatically.\n\n\nProperties Settings\n\n\nThis page allows an admin user to view and edit various OpenMPF properties:\n\n\n\n\nAn admin user can click inside of the \"Value\" field for any of the properties and type a new value. Doing so will change the color of the property to orange and display an orange icon to the right of the property name.\n\n\nNote that if the admin user types in the original value of the property, or clicks the \"Reset\" button, then it will revert to the normal coloration.\n\n\nWARNING:\n Changing the value of these properties can prevent the Workflow Manager from running after the web server is restarted. Also, no validation checks are performed on the user-provided values. Proceed with caution!\n\n\n\nAt the bottom of the properties table is the \"Save Properties\" button. The number of modified properties is shown in parentheses. Clicking the button will make the necessary changes to the properties file on the file system, but the changes will not take effect until the Workflow Manager is restarted. The saved properties will be colored blue and a blue icon will be displayed to the right of the property name. Additionally, a notification will appear at the top of the page alerting all system users that a restart is required:\n\n\n\n\nHawtio\n\n\nThe \nHawtio\n web console can be accessed by selecting \"Hawtio\" from the\n\"Configuration\" dropdown menu in the top menu bar. Hawtio exposes various management information\nand settings. It can be used to monitor the state of the ActiveMQ queues used for communication\nbetween the Workflow Manager and the components.\n\n\nUser Configuration\n\n\nEvery time the Workflow Manager starts it will attempt to create accounts for the users listed in the \nuser.properties\n file. At runtime this file is extracted to \n$MPF_HOME/config\n on the machine running the Workflow Manager. For every user listed in that file, the Workflow Manager will create that user account if a user with the same name doesn't already exist in the SQL database. By default, that file contains two entries, one for the \"admin\" user with the \"mpfadm\" password, and one for a non-admin \"mpf\" user with the \"mpf123\" password.\n\n\nWe highly recommend modifying the \nuser.properties\n file with your own user entries before attempting to start the Workflow Manager for the first time. This will ensure that the default user accounts are not created.\n\n\nThe official way to deploy OpenMPF is to use the Docker container platform. 
If you are using Docker, please follow the instructions in the openmpf-docker \nREADME\n that explain how to use a \ndocker secret\n for your custom \nuser.properties\n file.\n\n\n(Optional) Configure HTTPS\n\n\nThe official way to deploy OpenMPF is to use the Docker container platform.\nIf you are using Docker, please follow the instructions in the openmpf-docker\n\nREADME\n\nthat explain how to configure HTTPS.", "title": "Admin Guide" }, { @@ -197,7 +197,7 @@ }, { "location": "/User-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nINFO:\n This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.\n\n\n\nGeneral\n\n\nThe Open Media Processing Framework (OpenMPF) can be used in three ways:\n\n\n\n\nThrough the OpenMPF Web user interface (UI)\n\n\nThrough the \nREST API endpoints\n\n\nThrough the \nCLI Runner\n\n\n\n\nAccessing the Web UI\n\n\nOn the server hosting the Open Media Processing Framework, the Web UI is accessible at http://localhost:8080. To access it from other machines, substitute the hostname or IP address of the master node server in place of \"localhost\".\n\n\nThe OpenMPF user interface was designed and tested for use with Chrome and Firefox. It has not been tested with other browsers. Attempting to use an unsupported browser will result in a warning.\n\n\nLogging In\n\n\nThe OpenMPF Web UI requires user authentication and provides two default accounts: \"mpf\" and \"admin\". The password for the \"mpf\" user is \"mpf123\". These accounts are used to assign user or admin roles for OpenMPF cluster management. Note that an administrator can remove these accounts and/or add new ones using a command line tool. Refer to the \nAdmin Guide\n for features available to an admin user.\n\n\n\n\nThe landing page for a user is the Job Status page:\n\n\n\n\nLogging out\n\n\nTo log out a user can click the down arrow associated with the user icon at the top right hand corner of the page and then select \"Logout\":\n\n\n\n\nUser (Non-Admin) Features\n\n\nThe remainder of this document will describe the features available to a non-admin user.\n\n\nCreating Workflow Manager Jobs\n\n\nA \"job\" consists of a set of image, video, or audio files and a set of exploitation algorithms that will operate on those files. A job is created by assigning input media file(s) to a pipeline. A pipeline specifies the order in which processing steps are performed. Each step consists of a single task and each task consists of one or more actions which may be performed in parallel. The following sections describe the UI views associated with the different aspects of job creation and job execution.\n\n\nCreate Job\n\n\nThis is the primary page for creating jobs. Creating a job consists of uploading and selecting files as well as a pipeline and job priority.\n\n\n\n\nUploading Files\n\n\nSelecting a directory in the File Manager will display all files in that directory. 
The user can use previously uploaded files, or to choose from the icon bar at the bottom of the panel:\n\n\n Create New Folder\n\n Add Local Files\n\n Upload from URL\n\n Refresh\n\n\nNote that the first three options are only available if the \"remote-media\" directory or one of its subdirectories is selected. That directory resides in the OpenMPF share directory. The full path is shown in the footer of the File Manager section.\n\n\nClicking the \"Add Local Files\" icon will display a file browser dialog so that the user can select and upload one or more files from their local machine. The files will be uploaded to the selected directory. The upload progress dialog will display a preview of each file (if possible) and whether or not each file is uploaded successfully.\n\n\nClicking the \"Create New Folder\" icon will allow the user to create a new directory within the one currently selected. If the user has selected \"remote-media\", then adding a directory called \"Test Data\" will place it within \"remote-media\". \"Test Data\" will appear as a subdirectory in the directory tree shown in the web UI. If the user then clicks on \"Test Data\" and then the \"Add Local Files\" button the user can upload files to that specific directory. In the screenshot below, \"lena.png\" has been uploaded to the parent \"remote-media\" directory.\n\n\n\n\nClicking the \"Upload from URL\" icon enables the user to specify URLs pointing to remote media. Each URL must appear on a new line. Note that if a URL to a video is submitted then it must be a direct link to the video file. Specifying a URL to a YouTube HTML page, for example, will not work.\n\n\n\n\nClicking the \"Refresh\" icon updates the displayed file tree from the file system. Use this if an external process has added or removed files to or from the underlying file system.\n\n\nCreating Jobs\n\n\nCreating a job consists of selecting files as well as a pipeline and job priority.\n\n\n\n\nFiles are selected by first clicking the name of a directory to populate the files table in the center of the UI and then clicking the checkbox next to the file. Multiple files can be selected, including files from different directories. Also, the contents of an entire directory, and its subdirectories, can be selected by clicking the checkbox next to the parent directory name. To review which files have been selected, click the \"View\" button shown to the right of the \"# Files\" indicator. If there are many files in a directory, you may need to page through the directory using the page number buttons at the bottom of the center pane.\n\n\nYou can remove a file from the selected files by clicking on the red \"X\" for the individual file. You can also remove multiple files by first selecting the files using the checkboxes and then clicking on the \"Remove Checked\" button.\n\n\n\n\nThe media properties can be adjusted for individual files by clicking on the \"Set Properties\" button for that file. You can modify the properties of a group of files by clicking on the \"Set properties for Checked\" after selecting multiple files.\n\n\n\n\nAfter files have been selected it's time to assign a pipeline and job priority. The \"Select a pipeline and job priority\" section is located on the right side of the screen. Clicking on the down-arrow on the far right of the \"Select a pipeline\" area displays a drop-down menu containing the available pipelines. Click on the desired pipeline to select it. 
Existing pipelines provided with the system are listed in the Default Pipelines section of this document.\n\n\n\"Select job priority\" is immediately below \"Select a pipeline\" and has a similar drop-down menu. Clicking on the down-arrow on the right hand side of the \"Select job priority\" area displays the drop-down menu of available priorities. Clicking on the desired priority selects it. Priority 4 is the default value used if no priority is selected by the user. Priority 0 is the lowest priority, and priority 9 is the highest priority. When a job is executed it's divided into tasks that are each executed by a component service running on one of the nodes in the OpenMPF cluster. Each service executes tasks with the highest priority first. Note that a service will first complete the task it's currently processing before moving on to the next task. Thus, a long-running low-priority task may delay the execution of a high-priority task.\n\n\nAfter files have been selected and a pipeline and priority are assigned, clicking on the \"Create Job\" icon will start the job. When the job starts, the user will be shown the \"Job Status\" view.\n\n\nJob Status\n\n\nThe Job Status page displays a summary of the status for all jobs run by any user in the past. The current status and progress of any running job can be monitored from this view, which is updated automatically.\n\n\n\n\nWhen a job is COMPLETE a user can view the generated JSON output object data by clicking the \"Output Objects\" button for that job. A new tab/window will open with the detection output. The detection object output displays a formatted JSON representation of the detection results.\n\n\n{\n \"jobId\": \"localhost-11\",\n \"errors\": [],\n \"warnings\": [],\n \"objectId\": \"ef027349-8e6a-4472-a459-eba9463787f3\",\n \"pipeline\": {\n \"name\": \"OCV FACE DETECTION PIPELINE\",\n \"description\": \"Performs OpenCV face detection.\",\n \"tasks\": [\n {\n \"actionType\": \"DETECTION\",\n \"name\": \"OCV FACE DETECTION TASK\",\n \"description\": \"Performs OpenCV face detection.\",\n \"actions\": [\n {\n \"algorithm\": \"FACECV\",\n \"name\": \"OCV FACE DETECTION ACTION\",\n \"description\": \"Executes the OpenCV face detection algorithm using the default parameters.\",\n \"properties\": {}\n }\n ]\n }\n ]\n },\n \"priority\": 4,\n \"siteId\": \"mpf1\",\n \"externalJobId\": null,\n \"timeStart\": \"2021-09-07T20:57:01.073Z\",\n \"timeStop\": \"2021-09-07T20:57:02.946Z\",\n \"status\": \"COMPLETE\",\n \"algorithmProperties\": {},\n \"jobProperties\": {},\n \"environmentVariableProperties\": {},\n \"media\": [\n {\n \"mediaId\": 3,\n \"path\": \"file:///opt/mpf/share/remote-media/faces.jpg\",\n \"sha256\": \"184e9b04369248ae8a97ec2a20b1409a016e2895686f90a2a1910a0bef763d56\",\n \"mimeType\": \"image/jpeg\",\n \"mediaType\": \"IMAGE\",\n \"length\": 1,\n \"mediaMetadata\": {\n \"FRAME_HEIGHT\": \"1275\",\n \"FRAME_WIDTH\": \"1920\",\n \"MIME_TYPE\": \"image/jpeg\"\n },\n \"mediaProperties\": {},\n \"status\": \"COMPLETE\",\n \"detectionProcessingErrors\": {},\n \"markupResult\": null,\n \"output\": {\n \"FACE\": [\n {\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"d4b4a6e870c1378a3bc85a234b6f4c881f81a14edcf858d6d256d04ad40bc175\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"confidence\": 5,\n \"trackProperties\": {},\n \"exemplar\": {\n 
\"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n },\n \"detections\": [\n {\n \"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n }\n ]\n }\n ]\n }\n ]\n }\n }\n ]\n}\n\n\n\nA user can click the \"Cancel\" button to attempt to cancel the execution of a job before it completes. Note that if a service is currently processing part of a job, for example, a video segment that's part of a larger video file, then it will continue to process that part of the job until it completes or there is an error. The act of cancelling a job will prevent other parts of that job from being processed. Thus, if the \"Cancel\" button is clicked late into the job execution, or if each part of the job is already being processed by services executing in parallel, it may have no effect. Also, if the video segment size is set to a very large number, and the detection being performed is slow, then cancelling a job could take awhile.\n\n\nA user can click the \"Resubmit\" button to execute a job again. The new job execution will retain the same job id and all generated artifacts, marked up media, and detection objects will be replaced with the new results. The results of the previous job execution will no longer be available. Note that the user has the option to change the job priority when resubmitting a job.\n\n\nYou can view the results of any Media Markup by clicking on the \"Media\" button for that job. This view will display the path of the source medium and the marked up output path of any media processed using a pipeline that contains a markup action. Clicking an image will display a popup with the marked up image. You cannot view a preview for marked up videos. In any case, the marked up data can be downloaded to the machine running the web browser by clicking the \"Download\" button.\n\n\n\n\nCreate Custom Pipelines\n\n\nA pipeline consists of a series of tasks executed sequentially. A task consists of a single action or a set of two or more actions performed in parallel. An action is the execution of an algorithm. The ability to arrange tasks and actions in various ways provides a great deal of flexibility when creating pipelines. Users may combine pre-existing tasks in different ways, or create new tasks based on the pre-existing actions.\n\n\nSelecting \"Pipelines\" from the \"Configuration\" dropdown menu in the top menu bar brings up the Pipeline Creation View, which enables users to create new pipelines. To create a new action, the user can scroll to the \"Create A New Action\" section of the page and select the desired algorithm from the \"Select an Algorithm\" dropdown menu:\n\n\n\n\nSelecting an algorithm will bring up a scrollable table of properties associated with the algorithm, including each property's name, description, data type, and an editable field allowing the user to set a custom value. The user may enter values for only those properties that they wish to change; any property value fields left blank will result in default values being used for those properties. 
For example, a custom action may be created based on the OpenCV face detection component to scan for faces equal to or exceeding a size of 100x100 pixels.\n\n\nWhen done editing the property values, the user can click the \"Create Action\" button, enter a name and description for the action (both are required), and then click the \"Create\" button. The action will then be listed in the \"Available Actions\" table and also in the \"Select an Action\" dropdown menu used for task creation.\n\n\n\n\nTo create a new task, the user can scroll to the \"Create A New Task\" section of the page:\n\n\n\n\nThe user can use the \"Select an Action\" dropdown menu to select the desired action and then click \"Add Action to Task\". The user can follow this procedure to add additional actions to the task, if desired. Clicking on the \"Remove\" button next to an added action will remove it from the task. When the user is finished adding actions the user can click \"Create Task\", enter a name and description for the task (both are required), and then click the \"Create\" button. The task will be listed in the \"Available Tasks\" table as well as in the \"Select a Task\" dropdown menu used for pipeline creation.\n\n\n\n\nTo build a new pipeline, the user can scroll down to the \"Create A New Pipeline\" section of the page:\n\n\n\n\nThe user can use the \"Select a Task\" dropdown menu to select the first task and then click \"Add Task to Pipeline\". The user can follow this procedure to add additional tasks to the pipeline, if desired. Clicking on the \"Remove\" button next to an added task will remove it from the pipeline. When the user is finished adding tasks the user can click \"Create Pipeline\", enter a name and description for the pipeline (both are required), and then click the \"Create\" button. The pipeline will be listed in the \"Available Pipelines\" table.\n\n\n\n\nAll pipelines successfully created in this view will also appear in the pipeline drop down selection menus on any job creation page:\n\n\n\n\n\n\nNOTE: Pipeline, task, and action names are case-insensitive. All letters will be converted to uppercase.\n\n\n\n\nLogs\n\n\nThis page allows a user to view the various log files that are generated by system processes running on the various nodes in the OpenMPF cluster. A log file can be selected by first selecting a host from the \"Available Hosts\" drop-down and then selecting a log file from the \"Available Logs\" drop-down. The information in the log can be filtered for display based on the following log levels: ALL, TRACE, DEBUG, INFO, WARN, ERROR, or FATAL. Choosing a successive log level displays all information at that level and levels below (e.g., choosing WARN will cause all WARN, INFO, DEBUG, and TRACE information to be displayed, but will filter out ERROR and FATAL information).\n\n\n\n\nIn general, all services of the same component type running on the same node write log messages to the same file. For example, all OCV face detection services on somehost-7-mpfd2 write log messages to the same \"ocv-face-detection\" log file. All OCV face detection services on somehost-7-mpfd3 write log messages to a different \"ocv-face-detection\" log file.\n\n\nNote that only the master node will have the \"workflow-manager\" log. This is because the Workflow Manager only runs on the master node.\n\n\nThe \"node-manager-startup\" and \"node-manager\" logs will appear for every node in a non-Docker OpenMPF cluster. 
The \"node-manager-startup\" log captures information about the nodemanager startup process, such as if any errors occurred. The \"node-manager\" log captures information about node manager execution, such as starting and stopping services.\n\n\nThe \"detection\" log captures information about initializing C++ detection components and how they handle job request and response messages.\n\n\nProperties Settings\n\n\nThis page allows a user to view the various OpenMPF properties configured automatically or by an admin user:\n\n\n\n\nStatistics\n\n\nThe \"Jobs\" tab on this page allows a user to view a bar graph representing the time it took to execute the longest running job for a given pipeline. Pipelines that do not have bars have not been used to run any jobs yet. Job statistics are preserved when the Workflow Manager is restarted.\n\n\n\n\nFor example, the DLIB FACE DETECTION PIPELINE was run twice. Note that the Y-axis in the bar graph has a logarithmic scale. Hovering the mouse over any bar in the graph will show more information. Information about each pipeline is listed below the graph.\n\n\nThe \"Processes\" tab on this page allows a user to view a table with information about the runtime of various internal Workflow Manager operations. The \"Count\" field represents the number of times each operation was run. The min, max, and mean are calculated over the set of times each operation was performed. Runtime information is reset when the Workflow Manager is restarted.\n\n\n\n\nREST API\n\n\nThis page allows a user to try out the \nvarious REST API endpoints\n provided by the Workflow Manager. It is intended to serve as a learning tool for technical users who wish to design and build systems that interact with the OpenMPF.\n\n\nAfter selecting a functional category, such as \"meta\", \"jobs\", \"statistics\", \"nodes\", \"pipelines\", or \"system-message\", each REST endpoint for that category is shown in a list. Selecting one of them will cause it to expand and reveal more information about the request and response structures. If the request takes any parameters then a section will appear that allows the user to manually specify them.\n\n\n\n\nIn the example above, the \"/rest/jobs/{id}\" endpoint was selected. It takes a required \"id\" parameter that corresponds to a previously run job and returns a JSON representation of that job's information. The screenshot below shows the result of specifying an \"id\" of \"1\", providing the \"mpf\" user credentials when prompted, and then clicking the \"Try it out!\" button:\n\n\n\n\nThe HTTP response information is shown below the \"Try it out!\" button. Note that the structure of the \"Response Body\" is the same as the response model shown in the \"Response Class\" directly underneath the \"/rest/jobs/{id}\" label.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. 
All Rights Reserved.\n\n\nINFO:\n This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.\n\n\n\nGeneral\n\n\nThe Open Media Processing Framework (OpenMPF) can be used in three ways:\n\n\n\n\nThrough the OpenMPF Web user interface (UI)\n\n\nThrough the \nREST API endpoints\n\n\nThrough the \nCLI Runner\n\n\n\n\nAccessing the Web UI\n\n\nOn the server hosting the Open Media Processing Framework, the Web UI is accessible at http://localhost:8080. To access it from other machines, substitute the hostname or IP address of the master node server in place of \"localhost\".\n\n\nThe OpenMPF user interface was designed and tested for use with Chrome and Firefox. It has not been tested with other browsers. Attempting to use an unsupported browser will result in a warning.\n\n\nLogging In\n\n\nThe OpenMPF Web UI requires user authentication and provides two default accounts: \"mpf\" and \"admin\". The password for the \"mpf\" user is \"mpf123\". These accounts are used to assign user or admin roles for OpenMPF cluster management. Note that an administrator can remove these accounts and/or add new ones using a command line tool. Refer to the \nAdmin Guide\n for features available to an admin user.\n\n\n\n\nThe landing page for a user is the Job Status page:\n\n\n\n\nLogging out\n\n\nTo log out, a user can click the down arrow associated with the user icon at the top right hand corner of the page and then select \"Logout\":\n\n\n\n\nUser (Non-Admin) Features\n\n\nThe remainder of this document will describe the features available to a non-admin user.\n\n\nCreating Workflow Manager Jobs\n\n\nA \"job\" consists of a set of image, video, or audio files and a set of exploitation algorithms that will operate on those files. A job is created by assigning input media file(s) to a pipeline. A pipeline specifies the order in which processing steps are performed. Each step consists of a single task and each task consists of one or more actions which may be performed in parallel. The following sections describe the UI views associated with the different aspects of job creation and job execution.\n\n\nCreate Job\n\n\nThis is the primary page for creating jobs. Creating a job consists of uploading and selecting files as well as a pipeline and job priority.\n\n\n\n\nUploading Files\n\n\nSelecting a directory in the File Manager will display all files in that directory. The user can use previously uploaded files, or choose from the icon bar at the bottom of the panel:\n\n\n Create New Folder\n\n Add Local Files\n\n Upload from URL\n\n Refresh\n\n\nNote that the first three options are only available if the \"remote-media\" directory or one of its subdirectories is selected. That directory resides in the OpenMPF share directory. The full path is shown in the footer of the File Manager section.\n\n\nClicking the \"Add Local Files\" icon will display a file browser dialog so that the user can select and upload one or more files from their local machine. The files will be uploaded to the selected directory. The upload progress dialog will display a preview of each file (if possible) and whether or not each file is uploaded successfully.\n\n\nClicking the \"Create New Folder\" icon will allow the user to create a new directory within the one currently selected. 
If the user has selected \"remote-media\", then adding a directory called \"Test Data\" will place it within \"remote-media\". \"Test Data\" will appear as a subdirectory in the directory tree shown in the web UI. If the user then clicks on \"Test Data\" and then the \"Add Local Files\" button the user can upload files to that specific directory. In the screenshot below, \"lena.png\" has been uploaded to the parent \"remote-media\" directory.\n\n\n\n\nClicking the \"Upload from URL\" icon enables the user to specify URLs pointing to remote media. Each URL must appear on a new line. Note that if a URL to a video is submitted then it must be a direct link to the video file. Specifying a URL to a YouTube HTML page, for example, will not work.\n\n\n\n\nClicking the \"Refresh\" icon updates the displayed file tree from the file system. Use this if an external process has added or removed files to or from the underlying file system.\n\n\nCreating Jobs\n\n\nCreating a job consists of selecting files as well as a pipeline and job priority.\n\n\n\n\nFiles are selected by first clicking the name of a directory to populate the files table in the center of the UI and then clicking the checkbox next to the file. Multiple files can be selected, including files from different directories. Also, the contents of an entire directory, and its subdirectories, can be selected by clicking the checkbox next to the parent directory name. To review which files have been selected, click the \"View\" button shown to the right of the \"# Files\" indicator. If there are many files in a directory, you may need to page through the directory using the page number buttons at the bottom of the center pane.\n\n\nYou can remove a file from the selected files by clicking on the red \"X\" for the individual file. You can also remove multiple files by first selecting the files using the checkboxes and then clicking on the \"Remove Checked\" button.\n\n\n\n\nThe media properties can be adjusted for individual files by clicking on the \"Set Properties\" button for that file. You can modify the properties of a group of files by clicking on the \"Set properties for Checked\" after selecting multiple files.\n\n\n\n\nAfter files have been selected it's time to assign a pipeline and job priority. The \"Select a pipeline and job priority\" section is located on the right side of the screen. Clicking on the down-arrow on the far right of the \"Select a pipeline\" area displays a drop-down menu containing the available pipelines. Click on the desired pipeline to select it. Existing pipelines provided with the system are listed in the Default Pipelines section of this document.\n\n\n\"Select job priority\" is immediately below \"Select a pipeline\" and has a similar drop-down menu. Clicking on the down-arrow on the right hand side of the \"Select job priority\" area displays the drop-down menu of available priorities. Clicking on the desired priority selects it. Priority 4 is the default value used if no priority is selected by the user. Priority 0 is the lowest priority, and priority 9 is the highest priority. When a job is executed it's divided into tasks that are each executed by a component service running on one of the nodes in the OpenMPF cluster. Each service executes tasks with the highest priority first. Note that a service will first complete the task it's currently processing before moving on to the next task. 
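For REST clients, the same choices (files, pipeline, priority) are expressed as fields of the job creation request. The sketch below is illustrative only: it assumes the Python requests library, and the field names ("pipelineName", "media", "priority") are assumptions based on the job creation endpoint shown on the REST API page described later in this guide, so verify them against your version's Swagger documentation.

```python
# Hedged sketch: submit a job with an explicit priority via the REST API.
# The request field names are assumptions taken from the job creation
# schema in the Swagger UI; check your deployment before relying on them.
import requests

response = requests.post(
    'http://localhost:8080/rest/jobs',
    json={
        'pipelineName': 'OCV FACE DETECTION PIPELINE',
        'priority': 7,  # 0 is the lowest, 9 the highest; 4 is the default
        'media': [{'mediaUri': 'file:///opt/mpf/share/remote-media/faces.jpg'}],
    },
    auth=('mpf', 'mpf123'))  # default non-admin credentials
print(response.json())
```

However the job is submitted, the same queueing rules apply.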
Thus, a long-running low-priority task may delay the execution of a high-priority task.\n\n\nAfter files have been selected and a pipeline and priority are assigned, clicking on the \"Create Job\" icon will start the job. When the job starts, the user will be shown the \"Job Status\" view.\n\n\nJob Status\n\n\nThe Job Status page displays a summary of the status for all jobs run by any user in the past. The current status and progress of any running job can be monitored from this view, which is updated automatically.\n\n\n\n\nWhen a job is COMPLETE a user can view the generated JSON output object data by clicking the \"Output Objects\" button for that job. A new tab/window will open with the detection output. The detection object output displays a formatted JSON representation of the detection results.\n\n\n{\n \"jobId\": \"localhost-11\",\n \"errors\": [],\n \"warnings\": [],\n \"objectId\": \"ef027349-8e6a-4472-a459-eba9463787f3\",\n \"pipeline\": {\n \"name\": \"OCV FACE DETECTION PIPELINE\",\n \"description\": \"Performs OpenCV face detection.\",\n \"tasks\": [\n {\n \"actionType\": \"DETECTION\",\n \"name\": \"OCV FACE DETECTION TASK\",\n \"description\": \"Performs OpenCV face detection.\",\n \"actions\": [\n {\n \"algorithm\": \"FACECV\",\n \"name\": \"OCV FACE DETECTION ACTION\",\n \"description\": \"Executes the OpenCV face detection algorithm using the default parameters.\",\n \"properties\": {}\n }\n ]\n }\n ]\n },\n \"priority\": 4,\n \"siteId\": \"mpf1\",\n \"externalJobId\": null,\n \"timeStart\": \"2021-09-07T20:57:01.073Z\",\n \"timeStop\": \"2021-09-07T20:57:02.946Z\",\n \"status\": \"COMPLETE\",\n \"algorithmProperties\": {},\n \"jobProperties\": {},\n \"environmentVariableProperties\": {},\n \"media\": [\n {\n \"mediaId\": 3,\n \"path\": \"file:///opt/mpf/share/remote-media/faces.jpg\",\n \"sha256\": \"184e9b04369248ae8a97ec2a20b1409a016e2895686f90a2a1910a0bef763d56\",\n \"mimeType\": \"image/jpeg\",\n \"mediaType\": \"IMAGE\",\n \"length\": 1,\n \"mediaMetadata\": {\n \"FRAME_HEIGHT\": \"1275\",\n \"FRAME_WIDTH\": \"1920\",\n \"MIME_TYPE\": \"image/jpeg\"\n },\n \"mediaProperties\": {},\n \"status\": \"COMPLETE\",\n \"detectionProcessingErrors\": {},\n \"markupResult\": null,\n \"output\": {\n \"FACE\": [\n {\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"d4b4a6e870c1378a3bc85a234b6f4c881f81a14edcf858d6d256d04ad40bc175\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"confidence\": 5,\n \"trackProperties\": {},\n \"exemplar\": {\n \"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n },\n \"detections\": [\n {\n \"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n }\n ]\n }\n ]\n }\n ]\n }\n }\n ]\n}\n\n\n\nA user can click the \"Cancel\" button to attempt to cancel the execution of a job before it completes. 
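Cancellation is also available to REST clients. The snippet below is a hedged sketch: it assumes the Python requests library, and the endpoint path is an assumption based on the job-management endpoints listed on the REST API page, so confirm it in the Swagger UI for your version.

```python
# Hedged sketch: request cancellation of a running job. The
# "/rest/jobs/{id}/cancel" path is an assumption; verify it in Swagger.
import requests

job_id = 1  # the job ID shown on the Job Status page
response = requests.post(
    f'http://localhost:8080/rest/jobs/{job_id}/cancel',
    auth=('mpf', 'mpf123'))
print(response.status_code)  # cancellation is best-effort, as explained next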
Note that if a service is currently processing part of a job, for example, a video segment that's part of a larger video file, then it will continue to process that part of the job until it completes or there is an error. The act of cancelling a job will prevent other parts of that job from being processed. Thus, if the \"Cancel\" button is clicked late into the job execution, or if each part of the job is already being processed by services executing in parallel, it may have no effect. Also, if the video segment size is set to a very large number, and the detection being performed is slow, then cancelling a job could take a while.\n\n\nA user can click the \"Resubmit\" button to execute a job again. The new job execution will retain the same job id, and all generated artifacts, marked up media, and detection objects will be replaced with the new results. The results of the previous job execution will no longer be available. Note that the user has the option to change the job priority when resubmitting a job.\n\n\nYou can view the results of any Media Markup by clicking on the \"Media\" button for that job. This view will display the path of the source medium and the marked up output path of any media processed using a pipeline that contains a markup action. Clicking an image will display a popup with the marked up image. You cannot view a preview for marked up videos. In any case, the marked up data can be downloaded to the machine running the web browser by clicking the \"Download\" button.\n\n\n\n\nCreate Custom Pipelines\n\n\nA pipeline consists of a series of tasks executed sequentially. A task consists of a single action or a set of two or more actions performed in parallel. An action is the execution of an algorithm. The ability to arrange tasks and actions in various ways provides a great deal of flexibility when creating pipelines. Users may combine pre-existing tasks in different ways, or create new tasks based on the pre-existing actions.\n\n\nSelecting \"Pipelines\" from the \"Configuration\" dropdown menu in the top menu bar brings up the Pipeline Creation View, which enables users to create new pipelines. To create a new action, the user can scroll to the \"Create A New Action\" section of the page and select the desired algorithm from the \"Select an Algorithm\" dropdown menu:\n\n\n\n\nSelecting an algorithm will bring up a scrollable table of properties associated with the algorithm, including each property's name, description, data type, and an editable field allowing the user to set a custom value. The user may enter values for only those properties that they wish to change; any property value fields left blank will result in default values being used for those properties. For example, a custom action may be created based on the OpenCV face detection component to scan for faces equal to or exceeding a size of 100x100 pixels.\n\n\nWhen done editing the property values, the user can click the \"Create Action\" button, enter a name and description for the action (both are required), and then click the \"Create\" button. The action will then be listed in the \"Available Actions\" table and also in the \"Select an Action\" dropdown menu used for task creation.\n\n\n\n\nTo create a new task, the user can scroll to the \"Create A New Task\" section of the page:\n\n\n\n\nThe user can use the \"Select an Action\" dropdown menu to select the desired action and then click \"Add Action to Task\". The user can follow this procedure to add additional actions to the task, if desired. 
Clicking on the \"Remove\" button next to an added action will remove it from the task. When the user is finished adding actions the user can click \"Create Task\", enter a name and description for the task (both are required), and then click the \"Create\" button. The task will be listed in the \"Available Tasks\" table as well as in the \"Select a Task\" dropdown menu used for pipeline creation.\n\n\n\n\nTo build a new pipeline, the user can scroll down to the \"Create A New Pipeline\" section of the page:\n\n\n\n\nThe user can use the \"Select a Task\" dropdown menu to select the first task and then click \"Add Task to Pipeline\". The user can follow this procedure to add additional tasks to the pipeline, if desired. Clicking on the \"Remove\" button next to an added task will remove it from the pipeline. When the user is finished adding tasks the user can click \"Create Pipeline\", enter a name and description for the pipeline (both are required), and then click the \"Create\" button. The pipeline will be listed in the \"Available Pipelines\" table.\n\n\n\n\nAll pipelines successfully created in this view will also appear in the pipeline drop down selection menus on any job creation page:\n\n\n\n\n\n\nNOTE: Pipeline, task, and action names are case-insensitive. All letters will be converted to uppercase.\n\n\n\n\nLogs\n\n\nThis page allows a user to view the various log files that are generated by system processes running on the various nodes in the OpenMPF cluster. A log file can be selected by first selecting a host from the \"Available Hosts\" drop-down and then selecting a log file from the \"Available Logs\" drop-down. The information in the log can be filtered for display based on the following log levels: ALL, TRACE, DEBUG, INFO, WARN, ERROR, or FATAL. Choosing a successive log level displays all information at that level and levels below (e.g., choosing WARN will cause all WARN, INFO, DEBUG, and TRACE information to be displayed, but will filter out ERROR and FATAL information).\n\n\n\n\nIn general, all services of the same component type running on the same node write log messages to the same file. For example, all OCV face detection services on somehost-7-mpfd2 write log messages to the same \"ocv-face-detection\" log file. All OCV face detection services on somehost-7-mpfd3 write log messages to a different \"ocv-face-detection\" log file.\n\n\nNote that only the master node will have the \"workflow-manager\" log. This is because the Workflow Manager only runs on the master node.\n\n\nThe \"node-manager-startup\" and \"node-manager\" logs will appear for every node in a non-Docker OpenMPF cluster. The \"node-manager-startup\" log captures information about the nodemanager startup process, such as if any errors occurred. The \"node-manager\" log captures information about node manager execution, such as starting and stopping services.\n\n\nThe \"detection\" log captures information about initializing C++ detection components and how they handle job request and response messages.\n\n\nProperties Settings\n\n\nThis page allows a user to view the various OpenMPF properties configured automatically or by an admin user:\n\n\n\n\nStatistics\n\n\nThe \"Jobs\" tab on this page allows a user to view a bar graph representing the time it took to execute the longest running job for a given pipeline. Pipelines that do not have bars have not been used to run any jobs yet. 
Job statistics are preserved when the Workflow Manager is restarted.\n\n\n\n\nFor example, the DLIB FACE DETECTION PIPELINE was run twice. Note that the Y-axis in the bar graph has a logarithmic scale. Hovering the mouse over any bar in the graph will show more information. Information about each pipeline is listed below the graph.\n\n\nThe \"Processes\" tab on this page allows a user to view a table with information about the runtime of various internal Workflow Manager operations. The \"Count\" field represents the number of times each operation was run. The min, max, and mean are calculated over the set of times each operation was performed. Runtime information is reset when the Workflow Manager is restarted.\n\n\n\n\nREST API\n\n\nThis page allows a user to try out the \nvarious REST API endpoints\n provided by the Workflow Manager. It is intended to serve as a learning tool for technical users who wish to design and build systems that interact with the OpenMPF.\n\n\nAfter selecting a functional category, such as \"meta\", \"jobs\", \"statistics\", \"nodes\", \"pipelines\", or \"system-message\", each REST endpoint for that category is shown in a list. Selecting one of them will cause it to expand and reveal more information about the request and response structures. If the request takes any parameters then a section will appear that allows the user to manually specify them.\n\n\n\n\nIn the example above, the \"/rest/jobs/{id}\" endpoint was selected. It takes a required \"id\" parameter that corresponds to a previously run job and returns a JSON representation of that job's information. The screenshot below shows the result of specifying an \"id\" of \"1\", providing the \"mpf\" user credentials when prompted, and then clicking the \"Try it out!\" button:\n\n\n\n\nThe HTTP response information is shown below the \"Try it out!\" button. Note that the structure of the \"Response Body\" is the same as the response model shown in the \"Response Class\" directly underneath the \"/rest/jobs/{id}\" label.", "title": "User Guide" }, { @@ -277,7 +277,7 @@ }, { "location": "/OpenID-Connect-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract,\nand is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023\nThe MITRE Corporation. All Rights Reserved.\n\n\nOpenID Connect Overview\n\n\nWorkflow Manager can use an OpenID Connect (OIDC) provider to handle authentication for users of\nthe web UI and clients of the REST API.\n\n\nConfiguration\n\n\nIn order to use OIDC, Workflow Manager must first be registered with OIDC provider. The exact\nprocess for this varies by provider. As part of the registration process, a client ID and client\nsecret should be provided. Those values should be set in the \nOIDC_CLIENT_ID\n and\n\nOIDC_CLIENT_SECRET\n environment variables. During the registration process the provider will\nlikely request a redirect URI. The redirect URI should be set to the base URI for Workflow Manager\nwith \n/login/oauth2/code/provider\n appended.\n\n\nThe documentation for the OIDC provider should specify the base URI a client should use to\nauthenticate users. The URI should be set in the \nOIDC_ISSUER_URI\n environment variable. 
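One quick way to check that value, described next, is to request the provider's discovery document. A minimal sketch, assuming only the Python requests library and a placeholder issuer value:

```python
# Minimal sanity check: a correctly configured issuer serves its OIDC
# discovery document at the well-known path. The issuer below is a
# placeholder, not a real provider.
import requests

issuer = 'https://provider.example.com/realms/myrealm'  # value of OIDC_ISSUER_URI
resp = requests.get(issuer.rstrip('/') + '/.well-known/openid-configuration')
resp.raise_for_status()
print(resp.json()['token_endpoint'])  # standard fields confirm a valid document
```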
To verify\nthe URI is correct, check that the JSON discovery document is returned when sending an HTTP GET\nrequest to the URI with \n/.well-known/openid-configuration\n appended.\n\n\nAfter a user or REST client authenticates with the OIDC provider, Workflow Manager will check for a\nclaim with a specific value to determine if the user is authorized to access Workflow Manager and\nwith what role. The \nOIDC_USER_CLAIM_NAME\n and \nOIDC_ADMIN_CLAIM_NAME\n environment variables\nspecify the name of the claim that must be present. The \nOIDC_USER_CLAIM_VALUE\n and\n\nOIDC_ADMIN_CLAIM_VALUE\n environment variables specify the required value of the claim.\n\n\nIf Workflow Manager is configured to use OIDC, then the component services must also be configured\nto use OIDC. The component services will use OIDC if either the \nOIDC_JWT_ISSUER_URI\n or\n\nOIDC_ISSUER_URI\n environment variables are set on the component service. When a component service\nis configured to use OIDC, the \nOIDC_CLIENT_ID\n and \nOIDC_CLIENT_SECRET\n environment variables are\nused to specify the client ID and secret that will be used during component registration.\n\n\nWorkflow Manager Environment Variables\n\n\n\n\nOIDC_ISSUER_URI\n (Required): URI for the OIDC provider that will be used to authenticate users\n through the web UI. If \nOIDC_JWT_ISSUER_URI\n is not set, \nOIDC_ISSUER_URI\n will also be used to\n authenticate REST clients. The OIDC configuration endpoint must exist at the value of\n \nOIDC_ISSUER_URI\n with \n/.well-known/openid-configuration\n appended.\n\n\nOIDC_JWT_ISSUER_URI\n (Optional): Works the same way as \nOIDC_ISSUER_URI\n, except that the\n configuration will only be used to authenticate REST clients. When not provided,\n \nOIDC_ISSUER_URI\n will be used. This would be used when the authentication provider's endpoint\n for user authentication is different from the endpoint for authentication of REST clients.\n\n\nOIDC_CLIENT_ID\n (Required): The client ID that Workflow Manager will use to authenticate with\n the OIDC provider.\n\n\nOIDC_CLIENT_SECRET\n (Required): The client secret Workflow Manager will use to authenticate\n with the OIDC provider.\n\n\nOIDC_USER_CLAIM_NAME\n (Optional): Specifies the name of the claim from the authentication token\n that is required for a user or REST client to be granted access to Workflow Manager with the\n \nUSER\n role.\n\n\nOIDC_USER_CLAIM_VALUE\n (Optional): Specifies the required value of the claim specified in\n \nOIDC_USER_CLAIM_NAME\n. If the claim is a list, only one of the values in the list must match.\n\n\nOIDC_ADMIN_CLAIM_NAME\n (Optional): Specifies the name of the claim from the authentication token\n that is required for a user or REST client to be granted access to Workflow Manager with the\n \nADMIN\n role.\n\n\nOIDC_ADMIN_CLAIM_VALUE\n (Optional): Specifies the required value of the claim specified in\n \nOIDC_ADMIN_CLAIM_NAME\n. If the claim is a list, only one of the values in the list must match.\n\n\nOIDC_SCOPES\n (Optional): A comma-separated list of the scopes to be requested from the OIDC\n provider when authenticating a user through the web UI. The OIDC specification requires one of\n the scopes to be \nopenid\n, so if this environment variable is omitted or \nopenid\n is not in the\n list, it will be automatically added.\n\n\nOIDC_USER_NAME_ATTR\n (Optional): The name of the claim containing the user name. 
Defaults to\n \nsub\n.\n\n\nOIDC_REDIRECT_URI\n (Optional): Specifies the URL the user's browser will be redirected to after\n logging in to the OIDC provider. If provided, the URL must end in \n/login/oauth2/code/provider\n.\n This would generally be used when the host name that Workflow Manager uses to connect to the\n OIDC provider is different from the OIDC provider's public host name. The value can use the\n \ntemplate variables supported by Spring.\n\n\n\n\nComponent Environment Variables\n\n\n\n\nOIDC_JWT_ISSUER_URI\n or \nOIDC_ISSUER_URI\n (Required): URI for the OIDC provider that will be used\n to authenticate REST clients. The OIDC configuration endpoint must exist at the value of this\n environment variable with \n/.well-known/openid-configuration\n appended. If both environment\n variables are provided, \nOIDC_JWT_ISSUER_URI\n will be used. If \nOIDC_JWT_ISSUER_URI\n is set on\n Workflow Manager, it should be set to the same value on the component services. If\n \nOIDC_JWT_ISSUER_URI\n is not set on Workflow Manager, \nOIDC_ISSUER_URI\n should be set to the\n same value on Workflow Manager and the component services. When either environment variable is\n set, the \nWFM_USER\n and \nWFM_PASSWORD\n environment variables are ignored.\n\n\nOIDC_CLIENT_ID\n (Required): The client ID that the component service will use when registering\n the component with Workflow Manager.\n\n\nOIDC_CLIENT_SECRET\n (Required): The client secret that the component service will use when\n registering the component with Workflow Manager.\n\n\n\n\nExample with Keycloak\n\n\nThe following example explains how to test Workflow Manager with Keycloak as the OIDC provider.\nIt is just an example and should not be used in production.\n\n\n1. Get the Docker gateway IP address by running the command below. It will be used in later steps.\n\n\ndocker network inspect --format '{{(index .IPAM.Config 0).Gateway}}' bridge\n\n\n\n2. Start Keycloak in development mode using the command below. Do not start Workflow Manager yet.\n The values for the OIDC environment variables are dependent on how you set up Keycloak in the\n following steps.\n\n\ndocker run -p 9090:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin \\\n quay.io/keycloak/keycloak:21.1.1 start-dev\n\n\n\n3. Go to \nhttp://localhost:9090/admin\n in a browser and login with username \nadmin\n and\n password \nadmin\n.\n\n\n4. Create a new realm:\n\n\n\n\nCreate a new realm using the drop down box in upper left that says \"master\".\n\n\nUse the realm name you entered and the gateway IP address from step 1 to set Workflow\n Manager and the component services' \nOIDC_ISSUER_URI\n environment variable to:\n \nhttp://:9090/realms/\n\n\n\n\n5. 
Create the client that Workflow Manager will use to authenticate users:\n\n\n\n\nUse the \"Clients\" link in the left menu to create a new client.\n\n\nGeneral Settings:\n\n\nThe \"Client type\" needs to be set to \"OpenID Connect\".\n\n\nEnter a \"Client ID\".\n\n\nSet Workflow Manager's \nOIDC_CLIENT_ID\n environment variable to the client ID you entered.\n\n\n\n\n\n\nCapability config:\n\n\n\"Client authentication\" must be enabled.\n\n\n\"Standard flow\" must be enabled.\n\n\n\"Service accounts roles\" must be enabled so that Workflow Manager can include an OAuth token\n in job completion callbacks and when communicating with TiesDb.\n\n\n\n\n\n\nLogin settings:\n\n\nSet \"Valid redirect URIs\" to\n \nhttp://localhost:8080/login/oauth2/code/provider\n\n\nSet \"Valid post logout redirect URIs\" to \nhttp://localhost:8080\n\n\n\n\n\n\nSet Workflow Manager's \nOIDC_CLIENT_SECRET\n environment variable to the \"Client secret\" in the\n \"Credentials\" tab.\n\n\n\n\n6. Create a Keycloak role that maps to a Workflow Manager role:\n\n\n\n\nUse the \"Realm roles\" link in the left menu to create a new role.\n\n\nIf the Keycloak role should make the user an \nADMIN\n in Workflow Manager, set Workflow\n Manager's \nOIDC_ADMIN_CLAIM_VALUE\n to the role name you just entered. If it should be a\n \nUSER\n, then set the \nOIDC_USER_CLAIM_VALUE\n environment variable.\n\n\nOnly one of \nOIDC_ADMIN_CLAIM_VALUE\n and \nOIDC_USER_CLAIM_VALUE\n need to be set. If you would\n like to set up both roles repeat this step.\n\n\n\n\n7. Include the Keycloak role(s) in the access token:\n\n\n\n\nIn the \"Client scopes\" menu add a mapper to the \"roles\" scope.\n\n\nUse the \"groups\" predefined mapper.\n\n\nThe default name \"Token Claim Name\" is \"groups\". This can be changed.\n\n\nIf you created an \nADMIN\n role in step 6 set \nOIDC_ADMIN_CLAIM_NAME\n to the value in\n \"Token Claim Name\". If you created a \nUSER\n role, do the same for \nOIDC_USER_CLAIM_NAME\n.\n\n\n\n\n8. Optionally, set Workflow Manager's \nOIDC_USER_NAME_ATTR\n to \npreferred_username\n to display the\n user name instead of the ID.\n\n\n9. Create Users:\n\n\n\n\nAfter creating a user, set a password in the \"Credentials\" tab.\n\n\nUse the \"Role mapping\" tab to add the user to one of roles created in step 6.\n\n\n\n\n10. Add Component Registration REST client:\n\n\n\n\nUse the \"Clients\" menu to create a new client.\n\n\nCapability config:\n\n\nThe client needs to have \"Client authentication\" and \"Service accounts roles\" enabled.\n\n\nUse the \"Service account roles\" tab to add the client to one of the roles created in step 6.\n\n\n\n\n\n\nSet the component services' \nWFM_USER\n environment variable to the client ID you entered.\n\n\nSet component services' \nWFM_PASSWORD\n environment variable to the \"Client secret\" in the\n \"Credentials\" tab.\n\n\n\n\n11. Add external REST clients:\n\n\n\n\nUse the \"Clients\" menu to create a new client.\n\n\nCapability config:\n\n\nThe client needs to have \"Client authentication\" and \"Service accounts roles\" enabled.\n\n\nUse the \"Service account roles\" tab to add the client to one of the roles created in step 6.\n\n\n\n\n\n\n\n\n12. Start Workflow Manager. When you initially navigate to Workflow Manager, you will be\n redirected to the Keycloak log in page. 
You can log in using the users created in step 9.\n\n\nTest REST authentication\n\n\nUsing the Docker gateway IP address from step 1, the client ID and secret from step 11, and the\nrealm name from step 4, run the following command:\n\n\ncurl -d grant_type=client_credentials -u ':' 'http://:9090/realms//protocol/openid-connect/token'\n\n\n\nThe response JSON will contain a token in the \n\"access_token\"\n property. That token needs to be\nincluded as a bearer token in REST requests to Workflow Manager. For example:\n\n\ncurl -H \"Authorization: Bearer \" http://localhost:8080/rest/actions\n\n\n\nUse OAuth when sending job complete callbacks and when posting to TiesDb.\n\n\n1. Create a client for the callback receiver or TiesDb:\n\n\n\n\nUse the \"Clients\" menu to create a new client.\n\n\nCapability config:\n\n\nThe client needs to have \"Client authentication\" and \"Service accounts roles\" enabled.\n\n\n\n\n\n\nConfigure the callback receiver or TiesDb with the client ID and secret.\n\n\n\n\n2. Create a client role:\n\n\n\n\nUse the \"Roles\" tab to add a role to the client that was just created.\n\n\n\n\n3. Add the role to the Workflow Manager's client:\n\n\n\n\nGo to the client details page for the client created for Workflow Manager.\n\n\nGo to the \"Service accounts roles\" tab.\n\n\nClick \"Assign role\".\n\n\nChange \"Filter by realm roles\" to \"Filter by clients\".\n\n\nAssign the role created in step 2.\n\n\n\n\n4. Run jobs with the \nCALLBACK_USE_OIDC\n or \nTIES_DB_USE_OIDC\n job properties set to \nTRUE\n.\n\n\nTest callback authentication\n\n\nThe Python script below can be used to test callback authentication. Before running the script you\nmust run \npip install Flask-pyoidc==3.14.2\n. To run the script, you must set the \nOIDC_ISSUER_URI\n,\n\nOIDC_CLIENT_ID\n, and \nOIDC_CLIENT_SECRET\n environment variables. Note that the script configures\nthe \nFlask-pyoidc\n package to authenticate Web users, as required by the package, but we are only\ntesting the authentication of REST clients.\n\n\nOnce the script is running, a user can submit a job via the Workflow Manager Swagger page with the\nfollowing fields to test callbacks:\n\n\n{\n \"callbackMethod\": \"POST\",\n \"callbackURL\": \"http://localhost:5000/api\",\n \"jobProperties\": {\n \"CALLBACK_USE_OIDC\": \"TRUE\"\n }\n}\n\n\n\nimport json\nimport logging\nimport os\n\nfrom flask import Flask, jsonify\nfrom flask_pyoidc.provider_configuration import ProviderConfiguration, ClientMetadata\nfrom flask_pyoidc import OIDCAuthentication\n\nlogging.basicConfig(level=logging.INFO)\n\napp = Flask(__name__)\napp.config.update(\n OIDC_REDIRECT_URI='http://localhost:5000/redirect_uri',\n SECRET_KEY='secret',\n DEBUG=True\n)\n\nauth = OIDCAuthentication({\n 'default': ProviderConfiguration(\n os.getenv('OIDC_ISSUER_URI'),\n client_metadata=ClientMetadata(\n os.getenv('OIDC_CLIENT_ID'), os.getenv('OIDC_CLIENT_SECRET'))\n )\n}, app)\n\n@app.route('/api', methods = ('GET', 'POST'))\n@auth.token_auth('default')\ndef api():\n print(type(auth.current_token_identity))\n print(json.dumps(auth.current_token_identity, sort_keys=True, indent=4))\n return jsonify({'message': 'test message'})\n\nif __name__ == '__main__':\n app.run()", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract,\nand is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024\nThe MITRE Corporation. 
All Rights Reserved.\n\n\nOpenID Connect Overview\n\n\nWorkflow Manager can use an OpenID Connect (OIDC) provider to handle authentication for users of\nthe web UI and clients of the REST API.\n\n\nConfiguration\n\n\nIn order to use OIDC, Workflow Manager must first be registered with the OIDC provider. The exact\nprocess for this varies by provider. As part of the registration process, a client ID and client\nsecret should be provided. Those values should be set in the \nOIDC_CLIENT_ID\n and\n\nOIDC_CLIENT_SECRET\n environment variables. During the registration process, the provider will\nlikely request a redirect URI. The redirect URI should be set to the base URI for Workflow Manager\nwith \n/login/oauth2/code/provider\n appended.\n\n\nThe documentation for the OIDC provider should specify the base URI a client should use to\nauthenticate users. The URI should be set in the \nOIDC_ISSUER_URI\n environment variable. To verify\nthe URI is correct, check that the JSON discovery document is returned when sending an HTTP GET\nrequest to the URI with \n/.well-known/openid-configuration\n appended.\n\n\nAfter a user or REST client authenticates with the OIDC provider, Workflow Manager will check for a\nclaim with a specific value to determine if the user is authorized to access Workflow Manager and\nwith what role. The \nOIDC_USER_CLAIM_NAME\n and \nOIDC_ADMIN_CLAIM_NAME\n environment variables\nspecify the name of the claim that must be present. The \nOIDC_USER_CLAIM_VALUE\n and\n\nOIDC_ADMIN_CLAIM_VALUE\n environment variables specify the required value of the claim.\n\n\nIf Workflow Manager is configured to use OIDC, then the component services must also be configured\nto use OIDC. The component services will use OIDC if either the \nOIDC_JWT_ISSUER_URI\n or\n\nOIDC_ISSUER_URI\n environment variables are set on the component service. When a component service\nis configured to use OIDC, the \nOIDC_CLIENT_ID\n and \nOIDC_CLIENT_SECRET\n environment variables are\nused to specify the client ID and secret that will be used during component registration.\n\n\nWorkflow Manager Environment Variables\n\n\n\n\nOIDC_ISSUER_URI\n (Required): URI for the OIDC provider that will be used to authenticate users\n through the web UI. If \nOIDC_JWT_ISSUER_URI\n is not set, \nOIDC_ISSUER_URI\n will also be used to\n authenticate REST clients. The OIDC configuration endpoint must exist at the value of\n \nOIDC_ISSUER_URI\n with \n/.well-known/openid-configuration\n appended.\n\n\nOIDC_JWT_ISSUER_URI\n (Optional): Works the same way as \nOIDC_ISSUER_URI\n, except that the\n configuration will only be used to authenticate REST clients. When not provided,\n \nOIDC_ISSUER_URI\n will be used. This would be used when the authentication provider's endpoint\n for user authentication is different from the endpoint for authentication of REST clients.\n\n\nOIDC_CLIENT_ID\n (Required): The client ID that Workflow Manager will use to authenticate with\n the OIDC provider.\n\n\nOIDC_CLIENT_SECRET\n (Required): The client secret Workflow Manager will use to authenticate\n with the OIDC provider.\n\n\nOIDC_USER_CLAIM_NAME\n (Optional): Specifies the name of the claim from the authentication token\n that is required for a user or REST client to be granted access to Workflow Manager with the\n \nUSER\n role.\n\n\nOIDC_USER_CLAIM_VALUE\n (Optional): Specifies the required value of the claim specified in\n \nOIDC_USER_CLAIM_NAME\n. 
If the claim is a list, only one of the values in the list must match.\n\n\nOIDC_ADMIN_CLAIM_NAME\n (Optional): Specifies the name of the claim from the authentication token\n that is required for a user or REST client to be granted access to Workflow Manager with the\n \nADMIN\n role.\n\n\nOIDC_ADMIN_CLAIM_VALUE\n (Optional): Specifies the required value of the claim specified in\n \nOIDC_ADMIN_CLAIM_NAME\n. If the claim is a list, only one of the values in the list must match.\n\n\nOIDC_SCOPES\n (Optional): A comma-separated list of the scopes to be requested from the OIDC\n provider when authenticating a user through the web UI. The OIDC specification requires one of\n the scopes to be \nopenid\n, so if this environment variable is omitted or \nopenid\n is not in the\n list, it will be automatically added.\n\n\nOIDC_USER_NAME_ATTR\n (Optional): The name of the claim containing the user name. Defaults to\n \nsub\n.\n\n\nOIDC_REDIRECT_URI\n (Optional): Specifies the URL the user's browser will be redirected to after\n logging in to the OIDC provider. If provided, the URL must end in \n/login/oauth2/code/provider\n.\n This would generally be used when the host name that Workflow Manager uses to connect to the\n OIDC provider is different from the OIDC provider's public host name. The value can use the\n \ntemplate variables supported by Spring.\n\n\n\n\nComponent Environment Variables\n\n\n\n\nOIDC_JWT_ISSUER_URI\n or \nOIDC_ISSUER_URI\n (Required): URI for the OIDC provider that will be used\n to authenticate REST clients. The OIDC configuration endpoint must exist at the value of this\n environment variable with \n/.well-known/openid-configuration\n appended. If both environment\n variables are provided, \nOIDC_JWT_ISSUER_URI\n will be used. If \nOIDC_JWT_ISSUER_URI\n is set on\n Workflow Manager, it should be set to the same value on the component services. If\n \nOIDC_JWT_ISSUER_URI\n is not set on Workflow Manager, \nOIDC_ISSUER_URI\n should be set to the\n same value on Workflow Manager and the component services. When either environment variable is\n set, the \nWFM_USER\n and \nWFM_PASSWORD\n environment variables are ignored.\n\n\nOIDC_CLIENT_ID\n (Required): The client ID that the component service will use when registering\n the component with Workflow Manager.\n\n\nOIDC_CLIENT_SECRET\n (Required): The client secret that the component service will use when\n registering the component with Workflow Manager.\n\n\n\n\nExample with Keycloak\n\n\nThe following example explains how to test Workflow Manager with Keycloak as the OIDC provider.\nIt is just an example and should not be used in production.\n\n\n1. Get the Docker gateway IP address by running the command below. It will be used in later steps.\n\n\ndocker network inspect --format '{{(index .IPAM.Config 0).Gateway}}' bridge\n\n\n\n2. Start Keycloak in development mode using the command below. Do not start Workflow Manager yet.\n The values for the OIDC environment variables are dependent on how you set up Keycloak in the\n following steps.\n\n\ndocker run -p 9090:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin \\\n quay.io/keycloak/keycloak:21.1.1 start-dev\n\n\n\n3. Go to \nhttp://localhost:9090/admin\n in a browser and login with username \nadmin\n and\n password \nadmin\n.\n\n\n4. 
Create a new realm:\n\n\n\n\nCreate a new realm using the drop-down box in the upper left that says \"master\".\n\n\nUse the realm name you entered and the gateway IP address from step 1 to set Workflow\n Manager and the component services' \nOIDC_ISSUER_URI\n environment variable to:\n \nhttp://<gateway IP>:9090/realms/<realm name>\n\n\n\n\n5. Create the client that Workflow Manager will use to authenticate users:\n\n\n\n\nUse the \"Clients\" link in the left menu to create a new client.\n\n\nGeneral Settings:\n\n\nThe \"Client type\" needs to be set to \"OpenID Connect\".\n\n\nEnter a \"Client ID\".\n\n\nSet Workflow Manager's \nOIDC_CLIENT_ID\n environment variable to the client ID you entered.\n\n\n\n\n\n\nCapability config:\n\n\n\"Client authentication\" must be enabled.\n\n\n\"Standard flow\" must be enabled.\n\n\n\"Service accounts roles\" must be enabled so that Workflow Manager can include an OAuth token\n in job completion callbacks and when communicating with TiesDb.\n\n\n\n\n\n\nLogin settings:\n\n\nSet \"Valid redirect URIs\" to\n \nhttp://localhost:8080/login/oauth2/code/provider\n\n\nSet \"Valid post logout redirect URIs\" to \nhttp://localhost:8080\n\n\n\n\n\n\nSet Workflow Manager's \nOIDC_CLIENT_SECRET\n environment variable to the \"Client secret\" in the\n \"Credentials\" tab.\n\n\n\n\n6. Create a Keycloak role that maps to a Workflow Manager role:\n\n\n\n\nUse the \"Realm roles\" link in the left menu to create a new role.\n\n\nIf the Keycloak role should make the user an \nADMIN\n in Workflow Manager, set Workflow\n Manager's \nOIDC_ADMIN_CLAIM_VALUE\n to the role name you just entered. If it should be a\n \nUSER\n, then set the \nOIDC_USER_CLAIM_VALUE\n environment variable.\n\n\nOnly one of \nOIDC_ADMIN_CLAIM_VALUE\n and \nOIDC_USER_CLAIM_VALUE\n needs to be set. If you would\n like to set up both roles, repeat this step.\n\n\n\n\n7. Include the Keycloak role(s) in the access token:\n\n\n\n\nIn the \"Client scopes\" menu, add a mapper to the \"roles\" scope.\n\n\nUse the \"groups\" predefined mapper.\n\n\nThe default \"Token Claim Name\" is \"groups\". This can be changed.\n\n\nIf you created an \nADMIN\n role in step 6, set \nOIDC_ADMIN_CLAIM_NAME\n to the value in\n \"Token Claim Name\". If you created a \nUSER\n role, do the same for \nOIDC_USER_CLAIM_NAME\n.\n\n\n\n\n8. Optionally, set Workflow Manager's \nOIDC_USER_NAME_ATTR\n to \npreferred_username\n to display the\n user name instead of the ID.\n\n\n9. Create Users:\n\n\n\n\nAfter creating a user, set a password in the \"Credentials\" tab.\n\n\nUse the \"Role mapping\" tab to add the user to one of the roles created in step 6.\n\n\n\n\n10. Add Component Registration REST client:\n\n\n\n\nUse the \"Clients\" menu to create a new client.\n\n\nCapability config:\n\n\nThe client needs to have \"Client authentication\" and \"Service accounts roles\" enabled.\n\n\nUse the \"Service account roles\" tab to add the client to one of the roles created in step 6.\n\n\n\n\n\n\nSet the component services' \nWFM_USER\n environment variable to the client ID you entered.\n\n\nSet the component services' \nWFM_PASSWORD\n environment variable to the \"Client secret\" in the\n \"Credentials\" tab.\n\n\n\n\n11. Add external REST clients:\n\n\n\n\nUse the \"Clients\" menu to create a new client.\n\n\nCapability config:\n\n\nThe client needs to have \"Client authentication\" and \"Service accounts roles\" enabled.\n\n\nUse the \"Service account roles\" tab to add the client to one of the roles created in step 6.\n\n\n\n\n\n\n\n\n12. Start Workflow Manager. 
When you initially navigate to Workflow Manager, you will be\n redirected to the Keycloak log in page. You can log in using the users created in step 9.\n\n\nTest REST authentication\n\n\nUsing the Docker gateway IP address from step 1, the client ID and secret from step 11, and the\nrealm name from step 4, run the following command:\n\n\ncurl -d grant_type=client_credentials -u ':' 'http://:9090/realms//protocol/openid-connect/token'\n\n\n\nThe response JSON will contain a token in the \n\"access_token\"\n property. That token needs to be\nincluded as a bearer token in REST requests to Workflow Manager. For example:\n\n\ncurl -H \"Authorization: Bearer \" http://localhost:8080/rest/actions\n\n\n\nUse OAuth when sending job complete callbacks and when posting to TiesDb.\n\n\n1. Create a client for the callback receiver or TiesDb:\n\n\n\n\nUse the \"Clients\" menu to create a new client.\n\n\nCapability config:\n\n\nThe client needs to have \"Client authentication\" and \"Service accounts roles\" enabled.\n\n\n\n\n\n\nConfigure the callback receiver or TiesDb with the client ID and secret.\n\n\n\n\n2. Create a client role:\n\n\n\n\nUse the \"Roles\" tab to add a role to the client that was just created.\n\n\n\n\n3. Add the role to the Workflow Manager's client:\n\n\n\n\nGo to the client details page for the client created for Workflow Manager.\n\n\nGo to the \"Service accounts roles\" tab.\n\n\nClick \"Assign role\".\n\n\nChange \"Filter by realm roles\" to \"Filter by clients\".\n\n\nAssign the role created in step 2.\n\n\n\n\n4. Run jobs with the \nCALLBACK_USE_OIDC\n or \nTIES_DB_USE_OIDC\n job properties set to \nTRUE\n.\n\n\nTest callback authentication\n\n\nThe Python script below can be used to test callback authentication. Before running the script you\nmust run \npip install Flask-pyoidc==3.14.2\n. To run the script, you must set the \nOIDC_ISSUER_URI\n,\n\nOIDC_CLIENT_ID\n, and \nOIDC_CLIENT_SECRET\n environment variables. Note that the script configures\nthe \nFlask-pyoidc\n package to authenticate Web users, as required by the package, but we are only\ntesting the authentication of REST clients.\n\n\nOnce the script is running, a user can submit a job via the Workflow Manager Swagger page with the\nfollowing fields to test callbacks:\n\n\n{\n \"callbackMethod\": \"POST\",\n \"callbackURL\": \"http://localhost:5000/api\",\n \"jobProperties\": {\n \"CALLBACK_USE_OIDC\": \"TRUE\"\n }\n}\n\n\n\nimport json\nimport logging\nimport os\n\nfrom flask import Flask, jsonify\nfrom flask_pyoidc.provider_configuration import ProviderConfiguration, ClientMetadata\nfrom flask_pyoidc import OIDCAuthentication\n\nlogging.basicConfig(level=logging.INFO)\n\napp = Flask(__name__)\napp.config.update(\n OIDC_REDIRECT_URI='http://localhost:5000/redirect_uri',\n SECRET_KEY='secret',\n DEBUG=True\n)\n\nauth = OIDCAuthentication({\n 'default': ProviderConfiguration(\n os.getenv('OIDC_ISSUER_URI'),\n client_metadata=ClientMetadata(\n os.getenv('OIDC_CLIENT_ID'), os.getenv('OIDC_CLIENT_SECRET'))\n )\n}, app)\n\n@app.route('/api', methods = ('GET', 'POST'))\n@auth.token_auth('default')\ndef api():\n print(type(auth.current_token_identity))\n print(json.dumps(auth.current_token_identity, sort_keys=True, indent=4))\n return jsonify({'message': 'test message'})\n\nif __name__ == '__main__':\n app.run()", "title": "OpenID Connect Guide" }, { @@ -322,7 +322,7 @@ }, { "location": "/Media-Segmentation-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. 
Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nDetection Chaining\n\n\nThe OpenMPF has the ability to chain detection tasks together in a detection pipeline. As each detection stage in the pipeline completes, the volume of data to be processed in the next stage may be reduced. Generally, any detection tasks executed prior to the final detection task in the pipeline are referred to as preprocessors or filters. For example, consider the following pipeline which demonstrates the use of a motion preprocessor:\n\n\n\n\nIn the pipeline above, the motion preprocessor reduces the volume of data which is passed to the face detector. This is particularly useful when the input media collection contains videos captured by a fixed-location camera. For example, a camera targeting a chokepoint such as a hallway door. The motion preprocessor will filter the input media so that only regions of video containing motion are passed on to the face detector.\n\n\nDetection pipelines may be created with, or without, preprocessors and filters using the Create Custom Pipelines view.\n\n\n\n\nWARNING: Preprocessors and filters may ultimately eliminate the entirety of a media file. When an entire media file is eliminated, none of the subsequent stages in the pipeline will operate on that file. Therefore, it is important to consider the consequences of using preprocessors/filters. For example, when the motion detection receives an image or audio file, its default behavior is to return a response indicating that the file did not contain any motion tracks. If the pipeline continued to face detection then none of the image files would be eligible for that kind of detection.\n\n\n\n\n\"USE_PREPROCESSOR\" Property\n\n\nIn order to mitigate the risk of eliminating useful media files simply because they are not supported by a detector using its default settings, some algorithms expose a \"USE_PREPROCESSOR\" property. When a user creates an action based on a detector with this property, the user may assign this property a nonzero value in order to indicate that the detector should behave as a preprocessor as opposed to a filter. When acting as a preprocessor, a detector will not emit an empty detection set when provided with an unsupported media type, rather it will return a single track spanning the duration of the media file. Thus, when configured with the \"USE_PREPROCESSOR\" setting, the motion detector will not prevent images from passing on to the next stage in the pipeline, for example.\n\n\nSegmenting Media\n\n\nThe OpenMPF allows users to configure video segmenting properties for actions in a pipeline. Audio files (which do not have the concept of \"frames\") and image files (which are treated like single-frame videos) are not affected by these properties.\n\n\nSegmenting is performed before a detection action in order to split work across the available detection services running on the various nodes in the OpenMPF cluster. In general, each instance of a detection service can process one video segment at a time. Multiple services can process separate segments at the same time, thus enabling parallel processing. 
There are two fundamental segmenting scenarios:\n\n\n\n\nSegmenting must be performed on a video which has not passed through a preprocessor or filter.\n\n\nSegmenting must be performed on a video which has passed through a preprocessor or filter.\n\n\n\n\nIn the first scenario the segmenting logic is less complex. The segmenter will create a supersegment corresponding to the entire length of the video (in frames), and it will then divide the supersegment into segments which respect the provided \"TARGET_SEGMENT_LENGTH\" and \"MIN_SEGMENT_LENGTH\" properties.\n\n\nIn the second scenario the segmenting logic is more complex. The segmenter first examines the start and stop times associated with all of the overlapping tracks produced by the previous detection action in the pipeline and proceeds to merge those intervals and segment the result. The goal is to generate a minimum number of segments that don't include unnecessary frames (frames that don't belong to any tracks). For example:\n\n\n\n\n\"TARGET_SEGMENT_LENGTH\" Property\n\n\nThis property indicates the preferred number of frames which will be provided to the detection component. For example, a value of \"100\" indicates that the input video should be split into 100-frame segments. Note that the properties \"MIN_SEGMENT_LENGTH\" and \"MIN_GAP_BETWEEN_SEGMENTS\" may ultimately cause segments to vary from the preferred segment size.\n\n\n\"MIN_SEGMENT_LENGTH\" Property\n\n\nIf a segment length is less than this value, the segment will be merged into the segment that precedes it. If no segment precedes it, the short segment will stand on its own. Short segments are not discarded.\n\n\nExample 1: Adjacent Segment Present\n\n\n\n\n\n\nIn this example, a preprocessor has completed and produced a single track.\n\n\nThe next detection action specifies the following parameters:\n\n\n\"TARGET_SEGMENT_LENGTH\" = 100\n\n\n\"MIN_SEGMENT_LENGTH\" = 75\n\n\n\n\n\n\nThree segments are initially produced from the input track with lengths corresponding to 100 frames, 100 frames, and 50 frames.\n\n\nSince segment 3 is not at least the minimum specified segment length, it is merged with segment 2.\n\n\nUltimately, two segments are produced.\n\n\n\n\nExample 2: No Adjacent Segment\n\n\n\n\n\n\nIn this example, a preprocessor has completed and produced two non-overlapping tracks.\n\n\nThe next detection action specifies the following parameters:\n\n\n\"TARGET_SEGMENT_LENGTH\" = 100\n\n\n\"MIN_SEGMENT_LENGTH\" = 75\n\n\n\"MIN_GAP_BETWEEN_SEGMENTS\" = 50\n\n\n\n\n\n\nThe segmenter begins by merging any segments which are less than \"MIN_GAP_BETWEEN_SEGMENTS\" apart. There are none.\n\n\nThe segmenter then splits the existing segments using the \"MIN_SEGMENT_LENGTH\" and \"TARGET_SEGMENT_LENGTH\" values.\n\n\nThe segmenter iterates through each segment produced. 
If the segment satisfies the minimum length constraint, it moves to the next segment.\n\n\nWhen it reaches the third segment and finds the length of 50 frames is not at least the minimum length, it merges that segment with the previous adjacent segment.\n\n\nWhen it reaches the final segment and finds that the length of 25 frames is not at least the minimum length, it creates a short segment since there is no adjacent preceding segment to merge it with.\n\n\n\n\n\n\nUltimately, three segments are produced.\n\n\n\n\n\"MIN_GAP_BETWEEN_SEGMENTS\" Property\n\n\nThis property is important to pipelines which contain preprocessors or filters and controls the minimum gap which must appear between consecutive segments. The purpose of this property is to prevent scenarios where a preprocessor or filter produces a large number of short segments separated by only a few frames. By merging the segments together prior to performing further segmentation, the number of work units produced by the segmenting plan can be reduced, thereby reducing pipeline execution time.\n\n\nConsider the following diagram, which further illustrates the purpose of this property:\n\n\n\n\n\n\nThe user submits a video to a pipeline containing a motion preprocessor followed by another extractor (e.g., face).\n\n\nThe video is initially split into segments using the properties provided by the motion preprocessor. Specifically, the preprocessor action specifies the following parameters and four segments are produced:\n\n\n\"TARGET_SEGMENT_LENGTH\" = \"250\"\n\n\n\"MIN_SEGMENT_LENGTH\" = \"150\"\n\n\n\"MERGE_TRACKS\" = \"true\"\n\n\n\n\n\n\nThe segments are submitted to the motion preprocessor, and five distinct and non-overlapping tracks are returned based on the frames of the segments in which motion is detected.\n\n\nBecause the \"MERGE_TRACKS\" property is set to \"true\", tracks are merged across segment boundaries if applicable.\n This rule is applied to each pair of tracks that are only one frame apart (adjacent). Consequently, only three\n tracks are ultimately derived from the video. (The number of tracks is reduced from five to three between the\n \"Preprocessor\" and \"Track Merger\" phases of the diagram.) When two tracks are merged, the confidence value will be\n set to the maximum confidence value of the two tracks and their track properties will be merged. If the two tracks\n both have a track property with the same name but different values, the values will be concatenated with a\n semicolon as the separator.\n\n\nThe non-overlapping tracks are then used to form the video segments for the next detection action. This action specifies the following parameters:\n\n\n\"TARGET_SEGMENT_LENGTH\" = \"75\"\n\n\n\"MIN_SEGMENT_LENGTH\" = \"26\"\n\n\n\"MIN_GAP_BETWEEN_SEGMENTS\" = \"100\"\n\n\n\n\n\n\nThe segmenting logic merges tracks which are less than \"MIN_GAP_BETWEEN_SEGMENTS\" frames apart into one long segment. Once all tracks have been merged, each track is segmented with respect to the provided \"TARGET_SEGMENT_LENGTH\" and \"MIN_SEGMENT_LENGTH\" properties. Ultimately, ten segments are produced. (Track #1 and Track #2 in the \"Track Merger\" phase of the diagram are combined, which is why Segment #3 in the \"Segmenter\" phase of the diagram includes the 25 frames that span the gap between those two tracks.)", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). 
Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nDetection Chaining\n\n\nThe OpenMPF has the ability to chain detection tasks together in a detection pipeline. As each detection stage in the pipeline completes, the volume of data to be processed in the next stage may be reduced. Generally, any detection tasks executed prior to the final detection task in the pipeline are referred to as preprocessors or filters. For example, consider the following pipeline which demonstrates the use of a motion preprocessor:\n\n\n\n\nIn the pipeline above, the motion preprocessor reduces the volume of data which is passed to the face detector. This is particularly useful when the input media collection contains videos captured by a fixed-location camera. For example, a camera targeting a chokepoint such as a hallway door. The motion preprocessor will filter the input media so that only regions of video containing motion are passed on to the face detector.\n\n\nDetection pipelines may be created with, or without, preprocessors and filters using the Create Custom Pipelines view.\n\n\n\n\nWARNING: Preprocessors and filters may ultimately eliminate the entirety of a media file. When an entire media file is eliminated, none of the subsequent stages in the pipeline will operate on that file. Therefore, it is important to consider the consequences of using preprocessors/filters. For example, when the motion detection component receives an image or audio file, its default behavior is to return a response indicating that the file did not contain any motion tracks. If the pipeline continued to face detection, then none of the image files would be eligible for that kind of detection.\n\n\n\n\n\"USE_PREPROCESSOR\" Property\n\n\nIn order to mitigate the risk of eliminating useful media files simply because they are not supported by a detector using its default settings, some algorithms expose a \"USE_PREPROCESSOR\" property. When a user creates an action based on a detector with this property, the user may assign this property a nonzero value in order to indicate that the detector should behave as a preprocessor as opposed to a filter. When acting as a preprocessor, a detector will not emit an empty detection set when provided with an unsupported media type; rather, it will return a single track spanning the duration of the media file. Thus, when configured with the \"USE_PREPROCESSOR\" setting, the motion detector will not prevent images from passing on to the next stage in the pipeline, for example.\n\n\nSegmenting Media\n\n\nThe OpenMPF allows users to configure video segmenting properties for actions in a pipeline. Audio files (which do not have the concept of \"frames\") and image files (which are treated like single-frame videos) are not affected by these properties.\n\n\nSegmenting is performed before a detection action in order to split work across the available detection services running on the various nodes in the OpenMPF cluster. In general, each instance of a detection service can process one video segment at a time. Multiple services can process separate segments at the same time, thus enabling parallel processing. There are two fundamental segmenting scenarios:\n\n\n\n\nSegmenting must be performed on a video which has not passed through a preprocessor or filter.\n\n\nSegmenting must be performed on a video which has passed through a preprocessor or filter.\n\n\n\n\nIn the first scenario the segmenting logic is less complex. 
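A short sketch of that simpler case appears below; the supersegment rule it implements, along with the "TARGET_SEGMENT_LENGTH" and "MIN_SEGMENT_LENGTH" properties it uses, is described in the text that follows. The function and variable names are illustrative only, not OpenMPF code.

```python
# Illustrative sketch (not OpenMPF code) of the no-preprocessor scenario:
# split the whole video into target-length segments, then merge a trailing
# segment shorter than the minimum into the segment that precedes it.
def plan_segments(num_frames, target_len, min_len):
    segments = [(start, min(start + target_len, num_frames) - 1)
                for start in range(0, num_frames, target_len)]
    first, last = segments[-1]
    if len(segments) > 1 and (last - first + 1) < min_len:
        segments[-2:] = [(segments[-2][0], last)]  # merge the short tail
    return segments

# Matches Example 1 below: 250 frames, target 100, minimum 75
print(plan_segments(250, 100, 75))  # [(0, 99), (100, 249)]
```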
The segmenter will create a supersegment corresponding to the entire length of the video (in frames), and it will then divide the supersegment into segments which respect the provided \"TARGET_SEGMENT_LENGTH\" and \"MIN_SEGMENT_LENGTH\" properties.\n\n\nIn the second scenario the segmenting logic is more complex. The segmenter first examines the start and stop times associated with all of the overlapping tracks produced by the previous detection action in the pipeline and proceeds to merge those intervals and segment the result. The goal is to generate a minimum number of segments that don't include unnecessary frames (frames that don't belong to any tracks). For example:\n\n\n\n\n\"TARGET_SEGMENT_LENGTH\" Property\n\n\nThis property indicates the preferred number of frames which will be provided to the detection component. For example, a value of \"100\" indicates that the input video should be split into 100-frame segments. Note that the properties \"MIN_SEGMENT_LENGTH\" and \"MIN_GAP_BETWEEN_SEGMENTS\" may ultimately cause segments to vary from the preferred segment size.\n\n\n\"MIN_SEGMENT_LENGTH\" Property\n\n\nIf a segment length is less than this value, the segment will be merged into the segment that precedes it. If no segment precedes it, the short segment will stand on its own. Short segments are not discarded.\n\n\nExample 1: Adjacent Segment Present\n\n\n\n\n\n\nIn this example, a preprocessor has completed and produced a single track.\n\n\nThe next detection action specifies the following parameters:\n\n\n\"TARGET_SEGMENT_LENGTH\" = 100\n\n\n\"MIN_SEGMENT_LENGTH\" = 75\n\n\n\n\n\n\nThree segments are initially produced from the input track with lengths corresponding to 100 frames, 100 frames, and 50 frames.\n\n\nSince segment 3 is not at least the minimum specified segment length, it is merged with segment 2.\n\n\nUltimately, two segments are produced.\n\n\n\n\nExample 2: No Adjacent Segment\n\n\n\n\n\n\nIn this example, a preprocessor has completed and produced two non-overlapping tracks.\n\n\nThe next detection action specifies the following parameters:\n\n\n\"TARGET_SEGMENT_LENGTH\" = 100\n\n\n\"MIN_SEGMENT_LENGTH\" = 75\n\n\n\"MIN_GAP_BETWEEN_SEGMENTS\" = 50\n\n\n\n\n\n\nThe segmenter begins by merging any segments which are less than \"MIN_GAP_BETWEEN_SEGMENTS\" apart. There are none.\n\n\nThe segmenter then splits the existing segments using the \"MIN_SEGMENT_LENGTH\" and \"TARGET_SEGMENT_LENGTH\" values.\n\n\nThe segmenter iterates through each segment produced. If the segment satisfies the minimum length constraint, it moves to the next segment.\n\n\nWhen it reaches the third segment and finds the length of 50 frames is not at least the minimum length, it merges that segment with the previous adjacent segment.\n\n\nWhen it reaches the final segment and finds that the length of 25 frames is not at least the minimum length, it creates a short segment since there is no adjacent preceding segment to merge it with.\n\n\n\n\n\n\nUltimately, three segments are produced.\n\n\n\n\n\"MIN_GAP_BETWEEN_SEGMENTS\" Property\n\n\nThis property is important to pipelines which contain preprocessors or filters and controls the minimum gap which must appear between consecutive segments. The purpose of this property is to prevent scenarios where a preprocessor or filter produces a large number of short segments separated by only a few frames. 
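Taken together, the splitting and merging rules described above can be summarized in a short sketch. This is illustrative only, not actual OpenMPF source; the `Segment` type and `SegmentTracks` helper are hypothetical, and frame ranges are treated as inclusive:

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

struct Segment { int begin; int end; };  // inclusive frame range

std::vector<Segment> SegmentTracks(std::vector<Segment> tracks,
                                   int targetLen, int minLen, int minGap) {
    std::sort(tracks.begin(), tracks.end(),
              [](const Segment &a, const Segment &b) { return a.begin < b.begin; });

    // Merge tracks whose gap is less than minGap frames.
    std::vector<Segment> merged;
    for (const Segment &t : tracks) {
        if (!merged.empty() && (t.begin - merged.back().end - 1) < minGap) {
            merged.back().end = std::max(merged.back().end, t.end);
        } else {
            merged.push_back(t);
        }
    }

    // Split each merged interval into target-length segments.
    std::vector<Segment> segments;
    for (const Segment &m : merged) {
        std::size_t first = segments.size();
        for (int begin = m.begin; begin <= m.end; begin += targetLen) {
            segments.push_back({begin, std::min(begin + targetLen - 1, m.end)});
        }
        // A trailing short segment is merged into its preceding adjacent
        // segment; with no preceding segment it stands on its own.
        Segment &last = segments.back();
        if (last.end - last.begin + 1 < minLen && segments.size() - first > 1) {
            segments[segments.size() - 2].end = last.end;
            segments.pop_back();
        }
    }
    return segments;
}

int main() {
    // Example 1 above: one 250-frame track, target 100, minimum 75.
    for (const Segment &s : SegmentTracks({{0, 249}}, 100, 75, 0)) {
        std::cout << s.begin << "-" << s.end << "\n";  // prints 0-99, 100-249
    }
}
```

For Example 1 above (a single 250-frame track with a target length of 100 and a minimum of 75), this yields two segments, frames 0-99 and frames 100-249, matching the diagram.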
By merging the segments together prior to performing further segmentation, the number of work units produced by the segmenting plan can be reduced, thereby reducing pipeline execution time.\n\n\nConsider the following diagram, which further illustrates the purpose of this property:\n\n\n\n\n\n\nThe user submits a video to a pipeline containing a motion preprocessor followed by another extractor (e.g., face).\n\n\nThe video is initially split into segments using the properties provided by the motion preprocessor. Specifically, the preprocessor action specifies the following parameters and four segments are produced:\n\n\n\"TARGET_SEGMENT_LENGTH\" = \"250\"\n\n\n\"MIN_SEGMENT_LENGTH\" = \"150\"\n\n\n\"MERGE_TRACKS\" = \"true\"\n\n\n\n\n\n\nThe segments are submitted to the motion preprocessor, and five distinct and non-overlapping tracks are returned based on the frames of the segments in which motion is detected.\n\n\nBecause the \"MERGE_TRACKS\" property is set to \"true\", tracks are merged across segment boundaries if applicable.\n This rule is applied to each pair of tracks that are only one frame apart (adjacent). Consequently, only three\n tracks are ultimately derived from the video. (The number of tracks is reduced from five to three between the\n \"Preprocessor\" and \"Track Merger\" phases of the diagram.) When two tracks are merged, the confidence value will be\n set to the maximum confidence value of the two tracks and their track properties will be merged. If the two tracks\n both have a track property with the same name but different values, the values will be concatenated with a\n semicolon as the separator.\n\n\nThe non-overlapping tracks are then used to form the video segments for the next detection action. This action specifies the following parameters:\n\n\n\"TARGET_SEGMENT_LENGTH\" = \"75\"\n\n\n\"MIN_SEGMENT_LENGTH\" = \"26\"\n\n\n\"MIN_GAP_BETWEEN_SEGMENTS\" = \"100\"\n\n\n\n\n\n\nThe segmenting logic merges tracks which are less than \"MIN_GAP_BETWEEN_SEGMENTS\" frames apart into one long segment. Once all tracks have been merged, each track is segmented with respect to the provided \"TARGET_SEGMENT_LENGTH\" and \"MIN_SEGMENT_LENGTH\" properties. Ultimately, ten segments are produced. (Track #1 and Track #2 in the \"Track Merger\" phase of the diagram are combined, which is why Segment #3 in the \"Segmenter\" phase of the diagram includes the 25 frames that span the gap between those two tracks.)", "title": "Media Segmentation Guide" }, { @@ -367,7 +367,7 @@ }, { "location": "/Feed-Forward-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nFeed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be\ndirectly \u201cfed into\u201d the next stage. It differs from the default segmenting behavior in the following major ways:\n\n\n\n\n\n\nThe next stage will only look at the frames that had detections in the previous stage. 
The default segmenting\n behavior results in \u201cfilling the gaps\u201d so that the next stage looks at all the frames between the start and end\n frames of the feed forward track, regardless of whether a detection was actually found in those frames.\n\n\n\n\n\n\nThe next stage can be configured to only look at the detection regions for the frames in the feed forward track. The\n default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks\n at the whole frame region for every frame in the segment.\n\n\n\n\n\n\nThe next stage will process one sub-job per track generated in the previous stage. If the previous stage generated\n more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed\n forward can be configured such that only the detection regions for those tracks are processed. If they are\n non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that\n captures the frame associated with all 3 tracks.\n\n\n\n\n\n\nMotivation\n\n\nConsider using feed forward for the following reasons:\n\n\n\n\n\n\nYou have an algorithm that isn\u2019t capable of breaking down a frame into regions of interest. For example, face\n detection can take a whole frame and generate a separate detection region for each face in the frame. On the other\n hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and\n generate a single detection that\u2019s the size of the frame\u2019s width and height. The OpenCV DNN component will produce\n better results if it operates on smaller regions that only capture the desired object to be classified. Using feed\n forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them.\n\n\n\n\n\n\nYou wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest.\n For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which\n may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will\n speed up run times.\n\n\n\n\n\n\n\n\nNOTE:\n Enabling feed forward results in more sub-jobs and more message passing between the Workflow Manager and\ncomponents than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the\noverhead cost. The cost may be outweighed by how feed forward can \u201cfilter out\u201d pixel data that doesn\u2019t need to be\nprocessed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.\n\n\n\n\nThe output of a feed forward pipeline is the intersection of each stage's output. For example, running a feed forward\npipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected\nin the first stage and a face was detected in the second stage.\n\n\nFirst Stage and Combining Properties\n\n\nWhen feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there\nis no track to feed in. In other words, the first stage will process the media file as though feed forward was not\nenabled. 
The tracks generated by the first stage will be passed to the second stage which will then be able to take\nadvantage of the feed forward behavior.\n\n\n\n\nNOTE:\n When \nFEED_FORWARD_TYPE\n is set to anything other than \nNONE\n, the following properties will be ignored:\n\nFRAME_INTERVAL\n, \nUSE_KEY_FRAMES\n, \nSEARCH_REGION_*\n.\n\n\n\n\nIf you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure\nthat \nFEED_FORWARD_TYPE\n is set to \nNONE\n, or not specified, for the first stage. You can then configure each subsequent\nstage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from\nthe first stage, the subsequent stages will inherit the effects of those properties set on the first stage. \n\n\nFeed Forward Properties\n\n\nComponents that support feed forward have two algorithm properties that control the feed forward behavior:\n\nFEED_FORWARD_TYPE\n and \nFEED_FORWARD_TOP_QUALITY_COUNT\n.\n\n\nFEED_FORWARD_TYPE\n can be set to the following values:\n\n\n\n\nNONE\n: Feed forward is disabled (default setting).\n\n\nFRAME\n: For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored.\n\n\nSUPERSET_REGION\n: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the \nSuperset\n Region\n section for more details. For each detection in the feed forward track, search the superset\n region.\n\n\nREGION\n: For each detection in the feed forward track, search the exact detection region.\n\n\n\n\n\n\nNOTE:\n When using \nREGION\n, the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, \nREGION\n should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use\n\nSUPERSET_REGION\n instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track.\n\n\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed.\n\n\nWhen \nFEED_FORWARD_TOP_QUALITY_COUNT\n is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property\n\nQUALITY_SELECTION_PROPERTY\n, which defaults to \nCONFIDENCE\n, but may be set to a different detection property. Refer to\nthe \nQuality Selection Guide\n. If the track contains less than 5 detections then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence.\n\n\nSuperset Region\n\n\nA \u201csuperset region\u201d is the smallest region of interest that contains all of the detections for all of the frames in a\ntrack. This is also known as a \u201cunion\u201d or \n\u201cminimum bounding\nrectangle\"\n.\n\n\n\n\nFor example, consider a track representing a person moving from the upper left to the lower right. 
The track consists of\n3 frames that have the following detection regions:\n\n\n\n\nFrame 0: \n(x = 10, y = 10, width = 10, height = 10)\n\n\nFrame 1: \n(x = 15, y = 15, width = 10, height = 10)\n\n\nFrame 2: \n(x = 20, y = 20, width = 10, height = 10)\n\n\n\n\nEach detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame\nregion. The superset region for the track is \n(x = 10, y = 10, width = 20, height = 20)\n, and is drawn with a dotted red\nline.\n\n\nThe major advantage of using a superset region is constant size. Some algorithms require the search space in each frame\nto be a constant size in order to successfully track objects.\n\n\nA disadvantage is that the superset region will often be larger than any specific detection region, so the search space\nis not restricted to the smallest possible size in each frame; however, in many cases the search space will be\nsignificantly smaller than the whole frame.\n\n\nIn the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a\nvideo to the lower right corner. In that case the superset region will be the entire width and height of the frame, so\n\nSUPERSET_REGION\n devolves into \nFRAME\n.\n\n\nIn a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In\nthat case \nSUPERSET_REGION\n is able to filter out 75% of the rest of the frame data. In the example shown in the above\ndiagram, \nSUPERSET_REGION\n is able to filter out 83% of the rest of the frame data.\n\n\n\n \n\n \n\n \nYour browser does not support the embedded video tag.\n\n \nClick here to download the video.\n\n \n\n\n\n\n\nThe above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box\nthat does not. The inner bounding box represents the face detection in that frame, while the outer bounding box\nrepresents the superset region for the track associated with that face. Note that the bounding box for each face uses a\ndifferent color. The colors are not related to those used in the above diagram.\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\nWhen developing a component, the \nC++ Batch Component API\n and \nPython Batch\nComponent API\n include utilities that make it easier to support feed forward in\nyour components. They work similarly, but only the C++ tools will be discussed here. The \nMPFVideoCapture\n class is a\nwrapper around OpenCV's \ncv::VideoCapture\n class. \nMPFVideoCapture\n works very similarly to \ncv::VideoCapture\n, except\nthat it might modify the video frames based on job properties. From the point of view of someone using\n\nMPFVideoCapture\n, these modifications are mostly transparent. \nMPFVideoCapture\n makes it look like you are reading the\noriginal video file.\n\n\nConceptually, consider generating a new video from a feed forward track. The new video would have fewer frames (unless\nthere was a detection in every frame) and possibly a smaller frame size.\n\n\nFor example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found\ndetections in frames 4, 7, and 10, then \nMPFVideoCapture\n will make it look like the video only has those 3 frames. 
If\nthe feed forward type is \nSUPERSET_REGION\n or \nREGION,\n and each detection is 30x50 pixels, then \nMPFVideoCapture\n will\nmake it look like the video's original resolution was 30x50 pixels.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the modified\nvideo, not the original. To make the detections relative to the original video the\n\nMPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack)\n function must be used.\n\n\nThe general pattern for using \nMPFVideoCapture\n is as follows:\n\n\nstd::vector OcvDnnDetection::GetDetections(const MPFVideoJob &job) {\n\nstd::vector tracks;\n MPFVideoCapture video_cap(job);\n\n cv::Mat frame;\n while (video_cap.Read(frame)) {\n // Process frames and detections to tracks vector\n }\n\n for (MPFVideoTrack &track : tracks) {\n video_cap.ReverseTransform(track);\n }\n\n return tracks;\n}\n\n\n\nMPFVideoCapture\n makes it look like the user is processing the original video, when in reality they are processing a\nmodified version. To avoid confusion, this means that \nMPFVideoCapture\n should always be returning frames that are the\nsame size because most users expect each frame of a video to be the same size.\n\n\nWhen using \nSUPERSET_REGION\n this is not an issue, since one bounding box is used for the entire track. However, when\nusing \nREGION\n, each detection can be a different size, so it is not possible for \nMPFVideoCapture\n to return frames\nthat are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of\n\nMPFVideoCapture\n, \nSUPERSET_REGION\n should usually be preferred over \nREGION\n. The \nREGION\n setting should only be used\nwith components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region\ntracking, so processing frames of various sizes is not a problem.\n\n\nThe \nMPFImageReader\n class is similar to \nMPFVideoCapture\n, but it works on images instead of videos. \nMPFImageReader\n\nmakes it look like the user is processing an original image, when in reality they are processing a modified version\nwhere the frame region is generated based on a detection (\nMPFImageLocation\n) fed forward from the previous stage of a\npipeline. Note that \nSUPERSET_REGION\n and \nREGION\n have the same effect when working with images. \nMPFImageReader\n also\nhas a reverse transform function.\n\n\nOpenCV DNN Component Tracking\n\n\nThe OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking\nbehavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process\nthe entire region of each frame of a video. If one or more consecutive frames has the same highest confidence\nclassification, then a new track is generated that contains those frames.\n\n\nWhen feed forward is enabled, the OpenCV DNN component will process the region of each frame of feed forward track\naccording to the \nFEED_FORWARD_TYPE\n. It will generate one track that contains the same frames as the feed forward\ntrack. If \nFEED_FORWARD_TYPE\n is set to \nREGION\n then the OpenCV DNN track will contain (inherit) the same detection\nregions as the feed forward track. 
In any case, the \ndetectionProperties\n map for the detections in the OpenCV DNN track\nwill include the \nCLASSIFICATION\n entries and possibly other OpenCV DNN component properties.\n\n\nFeed Forward Pipeline Examples\n\n\nGoogLeNet Classification with MOG Motion Detection and Feed Forward Region\n\n\nFirst, create the following action:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n+ Algorithm: DNNCV\n+ MODEL_NAME: googlenet\n+ SUBTRACT_BLUE_VALUE: 104.0\n+ SUBTRACT_GREEN_VALUE: 117.0\n+ SUBTRACT_RED_VALUE: 123.0\n+ FEED_FORWARD_TYPE: REGION\n\n\n\nThen create the following task:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nCAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n\n\n\nRunning this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each\ndetection in each track will have an OpenCV DNN \nCLASSIFICATION\n entry. Each track has a 1-to-1 correspondence with a\nMOG motion track.\n\n\nRefer to \nrunMogThenCaffeFeedForwardExactRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior. Refer to \nrunMogThenCaffeFeedForwardSupersetRegionTest()\n in\nthat class for a system test that uses \nSUPERSET_REGION\n instead. Refer to \nrunMogThenCaffeFeedForwardFullFrameTest()\n\nfor a system test that uses \nFRAME\n instead.\n\n\n\n\nNOTE:\n Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To\nmitigate this, consider setting the \nMERGE_TRACKS\n, \nMIN_GAP_BETWEEN_TRACKS\n, and \nMIN_TRACK_LENGTH\n properties to\ngenerate longer motion tracks and discard short and/or spurious motion tracks.\n\n\nNOTE:\n It doesn\u2019t make sense to use \nFEED_FORWARD_TOP_QUALITY_COUNT\n on a pipeline stage that follows a MOG or\nSuBSENSE motion detection stage. That\u2019s because those motion detectors don\u2019t generate tracks with confidence values\n(\nCONFIDENCE\n being the default value for the \nQUALITY_SELECTION_PROPERTY\n job property). Instead,\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n could potentially be used when feeding person tracks into a face detector, for\nexample, if the detections in those person tracks have the requested \nQUALITY_SELECTION_PROPERTY\n set.\n\n\n\n\nOCV Face Detection with MOG Motion Detection and Feed Forward Superset Region\n\n\nFirst, create the following action:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n+ Algorithm: FACECV\n+ FEED_FORWARD_TYPE: SUPERSET_REGION\n\n\n\nThen create the following task:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nOCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n\n\n\nRunning this pipeline will result in OCV face tracks that contain detections where there was MOG motion. 
Each track has\na 1-to-1 correspondence with a MOG motion track.\n\n\nRefer to \nrunMogThenOcvFaceFeedForwardRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nFeed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be\ndirectly \u201cfed into\u201d the next stage. It differs from the default segmenting behavior in the following major ways:\n\n\n\n\n\n\nThe next stage will only look at the frames that had detections in the previous stage. The default segmenting\n behavior results in \u201cfilling the gaps\u201d so that the next stage looks at all the frames between the start and end\n frames of the feed forward track, regardless of whether a detection was actually found in those frames.\n\n\n\n\n\n\nThe next stage can be configured to only look at the detection regions for the frames in the feed forward track. The\n default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks\n at the whole frame region for every frame in the segment.\n\n\n\n\n\n\nThe next stage will process one sub-job per track generated in the previous stage. If the previous stage generated\n more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed\n forward can be configured such that only the detection regions for those tracks are processed. If they are\n non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that\n captures the frame associated with all 3 tracks.\n\n\n\n\n\n\nMotivation\n\n\nConsider using feed forward for the following reasons:\n\n\n\n\n\n\nYou have an algorithm that isn\u2019t capable of breaking down a frame into regions of interest. For example, face\n detection can take a whole frame and generate a separate detection region for each face in the frame. On the other\n hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and\n generate a single detection that\u2019s the size of the frame\u2019s width and height. The OpenCV DNN component will produce\n better results if it operates on smaller regions that only capture the desired object to be classified. Using feed\n forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them.\n\n\n\n\n\n\nYou wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest.\n For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which\n may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will\n speed up run times.\n\n\n\n\n\n\n\n\nNOTE:\n Enabling feed forward results in more sub-jobs and more message passing between the Workflow Manager and\ncomponents than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the\noverhead cost. The cost may be outweighed by how feed forward can \u201cfilter out\u201d pixel data that doesn\u2019t need to be\nprocessed. 
Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.\n\n\n\n\nThe output of a feed forward pipeline is the intersection of each stage's output. For example, running a feed forward\npipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected\nin the first stage and a face was detected in the second stage.\n\n\nFirst Stage and Combining Properties\n\n\nWhen feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there\nis no track to feed in. In other words, the first stage will process the media file as though feed forward was not\nenabled. The tracks generated by the first stage will be passed to the second stage, which will then be able to take\nadvantage of the feed forward behavior.\n\n\n\n\nNOTE:\n When \nFEED_FORWARD_TYPE\n is set to anything other than \nNONE\n, the following properties will be ignored:\n\nFRAME_INTERVAL\n, \nUSE_KEY_FRAMES\n, \nSEARCH_REGION_*\n.\n\n\n\n\nIf you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure\nthat \nFEED_FORWARD_TYPE\n is set to \nNONE\n, or not specified, for the first stage. You can then configure each subsequent\nstage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from\nthe first stage, the subsequent stages will inherit the effects of those properties set on the first stage. \n\n\nFeed Forward Properties\n\n\nComponents that support feed forward have two algorithm properties that control the feed forward behavior:\n\nFEED_FORWARD_TYPE\n and \nFEED_FORWARD_TOP_QUALITY_COUNT\n.\n\n\nFEED_FORWARD_TYPE\n can be set to the following values:\n\n\n\n\nNONE\n: Feed forward is disabled (default setting).\n\n\nFRAME\n: For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored.\n\n\nSUPERSET_REGION\n: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the \nSuperset\n Region\n section for more details. For each detection in the feed forward track, search the superset\n region.\n\n\nREGION\n: For each detection in the feed forward track, search the exact detection region.\n\n\n\n\n\n\nNOTE:\n When using \nREGION\n, the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, \nREGION\n should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use\n\nSUPERSET_REGION\n instead. That will ensure that each detection region is relative to the upper left corner of the\nsuperset region for that track.\n\n\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed.\n\n\nWhen \nFEED_FORWARD_TOP_QUALITY_COUNT\n is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. 
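As a sketch of this selection, under the assumptions that the hypothetical `Detection` type holds the default `CONFIDENCE` quality value and that the input is already in frame order (quality determination and the tie-breaking rule are spelled out in the text that follows):

```cpp
#include <algorithm>
#include <vector>

struct Detection { int frame; float confidence; };  // hypothetical type

// Keep the top `topCount` detections by quality. Assumes `dets` is in frame
// order and that quality is the default CONFIDENCE value.
std::vector<Detection> TopQuality(std::vector<Detection> dets, int topCount) {
    if (topCount <= 0 || dets.size() <= static_cast<std::size_t>(topCount)) {
        return dets;  // non-positive count, or the track is already small enough
    }
    // A stable sort preserves the original frame order among equal confidences,
    // so ties favor the detections with lower frame indices.
    std::stable_sort(dets.begin(), dets.end(),
                     [](const Detection &a, const Detection &b) {
                         return a.confidence > b.confidence;
                     });
    dets.resize(topCount);
    return dets;
}
```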
Determination of quality is based on the job property\n\nQUALITY_SELECTION_PROPERTY\n, which defaults to \nCONFIDENCE\n, but may be set to a different detection property. Refer to\nthe \nQuality Selection Guide\n. If the track contains fewer than 5 detections, then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence.\n\n\nSuperset Region\n\n\nA \u201csuperset region\u201d is the smallest region of interest that contains all of the detections for all of the frames in a\ntrack. This is also known as a \u201cunion\u201d or \n\u201cminimum bounding\nrectangle\u201d\n.\n\n\n\n\nFor example, consider a track representing a person moving from the upper left to the lower right. The track consists of\n3 frames that have the following detection regions:\n\n\n\n\nFrame 0: \n(x = 10, y = 10, width = 10, height = 10)\n\n\nFrame 1: \n(x = 15, y = 15, width = 10, height = 10)\n\n\nFrame 2: \n(x = 20, y = 20, width = 10, height = 10)\n\n\n\n\nEach detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame\nregion. The superset region for the track is \n(x = 10, y = 10, width = 20, height = 20)\n, and is drawn with a dotted red\nline.\n\n\nThe major advantage of using a superset region is constant size. Some algorithms require the search space in each frame\nto be a constant size in order to successfully track objects.\n\n\nA disadvantage is that the superset region will often be larger than any specific detection region, so the search space\nis not restricted to the smallest possible size in each frame; however, in many cases the search space will be\nsignificantly smaller than the whole frame.\n\n\nIn the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a\nvideo to the lower right corner. In that case the superset region will be the entire width and height of the frame, so\n\nSUPERSET_REGION\n devolves into \nFRAME\n.\n\n\nIn a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In\nthat case \nSUPERSET_REGION\n is able to filter out 75% of the rest of the frame data. In the example shown in the above\ndiagram, \nSUPERSET_REGION\n is able to filter out 83% of the rest of the frame data.\n\n\nClick here to download the video.\n\n\nThe above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box\nthat does not. The inner bounding box represents the face detection in that frame, while the outer bounding box\nrepresents the superset region for the track associated with that face. Note that the bounding box for each face uses a\ndifferent color. The colors are not related to those used in the above diagram.\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\nWhen developing a component, the \nC++ Batch Component API\n and \nPython Batch\nComponent API\n include utilities that make it easier to support feed forward in\nyour components. They work similarly, but only the C++ tools will be discussed here. The \nMPFVideoCapture\n class is a\nwrapper around OpenCV's \ncv::VideoCapture\n class. \nMPFVideoCapture\n works very similarly to \ncv::VideoCapture\n, except\nthat it might modify the video frames based on job properties. 
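As a brief aside, the superset region defined in the Superset Region section above can be computed with OpenCV's `cv::Rect` union operator. The following is a sketch; the `MPFVideoTrack` and `MPFImageLocation` field names and the header are assumptions based on the C++ Batch Component API:

```cpp
#include <opencv2/core.hpp>
#include "MPFDetectionComponent.h"  // assumed header for MPFVideoTrack

// Compute the minimum bounding rectangle over every detection in a track.
cv::Rect GetSupersetRegion(const MPF::COMPONENT::MPFVideoTrack &track) {
    cv::Rect superset;
    for (const auto &entry : track.frame_locations) {
        const auto &loc = entry.second;
        cv::Rect region(loc.x_left_upper, loc.y_left_upper, loc.width, loc.height);
        // cv::Rect's | operator returns the smallest rect containing both.
        superset = (superset.area() == 0) ? region : (superset | region);
    }
    return superset;
}
```

For the three example regions above, the unions yield (x = 10, y = 10, width = 20, height = 20), matching the diagram.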
From the point of view of someone using\n\nMPFVideoCapture\n, these modifications are mostly transparent. \nMPFVideoCapture\n makes it look like you are reading the\noriginal video file.\n\n\nConceptually, consider generating a new video from a feed forward track. The new video would have fewer frames (unless\nthere was a detection in every frame) and possibly a smaller frame size.\n\n\nFor example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found\ndetections in frames 4, 7, and 10, then \nMPFVideoCapture\n will make it look like the video only has those 3 frames. If\nthe feed forward type is \nSUPERSET_REGION\n or \nREGION\n, and each detection is 30x50 pixels, then \nMPFVideoCapture\n will\nmake it look like the video's original resolution was 30x50 pixels.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the modified\nvideo, not the original. To make the detections relative to the original video, the\n\nMPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack)\n function must be used.\n\n\nThe general pattern for using \nMPFVideoCapture\n is as follows:\n\n\nstd::vector<MPFVideoTrack> OcvDnnDetection::GetDetections(const MPFVideoJob &job) {\n\nstd::vector<MPFVideoTrack> tracks;\n    MPFVideoCapture video_cap(job);\n\n    cv::Mat frame;\n    while (video_cap.Read(frame)) {\n        // Process frames and detections to tracks vector\n    }\n\n    for (MPFVideoTrack &track : tracks) {\n        video_cap.ReverseTransform(track);\n    }\n\n    return tracks;\n}\n\n\n\nMPFVideoCapture\n makes it look like the user is processing the original video, when in reality they are processing a\nmodified version. To avoid confusion, this means that \nMPFVideoCapture\n should always return frames that are the\nsame size because most users expect each frame of a video to be the same size.\n\n\nWhen using \nSUPERSET_REGION\n this is not an issue, since one bounding box is used for the entire track. However, when\nusing \nREGION\n, each detection can be a different size, so it is not possible for \nMPFVideoCapture\n to return frames\nthat are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of\n\nMPFVideoCapture\n, \nSUPERSET_REGION\n should usually be preferred over \nREGION\n. The \nREGION\n setting should only be used\nwith components that explicitly state they support it (e.g., OcvDnnDetection). Those components may not perform region\ntracking, so processing frames of various sizes is not a problem.\n\n\nThe \nMPFImageReader\n class is similar to \nMPFVideoCapture\n, but it works on images instead of videos. \nMPFImageReader\n\nmakes it look like the user is processing an original image, when in reality they are processing a modified version\nwhere the frame region is generated based on a detection (\nMPFImageLocation\n) fed forward from the previous stage of a\npipeline. Note that \nSUPERSET_REGION\n and \nREGION\n have the same effect when working with images. \nMPFImageReader\n also\nhas a reverse transform function.\n\n\nOpenCV DNN Component Tracking\n\n\nThe OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking\nbehavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process\nthe entire region of each frame of a video. 
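Before continuing with OpenCV DNN tracking, here is the analogous pattern for the MPFImageReader class described above, sketched under the same API assumptions as the MPFVideoCapture example (a GetImage() method returning a cv::Mat and a ReverseTransform() overload for MPFImageLocation); MyComponent is a placeholder name:

```cpp
std::vector<MPFImageLocation> MyComponent::GetDetections(const MPFImageJob &job) {
    std::vector<MPFImageLocation> locations;
    MPFImageReader image_reader(job);
    // May be cropped to the fed-forward detection region, per the job properties.
    cv::Mat image = image_reader.GetImage();

    // ... run detection on `image`, appending results to `locations` ...

    for (MPFImageLocation &location : locations) {
        // Map coordinates back to the original, unmodified image.
        image_reader.ReverseTransform(location);
    }
    return locations;
}
```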
If one or more consecutive frames have the same highest confidence\nclassification, then a new track is generated that contains those frames.\n\n\nWhen feed forward is enabled, the OpenCV DNN component will process the region of each frame of the feed forward track\naccording to the \nFEED_FORWARD_TYPE\n. It will generate one track that contains the same frames as the feed forward\ntrack. If \nFEED_FORWARD_TYPE\n is set to \nREGION\n then the OpenCV DNN track will contain (inherit) the same detection\nregions as the feed forward track. In any case, the \ndetectionProperties\n map for the detections in the OpenCV DNN track\nwill include the \nCLASSIFICATION\n entries and possibly other OpenCV DNN component properties.\n\n\nFeed Forward Pipeline Examples\n\n\nGoogLeNet Classification with MOG Motion Detection and Feed Forward Region\n\n\nFirst, create the following action:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n+ Algorithm: DNNCV\n+ MODEL_NAME: googlenet\n+ SUBTRACT_BLUE_VALUE: 104.0\n+ SUBTRACT_GREEN_VALUE: 117.0\n+ SUBTRACT_RED_VALUE: 123.0\n+ FEED_FORWARD_TYPE: REGION\n\n\n\nThen create the following task:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nCAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n\n\n\nRunning this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each\ndetection in each track will have an OpenCV DNN \nCLASSIFICATION\n entry. Each track has a 1-to-1 correspondence with a\nMOG motion track.\n\n\nRefer to \nrunMogThenCaffeFeedForwardExactRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior. Refer to \nrunMogThenCaffeFeedForwardSupersetRegionTest()\n in\nthat class for a system test that uses \nSUPERSET_REGION\n instead. Refer to \nrunMogThenCaffeFeedForwardFullFrameTest()\n\nfor a system test that uses \nFRAME\n instead.\n\n\n\n\nNOTE:\n Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To\nmitigate this, consider setting the \nMERGE_TRACKS\n, \nMIN_GAP_BETWEEN_TRACKS\n, and \nMIN_TRACK_LENGTH\n properties to\ngenerate longer motion tracks and discard short and/or spurious motion tracks.\n\n\nNOTE:\n It doesn\u2019t make sense to use \nFEED_FORWARD_TOP_QUALITY_COUNT\n on a pipeline stage that follows a MOG or\nSuBSENSE motion detection stage. That\u2019s because those motion detectors don\u2019t generate tracks with confidence values\n(\nCONFIDENCE\n being the default value for the \nQUALITY_SELECTION_PROPERTY\n job property). 
Instead,\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n could potentially be used when feeding person tracks into a face detector, for\nexample, if the detections in those person tracks have the requested \nQUALITY_SELECTION_PROPERTY\n set.\n\n\n\n\nOCV Face Detection with MOG Motion Detection and Feed Forward Superset Region\n\n\nFirst, create the following action:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n+ Algorithm: FACECV\n+ FEED_FORWARD_TYPE: SUPERSET_REGION\n\n\n\nThen create the following task:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nOCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n\n\n\nRunning this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has\na 1-to-1 correspondence with a MOG motion track.\n\n\nRefer to \nrunMogThenOcvFaceFeedForwardRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior.", "title": "Feed Forward Guide" }, { @@ -412,7 +412,7 @@ }, { "location": "/Derivative-Media-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nThis guide covers the derivative media feature, which allows users to create pipelines where a component in one of\nthe initial stages of the pipeline generates one or more derivative (aka child) media from the source (aka parent)\nmedia. A common scenario is to extract images from PDFs or other document formats. Once extracted, the Workflow Manager\n(WFM) can perform the subsequent pipeline stages on the source media (if necessary) as well as the derivative media.\nThis differs from typical pipeline execution, which only acts on one or more pieces of source media.\n\n\nComponent actions can be configured to only be performed on source media or derivative media. This is often necessary\nbecause the source media has a different media type than the derivative media, and therefore different actions are\nrequired to process each type of media. For example, PDFs are assigned the \nUNKNOWN\n media type (since the WFM is not\ndesigned to handle them in any special way), while the images extracted from a PDF are assigned the \nIMAGE\n media type.\nAn action for the TikaTextDetection component can process the \nUNKNOWN\n source media to generate \nTEXT\n tracks by\ndetecting the embedded raw character data in the PDF itself, while an action for the TesseractOCRTextDetection component\ncan process the \nIMAGE\n derivative media to generate \nTEXT\n tracks by detecting text in the image data.\n\n\nText Detection Example\n\n\nConsider the following diagram which depicts a pipeline to accomplish generating \nTEXT\n tracks for PDFs which contain\nembedded raw character data and embedded images with text:\n\n\n\n\nEach block represents a single action performed in that stage of the pipeline. 
(Technically, a pipeline consists of\ntasks executed in sequence, but in this case each task consists of only one action, so we just show the actions.)\nActions that have \nSOURCE MEDIA ONLY\n in their name have the \nSOURCE_MEDIA_ONLY\n property set to \nTRUE\n, which will\nresult in completely skipping that action for derivative media. The component associated with the action will not\nreceive sub-job messages and there will be no representation of the action being executed on derivative media in the\nJSON output object.\n\n\nSimilarly, actions that have \nDERIVATIVE MEDIA ONLY\n in their name have the \nDERIVATIVE_MEDIA_ONLY\n property set\nto \nTRUE\n, which will result in completely skipping that action for source media. Note that setting both properties\nto \nTRUE\n will result in skipping the action for both derivative and source media, which means it will never be\nexecuted. Not setting either property will result in executing the action on both source and derivative media, as you\nsee in the diagram with the \nKEYWORD TAGGING\n action.\n\n\nNote that the actions shown in the source media flow and derivative media flow are \nnot\n executed at the same time.\nThe flows are shown in different rows in the diagram to illustrate the logical separation, not to illustrate\nconcurrency. To be clear, each action in the pipeline is executed sequentially. If an action is missing from a flow it\njust means that no sub-job messages are generated for that kind of media during that stage of the pipeline. If an action\nis shown in both flows then sub-jobs will be performed on both the source and derivative media during that stage.\n\n\nTo break down each stage of this pipeline:\n\n\n\n\nTIKA IMAGE DETECTION ACTION\n: The TikaImageDetection component will extract images from PDFs (or other document\n formats) and place them in \n$MPF_HOME/share/tmp/derivative-media/\n. One \nMEDIA\n track will be generated for\n each image and it will have \nDERIVATIVE_MEDIA_TEMP_PATH\n and \nPAGE_NUM\n track properties.\n\n\nIf remote storage is enabled, the WFM will upload the objects to the object store after this action is performed.\n Refer to the \nObject Storage Guide\n for more information.\n\n\nThe WFM will perform media inspection on the images at this time.\n\n\nEach piece of derivative media will have a parent media id set to the media id value of the source media. It will\n appear as \nmedia.parentMediaId\n in the JSON output object. For source media the value will be -1.\n\n\nEach piece of derivative media will have a \nmedia.mediaMetadata\n property of \nIS_DERIVATIVE_MEDIA\n set to \nTRUE\n.\n The metadata will also contain the \nPAGE_NUM\n property.\n \n\n\nTIKA TEXT DETECTION SOURCE MEDIA ONLY ACTION\n: The TikaTextDetection component will generate \nTEXT\n tracks by\n detecting the embedded raw character data in the PDF.\n \n\n\nEAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION\n: The EastTextDetection component will generate \nTEXT REGION\n tracks\n for each text region in the extracted images.\n \n\n\nTESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION\n: The TesseractOCRTextDetection component\n will generate \nTEXT\n tracks by performing OCR on the text regions passed forward from the previous EAST action.\n \n\n\nKEYWORD TAGGING (WITH FF REGIONS) ACTION\n: The KeywordTagging component will take the \nTEXT\n tracks from the\n previous \nTIKA TEXT\n and \nTESSERACT OCR\n actions and perform keyword tagging. 
This will add the \nTAGS\n\n , \nTRIGGER_WORDS\n, and \nTRIGGER_WORDS_OFFSET\n properties to each track.\n \n\n\nOCV GENERIC MARKUP DERIVATIVE MEDIA ONLY ACTION\n: The Markup component will take the keyword-tagged \nTEXT\n tracks for\n the derivative media and draw bounding boxes on the extracted images.\n\n\n\n\nTask Merging\n\n\nThe large blue rectangles in the diagram represent tasks that are merged together. The purpose of task merging is to\nconsolidate how tracks are represented in the JSON output object by hiding redundant track information, and to make it\nappear that the behaviors of two or more actions are the result of a single algorithm.\n\n\nFor example, keyword tagging behavior is supplemental to the text detection behavior. It's more important that \nTEXT\n\ntracks are associated with the algorithm that performed text detection than the \nKEYWORDTAGGING\n algorithm. Note that in\nour pipeline only the \nKEYWORD TAGGING\n action has the \nOUTPUT_MERGE_WITH_PREVIOUS_TASK\n property set to \nTRUE\n. It has\na similar effect in the source media flow and derivative media flow.\n\n\nIn the source media flow the \nTIKA TEXT\n action is at the start of the merge chain while the \nKEYWORD TAGGING\n action is\nat the end of the merge chain. The tracks generated by the action at the end of the merge chain inherit the algorithm\nand track type from the tracks at the beginning of the merge chain. The effect is that in the JSON output object the\ntracks from the \nTIKA TEXT\n action will not be shown. Instead that action will be listed under \nTRACKS MERGED\n. The\ntracks from the \nKEYWORD TAGGING\n action will be shown with the \nTIKATEXT\n algorithm and \nTEXT\n track type:\n\n\n\"output\": {\n \"TRACKS MERGED\": [\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION#TIKA TEXT DETECTION SOURCE MEDIA ONLY ACTION\",\n \"algorithm\": \"TIKATEXT\"\n }\n ],\n \"MEDIA\": [\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION\",\n \"algorithm\": \"TIKAIMAGE\",\n \"tracks\": [ ... ]\n }\n ],\n \"TEXT\": [\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION#TIKA TEXT DETECTION SOURCE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION\",\n \"algorithm\": \"TIKATEXT\",\n \"tracks\": [ ... ]\n }\n ]\n}\n\n\n\nIn the derivative media flow the \nTESSERACT OCR\n action is at the start of the merge chain while the \nKEYWORD TAGGING\n\naction is at the end of the merge chain. The effect is that in the JSON output object the tracks from\nthe \nTESSERACT OCR\n action will not be shown. The tracks from the \nKEYWORD TAGGING\n action will be shown with\nthe \nTESSERACTOCR\n algorithm and \nTEXT\n track type:\n\n\n\"output\": {\n \"NO TRACKS\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION#OCV GENERIC MARKUP DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"MARKUPCV\"\n }\n ],\n \"TRACKS MERGED\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"TESSERACTOCR\"\n }\n ],\n \"TEXT\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION\",\n \"algorithm\": \"TESSERACTOCR\",\n \"tracks\": [ ... 
]\n }\n ],\n \"TEXT REGION\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"EAST\",\n \"tracks\": [ ... ]\n }\n ]\n}\n\n\n\nNote that a \nMARKUP\n action will never generate new tracks. It simply fills out the \nmedia.markupResult\n field in the\nJSON output object (not shown above).\n\n\nOutput Last Task Only\n\n\nIf you want to omit all tracks from the JSON output object but the respective \nTEXT\n tracks for the source and\nderivative media, then in you can also set the \nOUTPUT_LAST_TASK_ONLY\n job property to \nTRUE\n. Note that the WFM only\nconsiders tasks that use \nDETECTION\n algorithms as the final task, so \nMARKUP\n is ignored. Setting this property will\nresult in the following JSON for the source media:\n\n\n\"output\": {\n \"TRACKS SUPPRESSED\": [\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION\",\n \"algorithm\": \"TIKAIMAGE\"\n },\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION#TIKA TEXT DETECTION SOURCE MEDIA ONLY ACTION\",\n \"algorithm\": \"TIKATEXT\"\n }\n ],\n \"TEXT\": [\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION#TIKA TEXT DETECTION SOURCE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION\",\n \"algorithm\": \"TIKATEXT\", \n \"tracks\": [ ... ]\n }\n ]\n}\n\n\n\nAnd the following JSON for the derivative media:\n\n\n\"output\": {\n \"NO TRACKS\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION#OCV GENERIC MARKUP DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"MARKUPCV\"\n }\n ],\n \"TRACKS SUPPRESSED\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"EAST\"\n },\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"TESSERACTOCR\"\n }\n ],\n \"TEXT\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION\",\n \"algorithm\": \"TESSERACTOCR\",\n \"tracks\": [ ... ]\n }\n ]\n}\n\n\n\nDeveloping Media Extraction Components\n\n\nThe WFM is not limited to working only with the TikaImageDetection component. Any component can be designed to generate\nderivative media. The requirement is that it must generate \nMEDIA\n tracks, one piece of derivative media per track.\nMinimally, each track must have a \nDERIVATIVE_MEDIA_TEMP_PATH\n property set to the location of the media. By convention,\nthe media should be placed in a top-level directory of the form \n$MPF_HOME/share/tmp/derivative-media/\n. When\nthe job is done running, the media will be moved to persistent storage in \n$MPF_HOME/share/derivative-media/\n if\nremote storage is not enabled.\n\n\nSpecifically, TikaImageDetection uses paths of the\nform \n$MPF_HOME/share/tmp/derivative-media//tika-extracted//image.\n. The \n\n part ensures\nthat the results of two different actions executed within the same job on the same source media, or actions executed\nwithin the same job on different source media files, do not conflict with each other. A new \n\n is generated for\neach invocation of \nGetDetections()\n on the component.\n\n\nYour media extraction component can optionally include other track properties. These will get added to the derivative\nmedia metadata. 
For example, TikaImageDetection adds the \nPAGE_NUM\n property.\n\n\nNote that although this guide only talks about derivative images, your component can generate any kind of media. Be sure\nthat components in the subsequent pipeline stages can handle the media type detected by WFM media inspection.\n\n\nDefault Pipelines\n\n\nOpenMPF comes with some default pipelines for detecting text in documents and other pipelines for detecting faces in documents. Refer to the TikaImageDetection \ndescriptor.json\n.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nThis guide covers the derivative media feature, which allows users to create pipelines where a component in one of\nthe initial stages of the pipeline generates one or more derivative (aka child) media from the source (aka parent)\nmedia. A common scenario is to extract images from PDFs or other document formats. Once extracted, the Workflow Manager\n(WFM) can perform the subsequent pipeline stages on the source media (if necessary) as well as the derivative media.\nThis differs from typical pipeline execution, which only acts on one or more pieces of source media.\n\n\nComponent actions can be configured to only be performed on source media or derivative media. This is often necessary\nbecause the source media has a different media type than the derivative media, and therefore different actions are\nrequired to process each type of media. For example, PDFs are assigned the \nUNKNOWN\n media type (since the WFM is not\ndesigned to handle them in any special way), while the images extracted from a PDF are assigned the \nIMAGE\n media type.\nAn action for the TikaTextDetection component can process the \nUNKNOWN\n source media to generate \nTEXT\n tracks by\ndetecting the embedded raw character data in the PDF itself, while an action for the TesseractOCRTextDetection component\ncan process the \nIMAGE\n derivative media to generate \nTEXT\n tracks by detecting text in the image data.\n\n\nText Detection Example\n\n\nConsider the following diagram which depicts a pipeline to accomplish generating \nTEXT\n tracks for PDFs which contain\nembedded raw character data and embedded images with text:\n\n\n\n\nEach block represents a single action performed in that stage of the pipeline. (Technically, a pipeline consists of\ntasks executed in sequence, but in this case each task consists of only one action, so we just show the actions.)\nActions that have \nSOURCE MEDIA ONLY\n in their name have the \nSOURCE_MEDIA_ONLY\n property set to \nTRUE\n, which will\nresult in completely skipping that action for derivative media. The component associated with the action will not\nreceive sub-job messages and there will be no representation of the action being executed on derivative media in the\nJSON output object.\n\n\nSimilarly, actions that have \nDERIVATIVE MEDIA ONLY\n in their name have the \nDERIVATIVE_MEDIA_ONLY\n property set\nto \nTRUE\n, which will result in completely skipping that action for source media. Note that setting both properties\nto \nTRUE\n will result in skipping the action for both derivative and source media, which means it will never be\nexecuted. 
Not setting either property will result in executing the action on both source and derivative media, as you\nsee in the diagram with the \nKEYWORD TAGGING\n action.\n\n\nNote that the actions shown in the source media flow and derivative media flow are \nnot\n executed at the same time.\nThe flows are shown in different rows in the diagram to illustrate the logical separation, not to illustrate\nconcurrency. To be clear, each action in the pipeline is executed sequentially. If an action is missing from a flow it\njust means that no sub-job messages are generated for that kind of media during that stage of the pipeline. If an action\nis shown in both flows then sub-jobs will be performed on both the source and derivative media during that stage.\n\n\nTo break down each stage of this pipeline:\n\n\n\n\nTIKA IMAGE DETECTION ACTION\n: The TikaImageDetection component will extract images from PDFs (or other document\n formats) and place them in \n$MPF_HOME/share/tmp/derivative-media/<job id>\n. One \nMEDIA\n track will be generated for\n each image and it will have \nDERIVATIVE_MEDIA_TEMP_PATH\n and \nPAGE_NUM\n track properties.\n\n\nIf remote storage is enabled, the WFM will upload the objects to the object store after this action is performed.\n Refer to the \nObject Storage Guide\n for more information.\n\n\nThe WFM will perform media inspection on the images at this time.\n\n\nEach piece of derivative media will have a parent media id set to the media id value of the source media. It will\n appear as \nmedia.parentMediaId\n in the JSON output object. For source media the value will be -1.\n\n\nEach piece of derivative media will have a \nmedia.mediaMetadata\n property of \nIS_DERIVATIVE_MEDIA\n set to \nTRUE\n.\n The metadata will also contain the \nPAGE_NUM\n property.\n \n\n\nTIKA TEXT DETECTION SOURCE MEDIA ONLY ACTION\n: The TikaTextDetection component will generate \nTEXT\n tracks by\n detecting the embedded raw character data in the PDF.\n \n\n\nEAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION\n: The EastTextDetection component will generate \nTEXT REGION\n tracks\n for each text region in the extracted images.\n \n\n\nTESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION\n: The TesseractOCRTextDetection component\n will generate \nTEXT\n tracks by performing OCR on the text regions passed forward from the previous EAST action.\n \n\n\nKEYWORD TAGGING (WITH FF REGION) ACTION\n: The KeywordTagging component will take the \nTEXT\n tracks from the\n previous \nTIKA TEXT\n and \nTESSERACT OCR\n actions and perform keyword tagging. This will add the \nTAGS\n, \nTRIGGER_WORDS\n, and \nTRIGGER_WORDS_OFFSET\n properties to each track.\n \n\n\nOCV GENERIC MARKUP DERIVATIVE MEDIA ONLY ACTION\n: The Markup component will take the keyword-tagged \nTEXT\n tracks for\n the derivative media and draw bounding boxes on the extracted images.\n\n\n\n\nTask Merging\n\n\nThe large blue rectangles in the diagram represent tasks that are merged together. The purpose of task merging is to\nconsolidate how tracks are represented in the JSON output object by hiding redundant track information, and to make it\nappear that the behaviors of two or more actions are the result of a single algorithm.\n\n\nFor example, keyword tagging behavior is supplemental to the text detection behavior. It's more important that \nTEXT\n\ntracks are associated with the algorithm that performed text detection than the \nKEYWORDTAGGING\n algorithm. 
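This consolidation is opted into with the \nOUTPUT_MERGE_WITH_PREVIOUS_TASK\n action property; a sketch of an action that sets it (structure abbreviated; the action name is illustrative):\n\n\n{\n \"name\": \"EXAMPLE KEYWORD TAGGING ACTION\",\n \"algorithm\": \"KEYWORDTAGGING\",\n \"properties\": [\n { \"name\": \"OUTPUT_MERGE_WITH_PREVIOUS_TASK\", \"value\": \"TRUE\" }\n ]\n}\n\n\n\n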
Note that in\nour pipeline only the \nKEYWORD TAGGING\n action has the \nOUTPUT_MERGE_WITH_PREVIOUS_TASK\n property set to \nTRUE\n. It has\na similar effect in the source media flow and derivative media flow.\n\n\nIn the source media flow the \nTIKA TEXT\n action is at the start of the merge chain while the \nKEYWORD TAGGING\n action is\nat the end of the merge chain. The tracks generated by the action at the end of the merge chain inherit the algorithm\nand track type from the tracks at the beginning of the merge chain. The effect is that in the JSON output object the\ntracks from the \nTIKA TEXT\n action will not be shown. Instead that action will be listed under \nTRACKS MERGED\n. The\ntracks from the \nKEYWORD TAGGING\n action will be shown with the \nTIKATEXT\n algorithm and \nTEXT\n track type:\n\n\n\"output\": {\n \"TRACKS MERGED\": [\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION#TIKA TEXT DETECTION SOURCE MEDIA ONLY ACTION\",\n \"algorithm\": \"TIKATEXT\"\n }\n ],\n \"MEDIA\": [\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION\",\n \"algorithm\": \"TIKAIMAGE\",\n \"tracks\": [ ... ]\n }\n ],\n \"TEXT\": [\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION#TIKA TEXT DETECTION SOURCE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION\",\n \"algorithm\": \"TIKATEXT\",\n \"tracks\": [ ... ]\n }\n ]\n}\n\n\n\nIn the derivative media flow the \nTESSERACT OCR\n action is at the start of the merge chain while the \nKEYWORD TAGGING\n\naction is at the end of the merge chain. The effect is that in the JSON output object the tracks from\nthe \nTESSERACT OCR\n action will not be shown. The tracks from the \nKEYWORD TAGGING\n action will be shown with\nthe \nTESSERACTOCR\n algorithm and \nTEXT\n track type:\n\n\n\"output\": {\n \"NO TRACKS\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION#OCV GENERIC MARKUP DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"MARKUPCV\"\n }\n ],\n \"TRACKS MERGED\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"TESSERACTOCR\"\n }\n ],\n \"TEXT\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION\",\n \"algorithm\": \"TESSERACTOCR\",\n \"tracks\": [ ... ]\n }\n ],\n \"TEXT REGION\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"EAST\",\n \"tracks\": [ ... ]\n }\n ]\n}\n\n\n\nNote that a \nMARKUP\n action will never generate new tracks. It simply fills out the \nmedia.markupResult\n field in the\nJSON output object (not shown above).\n\n\nOutput Last Task Only\n\n\nIf you want to omit all tracks from the JSON output object except the respective \nTEXT\n tracks for the source and\nderivative media, then you can also set the \nOUTPUT_LAST_TASK_ONLY\n job property to \nTRUE\n. Note that the WFM only\nconsiders tasks that use \nDETECTION\n algorithms as the final task, so \nMARKUP\n is ignored. 
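For instance, the job request could include (a minimal sketch; all other request fields omitted):\n\n\n\"jobProperties\": {\n \"OUTPUT_LAST_TASK_ONLY\": \"TRUE\"\n}\n\n\n\n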
Setting this property will\nresult in the following JSON for the source media:\n\n\n\"output\": {\n \"TRACKS SUPPRESSED\": [\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION\",\n \"algorithm\": \"TIKAIMAGE\"\n },\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION#TIKA TEXT DETECTION SOURCE MEDIA ONLY ACTION\",\n \"algorithm\": \"TIKATEXT\"\n }\n ],\n \"TEXT\": [\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION#TIKA TEXT DETECTION SOURCE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION\",\n \"algorithm\": \"TIKATEXT\", \n \"tracks\": [ ... ]\n }\n ]\n}\n\n\n\nAnd the following JSON for the derivative media:\n\n\n\"output\": {\n \"NO TRACKS\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION#OCV GENERIC MARKUP DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"MARKUPCV\"\n }\n ],\n \"TRACKS SUPPRESSED\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"EAST\"\n },\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"TESSERACTOCR\"\n }\n ],\n \"TEXT\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION\",\n \"algorithm\": \"TESSERACTOCR\",\n \"tracks\": [ ... ]\n }\n ]\n}\n\n\n\nDeveloping Media Extraction Components\n\n\nThe WFM is not limited to working only with the TikaImageDetection component. Any component can be designed to generate\nderivative media. The requirement is that it must generate \nMEDIA\n tracks, one piece of derivative media per track.\nMinimally, each track must have a \nDERIVATIVE_MEDIA_TEMP_PATH\n property set to the location of the media. By convention,\nthe media should be placed in a top-level directory of the form \n$MPF_HOME/share/tmp/derivative-media/<job id>\n. When\nthe job is done running, the media will be moved to persistent storage in \n$MPF_HOME/share/derivative-media/<job id>\n if\nremote storage is not enabled.\n\n\nSpecifically, TikaImageDetection uses paths of the\nform \n$MPF_HOME/share/tmp/derivative-media/<job id>/tika-extracted/<uuid>/image.<extension>\n. The \n<uuid>\n part ensures\nthat the results of two different actions executed within the same job on the same source media, or actions executed\nwithin the same job on different source media files, do not conflict with each other. A new \n<uuid>\n is generated for\neach invocation of \nGetDetections()\n on the component.\n\n\nYour media extraction component can optionally include other track properties. These will get added to the derivative\nmedia metadata. For example, TikaImageDetection adds the \nPAGE_NUM\n property.\n\n\nNote that although this guide only talks about derivative images, your component can generate any kind of media. Be sure\nthat components in the subsequent pipeline stages can handle the media type detected by WFM media inspection.\n\n\nDefault Pipelines\n\n\nOpenMPF comes with some default pipelines for detecting text in documents and other pipelines for detecting faces in documents. Refer to the TikaImageDetection \ndescriptor.json\n.",
 "title": "Derivative Media Guide"
 },
 {
@@ -447,7 +447,7 @@
 },
 {
 "location": "/Object-Storage-Guide/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. 
Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nObject Storage Overview\n\n\nBy default, OpenMPF will write markup files, JSON output objects, and extracted artifacts to directories in\n\n$MPF_HOME/share\n. For multi-node deployments, \n$MPF_HOME/share\n points to a directory on a network share.\nMost often, the share is managed by the Network File System (NFS) protocol, although using NFS is not a requirement.\n\n\nAlternatively, OpenMPF supports writing these files to an object storage server. That may be desirable in cloud\ndeployments to better support integration between systems, and/or to consolidate file storage as a cost-saving measure.\n\n\nWhen a file cannot be uploaded to the server, the Workflow Manager will fall back to storing it in \n$MPF_HOME/share\n.\nIf and when a failure occurs, the JSON output object will contain a descriptive message in the \njobWarnings\n field.\nIf the job completes without other issues, the final status will be \nCOMPLETE_WITH_WARNINGS\n.\n\n\nCommon Object Storage Properties\n\n\nThe following system properties are common to the various types of object storage solutions that\nOpenMPF supports:\n\n\n\n\nhttp.object.storage.upload.retry.count\n\n\nThe number of times OpenMPF will attempt to upload an object to the storage server after the\n first failed attempt.\n\n\nWhen using S3, the AWS SDK's default retry strategy is used.\n\n\nWhen using NGINX, exponential back off is used between retry attempts. There is a 500ms\n delay before the first retry. The delay doubles for each subsequent retry.\n\n\n\n\n\n\n\n\nS3 Object Storage\n\n\nOpenMPF supports downloading media and uploading results to an S3 compatible\nserver such as Ceph or Minio. 
The use of S3 is controlled through the\nfollowing job properties and system properties:\n\n\n\n\nS3_ACCESS_KEY\n job property or \ns3.access.key\n system property\n\n\nThe access key that will be used when downloading and uploading to S3.\n\n\nWhen provided with \nS3_SECRET_KEY\n, media will be downloaded with S3\n authentication unless \nS3_UPLOAD_ONLY\n is true.\n\n\n\n\n\n\nS3_SECRET_KEY\n job property or \ns3.secret.key\n system property\n\n\nThe secret key that will be used when downloading and uploading to S3.\n\n\n\n\n\n\nS3_SESSION_TOKEN\n job property or \ns3.session.token\n system property\n\n\nOnly required when the S3 bucket is configured to require a session key.\n This generally occurs when multi-factor authentication is required.\n\n\nOpenMPF does not handle generating the session key.\n\n\n\n\n\n\nS3_USE_VIRTUAL_HOST\n job property or \ns3.use.virtual.host\n system property\n\n\nWhen false or not provided,\n \npath-style requests\n\n will be used.\n\n\nWhen true,\n \nvirtual hosted-style\n\n access will be used.\n\n\nWhen true, \nS3_HOST\n must also be provided.\n\n\nThe CNAME configuration described\n \nhere\n\n is not supported.\n\n\n\n\n\n\nS3_HOST\n job property or \ns3.host\n system property\n\n\nThe host of the S3 server without the bucket name.\n\n\nIf \nS3_RESULTS_BUCKET=https://bucket.s3.amazonaws.com\n, \nS3_HOST\n should be\n set to \ns3.amazonaws.com\n\n\nOnly used when \nS3_USE_VIRTUAL_HOST=true\n.\n\n\n\n\n\n\nS3_RESULTS_BUCKET\n job property or \ns3.results.bucket\n system property\n\n\nURI to bucket where result objects should be stored.\n\n\nTo disable the upload of result objects, do not provide a value for this property.\n\n\nExample when \nS3_USE_VIRTUAL_HOST=false\n: \nhttps://s3host/results_bucket\n\n\nExample when \nS3_USE_VIRTUAL_HOST=true\n: \nhttps://results_bucket.s3host\n\n\n\n\n\n\nS3_UPLOAD_ONLY\n job property or \ns3.upload.only\n system property\n\n\nWhen true, media will not be downloaded using S3 authentication.\n If \nS3_RESULTS_BUCKET\n is set, S3 authentication will be used to upload result objects.\n\n\nWhen false or not provided, S3 authentication will be used to download remote media.\n S3 authentication will also be used to upload result objects if \nS3_RESULTS_BUCKET\n is set.\n\n\nIf you want to run a job where some media is in S3 and some is hosted elsewhere,\n you can set \nS3_UPLOAD_ONLY\n to \ntrue\n as a media-specific property on the media that is hosted elsewhere.\n\n\n\n\n\n\nS3_REGION\n job property or \ns3.region\n system property\n\n\nThe \nS3 region\n\n to use when accessing S3. For example: \nus-east-1\n\n\nSome S3 compatible servers like Minio ignore this.\n\n\n\n\n\n\nS3_UPLOAD_OBJECT_KEY_PREFIX\n job property or \ns3.upload.object.key.prefix\n system property\n\n\nSpecifies a prefix to prepend to object keys when uploading to S3.\n\n\n\n\n\n\n\n\nCustom NGINX HTTP Object Storage\n\n\nOpenMPF supports a custom NGINX object storage server solution. If you're interested, please contact us.\nWe can make the server-side code available upon request.\n\n\nFor those who choose to run their own custom NGINX object storage server, please configure OpenMPF by setting\nthe \nhttp.object.storage.nginx.service.uri\n property to the URI of the NGINX server.\nThe following system properties are unique to the custom NGINX object storage solution:\n\n\n\n\nhttp.object.storage.nginx.service.uri\n\n\nEnables use of NGINX when provided.\n\n\nThe URI to the custom NGINX object storage server. 
For example: \nhttps://somehost:123543/somepath\n.\n\n\nYou must provide a valid value.\n\n\n\n\n\n\nhttp.object.storage.nginx.upload.thread.count\n\n\nThe number of threads used to upload objects to the storage server.\n\n\nIn general, the default value is sufficient.\n\n\n\n\n\n\nhttp.object.storage.nginx.upload.segment.size\n\n\nThe chunk size, in bytes, that is used to upload objects to the storage server.\n\n\nIn general, the default value is sufficient.\n\n\n\n\n\n\n\n\nThe NGINX object storage server will determine the sha256 hash for the file once it's been uploaded.\nIt then uses that hash to name the file and returns the file URI to OpenMPF.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nObject Storage Overview\n\n\nBy default, OpenMPF will write markup files, JSON output objects, and extracted artifacts to directories in\n\n$MPF_HOME/share\n. For multi-node deployments, \n$MPF_HOME/share\n points to a directory on a network share.\nMost often, the share is managed by the Network File System (NFS) protocol, although using NFS is not a requirement.\n\n\nAlternatively, OpenMPF supports writing these files to an object storage server. That may be desirable in cloud\ndeployments to better support integration between systems, and/or to consolidate file storage as a cost-saving measure.\n\n\nWhen a file cannot be uploaded to the server, the Workflow Manager will fall back to storing it in \n$MPF_HOME/share\n.\nIf and when a failure occurs, the JSON output object will contain a descriptive message in the \njobWarnings\n field.\nIf the job completes without other issues, the final status will be \nCOMPLETE_WITH_WARNINGS\n.\n\n\nCommon Object Storage Properties\n\n\nThe following system properties are common to the various types of object storage solutions that\nOpenMPF supports:\n\n\n\n\nhttp.object.storage.upload.retry.count\n\n\nThe number of times OpenMPF will attempt to upload an object to the storage server after the\n first failed attempt.\n\n\nWhen using S3, the AWS SDK's default retry strategy is used.\n\n\nWhen using NGINX, exponential back off is used between retry attempts. There is a 500ms\n delay before the first retry. The delay doubles for each subsequent retry.\n\n\n\n\n\n\n\n\nS3 Object Storage\n\n\nOpenMPF supports downloading media and uploading results to an S3 compatible\nserver such as Ceph or Minio. 
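For orientation, a minimal sketch of the job properties needed to upload results to an S3 bucket (placeholder values; each property is described below):\n\n\n\"jobProperties\": {\n \"S3_ACCESS_KEY\": \"xxxxxx\",\n \"S3_SECRET_KEY\": \"xxxxxx\",\n \"S3_RESULTS_BUCKET\": \"https://s3host/results_bucket\"\n}\n\n\n\n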
The use of S3 is controlled through the\nfollowing job properties and system properties:\n\n\n\n\nS3_ACCESS_KEY\n job property or \ns3.access.key\n system property\n\n\nThe access key that will be used when downloading and uploading to S3.\n\n\nWhen provided with \nS3_SECRET_KEY\n, media will be downloaded with S3\n authentication unless \nS3_UPLOAD_ONLY\n is true.\n\n\n\n\n\n\nS3_SECRET_KEY\n job property or \ns3.secret.key\n system property\n\n\nThe secret key that will be used when downloading and uploading to S3.\n\n\n\n\n\n\nS3_SESSION_TOKEN\n job property or \ns3.session.token\n system property\n\n\nOnly required when the S3 bucket is configured to require a session key.\n This generally occurs when multi-factor authentication is required.\n\n\nOpenMPF does not handle generating the session key.\n\n\n\n\n\n\nS3_USE_VIRTUAL_HOST\n job property or \ns3.use.virtual.host\n system property\n\n\nWhen false or not provided,\n \npath-style requests\n\n will be used.\n\n\nWhen true,\n \nvirtual hosted-style\n\n access will be used.\n\n\nWhen true, \nS3_HOST\n must also be provided.\n\n\nThe CNAME configuration described\n \nhere\n\n is not supported.\n\n\n\n\n\n\nS3_HOST\n job property or \ns3.host\n system property\n\n\nThe host of the S3 server without the bucket name.\n\n\nIf \nS3_RESULTS_BUCKET=https://bucket.s3.amazonaws.com\n, \nS3_HOST\n should be\n set to \ns3.amazonaws.com\n\n\nOnly used when \nS3_USE_VIRTUAL_HOST=true\n.\n\n\n\n\n\n\nS3_RESULTS_BUCKET\n job property or \ns3.results.bucket\n system property\n\n\nURI to bucket where result objects should be stored.\n\n\nTo disable the upload of result objects, do not provide a value for this property.\n\n\nExample when \nS3_USE_VIRTUAL_HOST=false\n: \nhttps://s3host/results_bucket\n\n\nExample when \nS3_USE_VIRTUAL_HOST=true\n: \nhttps://results_bucket.s3host\n\n\n\n\n\n\nS3_UPLOAD_ONLY\n job property or \ns3.upload.only\n system property\n\n\nWhen true, media will not be downloaded using S3 authentication.\n If \nS3_RESULTS_BUCKET\n is set, S3 authentication will be used to upload result objects.\n\n\nWhen false or not provided, S3 authentication will be used to download remote media.\n S3 authentication will also be used to upload result objects if \nS3_RESULTS_BUCKET\n is set.\n\n\nIf you want to run a job where some media is in S3 and some is hosted elsewhere,\n you can set \nS3_UPLOAD_ONLY\n to \ntrue\n as a media-specific property on the media that is hosted elsewhere.\n\n\n\n\n\n\nS3_REGION\n job property or \ns3.region\n system property\n\n\nThe \nS3 region\n\n to use when accessing S3. For example: \nus-east-1\n\n\nSome S3 compatible servers like Minio ignore this.\n\n\n\n\n\n\nS3_UPLOAD_OBJECT_KEY_PREFIX\n job property or \ns3.upload.object.key.prefix\n system property\n\n\nSpecifies a prefix to prepend to object keys when uploading to S3.\n\n\n\n\n\n\n\n\nCustom NGINX HTTP Object Storage\n\n\nOpenMPF supports a custom NGINX object storage server solution. If you're interested, please contact us.\nWe can make the server-side code available upon request.\n\n\nFor those who choose to run their own custom NGINX object storage server, please configure OpenMPF by setting\nthe \nhttp.object.storage.nginx.service.uri\n property to the URI of the NGINX server.\nThe following system properties are unique to the custom NGINX object storage solution:\n\n\n\n\nhttp.object.storage.nginx.service.uri\n\n\nEnables use of NGINX when provided.\n\n\nThe URI to the custom NGINX object storage server. 
For example: \nhttps://somehost:123543/somepath\n.\n\n\nYou must provide a valid value.\n\n\n\n\n\n\nhttp.object.storage.nginx.upload.thread.count\n\n\nThe number of threads used to upload objects to the storage server.\n\n\nIn general, the default value is sufficient.\n\n\n\n\n\n\nhttp.object.storage.nginx.upload.segment.size\n\n\nThe chunk size, in bytes, that is used to upload objects to the storage server.\n\n\nIn general, the default value is sufficient.\n\n\n\n\n\n\n\n\nThe NGINX object storage server will determine the sha256 hash for the file once it's been uploaded.\nIt then uses that hash to name the file and returns the file URI to OpenMPF.",
 "title": "Object Storage Guide"
 },
 {
@@ -472,7 +472,7 @@
 },
 {
 "location": "/Markup-Guide/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nOverview\n\n\nOpenMPF provides a Markup component that can be used to draw bounding boxes and labels on images and videos. The\ncomponent provides one task called \nOCV GENERIC MARKUP TASK\n that can be added to the end of any image and/or video\npipeline. By default, many other OpenMPF components provide \n* (WITH MARKUP)\n pipelines that use this task. Note that\nthe Markup component will not appear in the list of components in the Component Registration web UI because it's a core\nfeature of OpenMPF.\n\n\nConfiguration\n\n\nThe following properties can be set as job properties or algorithm properties on the \nMARKUPCV\n algorithm. Also, the\ndefault values can be changed by setting the system property listed for each:\n\n\n\n\nMARKUP_LABELS_ENABLED\n\n\nSystem property: \nmarkup.labels.enabled\n\n\nDefault value: \ntrue\n\n\nIf true, add a label to each detection box.\n\n\n\n\n\n\nMARKUP_LABELS_ALPHA\n\n\nSystem property: \nmarkup.labels.alpha\n\n\nDefault value: \n0.5\n\n\nValue in range [0.0, 1.0] that specifies how transparent the labels and frame number overlay should be. 0.0 is invisible (not recommended) and 1.0 is fully opaque.\n\n\n\n\n\n\nMARKUP_LABELS_FROM_DETECTIONS\n\n\nSystem property: \nmarkup.labels.from.detections\n\n\nDefault value: \nfalse\n\n\nIf true, use detection-level details to populate the bounding box labels. Otherwise, use track-level details.\n\n\n\n\n\n\nMARKUP_LABELS_TRACK_INDEX_ENABLED\n\n\nSystem property: \nmarkup.labels.track.index.enabled\n\n\nDefault value: \ntrue\n\n\nIf true, add the track index to the start of every bounding box label.\n\n\n\n\n\n\nMARKUP_LABELS_TEXT_PROP_TO_SHOW\n\n\nSystem property: \nmarkup.labels.text.prop.to.show\n\n\nDefault value: \nCLASSIFICATION\n\n\nName of the text property to show in the label before the numeric property. If using track-level details, and this property is not present at the track level, then the detection property for the track's exemplar will be used. Leave empty to omit.\n\n\n\n\n\n\nMARKUP_TEXT_LABEL_MAX_LENGTH\n\n\nSystem property: \nmarkup.labels.text.max.length\n\n\nDefault value: \n10\n\n\nThe maximum length of the label selected by \nMARKUP_LABELS_TEXT_PROP_TO_SHOW\n.\n If the label is longer than the limit, characters after the limit will not be displayed.\n\n\n\n\n\n\nMARKUP_LABELS_NUMERIC_PROP_TO_SHOW\n\n\nSystem property: \nmarkup.labels.numeric.prop.to.show\n\n\nDefault value: \nCONFIDENCE\n\n\nName of the numeric property to show in the label after the text property. 
If using track-level details, and this property is not present at the track level, then the detection property for the track's exemplar will be used. Leave empty to omit. Set to CONFIDENCE to use the confidence value.\n\n\nNumeric values are displayed with a precision of 3 decimal places in the label.\n\n\n\n\n\n\nMARKUP_LABELS_CHOOSE_SIDE_ENABLED\n\n\nSystem property: \nmarkup.labels.choose.side.enabled\n\n\nDefault value: \ntrue\n\n\nLabels will always snap to the top-most corner of the box. If true, snap the label to the side of the corner that produces the least amount of overhang. If false, always show the label on the right side of the corner.\n\n\n\n\n\n\nMARKUP_BORDER_ENABLED\n\n\nSystem property: \nmarkup.border.enabled\n\n\nDefault value: \nfalse\n\n\nIf true, generate the marked-up frame with a black border. Can be useful if boxes or labels extend beyond frame boundaries.\n\n\n\n\n\n\nMARKUP_VIDEO_EXEMPLAR_ICONS_ENABLED\n\n\nSystem property: \nmarkup.video.exemplar.icons.enabled\n\n\nDefault value: \ntrue\n\n\nIf true, and labels are enabled, use an icon to indicate the exemplar detection for each track.\n\n\nThe icons are only used in video markup. This is because every detection is an exemplar in image markup.\n\n\n\n\n\n\nMARKUP_VIDEO_BOX_SOURCE_ICONS_ENABLED\n\n\nSystem property: \nmarkup.video.box.source.icons.enabled\n\n\nDefault value: \nfalse\n\n\nIf true, and labels are enabled, use icons to indicate the source of each bounding box. For example, if the box is the result of an algorithm detection, tracking performing gap fill, or Workflow Manager animation.\n\n\n\n\n\n\nMARKUP_VIDEO_MOVING_OBJECT_ICONS_ENABLED\n\n\nSystem property: \nmarkup.video.moving.object.icons.enabled\n\n\nDefault value: \nfalse\n\n\nIf true, and labels are enabled, use icons to indicate if the object is considered moving or stationary. If using track-level details, and the \nMOVING\n property is not present at the track level, then the property for the track's exemplar will be used.\n\n\n\n\n\n\nMARKUP_VIDEO_FRAME_NUMBERS_ENABLED\n\n\nSystem property: \nmarkup.video.frame.numbers.enabled\n\n\nDefault value: \ntrue\n\n\nIf true, add the frame number to each marked-up frame. This setting is independent of \nMARKUP_LABELS_ENABLED\n.\n\n\n\n\n\n\nMARKUP_VIDEO_ENCODER\n\n\nSystem property: \nmarkup.video.encoder\n\n\nDefault value: \nvp9\n\n\nUse \nvp9\n to generate VP9-encoded \n.webm\n video files. Use \nh264\n to generate H.264-encoded \n.mp4\n files. Use \nmjpeg\n to generate MJPEG-encoded \n.avi\n files. The \n.webm\n and \n.mp4\n files can display in most browsers, and are higher quality, but take longer to generate.\n\n\nPlease review the \nUsage Royalties\n section of the License and Distribution page before using the H.264 encoder for commercial purposes.\n\n\n\n\n\n\nMARKUP_VIDEO_VP9_CRF\n\n\nSystem property: \nmarkup.video.vp9.crf\n\n\nDefault value: \n31\n\n\nThe CRF value can be from 0-63. Lower values mean better quality. Recommended values range from 15-35, with 31 being recommended for 1080p HD video. This property is only used if generating VP9-encoded \n.webm\n files\n\n\n\n\n\n\nMARKUP_ANIMATION_ENABLED\n\n\nSystem property: \nmarkup.video.animation.enabled\n\n\nDefault value: \nfalse\n\n\nIf true, draw bounding boxes to fill in the gaps between detections in each track. 
Interpolate size and position.\n\n\n\n\n\n\n\n\nVideo Markup Icons\n\n\n\n\n\n\n\n\nIcon\n\n\nMeaning\n\n\nSetting\n\n\n\n\n\n\n\n\n\n\n\n\nTrack exemplar\n\n\nMARKUP_VIDEO_EXEMPLAR_ICONS_ENABLED\n\n\n\n\n\n\n\n\nTrack or detection is moving\n\n\nMARKUP_VIDEO_MOVING_OBJECT_ICONS_ENABLED\n\n\n\n\n\n\n\n\nTrack or detection is stationary\n\n\nMARKUP_VIDEO_MOVING_OBJECT_ICONS_ENABLED\n\n\n\n\n\n\n\n\nDetection is the direct result of a component detection algorithm\n\n\nMARKUP_VIDEO_BOX_SOURCE_ICONS_ENABLED\n\n\n\n\n\n\n\n\nDetection is the result of a component performing tracking in an attempt to fill in the gaps between algorithm detections\n\n\nMARKUP_VIDEO_BOX_SOURCE_ICONS_ENABLED\n\n\n\n\n\n\n\n\nDetection is the result of the Workflow Manager interpolating (animating) the size and position of the bounding box to fill gaps between detections in the track\n\n\nMARKUP_VIDEO_BOX_SOURCE_ICONS_ENABLED\n\n\n\n\n\n\n\n\nVideo Markup Examples\n\n\n\n\nAbove we show frame 94 of a marked up video. Frame numbers are enabled so the frame number is shown in the top-right corner. Exemplar icons are enabled, and since this detection is the track exemplar a star icon is shown in the label. Also, the label shows the track's \nCLASSIFICATION\n property followed by the track confidence. All of the examples shown in this section will display track-level information because \nMARKUP_LABELS_FROM_DETECTIONS=false\n. The circle represents the top-left corner of the detection. See \nthis section\n of the C++ Batch Component API for more information on flip and rotation.\n\n\n\n\nAbove we show frame 25 of the marked up video. This time we configured markup to show a black border around the video frame. This is useful when the label extends beyond the edge of the original video frame, as shown here. Also, this time we configured markup to show icons indicating if the track is moving or stationary. The fast-forward icon at the start of the label indicates that this track is moving. Additionally, this time we configured markup to show icons indicating the bounding box source. The magnifying glass icon after the fast-forward icon indicates that this detection is a direct result of the component's detection algorithm. Note that the magnifying glass icon will be replaced with the star icon for exemplars.\n\n\n\n\nThe frame above shows a movie camera icon to indicate that the detection is the result of the Workflow Manager (WFM) interpolating (animating) the size and position of the bounding box to fill gaps between detections in the track. Considering how blurry the person appears in this frame, it's not surprising that the algorithm could not detect him. If you perform a job with \nFRAME_INTERVAL\n greater than one, or otherwise perform frame skipping, then all bounding boxes in skipped frames will be the result of WFM animation. Note that the classification and confidence values are simply carried over from the last detection that was not the result of WFM animation.\n\n\n\n\nThe frame above shows a paper clip icon to indicate that the detection is the result of the component performing tracking in an attempt to fill in the gaps between algorithm detections. In general, these detections are more trustworthy than the ones resulting from WFM animation, but not as trustworthy as the ones directly resulting from the detection algorithm.\n\n\n\n\nThe frame above shows the person detection in addition to a new skis detection. 
The confidence for the latter is lower, which is good considering the algorithm misclassified the person's shadow as skis. The skis track is only a few frames long, so the WFM determined it was a non-moving (stationary) track. This is represented by the anchor icon at the start of the label. Also, notice that the labels are semi-transparent. This allows you to read labels and see frame content that would otherwise be hidden if the labels were completely opaque. Note that you may want to set \nMARKUP_LABELS_ALPHA\n to \n0.75\n or greater when using the \nmjpeg\n encoder.\n\n\nVideo Encoder Considerations\n\n\nPerforming markup on an image will always generate a \n.png\n file. Performing markup on a video will generate a video file based on the value of \nMARKUP_VIDEO_ENCODER\n. The \nvp9\n, \nh264\n, and \nmjpeg\n encoders are supported.\n\n\nThe \nvp9\n and \nh264\n encoders serve the same purpose in that both formats can be played in the WFM web UI in most web browsers, while the \n.avi\n files resulting from the \nmjpeg\n format must be downloaded and played using a separate program like \nVLC\n or \nmpv\n. In general, \nh264\n encoding is much faster than \nvp9\n encoding, so you may want to use it instead of \nvp9\n. Please be aware that you may be required to pay \nUsage Royalties\n when using the \nh264\n encoder for commercial purposes.\n\n\nThe \nmjpeg\n encoder is faster than both the \nvp9\n and \nh264\n encoders, but produces lower quality video. Specifically, the label text is not as clear. You may want to use it when developing components or marking up large video files.\n\n\nTo give you a sense of performance, here are the results of a very limited batch of tests. Note that if you choose to use the \nvp9\n encoder, you can increase the CRF value to reduce processing time at the cost of reduced video quality.\n\n\ninput media: 23 frames @ 3840x2160:\n\n\n\n\n\n\n\n\nEncoder\n\n\nCRF\n\n\nTime (secs)\n\n\nNotes\n\n\n\n\n\n\n\n\n\n\nmjpeg\n\n\n\n\n6.94\n\n\n\n\n\n\n\n\nh264\n\n\n\n\n9.478\n\n\n\n\n\n\n\n\nvp9\n\n\n60\n\n\n13.194\n\n\n\n\n\n\n\n\nvp9\n\n\n31\n\n\n21.431\n\n\n\n\n\n\n\n\n\n\ninput media: 509 frames @ 640x480:\n\n\n\n\n\n\n\n\nEncoder\n\n\nCRF\n\n\nTime (secs)\n\n\nNotes\n\n\n\n\n\n\n\n\n\n\nmjpeg\n\n\n\n\n6.927\n\n\nalpha 0.5 is hard to read when blended with dark background; 0.75 does better\n\n\n\n\n\n\nh264\n\n\n\n\n11.259\n\n\n\n\n\n\n\n\nvp9\n\n\n60\n\n\n35.945\n\n\ntext not acceptable due to low resolution\n\n\n\n\n\n\nvp9\n\n\n31\n\n\n52.178",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nOverview\n\n\nOpenMPF provides a Markup component that can be used to draw bounding boxes and labels on images and videos. The\ncomponent provides one task called \nOCV GENERIC MARKUP TASK\n that can be added to the end of any image and/or video\npipeline. By default, many other OpenMPF components provide \n* (WITH MARKUP)\n pipelines that use this task. Note that\nthe Markup component will not appear in the list of components in the Component Registration web UI because it's a core\nfeature of OpenMPF.\n\n\nConfiguration\n\n\nThe following properties can be set as job properties or algorithm properties on the \nMARKUPCV\n algorithm. 
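For example, a job request could override a couple of them for the \nMARKUPCV\n algorithm like this (a sketch; any of the properties below can be used):\n\n\n\"algorithmProperties\": {\n \"MARKUPCV\": {\n \"MARKUP_LABELS_ALPHA\": \"0.75\",\n \"MARKUP_BORDER_ENABLED\": \"TRUE\"\n }\n}\n\n\n\n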
Also, the\ndefault values can be changed by setting the system property listed for each:\n\n\n\n\nMARKUP_LABELS_ENABLED\n\n\nSystem property: \nmarkup.labels.enabled\n\n\nDefault value: \ntrue\n\n\nIf true, add a label to each detection box.\n\n\n\n\n\n\nMARKUP_LABELS_ALPHA\n\n\nSystem property: \nmarkup.labels.alpha\n\n\nDefault value: \n0.5\n\n\nValue in range [0.0, 1.0] that specifies how transparent the labels and frame number overlay should be. 0.0 is invisible (not recommended) and 1.0 is fully opaque.\n\n\n\n\n\n\nMARKUP_LABELS_FROM_DETECTIONS\n\n\nSystem property: \nmarkup.labels.from.detections\n\n\nDefault value: \nfalse\n\n\nIf true, use detection-level details to populate the bounding box labels. Otherwise, use track-level details.\n\n\n\n\n\n\nMARKUP_LABELS_TRACK_INDEX_ENABLED\n\n\nSystem property: \nmarkup.labels.track.index.enabled\n\n\nDefault value: \ntrue\n\n\nIf true, add the track index to the start of every bounding box label.\n\n\n\n\n\n\nMARKUP_LABELS_TEXT_PROP_TO_SHOW\n\n\nSystem property: \nmarkup.labels.text.prop.to.show\n\n\nDefault value: \nCLASSIFICATION\n\n\nName of the text property to show in the label before the numeric property. If using track-level details, and this property is not present at the track level, then the detection property for the track's exemplar will be used. Leave empty to omit.\n\n\n\n\n\n\nMARKUP_TEXT_LABEL_MAX_LENGTH\n\n\nSystem property: \nmarkup.labels.text.max.length\n\n\nDefault value: \n10\n\n\nThe maximum length of the label selected by \nMARKUP_LABELS_TEXT_PROP_TO_SHOW\n.\n If the label is longer than the limit, characters after the limit will not be displayed.\n\n\n\n\n\n\nMARKUP_LABELS_NUMERIC_PROP_TO_SHOW\n\n\nSystem property: \nmarkup.labels.numeric.prop.to.show\n\n\nDefault value: \nCONFIDENCE\n\n\nName of the numeric property to show in the label after the text property. If using track-level details, and this property is not present at the track level, then the detection property for the track's exemplar will be used. Leave empty to omit. Set to CONFIDENCE to use the confidence value.\n\n\nNumeric values are displayed with a precision of 3 decimal places in the label.\n\n\n\n\n\n\nMARKUP_LABELS_CHOOSE_SIDE_ENABLED\n\n\nSystem property: \nmarkup.labels.choose.side.enabled\n\n\nDefault value: \ntrue\n\n\nLabels will always snap to the top-most corner of the box. If true, snap the label to the side of the corner that produces the least amount of overhang. If false, always show the label on the right side of the corner.\n\n\n\n\n\n\nMARKUP_BORDER_ENABLED\n\n\nSystem property: \nmarkup.border.enabled\n\n\nDefault value: \nfalse\n\n\nIf true, generate the marked-up frame with a black border. Can be useful if boxes or labels extend beyond frame boundaries.\n\n\n\n\n\n\nMARKUP_VIDEO_EXEMPLAR_ICONS_ENABLED\n\n\nSystem property: \nmarkup.video.exemplar.icons.enabled\n\n\nDefault value: \ntrue\n\n\nIf true, and labels are enabled, use an icon to indicate the exemplar detection for each track.\n\n\nThe icons are only used in video markup. This is because every detection is an exemplar in image markup.\n\n\n\n\n\n\nMARKUP_VIDEO_BOX_SOURCE_ICONS_ENABLED\n\n\nSystem property: \nmarkup.video.box.source.icons.enabled\n\n\nDefault value: \nfalse\n\n\nIf true, and labels are enabled, use icons to indicate the source of each bounding box. 
For example, if the box is the result of an algorithm detection, tracking performing gap fill, or Workflow Manager animation.\n\n\n\n\n\n\nMARKUP_VIDEO_MOVING_OBJECT_ICONS_ENABLED\n\n\nSystem property: \nmarkup.video.moving.object.icons.enabled\n\n\nDefault value: \nfalse\n\n\nIf true, and labels are enabled, use icons to indicate if the object is considered moving or stationary. If using track-level details, and the \nMOVING\n property is not present at the track level, then the property for the track's exemplar will be used.\n\n\n\n\n\n\nMARKUP_VIDEO_FRAME_NUMBERS_ENABLED\n\n\nSystem property: \nmarkup.video.frame.numbers.enabled\n\n\nDefault value: \ntrue\n\n\nIf true, add the frame number to each marked-up frame. This setting is independent of \nMARKUP_LABELS_ENABLED\n.\n\n\n\n\n\n\nMARKUP_VIDEO_ENCODER\n\n\nSystem property: \nmarkup.video.encoder\n\n\nDefault value: \nvp9\n\n\nUse \nvp9\n to generate VP9-encoded \n.webm\n video files. Use \nh264\n to generate H.264-encoded \n.mp4\n files. Use \nmjpeg\n to generate MJPEG-encoded \n.avi\n files. The \n.webm\n and \n.mp4\n files can display in most browsers, and are higher quality, but take longer to generate.\n\n\nPlease review the \nUsage Royalties\n section of the License and Distribution page before using the H.264 encoder for commercial purposes.\n\n\n\n\n\n\nMARKUP_VIDEO_VP9_CRF\n\n\nSystem property: \nmarkup.video.vp9.crf\n\n\nDefault value: \n31\n\n\nThe CRF value can be from 0-63. Lower values mean better quality. Recommended values range from 15-35, with 31 being recommended for 1080p HD video. This property is only used if generating VP9-encoded \n.webm\n files\n\n\n\n\n\n\nMARKUP_ANIMATION_ENABLED\n\n\nSystem property: \nmarkup.video.animation.enabled\n\n\nDefault value: \nfalse\n\n\nIf true, draw bounding boxes to fill in the gaps between detections in each track. Interpolate size and position.\n\n\n\n\n\n\n\n\nVideo Markup Icons\n\n\n\n\n\n\n\n\nIcon\n\n\nMeaning\n\n\nSetting\n\n\n\n\n\n\n\n\n\n\n\n\nTrack exemplar\n\n\nMARKUP_VIDEO_EXEMPLAR_ICONS_ENABLED\n\n\n\n\n\n\n\n\nTrack or detection is moving\n\n\nMARKUP_VIDEO_MOVING_OBJECT_ICONS_ENABLED\n\n\n\n\n\n\n\n\nTrack or detection is stationary\n\n\nMARKUP_VIDEO_MOVING_OBJECT_ICONS_ENABLED\n\n\n\n\n\n\n\n\nDetection is the direct result of a component detection algorithm\n\n\nMARKUP_VIDEO_BOX_SOURCE_ICONS_ENABLED\n\n\n\n\n\n\n\n\nDetection is the result of a component performing tracking in an attempt to fill in the gaps between algorithm detections\n\n\nMARKUP_VIDEO_BOX_SOURCE_ICONS_ENABLED\n\n\n\n\n\n\n\n\nDetection is the result of the Workflow Manager interpolating (animating) the size and position of the bounding box to fill gaps between detections in the track\n\n\nMARKUP_VIDEO_BOX_SOURCE_ICONS_ENABLED\n\n\n\n\n\n\n\n\nVideo Markup Examples\n\n\n\n\nAbove we show frame 94 of a marked up video. Frame numbers are enabled so the frame number is shown in the top-right corner. Exemplar icons are enabled, and since this detection is the track exemplar a star icon is shown in the label. Also, the label shows the track's \nCLASSIFICATION\n property followed by the track confidence. All of the examples shown in this section will display track-level information because \nMARKUP_LABELS_FROM_DETECTIONS=false\n. The circle represents the top-left corner of the detection. See \nthis section\n of the C++ Batch Component API for more information on flip and rotation.\n\n\n\n\nAbove we show frame 25 of the marked up video. 
This time we configured markup to show a black border around the video frame. This is useful when the label extends beyond the edge of the original video frame, as shown here. Also, this time we configured markup to show icons indicating if the track is moving or stationary. The fast-forward icon at the start of the label indicates that this track is moving. Additionally, this time we configured markup to show icons indicating the bounding box source. The magnifying glass icon after the fast-forward icon indicates that this detection is a direct result of the component's detection algorithm. Note that the magnifying glass icon will be replaced with the star icon for exemplars.\n\n\n\n\nThe frame above shows a movie camera icon to indicate that the detection is the result of the Workflow Manager (WFM) interpolating (animating) the size and position of the bounding box to fill gaps between detections in the track. Considering how blurry the person appears in this frame, it's not surprising that the algorithm could not detect him. If you perform a job with \nFRAME_INTERVAL\n greater than one, or otherwise perform frame skipping, then all bounding boxes in skipped frames will be the result of WFM animation. Note that the classification and confidence values are simply carried over from the last detection that was not the result of WFM animation.\n\n\n\n\nThe frame above shows a paper clip icon to indicate that the detection is the result of the component performing tracking in an attempt to fill in the gaps between algorithm detections. In general, these detections are more trustworthy than the ones resulting from WFM animation, but not as trustworthy as the ones directly resulting from the detection algorithm.\n\n\n\n\nThe frame above shows the person detection in addition to a new skis detection. The confidence for the latter is lower, which is good considering the algorithm misclassified the person's shadow as skis. The skis track is only a few frames long, so the WFM determined it was a non-moving (stationary) track. This is represented by the anchor icon at the start of the label. Also, notice that the labels are semi-transparent. This allows you to read labels and see frame content that would otherwise be hidden if the labels were completely opaque. Note that you may want to set \nMARKUP_LABELS_ALPHA\n to \n0.75\n or greater when using the \nmjpeg\n encoder.\n\n\nVideo Encoder Considerations\n\n\nPerforming markup on an image will always generate a \n.png\n file. Performing markup on a video will generate a video file based on the value of \nMARKUP_VIDEO_ENCODER\n. The \nvp9\n, \nh264\n, and \nmjpeg\n encoders are supported.\n\n\nThe \nvp9\n and \nh264\n encoders serve the same purpose in that both formats can be played in the WFM web UI in most web browsers, while the \n.avi\n files resulting from the \nmjpeg\n format must be downloaded and played using a separate program like \nVLC\n or \nmpv\n. In general, \nh264\n encoding is much faster than \nvp9\n encoding, so you may want to use it instead of \nvp9\n. Please be aware that you may be required to pay \nUsage Royalties\n when using the \nh264\n encoder for commercial purposes.\n\n\nThe \nmjpeg\n encoder is faster than both the \nvp9\n and \nh264\n encoders, but produces lower quality video. Specifically, the label text is not as clear. You may want to use it when developing components or marking up large video files.\n\n\nTo give you a sense of performance, here are the results of a very limited batch of tests. 
Note that if you choose to use the \nvp9\n encoder, you can increase the CRF value to reduce processing time at the cost of reduced video quality.\n\n\ninput media: 23 frames @ 3840x2160:\n\n\n\n\n\n\n\n\nEncoder\n\n\nCRF\n\n\nTime (secs)\n\n\nNotes\n\n\n\n\n\n\n\n\n\n\nmjpeg\n\n\n\n\n6.94\n\n\n\n\n\n\n\n\nh264\n\n\n\n\n9.478\n\n\n\n\n\n\n\n\nvp9\n\n\n60\n\n\n13.194\n\n\n\n\n\n\n\n\nvp9\n\n\n31\n\n\n21.431\n\n\n\n\n\n\n\n\n\n\ninput media: 509 frames @ 640x480:\n\n\n\n\n\n\n\n\nEncoder\n\n\nCRF\n\n\nTime (secs)\n\n\nNotes\n\n\n\n\n\n\n\n\n\n\nmjpeg\n\n\n\n\n6.927\n\n\nalpha 0.5 is hard to read when blended with dark background; 0.75 does better\n\n\n\n\n\n\nh264\n\n\n\n\n11.259\n\n\n\n\n\n\n\n\nvp9\n\n\n60\n\n\n35.945\n\n\ntext not acceptable due to low resolution\n\n\n\n\n\n\nvp9\n\n\n31\n\n\n52.178", "title": "Markup Guide" }, { @@ -502,7 +502,7 @@ }, { "location": "/TiesDb-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract,\nand is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023\nThe MITRE Corporation. All Rights Reserved.\n\n\nTiesDb Overview\n\n\nRefer to \nhttps://github.com/Noblis/ties-lib\n for more information on the Triage Import Export\nSchema (TIES). For each piece of media, we create one or more TIES\n\"supplementalDescription (Data Object)\" entries in the database, one for each\nanalytic (algorithm) run on the media. In general, a \"supplementalDescription\" is a kind of TIES\n\"assertion\", which is used to represent metadata about the media object. In our case it\nrepresents the detection and track information in the OpenMPF JSON output object. Workflow Manager\ncan be configured to check TiesDb for a\n\nsupplemental it previously created\n and\n\nuse the results from that previous job to avoid re-running the job.\n\nWorkflow Manager can also be configured to \ncopy results to a different S3 bucket\n\nwhen a matching job is found.\n\n\nConfiguration\n\n\n\n\nTIES_DB_URL\n job property or \nties.db.url\n system property\n\n\nWhen provided, information about completed jobs will be sent to the specified TiesDb server.\n\n\nFor example: \nhttps://tiesdb.example.com\n\n\n\n\n\n\nSKIP_TIES_DB_CHECK\n job property or \nties.db.skip.check\n system property\n\n\nWhen true, TiesDb won't be checked for a compatible job before processing media.\n\n\n\n\n\n\nTIES_DB_S3_COPY_ENABLED\n job property or \nties.db.s3.copy.enabled\n system property\n\n\nWhen true and a job is skipped because a compatible job is found in TiesDb, the results\n from the previous job will be copied to a different S3 bucket. Copying results will always\n result in a new JSON output object, even if using the same S3 location as the previous job.\n\n\n\n\n\n\nTIES_DB_COPY_SRC_S3_ACCESS_KEY\n job property or \nties.db.copy.src.s3.access.key\n system property\n\n\nIf a job is skipped because a compatible job was found in TiesDb, this is the S3 access key\n that will be used when getting the results from S3. If not provided, defaults to the value of\n \nS3_ACCESS_KEY\n.\n\n\n\n\n\n\nTIES_DB_COPY_SRC_S3_SECRET_KEY\n job property or \nties.db.copy.src.s3.secret.key\n system property\n\n\nIf a job is skipped because a compatible job was found in TiesDb, this is the S3 secret key\n that will be used when getting the results from S3. 
If not provided, defaults to the value of\n \nS3_SECRET_KEY\n.\n\n\n\n\n\n\nTIES_DB_COPY_SRC_S3_SESSION_TOKEN\n job property or \nties.db.copy.src.s3.session.token\n system\n property\n\n\nIf a job is skipped because a compatible job was found in TiesDb, this is the S3 session\n token that will be used when getting the results from S3. If not provided, defaults to the\n value of \nS3_SESSION_TOKEN\n.\n\n\n\n\n\n\nTIES_DB_COPY_SRC_S3_REGION\n job property or \nties.db.copy.src.s3.region\n system property\n\n\nIf a job is skipped because a compatible job was found in TiesDb, this is the S3 region that\n will be used when getting the results from S3. If not provided, defaults to the value of\n \nS3_REGION\n.\n\n\n\n\n\n\nTIES_DB_COPY_SRC_S3_USE_VIRTUAL_HOST\n job property or \nties.db.copy.src.s3.use.virtual.host\n\n system property\n\n\nIf a job is skipped because a compatible job was found in TiesDb, this enables virtual host\n style bucket URIs. If not provided, defaults to the value of \nS3_USE_VIRTUAL_HOST\n.\n\n\n\n\n\n\nTIES_DB_COPY_SRC_S3_HOST\n job property or \nties.db.copy.src.s3.host\n system property\n\n\nIf a job is skipped because a compatible job was found in TiesDb, this is the S3 host that\n will be used when getting the results from S3. If not provided, defaults to the value of\n \nS3_HOST\n.\n\n\n\n\n\n\nTIES_DB_COPY_SRC_S3_UPLOAD_OBJECT_KEY_PREFIX\n job property or\n \nties.db.copy.src.s3.upload.object.key.prefix\n system property\n\n\nIf a job is skipped because a compatible job was found in TiesDb, this is the S3 object key\n prefix that will be used when getting the results from S3. If not provided, defaults to the\n value of \nS3_UPLOAD_OBJECT_KEY_PREFIX\n.\n\n\n\n\n\n\ndata.ties.db.check.ignorable.properties.file\n system property\n\n\nPath to the\n \nJSON file containing the list of properties that should not be considered\n\n when checking for a compatible job in TiesDb.\n\n\n\n\n\n\nLINKED_MEDIA_HASH\n media property\n\n\nWhen this property is set, interactions with TiesDb will use the value of \nLINKED_MEDIA_HASH\n\n instead of the media's actual SHA-256 hash.\n\n\n\n\n\n\n\n\nAfter Job Supplemental Creation\n\n\nWhen a URL is provided for the \nTIES_DB_URL\n job property or \nties.db.url\n system property,\nWorkflow Manager will post a supplemental to TiesDb at the end of the job. The full URL that\nWorkflow Manager will post to is created by taking the provided URL and appending\n\n/api/db/supplementals?sha256Hash=\n to it. 
If, for example, the provided TiesDb URL\nis \nhttps://tiesdb.example.com/path\n and the SHA-256 hash of the media is\n\nd1bc8d3ba4afc7e109612cb73acbdd\n, Workflow Manager will post to \n\n\nhttps://tiesdb.example.com/path/api/db/supplementals?sha256Hash=d1bc8d3ba4afc7e109612cb73acbdd\n\n\nThis is an example of what Workflow Manager will post to TiesDb:\n\n\n{\n \"assertionId\": \"87298cc2f31fba73181ea2a9e6ef10dce21ed95e98bdac9c4e1504ea16f486e4\",\n \"dataObject\": {\n \"algorithm\": \"MOG\",\n \"pipeline\": \"MOG MOTION PIPELINE\",\n \"jobId\": \"mpf.example.com-13\",\n \"jobStatus\": \"COMPLETE\",\n \"outputType\": \"MOTION\",\n \"outputUri\": \"https://s3.example.com/2c/f2/2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824\",\n \"processDate\": \"2021-10-08T15:24:04.168448Z\",\n \"sha256OutputHash\": \"2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824\",\n \"systemHostname\": \"mpf.example.com\",\n \"systemVersion\": \"6.0.2\",\n \"trackCount\": 100,\n \"jobConfigHash\": \"d52ad13e6e2db75e780b858e92b89df18c021674c24fd6c84dd151dcd28f5c56\"\n },\n \"informationType\": \"OpenMPF_MOTION\",\n \"securityTag\": \"UNCLASSIFIED\",\n \"system\": \"OpenMPF\"\n}\n\n\n\nBefore Job Check\n\n\nWorkflow Manager can be configured to check TiesDb for a supplemental produced as a result of\na previously run OpenMPF job. In order for Workflow Manager to do this check, the \nTIES_DB_URL\n\njob property or the \nties.db.url\n system property must be set to the TiesDb server's URL and\nthe \nSKIP_TIES_DB_CHECK\n job property or the \nties.db.skip.check\n system property must be set to\nfalse. If Workflow Manager finds a supplemental with a \njobConfigHash\n that matches the job that\nWorkflow Manager is currently running, Workflow Manager will not process the media in the current\njob. Workflow Manager will use the output of the previous job to determine the output for the\ncurrent job. When Workflow Manager is configured to copy the results of the previous job to a\ndifferent S3 bucket, a new output object will be created. Otherwise, the previous job's output\nobject will be used.\n\n\nIt is possible for there to be multiple matching supplementals in TiesDb. In that case,\nWorkflow Manager will first pick the supplementals with the best job status. The job statuses\nfrom best to worst are \nCOMPLETE\n, \nCOMPLETE_WITH_WARNINGS\n, and \nCOMPLETE_WITH_ERRORS\n. If no jobs\nwith those statuses exist, all other statuses are considered equally bad. If there are multiple\nsupplementals with the same status, the most recently created supplemental will be used.\n\n\nIn order to determine if a previous job was similar enough to a current job, a hash of the\nimportant parts of the jobs is computed. The parts of the job that are included in the hash are:\n\n\n\n\nNames of the algorithms used in the pipeline and their order\n\n\nNon-\nignorable job properties\n\n\noutput.changed.counter\n system property\n\n\nThis is an integer that is incremented when there is a change to the Workflow Manager after\n which previous TiesDb records should not be used. 
Because this number is part of the job\n configuration hash, changing this number invalidates all previous TiesDb records.\n\n\n\n\n\n\nComponent descriptor's \noutputChangedCounter\n property.\n\n\nThis works the same way as \noutput.changed.counter\n, except that it only invalidates TiesDb\n records for jobs that used the component.\n\n\n\n\n\n\nFrame ranges and time ranges\n\n\nJSON output object major and minor version\n\n\nSHA-256 hashes of all input media\n\n\nAs a result of this, in order to find a matching job, both jobs must have been run on all\n of the same media. To improve the chances that a matching job is found in TiesDb, a user\n can choose to only submit jobs for a single piece of media.\n\n\n\n\n\n\n\n\nIgnorable Properties\n\n\nThere are certain job properties that, when changed, do not change the output. There are also job\nproperties that only affect certain types of media. To make it more likely that a matching job will\nbe found in TiesDb, Workflow Manager can be configured to ignore the previously mentioned job\nproperties when computing the job configuration hash.\n\n\nThe properties that should be ignored are specified in a JSON file. The\n\ndata.ties.db.check.ignorable.properties.file\n system property contains the path to the JSON file.\nThe JSON file must contain a list of objects with two properties: \nignorableForMediaTypes\n and\n\nnames\n. \nignorableForMediaTypes\n is a list of strings specifying which media types are able\nto ignore the properties listed in the \nnames\n list.\n\n\nIn the example below, the \nVERBOSE\n job property is never included in the job hash because all\nmedia types are present in the \nignorableForMediaTypes\n list. \nARTIFACT_EXTRACTION_POLICY\n\nis ignored when the media is audio or unknown. \nFRAME_INTERVAL\n appears in both the second\nand third object, so it is ignorable when the media is audio, unknown, or image.\n\n\n[\n {\n \"ignorableForMediaTypes\": [\"VIDEO\", \"IMAGE\", \"AUDIO\", \"UNKNOWN\"],\n \"names\": [\"VERBOSE\"]\n },\n {\n \"ignorableForMediaTypes\": [\"AUDIO\", \"UNKNOWN\"],\n \"names\": [\"ARTIFACT_EXTRACTION_POLICY\", \"FRAME_INTERVAL\"]\n },\n {\n \"ignorableForMediaTypes\": [\"IMAGE\"],\n \"names\": [\"FRAME_INTERVAL\"]\n }\n]\n\n\n\nAvoid Downloading Media\n\n\nThe SHA-256 hash of the job's media is also included when computing the job configuration hash.\nIf the job request contains the media's hash and MIME type, Workflow Manager can avoid downloading\nthe media if a match is found in TiesDb. 
If the media's hash and MIME type are not included in the\njob request Workflow Manager will use the normal media inspection process to get that information.\nIf the media is not a local path, this will require Workflow Manager to download the media.\n\n\nBelow is an example of a job creation request that includes the media's hash and MIME type:\n\n\n{\n \"algorithmProperties\": {},\n \"buildOutput\": true,\n \"jobProperties\": {\n \"S3_ACCESS_KEY\": \"xxxxxx\",\n \"S3_SECRET_KEY\": \"xxxxxx\",\n \"TIES_DB_URL\": \"https://tiesdb.example.com\",\n \"SKIP_TIES_DB_CHECK\": \"false\"\n },\n \"media\": [\n {\n \"mediaUri\": \"https://s3.example.com/bucket/my-video.mp4\",\n \"metadata\": {\n \"MEDIA_HASH\": \"2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824\",\n \"MIME_TYPE\": \"video/mp4\"\n }\n }\n ],\n \"pipelineName\": \"OCV FACE DETECTION PIPELINE\",\n \"priority\": 4\n}\n\n\n\nS3 Copy\n\n\nWhen the \nTIES_DB_S3_COPY_ENABLED\n job property or \nties.db.s3.copy.enabled\n system property is\ntrue and a matching job is found in TiesDb, Workflow Manager will copy the artifacts, markup,\nand derivative media to the bucket specified in the current job's \nS3_RESULTS_BUCKET\n job property\nor \ns3.results.bucket\n system property. Since the job's artifacts, markup, and derivative media\nare in a new location, the output object must be updated before it is uploaded to the new S3 bucket.\nIn the updated output object, the \ntiesDbSourceJobId\n property will be set to the previous job's ID\nand \ntiesDbSourceMediaPath\n will be set to the path of the previous job's media. When the S3 copy\nis enabled and the results bucket is the same as the previous job, a new output object is created,\nbut copies of the artifacts, markup, and derivative media are not created. If the S3 copy is\ndisabled, \ntiesDbSourceJobId\n and \ntiesDbSourceMediaPath\n are not added because the original job's\noutput object is used without changes. If the copy fails, a link to the old JSON output object will\nbe provided.\n\n\nWhen performing the S3 copy, the \nS3 job properties\n like\n\nS3_ACCESS_KEY\n and \nS3_SECRET_KEY\n use the values from the current job and apply to the\ndestination of the copy. If the values for the S3 properties should be different for the source of\nthe copy, the properties prefixed with \nTIES_DB_COPY_SRC_\n can be set. If for a given property the\n\nTIES_DB_COPY_SRC_\n prefixed version is not set, the non-prefixed version will be used.\n\n\nFor example, if a job is received with the following properties set:\n\n\n\n\nS3_SECRET_KEY\n=\nnew-secret-key\n\n\nS3_ACCESS_KEY\n=\naccess-key\n\n\nTIES_DB_COPY_SRC_S3_SECRET_KEY\n=\nold-secret-key\n\n\n\n\nthen, when accessing the previous job's results \naccess-key\n will be used for the access key and\n\nold-secret-key\n will be used for the secret key. When uploading the results to the new bucket,\n\naccess-key\n will be used for the access key and \nnew-secret-key\n will be used for the secret key.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract,\nand is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024\nThe MITRE Corporation. All Rights Reserved.\n\n\nTiesDb Overview\n\n\nRefer to \nhttps://github.com/Noblis/ties-lib\n for more information on the Triage Import Export\nSchema (TIES). 
For each piece of media, we create one or more TIES\n\"supplementalDescription (Data Object)\" entries in the database, one for each\nanalytic (algorithm) run on the media. In general, a \"supplementalDescription\" is a kind of TIES\n\"assertion\", which is used to represent metadata about the media object. In our case it\nrepresents the detection and track information in the OpenMPF JSON output object. Workflow Manager\ncan be configured to check TiesDb for a\n\nsupplemental it previously created\n and\n\nuse the results from that previous job to avoid re-running the job.\n\nWorkflow Manager can also be configured to \ncopy results to a different S3 bucket\n\nwhen a matching job is found.\n\n\nConfiguration\n\n\n\n\nTIES_DB_URL\n job property or \nties.db.url\n system property\n\n\nWhen provided, information about completed jobs will be sent to the specified TiesDb server.\n\n\nFor example: \nhttps://tiesdb.example.com\n\n\n\n\n\n\nSKIP_TIES_DB_CHECK\n job property or \nties.db.skip.check\n system property\n\n\nWhen true, TiesDb won't be checked for a compatible job before processing media.\n\n\n\n\n\n\nTIES_DB_S3_COPY_ENABLED\n job property or \nties.db.s3.copy.enabled\n system property\n\n\nWhen true and a job is skipped because a compatible job is found in TiesDb, the results\n from the previous job will be copied to a different S3 bucket. Copying results will always\n result in a new JSON output object, even if using the same S3 location as the previous job.\n\n\n\n\n\n\nTIES_DB_COPY_SRC_S3_ACCESS_KEY\n job property or \nties.db.copy.src.s3.access.key\n system property\n\n\nIf a job is skipped because a compatible job was found in TiesDb, this is the S3 access key\n that will be used when getting the results from S3. If not provided, defaults to the value of\n \nS3_ACCESS_KEY\n.\n\n\n\n\n\n\nTIES_DB_COPY_SRC_S3_SECRET_KEY\n job property or \nties.db.copy.src.s3.secret.key\n system property\n\n\nIf a job is skipped because a compatible job was found in TiesDb, this is the S3 secret key\n that will be used when getting the results from S3. If not provided, defaults to the value of\n \nS3_SECRET_KEY\n.\n\n\n\n\n\n\nTIES_DB_COPY_SRC_S3_SESSION_TOKEN\n job property or \nties.db.copy.src.s3.session.token\n system\n property\n\n\nIf a job is skipped because a compatible job was found in TiesDb, this is the S3 session\n token that will be used when getting the results from S3. If not provided, defaults to the\n value of \nS3_SESSION_TOKEN\n.\n\n\n\n\n\n\nTIES_DB_COPY_SRC_S3_REGION\n job property or \nties.db.copy.src.s3.region\n system property\n\n\nIf a job is skipped because a compatible job was found in TiesDb, this is the S3 region that\n will be used when getting the results from S3. If not provided, defaults to the value of\n \nS3_REGION\n.\n\n\n\n\n\n\nTIES_DB_COPY_SRC_S3_USE_VIRTUAL_HOST\n job property or \nties.db.copy.src.s3.use.virtual.host\n\n system property\n\n\nIf a job is skipped because a compatible job was found in TiesDb, this enables virtual host\n style bucket URIs. If not provided, defaults to the value of \nS3_USE_VIRTUAL_HOST\n.\n\n\n\n\n\n\nTIES_DB_COPY_SRC_S3_HOST\n job property or \nties.db.copy.src.s3.host\n system property\n\n\nIf a job is skipped because a compatible job was found in TiesDb, this is the S3 host that\n will be used when getting the results from S3. 
If not provided, defaults to the value of\n \nS3_HOST\n.\n\n\n\n\n\n\nTIES_DB_COPY_SRC_S3_UPLOAD_OBJECT_KEY_PREFIX\n job property or\n \nties.db.copy.src.s3.upload.object.key.prefix\n system property\n\n\nIf a job is skipped because a compatible job was found in TiesDb, this is the S3 object key\n prefix that will be used when getting the results from S3. If not provided, defaults to the\n value of \nS3_UPLOAD_OBJECT_KEY_PREFIX\n.\n\n\n\n\n\n\ndata.ties.db.check.ignorable.properties.file\n system property\n\n\nPath to the\n \nJSON file containing the list of properties that should not be considered\n\n when checking for a compatible job in TiesDb.\n\n\n\n\n\n\nLINKED_MEDIA_HASH\n media property\n\n\nWhen this property is set, interactions with TiesDb will use the value of \nLINKED_MEDIA_HASH\n\n instead of the media's actual SHA-256 hash.\n\n\n\n\n\n\n\n\nAfter Job Supplemental Creation\n\n\nWhen a URL is provided for the \nTIES_DB_URL\n job property or \nties.db.url\n system property,\nWorkflow Manager will post a supplemental to TiesDb at the end of the job. The full URL that\nWorkflow Manager will post to is created by taking the provided URL and appending\n\n/api/db/supplementals?sha256Hash=\n to it. If, for example, the provided TiesDb URL\nis \nhttps://tiesdb.example.com/path\n and the SHA-256 hash of the media is\n\nd1bc8d3ba4afc7e109612cb73acbdd\n, Workflow Manager will post to \n\n\nhttps://tiesdb.example.com/path/api/db/supplementals?sha256Hash=d1bc8d3ba4afc7e109612cb73acbdd\n\n\nThis is an example of what Workflow Manager will post to TiesDb:\n\n\n{\n \"assertionId\": \"87298cc2f31fba73181ea2a9e6ef10dce21ed95e98bdac9c4e1504ea16f486e4\",\n \"dataObject\": {\n \"algorithm\": \"MOG\",\n \"pipeline\": \"MOG MOTION PIPELINE\",\n \"jobId\": \"mpf.example.com-13\",\n \"jobStatus\": \"COMPLETE\",\n \"outputType\": \"MOTION\",\n \"outputUri\": \"https://s3.example.com/2c/f2/2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824\",\n \"processDate\": \"2021-10-08T15:24:04.168448Z\",\n \"sha256OutputHash\": \"2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824\",\n \"systemHostname\": \"mpf.example.com\",\n \"systemVersion\": \"6.0.2\",\n \"trackCount\": 100,\n \"jobConfigHash\": \"d52ad13e6e2db75e780b858e92b89df18c021674c24fd6c84dd151dcd28f5c56\"\n },\n \"informationType\": \"OpenMPF_MOTION\",\n \"securityTag\": \"UNCLASSIFIED\",\n \"system\": \"OpenMPF\"\n}\n\n\n\nBefore Job Check\n\n\nWorkflow Manager can be configured to check TiesDb for a supplemental produced as a result of\na previously run OpenMPF job. In order for Workflow Manager to do this check, the \nTIES_DB_URL\n\njob property or the \nties.db.url\n system property must be set to the TiesDb server's URL and\nthe \nSKIP_TIES_DB_CHECK\n job property or the \nties.db.skip.check\n system property must be set to\nfalse. If Workflow Manager finds a supplemental with a \njobConfigHash\n that matches the job that\nWorkflow Manager is currently running, Workflow Manager will not process the media in the current\njob. Workflow Manager will use the output of the previous job to determine the output for the\ncurrent job. When Workflow Manager is configured to copy the results of the previous job to a\ndifferent S3 bucket, a new output object will be created. Otherwise, the previous job's output\nobject will be used.\n\n\nIt is possible for there to be multiple matching supplementals in TiesDb. In that case,\nWorkflow Manager will first pick the supplementals with the best job status. 
The job statuses\nfrom best to worst are \nCOMPLETE\n, \nCOMPLETE_WITH_WARNINGS\n, and \nCOMPLETE_WITH_ERRORS\n. If no jobs\nwith those statuses exist, all other statuses are considered equally bad. If there are multiple\nsupplementals with the same status, the most recently created supplemental will be used.\n\n\nIn order to determine if a previous job was similar enough to the current job, a hash of the\nimportant parts of the jobs is computed. The parts of the job that are included in the hash are:\n\n\n\n\nNames of the algorithms used in the pipeline and their order\n\n\nNon-\nignorable job properties\n\n\noutput.changed.counter\n system property\n\n\nThis is an integer that is incremented when there is a change to the Workflow Manager after\n which previous TiesDb records should not be used. Because this number is part of the job\n configuration hash, changing this number invalidates all previous TiesDb records.\n\n\n\n\n\n\nComponent descriptor's \noutputChangedCounter\n property.\n\n\nThis works the same way as \noutput.changed.counter\n, except that it only invalidates TiesDb\n records for jobs that used the component.\n\n\n\n\n\n\nFrame ranges and time ranges\n\n\nJSON output object major and minor version\n\n\nSHA-256 hashes of all input media\n\n\nAs a result of this, in order to find a matching job, both jobs must have been run on all\n of the same media. To improve the chances that a matching job is found in TiesDb, a user\n can choose to only submit jobs for a single piece of media.\n\n\n\n\n\n\n\n\nIgnorable Properties\n\n\nThere are certain job properties that, when changed, do not change the output. There are also job\nproperties that only affect certain types of media. To make it more likely that a matching job will\nbe found in TiesDb, Workflow Manager can be configured to ignore the previously mentioned job\nproperties when computing the job configuration hash.\n\n\nThe properties that should be ignored are specified in a JSON file. The\n\ndata.ties.db.check.ignorable.properties.file\n system property contains the path to the JSON file.\nThe JSON file must contain a list of objects with two properties: \nignorableForMediaTypes\n and\n\nnames\n. \nignorableForMediaTypes\n is a list of strings specifying which media types are able\nto ignore the properties listed in the \nnames\n list.\n\n\nIn the example below, the \nVERBOSE\n job property is never included in the job hash because all\nmedia types are present in the \nignorableForMediaTypes\n list. \nARTIFACT_EXTRACTION_POLICY\n\nis ignored when the media is audio or unknown. \nFRAME_INTERVAL\n appears in both the second\nand third objects, so it is ignorable when the media is audio, unknown, or image.\n\n\n[\n {\n \"ignorableForMediaTypes\": [\"VIDEO\", \"IMAGE\", \"AUDIO\", \"UNKNOWN\"],\n \"names\": [\"VERBOSE\"]\n },\n {\n \"ignorableForMediaTypes\": [\"AUDIO\", \"UNKNOWN\"],\n \"names\": [\"ARTIFACT_EXTRACTION_POLICY\", \"FRAME_INTERVAL\"]\n },\n {\n \"ignorableForMediaTypes\": [\"IMAGE\"],\n \"names\": [\"FRAME_INTERVAL\"]\n }\n]\n\n\n\nAvoid Downloading Media\n\n\nThe SHA-256 hash of the job's media is also included when computing the job configuration hash.\nIf the job request contains the media's hash and MIME type, Workflow Manager can avoid downloading\nthe media if a match is found in TiesDb. 
If the media's hash and MIME type are not included in the\njob request, Workflow Manager will use the normal media inspection process to get that information.\nIf the media is not a local path, this will require Workflow Manager to download the media.\n\n\nBelow is an example of a job creation request that includes the media's hash and MIME type:\n\n\n{\n \"algorithmProperties\": {},\n \"buildOutput\": true,\n \"jobProperties\": {\n \"S3_ACCESS_KEY\": \"xxxxxx\",\n \"S3_SECRET_KEY\": \"xxxxxx\",\n \"TIES_DB_URL\": \"https://tiesdb.example.com\",\n \"SKIP_TIES_DB_CHECK\": \"false\"\n },\n \"media\": [\n {\n \"mediaUri\": \"https://s3.example.com/bucket/my-video.mp4\",\n \"metadata\": {\n \"MEDIA_HASH\": \"2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824\",\n \"MIME_TYPE\": \"video/mp4\"\n }\n }\n ],\n \"pipelineName\": \"OCV FACE DETECTION PIPELINE\",\n \"priority\": 4\n}\n\n\n\nS3 Copy\n\n\nWhen the \nTIES_DB_S3_COPY_ENABLED\n job property or \nties.db.s3.copy.enabled\n system property is\ntrue and a matching job is found in TiesDb, Workflow Manager will copy the artifacts, markup,\nand derivative media to the bucket specified in the current job's \nS3_RESULTS_BUCKET\n job property\nor \ns3.results.bucket\n system property. Since the job's artifacts, markup, and derivative media\nare in a new location, the output object must be updated before it is uploaded to the new S3 bucket.\nIn the updated output object, the \ntiesDbSourceJobId\n property will be set to the previous job's ID\nand \ntiesDbSourceMediaPath\n will be set to the path of the previous job's media. When the S3 copy\nis enabled and the results bucket is the same as the previous job's, a new output object is created,\nbut copies of the artifacts, markup, and derivative media are not created. If the S3 copy is\ndisabled, \ntiesDbSourceJobId\n and \ntiesDbSourceMediaPath\n are not added because the original job's\noutput object is used without changes. If the copy fails, a link to the old JSON output object will\nbe provided.\n\n\nWhen performing the S3 copy, the \nS3 job properties\n like\n\nS3_ACCESS_KEY\n and \nS3_SECRET_KEY\n use the values from the current job and apply to the\ndestination of the copy. If the values for the S3 properties should be different for the source of\nthe copy, the properties prefixed with \nTIES_DB_COPY_SRC_\n can be set. If, for a given property, the\n\nTIES_DB_COPY_SRC_\n prefixed version is not set, the non-prefixed version will be used.\n\n\nFor example, if a job is received with the following properties set:\n\n\n\n\nS3_SECRET_KEY\n=\nnew-secret-key\n\n\nS3_ACCESS_KEY\n=\naccess-key\n\n\nTIES_DB_COPY_SRC_S3_SECRET_KEY\n=\nold-secret-key\n\n\n\n\nthen, when accessing the previous job's results, \naccess-key\n will be used for the access key and\n\nold-secret-key\n will be used for the secret key. When uploading the results to the new bucket,\n\naccess-key\n will be used for the access key and \nnew-secret-key\n will be used for the secret key.", "title": "TiesDb Guide" }, { @@ -542,7 +542,7 @@ }, { "location": "/Trigger-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract,\nand is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023\nThe MITRE Corporation. All Rights Reserved.\n\n\nTrigger Overview\n\n\nThe \nTRIGGER\n property enables pipelines that use \nfeed forward\n to have\npipeline stages that only process certain tracks based on their track properties. 
It can be used\nto select the best algorithm when there are multiple similar algorithms that each perform better\nunder certain circumstances. It can also be used to iteratively filter down tracks at each stage of\na pipeline.\n\n\nSyntax\n\n\nThe syntax for the \nTRIGGER\n property is: \n=[;...]\n.\nThe left hand side of the equals sign is the name of track property that will be used to determine\nif a track matches the trigger. The right hand side specifies the required value for the specified\ntrack property. More than one value can be specified by separating them with a semicolon. When\nmultiple properties are specified the track property must match any one of the specified values.\nIf the value should match a track property that contains a semicolon or backslash,\nthey must be escaped with a leading backslash. For example, \nCLASSIFICATION=dog;cat\n will match\n\"dog\" or \"cat\". \nCLASSIFICATION=dog\\;cat\n will match \"dog;cat\". \nCLASSIFICATION=dog\\\\cat\n will\nmatch \"dog\\cat\". When specifying a trigger in JSON it will need to \ndoubly escaped\n.\n\n\nAlgorithm Selection Using Triggers\n\n\nThe example pipeline below will be used to describe the way that the Workflow Manager uses the\n\nTRIGGER\n property. Each task in the pipeline is composed of one action, so only the actions are\nshown. Note that this is a hypothetical pipeline and not intended for use in a real deployment.\n\n\n\n\nWHISPER SPEECH LANGUAGE DETECTION ACTION\n\n\n(No TRIGGER)\n\n\n\n\n\n\nSPHINX SPEECH DETECTION ACTION\n\n\nTRIGGER: \nISO_LANGUAGE=eng\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nWHISPER SPEECH DETECTION ACTION\n\n\nTRIGGER: \nISO_LANGUAGE=spa\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nARGOS TRANSLATION ACTION\n\n\nTRIGGER: \nISO_LANGUAGE=spa\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nKEYWORD TAGGING ACTION\n\n\n(No TRIGGER)\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\n\n\nThe pipeline can be represented as a flow chart:\n\n\n\n\nThe goal of this pipeline is to determine if someone in an audio file, or the audio of a video file,\nsays a keyword that the user is interested in. The complication is that the input file could either\nbe in English, Spanish, or another language the user is not interested in. Spanish audio must be\ntranslated to English before looking for keywords.\n\n\nWe are going to pretend that Whisper language detection can return multiple tracks, one per language\ndetected in the audio, although in reality it is limited to detecting one language for the entire\npiece of media. Also, the user wants to use Sphinx for transcribing English audio, because we are\npretending that Sphinx performs better than Whisper on English audio, and the user wants to use\nWhisper for transcribing Spanish audio.\n\n\nThe first stage should not have a trigger condition. If one is set, it will be ignored. The\nWorkflow Manager will take all of the tracks generated by stage 1 and determine if the trigger\ncondition for stage 2 is met. This trigger condition is shown by the topmost orange diamond. In this\ncase, if stage 1 detected the language as English and set \nISO_LANGUAGE\n to \neng\n, then those\ntracks are fed into the second stage. This is shown by the green arrow pointing to the stage 2 box.\n\n\nIf any of the Whisper tracks do not meet the condition for the stage 2, they are later considered\nas possible inputs to stage 3. 
This is shown by the red arrow coming out of the stage 2 trigger\ndiamond pointing down to the stage 3 trigger diamond.\n\n\nThe Workflow Manager will take all of the tracks generated by stage 2, the\n\nSPHINX SPEECH DETECTION ACTION\n, as well as the tracks that didn't satisfy the stage 2 trigger, and\ndetermine if the trigger condition for stage 3 is met.\n\n\nNote that the Sphinx component does not generate tracks with the \nISO_LANGUAGE\n property, so\nit's not possible for tracks coming out of stage 2 to satisfy the stage 3 trigger. They will later\nflow down to the stage 4 trigger, and because it has the same condition as the stage 3 trigger, the\nSphinx tracks cannot satisfy that trigger either.\n\n\nEven if the Sphinx component did generate tracks with the \nISO_LANGUAGE\n property, it would be set\nto \neng\n and would not satisfy the \nspa\n condition (they are mutually exclusive). Either way,\neventually the tracks from stage 2 will flow into stage 5.\n\n\nThe Workflow Manager will take all of the tracks generated by stage 3, the\n\nWHISPER SPEECH DETECTION ACTION\n, as well as the tracks that did not satisfy the stage 2 and 3\ntriggers, and determine if the trigger condition for stage 4 is met. All of the tracks produced by\nstage 3 will have the \nISO_LANGUAGE\n property set to \nspa\n, because the stage 3 trigger only\nmatched Spanish tracks and when Whisper performs transcription, it sets the \nISO_LANGUAGE\n property.\nSince the stage 4 trigger, like the stage 3 trigger, is \nISO_LANGUAGE=spa\n, all of the tracks\nproduced by stage 3 will be fed in to stage 4.\n\n\nThe Workflow Manager will take all of the tracks generated by stage 4, the\n\nARGOS TRANSLATION (WITH FF REGION) ACTION\n, as well as the tracks that did not satisfy the stage 2,\n3, or 4 triggers, and determine if the trigger condition for stage 5 is met. Stage 5 has no trigger\ncondition, so all of those tracks flow into stage 5 by default.\n\n\nThe above diagram can be simplified as follows:\n\n\n\n\nIn this diagram the trigger diamonds have been replaced with the orange boxes at the top of each\nstage. Also, all of the arrows for flows that are not logically possible have been removed,\nleaving only arrows that flow from one stage to another.\n\n\nWhat remains shows that this pipeline has three main flows of execution:\n\n\n\n\nEnglish audio is transcribed by the Sphinx component and then processed by keyword tagging.\n\n\nSpanish audio is transcribed by the Whisper component, translated by the Argos component, and\n then processed by keyword tagging.\n\n\nAll other languages are not transcribed and those tracks pass directly to keyword tagging. Since\n there is no transcript to look at, keyword tagging essentially ignores them.\n\n\n\n\nFurther Understanding\n\n\nIn general, triggers work as a mechanism to decide which tracks are passed forward to later stages\nof a pipeline. It is important to note that not only are the tracks from the previous stage\nconsidered, but also tracks from stages that were not fed into any previous stage.\n\n\nFor example, if only the Sphinx tracks from stage 2 were passed to Whisper stage 3, then stage 3\nwould never be triggered. This is because Sphinx tracks don't have an \nISO_LANGUAGE\n property. Even\nif they did have that property, it would be set to \neng\n, not \nspa\n, which would not satisfy the\nstage 3 trigger. This is mutual exclusion is by design. Both stages perform speech-to-text. Tracks\nfrom stage 1 should only be processed by one speech-to-text algorithm (i.e. 
one \nSPEECH DETECTION\n\nstage). Both algorithms should be considered, but only one should be selected based on the language.\nTo accomplish this, tracks from stage 1 that don't trigger stage 2 are considered as possible inputs\nto stage 3.\n\n\nAdditionally, it's important to note that when a stage is triggered, the tracks passed into that\nstage are no longer considered for later stages. Instead, the tracks generated by that stage can be\npassed to later stages.\n\n\nFor example, the Argos algorithm in stage 4 should only accept tracks with Spanish transcripts. If\nall of the tracks generated in prior stages could be passed to stage 4, then the \nspa\n tracks\ngenerated in stage 1 would trigger stage 4. Since those have not passed through the Whisper\nspeech-to-text stage 3 they would not have a transcript to translate.\n\n\nFiltering Using Triggers\n\n\nThe pipeline in the previous section shows an example of how triggers can be used to conditionally\nexecute or skip stages in a pipeline. Triggers can also be useful when all stages get triggered. In\ncases like that, the individual triggers are logically \nAND\ned together. This allows you to produce\npipelines that search for very specific things.\n\n\nConsider the example pipeline defined below. Again, each task in the pipeline is composed of one\naction, so only the actions are shown. Also, note that this is a hypothetical pipeline and not\nintended for use in a real real deployment:\n\n\n\n\nOCV YOLO OBJECT DETECTION ACTION\n\n\n(No TRIGGER)\n\n\n\n\n\n\nCAFFE GOOGLENET DETECTION ACTION\n\n\nTRIGGER: \nCLASSIFICATION=truck\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nTENSORFLOW VEHICLE COLOR DETECTION ACTION\n\n\nTRIGGER: \nCLASSIFICATION=ice cream, icecream;ice lolly, lolly, lollipop, popsicle\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nOALPR LICENSE PLATE TEXT DETECTION ACTION\n\n\nTRIGGER: \nCLASSIFICATION=blue\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\n\n\nThe pipeline can be represented as a flow chart:\n\n\n\n\nThe goal of this pipeline is to extract the license plate numbers for all blue trucks that have\nphotos of ice cream or popsicles on their exterior.\n\n\nStage 2 and 3 do not generate new detection regions. Instead, they generate tracks using the same\ndetection regions in the feed-forward tracks. Specifically, if YOLO generates \ntruck\n tracks in\nstage 1, then those tracks will be fed into stage 2. In that stage, GoogLeNet will process the\ntruck region to determine the ImageNet class with the highest confidence. If that class corresponds\nto ice cream or popsicle, those tracks will be fed into stage 3, which will operate on the same\ntruck region to determine the vehicle color. Tracks corresponding to \nblue\n trucks will be fed\ninto stage 4, which will try to detect the license plate region and text. OALPR will operate on\nthe same truck region passed forward all of the way from YOLO in stage 1.\n\n\nTracks generated by any stage in the pipeline that don't meet the three trigger criteria do not\nflow into the final license plate detection stage, and are therefore unused.\n\n\nIt's important to note that the possible \nCLASSIFICATION\n values generated by stages 1, 2, and 3 are\nmutually exclusive. This means, for example, that YOLO will not generate a \nblue\n track in stage 1\nthat will later satisfy the trigger for stage 4.\n\n\nAlso, note that stages 1, 2, and 3 can all accept an optional \nALLOW_LIST_FILE\n property that can be\nused to discard tracks with a \nCLASSIFICATION\n not listed in that file. 
It is possible to recreate\nthe behavior of the above pipeline without using triggers and instead only using allow list files to\nensure each of those stages can only generate the track types the user is interested in. The\ndisadvantage of the allow list approach is that the final JSON output object will not contain all of\nthe YOLO tracks, only \ntruck\n tracks. Using triggers is better when a user wants to know about those\nother track types. Using triggers also enables a user to create a version of this pipeline where\n\nperson\n tracks from YOLO are fed into OpenCV face. \nperson\n is just an example of one other type of\nYOLO track a user might be interested in.\n\n\nThe above diagram can be simplified as follows:\n\n\n\n\nRemoving all of the flows that aren't logically possible, or result in unused tracks, only\nleaves one flow that passes through all of the stages. Again, this flow essentially \nAND\ns the\ntrigger conditions together.\n\n\nJSON escaping\n\n\nMany times job properties are defined using JSON and track properties appear in the JSON output\nobject. JSON also uses backslash as its escape character. Since the \nTRIGGER\n property and JSON both\nuse backslash as the escape character, when specifying the \nTRIGGER\n property in JSON, the string\nmust be doubly escaped.\n\n\nIf the job request contains this JSON fragment:\n\n\n{ \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog;cat\"} } }\n\n\n\nit will match either \"dog\" or \"cat\", but not \"dog;cat\".\n\n\nThis JSON fragment:\n\n\n{ \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog\\\\;cat\"} } }\n\n\n\nwould only match \"dog;cat\".\n\n\nThis JSON fragment:\n\n\n{ \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog\\\\\\\\cat\"} } }\n\n\n\nwould only match \"dog\\cat\". The track property in the JSON output object would appear as:\n\n\n{ \"trackProperties\": { \"CLASSIFICATION\": \"dog\\\\cat\" } }", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract,\nand is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024\nThe MITRE Corporation. All Rights Reserved.\n\n\nTrigger Overview\n\n\nThe \nTRIGGER\n property enables pipelines that use \nfeed forward\n to have\npipeline stages that only process certain tracks based on their track properties. It can be used\nto select the best algorithm when there are multiple similar algorithms that each perform better\nunder certain circumstances. It can also be used to iteratively filter down tracks at each stage of\na pipeline.\n\n\nSyntax\n\n\nThe syntax for the \nTRIGGER\n property is: \n<property name>=<value>[;<value>...]\n.\nThe left hand side of the equals sign is the name of the track property that will be used to determine\nif a track matches the trigger. The right hand side specifies the required value for the specified\ntrack property. More than one value can be specified by separating them with a semicolon. When\nmultiple values are specified, the track property must match any one of the specified values.\nIf the value should match a track property that contains a semicolon or backslash,\nthose characters must be escaped with a leading backslash. For example, \nCLASSIFICATION=dog;cat\n will match\n\"dog\" or \"cat\". \nCLASSIFICATION=dog\\;cat\n will match \"dog;cat\". \nCLASSIFICATION=dog\\\\cat\n will\nmatch \"dog\\cat\". 
When specifying a trigger in JSON, it will need to be \ndoubly escaped\n.\n\n\nAlgorithm Selection Using Triggers\n\n\nThe example pipeline below will be used to describe the way that the Workflow Manager uses the\n\nTRIGGER\n property. Each task in the pipeline is composed of one action, so only the actions are\nshown. Note that this is a hypothetical pipeline and not intended for use in a real deployment.\n\n\n\n\nWHISPER SPEECH LANGUAGE DETECTION ACTION\n\n\n(No TRIGGER)\n\n\n\n\n\n\nSPHINX SPEECH DETECTION ACTION\n\n\nTRIGGER: \nISO_LANGUAGE=eng\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nWHISPER SPEECH DETECTION ACTION\n\n\nTRIGGER: \nISO_LANGUAGE=spa\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nARGOS TRANSLATION ACTION\n\n\nTRIGGER: \nISO_LANGUAGE=spa\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nKEYWORD TAGGING ACTION\n\n\n(No TRIGGER)\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\n\n\nThe pipeline can be represented as a flow chart:\n\n\n\n\nThe goal of this pipeline is to determine if someone in an audio file, or the audio of a video file,\nsays a keyword that the user is interested in. The complication is that the input file could either\nbe in English, Spanish, or another language the user is not interested in. Spanish audio must be\ntranslated to English before looking for keywords.\n\n\nWe are going to pretend that Whisper language detection can return multiple tracks, one per language\ndetected in the audio, although in reality it is limited to detecting one language for the entire\npiece of media. Also, the user wants to use Sphinx for transcribing English audio, because we are\npretending that Sphinx performs better than Whisper on English audio, and the user wants to use\nWhisper for transcribing Spanish audio.\n\n\nThe first stage should not have a trigger condition. If one is set, it will be ignored. The\nWorkflow Manager will take all of the tracks generated by stage 1 and determine if the trigger\ncondition for stage 2 is met. This trigger condition is shown by the topmost orange diamond. In this\ncase, if stage 1 detected the language as English and set \nISO_LANGUAGE\n to \neng\n, then those\ntracks are fed into the second stage. This is shown by the green arrow pointing to the stage 2 box.\n\n\nIf any of the Whisper tracks do not meet the condition for stage 2, they are later considered\nas possible inputs to stage 3. This is shown by the red arrow coming out of the stage 2 trigger\ndiamond pointing down to the stage 3 trigger diamond.\n\n\nThe Workflow Manager will take all of the tracks generated by stage 2, the\n\nSPHINX SPEECH DETECTION ACTION\n, as well as the tracks that didn't satisfy the stage 2 trigger, and\ndetermine if the trigger condition for stage 3 is met.\n\n\nNote that the Sphinx component does not generate tracks with the \nISO_LANGUAGE\n property, so\nit's not possible for tracks coming out of stage 2 to satisfy the stage 3 trigger. They will later\nflow down to the stage 4 trigger, and because it has the same condition as the stage 3 trigger, the\nSphinx tracks cannot satisfy that trigger either.\n\n\nEven if the Sphinx component did generate tracks with the \nISO_LANGUAGE\n property, it would be set\nto \neng\n and would not satisfy the \nspa\n condition (they are mutually exclusive). 
Either way,\neventually the tracks from stage 2 will flow into stage 5.\n\n\nThe Workflow Manager will take all of the tracks generated by stage 3, the\n\nWHISPER SPEECH DETECTION ACTION\n, as well as the tracks that did not satisfy the stage 2 and 3\ntriggers, and determine if the trigger condition for stage 4 is met. All of the tracks produced by\nstage 3 will have the \nISO_LANGUAGE\n property set to \nspa\n, because the stage 3 trigger only\nmatched Spanish tracks and when Whisper performs transcription, it sets the \nISO_LANGUAGE\n property.\nSince the stage 4 trigger, like the stage 3 trigger, is \nISO_LANGUAGE=spa\n, all of the tracks\nproduced by stage 3 will be fed into stage 4.\n\n\nThe Workflow Manager will take all of the tracks generated by stage 4, the\n\nARGOS TRANSLATION (WITH FF REGION) ACTION\n, as well as the tracks that did not satisfy the stage 2,\n3, or 4 triggers, and determine if the trigger condition for stage 5 is met. Stage 5 has no trigger\ncondition, so all of those tracks flow into stage 5 by default.\n\n\nThe above diagram can be simplified as follows:\n\n\n\n\nIn this diagram the trigger diamonds have been replaced with the orange boxes at the top of each\nstage. Also, all of the arrows for flows that are not logically possible have been removed,\nleaving only arrows that flow from one stage to another.\n\n\nWhat remains shows that this pipeline has three main flows of execution:\n\n\n\n\nEnglish audio is transcribed by the Sphinx component and then processed by keyword tagging.\n\n\nSpanish audio is transcribed by the Whisper component, translated by the Argos component, and\n then processed by keyword tagging.\n\n\nAll other languages are not transcribed and those tracks pass directly to keyword tagging. Since\n there is no transcript to look at, keyword tagging essentially ignores them.\n\n\n\n\nFurther Understanding\n\n\nIn general, triggers work as a mechanism to decide which tracks are passed forward to later stages\nof a pipeline. It is important to note that not only are the tracks from the previous stage\nconsidered, but also tracks from stages that were not fed into any previous stage.\n\n\nFor example, if only the Sphinx tracks from stage 2 were passed to Whisper stage 3, then stage 3\nwould never be triggered. This is because Sphinx tracks don't have an \nISO_LANGUAGE\n property. Even\nif they did have that property, it would be set to \neng\n, not \nspa\n, which would not satisfy the\nstage 3 trigger. This mutual exclusion is by design. Both stages perform speech-to-text. Tracks\nfrom stage 1 should only be processed by one speech-to-text algorithm (i.e. one \nSPEECH DETECTION\n\nstage). Both algorithms should be considered, but only one should be selected based on the language.\nTo accomplish this, tracks from stage 1 that don't trigger stage 2 are considered as possible inputs\nto stage 3.\n\n\nAdditionally, it's important to note that when a stage is triggered, the tracks passed into that\nstage are no longer considered for later stages. Instead, the tracks generated by that stage can be\npassed to later stages.\n\n\nFor example, the Argos algorithm in stage 4 should only accept tracks with Spanish transcripts. If\nall of the tracks generated in prior stages could be passed to stage 4, then the \nspa\n tracks\ngenerated in stage 1 would trigger stage 4. 
Since those have not passed through the Whisper\nspeech-to-text stage 3, they would not have a transcript to translate.\n\n\nFiltering Using Triggers\n\n\nThe pipeline in the previous section shows an example of how triggers can be used to conditionally\nexecute or skip stages in a pipeline. Triggers can also be useful when all stages get triggered. In\ncases like that, the individual triggers are logically \nAND\ned together. This allows you to produce\npipelines that search for very specific things.\n\n\nConsider the example pipeline defined below. Again, each task in the pipeline is composed of one\naction, so only the actions are shown. Also, note that this is a hypothetical pipeline and not\nintended for use in a real deployment:\n\n\n\n\nOCV YOLO OBJECT DETECTION ACTION\n\n\n(No TRIGGER)\n\n\n\n\n\n\nCAFFE GOOGLENET DETECTION ACTION\n\n\nTRIGGER: \nCLASSIFICATION=truck\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nTENSORFLOW VEHICLE COLOR DETECTION ACTION\n\n\nTRIGGER: \nCLASSIFICATION=ice cream, icecream;ice lolly, lolly, lollipop, popsicle\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nOALPR LICENSE PLATE TEXT DETECTION ACTION\n\n\nTRIGGER: \nCLASSIFICATION=blue\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\n\n\nThe pipeline can be represented as a flow chart:\n\n\n\n\nThe goal of this pipeline is to extract the license plate numbers for all blue trucks that have\nphotos of ice cream or popsicles on their exterior.\n\n\nStages 2 and 3 do not generate new detection regions. Instead, they generate tracks using the same\ndetection regions in the feed-forward tracks. Specifically, if YOLO generates \ntruck\n tracks in\nstage 1, then those tracks will be fed into stage 2. In that stage, GoogLeNet will process the\ntruck region to determine the ImageNet class with the highest confidence. If that class corresponds\nto ice cream or popsicle, those tracks will be fed into stage 3, which will operate on the same\ntruck region to determine the vehicle color. Tracks corresponding to \nblue\n trucks will be fed\ninto stage 4, which will try to detect the license plate region and text. OALPR will operate on\nthe same truck region passed forward all of the way from YOLO in stage 1.\n\n\nTracks generated by any stage in the pipeline that don't meet the three trigger criteria do not\nflow into the final license plate detection stage, and are therefore unused.\n\n\nIt's important to note that the possible \nCLASSIFICATION\n values generated by stages 1, 2, and 3 are\nmutually exclusive. This means, for example, that YOLO will not generate a \nblue\n track in stage 1\nthat will later satisfy the trigger for stage 4.\n\n\nAlso, note that stages 1, 2, and 3 can all accept an optional \nALLOW_LIST_FILE\n property that can be\nused to discard tracks with a \nCLASSIFICATION\n not listed in that file. It is possible to recreate\nthe behavior of the above pipeline without using triggers and instead only using allow list files to\nensure each of those stages can only generate the track types the user is interested in. The\ndisadvantage of the allow list approach is that the final JSON output object will not contain all of\nthe YOLO tracks, only \ntruck\n tracks. Using triggers is better when a user wants to know about those\nother track types. Using triggers also enables a user to create a version of this pipeline where\n\nperson\n tracks from YOLO are fed into OpenCV face. 
\nperson\n is just an example of one other type of\nYOLO track a user might be interested in.\n\n\nThe above diagram can be simplified as follows:\n\n\n\n\nRemoving all of the flows that aren't logically possible, or result in unused tracks, only\nleaves one flow that passes through all of the stages. Again, this flow essentially \nAND\ns the\ntrigger conditions together.\n\n\nJSON escaping\n\n\nMany times job properties are defined using JSON and track properties appear in the JSON output\nobject. JSON also uses backslash as its escape character. Since the \nTRIGGER\n property and JSON both\nuse backslash as the escape character, when specifying the \nTRIGGER\n property in JSON, the string\nmust be doubly escaped.\n\n\nIf the job request contains this JSON fragment:\n\n\n{ \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog;cat\"} } }\n\n\n\nit will match either \"dog\" or \"cat\", but not \"dog;cat\".\n\n\nThis JSON fragment:\n\n\n{ \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog\\\\;cat\"} } }\n\n\n\nwould only match \"dog;cat\".\n\n\nThis JSON fragment:\n\n\n{ \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog\\\\\\\\cat\"} } }\n\n\n\nwould only match \"dog\\cat\". The track property in the JSON output object would appear as:\n\n\n{ \"trackProperties\": { \"CLASSIFICATION\": \"dog\\\\cat\" } }", "title": "Trigger Guide" }, { @@ -577,7 +577,7 @@ }, { "location": "/Roll-Up-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract,\nand is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023\nThe MITRE Corporation. All Rights Reserved.\n\n\nRoll Up Overview\n\n\nThe Workflow Manager can be configured to replace the values of track and detection properties after\nreceiving tracks and detections from a component. This feature is commonly used to replace specific\nterms with a more general category. For example, the \"CLASSIFICATION\" property may be set to \"car\",\n\"bus\", and \"truck\". Those are all a kind of \"vehicle\". To use this feature, a JSON file in the\nformat described below must be created. Then, the \nROLL_UP_FILE\n job property must be set to the\nfile path where that file is located.\n\n\nRoll Up File\n\n\nThe JSON below is an example of a roll up file.\n\n\n[\n {\n \"propertyToProcess\": \"CLASSIFICATION\",\n \"originalPropertyCopy\": \"ORIGINAL CLASSIFICATION\",\n \"groups\": [\n {\n \"rollUp\": \"vehicle\",\n \"members\": [\n \"truck\",\n \"car\",\n \"bus\"\n ]\n },\n {\n \"rollUp\": \"sandwich\",\n \"members\": [\n \"grilled cheese\",\n \"reuben\",\n \"hamburger\",\n \"hot dog\"\n ]\n }\n ]\n },\n {\n \"propertyToProcess\": \"COLOR\",\n \"groups\": [\n {\n \"rollUp\": \"purple\",\n \"members\": [\n \"indigo\"\n ]\n }\n ]\n },\n {\n \"propertyToProcess\": \"PROP3\",\n \"groups\": [\n {\n \"rollUp\": \"new name\",\n \"members\": [\n \"old name\"\n ]\n }\n ]\n }\n]\n\n\n\nAt the top level, the roll up file contains an array where each element defines a detection property\nthat should be modified. In this example, there is one element for \"CLASSIFICATION\", one for\n\"COLOR\", and one for \"PROP3\". Each element contains the following fields:\n\n\n\n\npropertyToProcess\n: (Required) A detection property key. The value will be modified according to\n the \ngroups\n key.\n\n\noriginalPropertyCopy\n: (Optional) Copies the value of \npropertyToProcess\n prior to roll up to\n another property. 
The copy is made even if the property is not modified.\n\n\ngroups\n: (Optional) Array containing an element for each roll up name. If the value of the\n detection property specified by \npropertyToProcess\n matches a string listed in \nmembers\n, it\n will be replaced by the content of the \nrollUp\n property.\n\n\n\n\nIn the example above, the value of the \"CLASSIFICATION\" detection property will be copied to\n\"ORIGINAL CLASSIFICATION\" before the roll up is performed. If the \"CLASSIFICATION\" detection\nproperty is set to \"truck\", \"car\", or \"bus\", the value of the detection property will be replaced\nby \"vehicle\".\n\n\nIn a real use case there will generally be multiple roll up groups for a single detection property.\nThe \"sandwich\" group shows how to include an additional mapping for the same \"CLASSIFICATION\"\nproperty. The \"COLOR\" and \"PROP3\" sections show examples of how to apply roll up to different\ndetection properties with different configurations.\n\n\nIf the roll up above was applied to these detection properties:\n\n\n{\n \"CLASSIFICATION\": \"truck\",\n \"COLOR\": \"red\",\n \"PROP3\": \"truck\",\n \"PROP4\": \"other\"\n}\n\n\n\nit would result in:\n\n\n{\n \"CLASSIFICATION\": \"vehicle\",\n \"ORIGINAL CLASSIFICATION\": \"truck\",\n \"COLOR\": \"red\",\n \"PROP3\": \"truck\",\n \"PROP4\": \"other\"\n}\n\n\n\n\"COLOR\" was not modified since it does not define a roll up group with \"red\" as a member. \"PROP3\"\nwas not modified because only the \"CLASSIFICATION\" property has a roll up group with \"truck\" as a\nmember. \"PROP4\" was not modified because it is not in the roll up file.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract,\nand is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024\nThe MITRE Corporation. All Rights Reserved.\n\n\nRoll Up Overview\n\n\nThe Workflow Manager can be configured to replace the values of track and detection properties after\nreceiving tracks and detections from a component. This feature is commonly used to replace specific\nterms with a more general category. For example, the \"CLASSIFICATION\" property may be set to \"car\",\n\"bus\", and \"truck\". Those are all a kind of \"vehicle\". To use this feature, a JSON file in the\nformat described below must be created. Then, the \nROLL_UP_FILE\n job property must be set to the\nfile path where that file is located.\n\n\nRoll Up File\n\n\nThe JSON below is an example of a roll up file.\n\n\n[\n {\n \"propertyToProcess\": \"CLASSIFICATION\",\n \"originalPropertyCopy\": \"ORIGINAL CLASSIFICATION\",\n \"groups\": [\n {\n \"rollUp\": \"vehicle\",\n \"members\": [\n \"truck\",\n \"car\",\n \"bus\"\n ]\n },\n {\n \"rollUp\": \"sandwich\",\n \"members\": [\n \"grilled cheese\",\n \"reuben\",\n \"hamburger\",\n \"hot dog\"\n ]\n }\n ]\n },\n {\n \"propertyToProcess\": \"COLOR\",\n \"groups\": [\n {\n \"rollUp\": \"purple\",\n \"members\": [\n \"indigo\"\n ]\n }\n ]\n },\n {\n \"propertyToProcess\": \"PROP3\",\n \"groups\": [\n {\n \"rollUp\": \"new name\",\n \"members\": [\n \"old name\"\n ]\n }\n ]\n }\n]\n\n\n\nAt the top level, the roll up file contains an array where each element defines a detection property\nthat should be modified. In this example, there is one element for \"CLASSIFICATION\", one for\n\"COLOR\", and one for \"PROP3\". Each element contains the following fields:\n\n\n\n\npropertyToProcess\n: (Required) A detection property key. 
The value will be modified according to\n the \ngroups\n key.\n\n\noriginalPropertyCopy\n: (Optional) Copies the value of \npropertyToProcess\n prior to roll up to\n another property. The copy is made even if the property is not modified.\n\n\ngroups\n: (Optional) Array containing an element for each roll up name. If the value of the\n detection property specified by \npropertyToProcess\n matches a string listed in \nmembers\n, it\n will be replaced by the content of the \nrollUp\n property.\n\n\n\n\nIn the example above, the value of the \"CLASSIFICATION\" detection property will be copied to\n\"ORIGINAL CLASSIFICATION\" before the roll up is performed. If the \"CLASSIFICATION\" detection\nproperty is set to \"truck\", \"car\", or \"bus\", the value of the detection property will be replaced\nby \"vehicle\".\n\n\nIn a real use case there will generally be multiple roll up groups for a single detection property.\nThe \"sandwich\" group shows how to include an additional mapping for the same \"CLASSIFICATION\"\nproperty. The \"COLOR\" and \"PROP3\" sections show examples of how to apply roll up to different\ndetection properties with different configurations.\n\n\nIf the roll up above was applied to these detection properties:\n\n\n{\n \"CLASSIFICATION\": \"truck\",\n \"COLOR\": \"red\",\n \"PROP3\": \"truck\",\n \"PROP4\": \"other\"\n}\n\n\n\nit would result in:\n\n\n{\n \"CLASSIFICATION\": \"vehicle\",\n \"ORIGINAL CLASSIFICATION\": \"truck\",\n \"COLOR\": \"red\",\n \"PROP3\": \"truck\",\n \"PROP4\": \"other\"\n}\n\n\n\n\"COLOR\" was not modified since it does not define a roll up group with \"red\" as a member. \"PROP3\"\nwas not modified because only the \"CLASSIFICATION\" property has a roll up group with \"truck\" as a\nmember. \"PROP4\" was not modified because it is not in the roll up file.", "title": "Roll Up Guide" }, { @@ -592,7 +592,7 @@ }, { "location": "/Health-Check-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract,\nand is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023\nThe MITRE Corporation. All Rights Reserved.\n\n\nHealth Check Overview\n\n\nThe C++ and Python component executors can be configured to run health checks on components prior\nto running jobs. Health checks are configured using environment variables and an INI file. All of\nthe log lines pertaining to the health check will be prefixed with \n[Health check] -\n instead\nof \n[Job #: media] -\n.\n\n\nWhen the component executor receives a job from ActiveMQ, it checks if health checks are enabled\nand if more than the specified timeout has passed since the last health check. If both conditions\nare true, the component executor will run a health check job before the actual job. Health checks\nonly run after a job from ActiveMQ is received. If the timeout period expires, but no job is\nreceived or a job is already running, the health check will not run until the next job is received.\n\n\nIf the health check job completes successfully, then the component executor runs the job received\nfrom ActiveMQ. If the health check fails, the job will be returned to ActiveMQ. If the maximum\nnumber of consecutive health check failures has not been met, the component executor will wait the\ntimeout period before until attempting to receive another job from ActiveMQ. If the maximum number\nof consecutive health check failures has been met, the component executor will exit with exit\ncode 39. 
If the component is running in a Docker container, the container will exit.\n\n\nEnvironment Variables\n\n\n\n\nHEALTH_CHECK\n: When set to \"ENABLED\", the component executor will run health checks.\n\n\nHEALTH_CHECK_TIMEOUT\n: When set to a positive integer, specifies the minimum number of seconds\n between health checks. When absent or set to 0, a health check will run before every job.\n\n\nHEALTH_CHECK_RETRY_MAX_ATTEMPTS\n: When set to a positive integer, specifies the number of\n consecutive health check failures that will cause the component service to exit. When absent or\n set to 0, the component service will never exit because of a failed health check.\n\n\n\n\nThe INI File\n\n\nWhen health checks are enabled, the component executor will look for an INI file at\n\n$MPF_HOME/plugins//health/health-check.ini\n. Below is an example of the expected\nINI file.\n\n\nmedia=$MPF_HOME/plugins/OcvFaceDetection/health/meds_faces_image.png\nmin_num_tracks=2\nmedia_type=IMAGE\n\n[job_properties]\nJOB PROP1=VALUE1\nJOB PROP2=VALUE2\n\n[media_properties]\nMEDIA PROP=MEDIA VALUE\n\n\n\nThe supported keys are:\n\n\n\n\nmedia\n: (Required) Path to the media file that will be used in the health check.\n\n\nmin_num_tracks\n: (Required) The minimum number of tracks the component must find for the health\n check to pass.\n\n\nmedia_type\n: (Required) The type of media referenced in the \nmedia\n key. It must be one of\n \"IMAGE\", \"VIDEO\", \"AUDIO\", or \"GENERIC\".\n\n\njob_properties\n: (Optional) Job properties that will set on the health check job.\n\n\nmedia_properties\n: (Optional) Media properties that will set on the health check job.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract,\nand is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024\nThe MITRE Corporation. All Rights Reserved.\n\n\nHealth Check Overview\n\n\nThe C++ and Python component executors can be configured to run health checks on components prior\nto running jobs. Health checks are configured using environment variables and an INI file. All of\nthe log lines pertaining to the health check will be prefixed with \n[Health check] -\n instead\nof \n[Job #: media] -\n.\n\n\nWhen the component executor receives a job from ActiveMQ, it checks if health checks are enabled\nand if more than the specified timeout has passed since the last health check. If both conditions\nare true, the component executor will run a health check job before the actual job. Health checks\nonly run after a job from ActiveMQ is received. If the timeout period expires, but no job is\nreceived or a job is already running, the health check will not run until the next job is received.\n\n\nIf the health check job completes successfully, then the component executor runs the job received\nfrom ActiveMQ. If the health check fails, the job will be returned to ActiveMQ. If the maximum\nnumber of consecutive health check failures has not been met, the component executor will wait the\ntimeout period before attempting to receive another job from ActiveMQ. If the maximum number\nof consecutive health check failures has been met, the component executor will exit with exit\ncode 39. 
If the component is running in a Docker container, the container will exit.\n\n\nEnvironment Variables\n\n\n\n\nHEALTH_CHECK\n: When set to \"ENABLED\", the component executor will run health checks.\n\n\nHEALTH_CHECK_TIMEOUT\n: When set to a positive integer, specifies the minimum number of seconds\n between health checks. When absent or set to 0, a health check will run before every job.\n\n\nHEALTH_CHECK_RETRY_MAX_ATTEMPTS\n: When set to a positive integer, specifies the number of\n consecutive health check failures that will cause the component service to exit. When absent or\n set to 0, the component service will never exit because of a failed health check.\n\n\n\n\nThe INI File\n\n\nWhen health checks are enabled, the component executor will look for an INI file at\n\n$MPF_HOME/plugins/<component name>/health/health-check.ini\n. Below is an example of the expected\nINI file.\n\n\nmedia=$MPF_HOME/plugins/OcvFaceDetection/health/meds_faces_image.png\nmin_num_tracks=2\nmedia_type=IMAGE\n\n[job_properties]\nJOB PROP1=VALUE1\nJOB PROP2=VALUE2\n\n[media_properties]\nMEDIA PROP=MEDIA VALUE\n\n\n\nThe supported keys are:\n\n\n\n\nmedia\n: (Required) Path to the media file that will be used in the health check.\n\n\nmin_num_tracks\n: (Required) The minimum number of tracks the component must find for the health\n check to pass.\n\n\nmedia_type\n: (Required) The type of media referenced in the \nmedia\n key. It must be one of\n \"IMAGE\", \"VIDEO\", \"AUDIO\", or \"GENERIC\".\n\n\njob_properties\n: (Optional) Job properties that will be set on the health check job.\n\n\nmedia_properties\n: (Optional) Media properties that will be set on the health check job.", "title": "Health Check Guide" }, { @@ -637,7 +637,7 @@ }, { "location": "/Component-API-Overview/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nGoals\n\n\nThe OpenMPF Component Application Programming Interface (API) provides a mechanism for integrating components into OpenMPF. The goals of the document are to:\n\n\n\n\nProvide an overview of OpenMPF concepts\n\n\nDefine a \ncomponent\n in the context of OpenMPF\n\n\nExplain the role of the Component API\n\n\n\n\nTerminology\n\n\nIn order to talk about OpenMPF, readers should be familiar with the following key OpenMPF-specific terms:\n\n\n\n\nJob\n - An OpenMPF work unit. A job contains a list of media files and the pipeline that will be used to process that media.\n\n\nPipeline\n - A logical flow of processes that will be performed on a piece of media. For instance, a pipeline may perform motion tracking on a video and feed the results into a face detection algorithm.\n\n\nComponent\n - An OpenMPF plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nDetection Component\n - A component that performs either detection (with or without tracking), or classification on a piece of media.\n\n\nNode\n - An OpenMPF host that launches components. There may be more than one node in an OpenMPF cluster, thus forming a distributed system. There is always a master node that runs the OpenMPF web application.\n\n\nService\n - An instance of an OpenMPF component process. Each OpenMPF node may run one or more services at a time. Multiple services may run in parallel to process a job. 
For example, each service may process a different piece of media, or a segment of the same video.\n\n\nBatch Processing\n - Process complete image, audio, video, and/or other files that reside on disk.\n\n\nStream Processing\n - Process live video streams.\n\n\n\n\n Background \n\n\nOpenMPF consists of the Workflow Manager (WFM), a Node Manager, components, and a message passing mechanism that enables communication between the WFM and the components. The Node Manager is only used in a non-Docker deployment.\n\n\nWorkflow Manager\n\n\nThe WFM receives job requests from user interface and external systems through the \nREST API\n. The WFM handles each request by creating a job, which consists of a collection of input media and a pipeline. These jobs are then broken down into job requests that are handled by component services, which in turn process media and return results.\n\n\nThe WFM orchestrates the flow of work within a job through the various stages of a processing pipeline. For each stage, the WFM communicates with the appropriate component services by exchanging JMS messages via a message broker. For example, if a pipeline consists of a motion detection stage, then the WFM will communicate with motion detection component services.\n\n\nThe WFM provides work to a component service by placing a job request on the request queue, and it retrieves the component\u2019s response by monitoring the appropriate response queue. The WFM may generate one or more job requests for a large video file, depending on how it segments the file into chunks. The segmentation properties can be specified system-wide using configuration files, or specified on a per-job basis.\n\n\n\n\nNOTE:\n All component messaging is abstracted within the OpenMPF Component API and component developers are not required or able to directly interact with the message queues.\n\n\n\n\nNode Manager\n\n\nThe Node Manager is a process that runs on each OpenMPF node in a non-Docker deployment. The Node Manager handles spawning the desired number of instances of a component based on the end-user's desired configuration. Each instance is referred to as a service.\n\n\nA service behaves differently based on the kind of processing that needs to be performed. After the Node Manager spawns a service:\n\n\n\n\nBatch processing\n - The service waits for job requests from the WFM and produces a response for each request.\n\n\nStream processing\n - The service waits for the next frame from the stream and produces activity alerts and segment summary reports.\n\n\n\n\nNote that in a Docker deployment the Docker daemon creates the desired number of containers for each component. Stream processing only works in a non-Docker deployment.\n\n\nComponents\n\n\nComponents are identified by nine key characteristics:\n\n\n\n\nThe \ntype of action\n the component performs\n\n\nThe \ntype of processing\n the component performs\n\n\nThe \ntypes of data\n it supports\n\n\nThe \ntype of objects\n it detects\n\n\nThe \nname\n of the algorithm or vendor\n\n\nThe user-configurable \nproperties\n that the component exposes\n\n\nThe \nrequired states\n associated with a job prior to the execution of the component\n\n\nThe \nprovided states\n associated with a job following the execution of the component\n\n\nThe \nprogramming language\n used to implement the component\n\n\n\n\nA component\u2019s action type corresponds to the operation which the algorithm performs. 
Generally, this is \nDETECTION\n.\n\n\nA component can perform batch processing, stream processing, or both. Refer to the \nC++ Batch Component API\n, \nC++ Streaming Component API\n, \nPython Batch Component API\n, and \nJava Batch Component API\n. Only C++ components perform stream processing.\n\n\nThe data that a component accepts as inputs, and correspondingly produces as outputs, constrains its placement in a pipeline. This is some combination of \nIMAGE\n, \nAUDIO\n, and \nVIDEO\n for components that support batch processing, or just \nVIDEO\n for components that only support stream processing. Batch components can also support the \nUNKNOWN\n data type, meaning that they can accept jobs for any kind of media file.\n\n\nAs depicted in the figure below, detection components accept an input media file (or segment of the file in the case of video files) and produce a collection of object detections discovered in the data.\n\n\nThe type of objects produced depends on the input type. For example, video files produce video tracks, audio files produce audio tracks, and images produce image locations.\n\n\n\n\nThe OpenMPF Component API presented provides developers an interface for developing new components for OpenMPF without requiring the developers to understand the internals of the framework.\n\n\nThe figure below depicts a high-level block diagram of the OpenMPF architecture with components.\n\n\n\n\nThe Component Registry serves as a central location for information about the components registered with the OpenMPF instance. A future goal is to develop a web page that can be used to browse the registry and display the metadata associated with each available component.\n\n\nOpenMPF includes a Component Executable for the \nDETECTION\n action type, as denoted by the blue cubes. Note that the Component Executable is shown three times to represent three instances of that process, one for each component type. This executable is responsible for loading a component library based on information provided at launch time. \n\n\nOne Component Executable instance is associated with each component service. For example, a motion detection service, face detection service, and text detection service will require three instances of the Component Executable process, one for each service. For another example, three motion detection services will also require three instances of the Component Executable process, one for each service. The Component Executable is abstract; it does not care what kind of detection is performed. It simply interacts with the component library through the Component API.\n\n\nThe Component Executable receives job requests from the message broker, translates those requests for the component, and converts the component\u2019s outputs into response messages for the OpenMPF.\n\n\nA separate Component Executable is maintained for C++ and Java components. The component library is compiled as a C++ shared object library, or Java JAR, and encapsulates the component's detection logic.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nGoals\n\n\nThe OpenMPF Component Application Programming Interface (API) provides a mechanism for integrating components into OpenMPF. 
The goals of the document are to:\n\n\n\n\nProvide an overview of OpenMPF concepts\n\n\nDefine a \ncomponent\n in the context of OpenMPF\n\n\nExplain the role of the Component API\n\n\n\n\nTerminology\n\n\nIn order to talk about OpenMPF, readers should be familiar with the following key OpenMPF-specific terms:\n\n\n\n\nJob\n - An OpenMPF work unit. A job contains a list of media files and the pipeline that will be used to process that media.\n\n\nPipeline\n - A logical flow of processes that will be performed on a piece of media. For instance, a pipeline may perform motion tracking on a video and feed the results into a face detection algorithm.\n\n\nComponent\n - An OpenMPF plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nDetection Component\n - A component that performs either detection (with or without tracking), or classification on a piece of media.\n\n\nNode\n - An OpenMPF host that launches components. There may be more than one node in an OpenMPF cluster, thus forming a distributed system. There is always a master node that runs the OpenMPF web application.\n\n\nService\n - An instance of an OpenMPF component process. Each OpenMPF node may run one or more services at a time. Multiple services may run in parallel to process a job. For example, each service may process a different piece of media, or a segment of the same video.\n\n\nBatch Processing\n - Process complete image, audio, video, and/or other files that reside on disk.\n\n\nStream Processing\n - Process live video streams.\n\n\n\n\n Background \n\n\nOpenMPF consists of the Workflow Manager (WFM), a Node Manager, components, and a message passing mechanism that enables communication between the WFM and the components. The Node Manager is only used in a non-Docker deployment.\n\n\nWorkflow Manager\n\n\nThe WFM receives job requests from user interface and external systems through the \nREST API\n. The WFM handles each request by creating a job, which consists of a collection of input media and a pipeline. These jobs are then broken down into job requests that are handled by component services, which in turn process media and return results.\n\n\nThe WFM orchestrates the flow of work within a job through the various stages of a processing pipeline. For each stage, the WFM communicates with the appropriate component services by exchanging JMS messages via a message broker. For example, if a pipeline consists of a motion detection stage, then the WFM will communicate with motion detection component services.\n\n\nThe WFM provides work to a component service by placing a job request on the request queue, and it retrieves the component\u2019s response by monitoring the appropriate response queue. The WFM may generate one or more job requests for a large video file, depending on how it segments the file into chunks. The segmentation properties can be specified system-wide using configuration files, or specified on a per-job basis.\n\n\n\n\nNOTE:\n All component messaging is abstracted within the OpenMPF Component API and component developers are not required or able to directly interact with the message queues.\n\n\n\n\nNode Manager\n\n\nThe Node Manager is a process that runs on each OpenMPF node in a non-Docker deployment. The Node Manager handles spawning the desired number of instances of a component based on the end-user's desired configuration. Each instance is referred to as a service.\n\n\nA service behaves differently based on the kind of processing that needs to be performed. 
After the Node Manager spawns a service:\n\n\n\n\nBatch processing\n - The service waits for job requests from the WFM and produces a response for each request.\n\n\nStream processing\n - The service waits for the next frame from the stream and produces activity alerts and segment summary reports.\n\n\n\n\nNote that in a Docker deployment the Docker daemon creates the desired number of containers for each component. Stream processing only works in a non-Docker deployment.\n\n\nComponents\n\n\nComponents are identified by nine key characteristics:\n\n\n\n\nThe \ntype of action\n the component performs\n\n\nThe \ntype of processing\n the component performs\n\n\nThe \ntypes of data\n it supports\n\n\nThe \ntype of objects\n it detects\n\n\nThe \nname\n of the algorithm or vendor\n\n\nThe user-configurable \nproperties\n that the component exposes\n\n\nThe \nrequired states\n associated with a job prior to the execution of the component\n\n\nThe \nprovided states\n associated with a job following the execution of the component\n\n\nThe \nprogramming language\n used to implement the component\n\n\n\n\nA component\u2019s action type corresponds to the operation which the algorithm performs. Generally, this is \nDETECTION\n.\n\n\nA component can perform batch processing, stream processing, or both. Refer to the \nC++ Batch Component API\n, \nC++ Streaming Component API\n, \nPython Batch Component API\n, and \nJava Batch Component API\n. Only C++ components perform stream processing.\n\n\nThe data that a component accepts as inputs, and correspondingly produces as outputs, constrains its placement in a pipeline. This is some combination of \nIMAGE\n, \nAUDIO\n, and \nVIDEO\n for components that support batch processing, or just \nVIDEO\n for components that only support stream processing. Batch components can also support the \nUNKNOWN\n data type, meaning that they can accept jobs for any kind of media file.\n\n\nAs depicted in the figure below, detection components accept an input media file (or segment of the file in the case of video files) and produce a collection of object detections discovered in the data.\n\n\nThe type of objects produced depends on the input type. For example, video files produce video tracks, audio files produce audio tracks, and images produce image locations.\n\n\n\n\nThe OpenMPF Component API presented provides developers an interface for developing new components for OpenMPF without requiring the developers to understand the internals of the framework.\n\n\nThe figure below depicts a high-level block diagram of the OpenMPF architecture with components.\n\n\n\n\nThe Component Registry serves as a central location for information about the components registered with the OpenMPF instance. A future goal is to develop a web page that can be used to browse the registry and display the metadata associated with each available component.\n\n\nOpenMPF includes a Component Executable for the \nDETECTION\n action type, as denoted by the blue cubes. Note that the Component Executable is shown three times to represent three instances of that process, one for each component type. This executable is responsible for loading a component library based on information provided at launch time. \n\n\nOne Component Executable instance is associated with each component service. For example, a motion detection service, face detection service, and text detection service will require three instances of the Component Executable process, one for each service. 
For another example, three motion detection services will also require three instances of the Component Executable process, one for each service. The Component Executable is abstract; it does not care what kind of detection is performed. It simply interacts with the component library through the Component API.\n\n\nThe Component Executable receives job requests from the message broker, translates those requests for the component, and converts the component\u2019s outputs into response messages for the OpenMPF.\n\n\nA separate Component Executable is maintained for C++ and Java components. The component library is compiled as a C++ shared object library, or Java JAR, and encapsulates the component's detection logic.", "title": "Component API Overview" }, { @@ -667,7 +667,7 @@ }, { "location": "/Component-Descriptor-Reference/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nOverview\n\n\nIn order to be registered within OpenMPF, each component must provide a JavaScript Object Notation (JSON) descriptor file which provides contextual information about the component.\n\n\nThis file must be named \"descriptor.json\".\n\n\nFor an example, please see: \nHello World JSON Descriptor\n\n\nData Elements\n\n\nContained within the JSON file should be the following elements:\n\n\ncomponentName\n\n\nRequired.\n\n\nContains the component\u2019s name. Should follow CamelCaseFormat.\n\n\nExample:\n\n\"componentName\" : \"SampleComponent\"\n\n\ncomponentVersion\n\n\nRequired.\n\n\nContains the component\u2019s version. Does not need to match the \ncomponentAPIVersion\n.\n\n\nExample:\n\n\"componentVersion\" : \"2.0.1\"\n\n\nmiddlewareVersion\n\n\nRequired.\n\n\nContains the version of the OpenMPF Component API that the component was built with.\n\n\nExample:\n\n\"middlewareVersion\" : \"2.0.0\"\n\n\nsourceLanguage\n\n\nRequired.\n\n\nContains the language the component is coded in. Should be \"c++\", \"python\", or \"java\".\n\n\nExample:\n\n\"sourceLanguage\" : \"c++\"\n\n\nbatchLibrary\n\n\nOptional. At least one of \nbatchLibrary\n or \nstreamLibrary\n must be provided.\n\n\nFor C++ components, this contains the full path to the Component Logic shared object library used for batch processing once the component is deployed.\n\n\nFor Java components, this contains the name of the jar which contains the component implementation used for batch processing.\n\n\nFor setuptools-based Python components, this contains the component's distribution name, which is declared in the\ncomponent's \nsetup.py\n file. The distribution name is usually the same name as the component.\n\n\nFor basic Python components, this contains the full path to the Python file containing the component class.\n\n\nExample (C++):\n\n\"batchLibrary\" : \"${MPF_HOME}/plugins/SampleComponent/lib/libbatchSampleComponent.so\n\n\nExample (Java):\n\n\"batchLibrary\" : \"batch-sample-component-2.0.1.jar\"\n\n\nExample (setuptools-based Python):\n\n\"batchLibrary\" : \"SampleComponent\"\n\n\nExample (basic Python):\n\n\"batchLibrary\" : \"${MPF_HOME}/plugins/SampleComponent/sample_component.py\"\n\n\nstreamLibrary\n\n\nOptional. 
At least one of \nbatchLibrary\n or \nstreamLibrary\n must be provided.\n\n\nFor C++ components, this contains the full path to the Component Logic shared object library used for stream processing once the component is deployed.\n\n\nNote that Python and Java components currently do not support stream processing, so this field should be omitted from Python and Java component descriptor files.\n\n\nExample (C++):\n\n\"streamLibrary\" : \"${MPF_HOME}/plugins/SampleComponent/lib/libstreamSampleComponent.so\n\n\nenvironmentVariables\n\n\nRequired; can be empty.\n\n\nDefines a collection of environment variables that will be set when executing the OpenMPF Component Executable.\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Name of the environment variable.\n\n\n\n\n\n\nvalue:\n\n Value of the environment variable.\n Note that value can be a list of values separated by \u201c:\u201d.\n\n\n\n\n\n\nsep:\n\n The \nsep\n field (short for \u201cseparator\u201d) should be set to \u201cnull\u201d or \u201c:\u201d. When set to \u201cnull,\u201d the content of the environment variable specified by \nname\n is the content of \nvalue\n; for an existing variable, its former value will be replaced, otherwise, a new variable will be created and assigned this value. When set to \u201c:\u201d any prior value of the environment variable is retained and the content of \nvalue\n is simply appended to the end after a \u201c:\u201d character.\n\n\n\n\n\n\n\n\nIMPORTANT\n: For C++ components, the LD_LIBRARY_PATH needs to be set in order for the Component Executable to load the component\u2019s shared object library as well as any dependent libraries installed with the component. The usual form of the LD_LIBRARY_PATH variable should be \n${MPF_HOME}/plugins//lib/\n. Additional directories can be appended after a \u201c:\u201d delimiter.\n\n\n\n\nExample:\n\n\n\"environmentVariables\": [\n {\n \"name\": \"LD_LIBRARY_PATH\",\n \"value\": \"${MPF_HOME}/plugins/SampleComponent/lib\",\n \"sep\": \":\"\n }\n ]\n\n\n\nalgorithm\n\n\nRequired.\n\n\nSpecifies information about the component\u2019s algorithm.\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the algorithm\u2019s name. Should be unique and all CAPS.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the algorithm.\n\n\n\n\n\n\nactionType:\n\n Required. Defines the type of processing that the algorithm performs. Must be set to \nDETECTION\n.\n\n\n\n\n\n\ntrackType:\n\n Required. The type of object detected by the component. Should be in all CAPS. Examples\n include: \nFACE\n, \nMOTION\n, \nPERSON\n, \nSPEECH\n, \nCLASS\n (for object classification), or \nTEXT\n.\n\n\n\n\n\n\noutputChangedCounter:\n\n Optional. An integer that should be incremented when the component is changed in a way that\n would cause it to produce different output.\n\n\n\n\n\n\nrequiresCollection:\n\n Required, can be empty. Contains the state(s) that must be produced by previous algorithms in the pipeline.\n \nThis value should be empty \nunless\n the component depends on the results of another algorithm.\n\n\n\n\n\n\nprovidesCollection:\n\n Contains the following sub-fields:\n\n\n\n\nstates:\n Required. 
Contains the state(s) that the algorithm provides.\n Should contain the following values:\n\n\nDETECTION\n\n\nDETECTION_TYPE\n, where \nTYPE\n is the \nalgorithm.detectionType\n\n\nDETECTION_TYPE_ALGORITHM\n, where \nTYPE\n is the value of \nalgorithm.detectionType\n and \nALGORITHM\n is the value of \nalgorithm.name\n\nExample:\n\n\n\"states\": [\n \"DETECTION\",\n \"DETECTION_FACE\",\n \"DETECTION_FACE_SAMPLECOMPONENT\"]\n\n\n\n\n\n\n\n\nproperties:\n\nRequired; can be empty. Declares a list of the configurable properties that the algorithm exposes.\nContains the following sub-fields:\n\n\nname:\n\n Required.\n\n\ntype:\n\n Required.\n \nBOOLEAN\n, \nFLOAT\n, \nDOUBLE\n, \nINT\n, \nLONG\n, or \nSTRING\n.\n\n\ndefaultValue:\n\n Required.\n Must be provided in order to create a default action associated with the algorithm, where an action is a specific instance of an algorithm configured with a set of property values.\n\n\ndescription:\n\n Required.\n Description of the property. By convention, the default value for a property should be described in its description text.\n\n\n\n\n\n\n\n\n\n\n\n\nactions\n\n\nOptional.\n\n\nActions are used in the development of pipelines. Provides a list of custom actions that will be added during component registration.\n\n\n\n\nNOTE:\n For convenience, a default action will be created upon component registration if this element is not provided in the descriptor file.\n\n\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the action\u2019s name. Must be unique among all actions, including those that already exist on the system and those specified in this descriptor.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the action.\n\n\n\n\n\n\nalgorithm:\n\n Required. Contains the name of the algorithm for this action. The algorithm must either already exist on the system or be defined in this descriptor.\n\n\n\n\n\n\nproperties:\n\n Optional. List of properties that will be passed to the algorithm. Each property has an associated name and value sub-field, which are both required. Name must be one of the properties specified in the algorithm definition for this action.\n\n\n\n\n\n\nExample:\n\n\n\"actions\": [\n {\n \"name\": \"SAMPLE COMPONENT FACE DETECTION ACTION\",\n \"description\": \"Executes the sample component face detection algorithm using the default parameters.\",\n \"algorithm\": \"SAMPLECOMPONENT\",\n \"properties\": []\n }\n]\n\n\n\ntasks\n\n\nOptional.\n\n\nA list of custom tasks that will be added during component registration.\n\n\n\n\nNOTE:\n For convenience, a default task will be created upon component registration if this element is not provided in the descriptor file.\n\n\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the task's name. Must be unique among all tasks, including those that already exist on the system and those specified in this descriptor.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the task.\n\n\n\n\n\n\nactions:\n\n Required. Minimum length is 1. Contains the names of the actions that this task uses. 
Actions must either already exist on the system or be defined in this descriptor.\n\n\n\n\n\n\nExample:\n\n\n\"tasks\": [\n {\n \"name\": \"SAMPLE COMPONENT FACE DETECTION TASK\",\n \"description\": \"Performs sample component face detection.\",\n \"actions\": [\n \"SAMPLE COMPONENT FACE DETECTION ACTION\"\n ]\n }\n]\n\n\n\npipelines\n\n\nOptional.\n\n\nA list of custom pipelines that will be added during component registration.\n\n\n\n\nNOTE:\n For convenience, a default pipeline will be created upon component registration if this element is not provided in the descriptor file.\n\n\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the pipeline's name. Must be unique among all pipelines, including those that already exist on the system and those specified in this descriptor.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the pipeline.\n\n\n\n\n\n\ntasks:\n\n Required. Minimum length is 1. Contains the names of the tasks that this pipeline uses. Tasks must either already exist on the system or be defined in this descriptor.\n\n\n\n\n\n\nExample:\n\n\n\"pipelines\": [\n {\n \"name\": \"SAMPLE COMPONENT FACE DETECTION PIPELINE\",\n \"description\": \"Performs sample component face detection.\",\n \"tasks\": [\n \"SAMPLE COMPONENT FACE DETECTION TASK\"\n ]\n }\n]", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nOverview\n\n\nIn order to be registered within OpenMPF, each component must provide a JavaScript Object Notation (JSON) descriptor file which provides contextual information about the component.\n\n\nThis file must be named \"descriptor.json\".\n\n\nFor an example, please see: \nHello World JSON Descriptor\n\n\nData Elements\n\n\nContained within the JSON file should be the following elements:\n\n\ncomponentName\n\n\nRequired.\n\n\nContains the component\u2019s name. Should follow CamelCaseFormat.\n\n\nExample:\n\n\"componentName\" : \"SampleComponent\"\n\n\ncomponentVersion\n\n\nRequired.\n\n\nContains the component\u2019s version. Does not need to match the \ncomponentAPIVersion\n.\n\n\nExample:\n\n\"componentVersion\" : \"2.0.1\"\n\n\nmiddlewareVersion\n\n\nRequired.\n\n\nContains the version of the OpenMPF Component API that the component was built with.\n\n\nExample:\n\n\"middlewareVersion\" : \"2.0.0\"\n\n\nsourceLanguage\n\n\nRequired.\n\n\nContains the language the component is coded in. Should be \"c++\", \"python\", or \"java\".\n\n\nExample:\n\n\"sourceLanguage\" : \"c++\"\n\n\nbatchLibrary\n\n\nOptional. At least one of \nbatchLibrary\n or \nstreamLibrary\n must be provided.\n\n\nFor C++ components, this contains the full path to the Component Logic shared object library used for batch processing once the component is deployed.\n\n\nFor Java components, this contains the name of the jar which contains the component implementation used for batch processing.\n\n\nFor setuptools-based Python components, this contains the component's distribution name, which is declared in the\ncomponent's \nsetup.py\n file. 
The distribution name is usually the same name as the component.\n\n\nFor basic Python components, this contains the full path to the Python file containing the component class.\n\n\nExample (C++):\n\n\"batchLibrary\" : \"${MPF_HOME}/plugins/SampleComponent/lib/libbatchSampleComponent.so\n\n\nExample (Java):\n\n\"batchLibrary\" : \"batch-sample-component-2.0.1.jar\"\n\n\nExample (setuptools-based Python):\n\n\"batchLibrary\" : \"SampleComponent\"\n\n\nExample (basic Python):\n\n\"batchLibrary\" : \"${MPF_HOME}/plugins/SampleComponent/sample_component.py\"\n\n\nstreamLibrary\n\n\nOptional. At least one of \nbatchLibrary\n or \nstreamLibrary\n must be provided.\n\n\nFor C++ components, this contains the full path to the Component Logic shared object library used for stream processing once the component is deployed.\n\n\nNote that Python and Java components currently do not support stream processing, so this field should be omitted from Python and Java component descriptor files.\n\n\nExample (C++):\n\n\"streamLibrary\" : \"${MPF_HOME}/plugins/SampleComponent/lib/libstreamSampleComponent.so\n\n\nenvironmentVariables\n\n\nRequired; can be empty.\n\n\nDefines a collection of environment variables that will be set when executing the OpenMPF Component Executable.\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Name of the environment variable.\n\n\n\n\n\n\nvalue:\n\n Value of the environment variable.\n Note that value can be a list of values separated by \u201c:\u201d.\n\n\n\n\n\n\nsep:\n\n The \nsep\n field (short for \u201cseparator\u201d) should be set to \u201cnull\u201d or \u201c:\u201d. When set to \u201cnull,\u201d the content of the environment variable specified by \nname\n is the content of \nvalue\n; for an existing variable, its former value will be replaced, otherwise, a new variable will be created and assigned this value. When set to \u201c:\u201d any prior value of the environment variable is retained and the content of \nvalue\n is simply appended to the end after a \u201c:\u201d character.\n\n\n\n\n\n\n\n\nIMPORTANT\n: For C++ components, the LD_LIBRARY_PATH needs to be set in order for the Component Executable to load the component\u2019s shared object library as well as any dependent libraries installed with the component. The usual form of the LD_LIBRARY_PATH variable should be \n${MPF_HOME}/plugins//lib/\n. Additional directories can be appended after a \u201c:\u201d delimiter.\n\n\n\n\nExample:\n\n\n\"environmentVariables\": [\n {\n \"name\": \"LD_LIBRARY_PATH\",\n \"value\": \"${MPF_HOME}/plugins/SampleComponent/lib\",\n \"sep\": \":\"\n }\n ]\n\n\n\nalgorithm\n\n\nRequired.\n\n\nSpecifies information about the component\u2019s algorithm.\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the algorithm\u2019s name. Should be unique and all CAPS.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the algorithm.\n\n\n\n\n\n\nactionType:\n\n Required. Defines the type of processing that the algorithm performs. Must be set to \nDETECTION\n.\n\n\n\n\n\n\ntrackType:\n\n Required. The type of object detected by the component. Should be in all CAPS. Examples\n include: \nFACE\n, \nMOTION\n, \nPERSON\n, \nSPEECH\n, \nCLASS\n (for object classification), or \nTEXT\n.\n\n\n\n\n\n\noutputChangedCounter:\n\n Optional. An integer that should be incremented when the component is changed in a way that\n would cause it to produce different output.\n\n\n\n\n\n\nrequiresCollection:\n\n Required, can be empty. 
Contains the state(s) that must be produced by previous algorithms in the pipeline.\n \nThis value should be empty \nunless\n the component depends on the results of another algorithm.\n\n\n\n\n\n\nprovidesCollection:\n\n Contains the following sub-fields:\n\n\n\n\nstates:\n Required. Contains the state(s) that the algorithm provides.\n Should contain the following values:\n\n\nDETECTION\n\n\nDETECTION_TYPE\n, where \nTYPE\n is the \nalgorithm.detectionType\n\n\nDETECTION_TYPE_ALGORITHM\n, where \nTYPE\n is the value of \nalgorithm.detectionType\n and \nALGORITHM\n is the value of \nalgorithm.name\n\nExample:\n\n\n\"states\": [\n \"DETECTION\",\n \"DETECTION_FACE\",\n \"DETECTION_FACE_SAMPLECOMPONENT\"]\n\n\n\n\n\n\n\n\nproperties:\n\nRequired; can be empty. Declares a list of the configurable properties that the algorithm exposes.\nContains the following sub-fields:\n\n\nname:\n\n Required.\n\n\ntype:\n\n Required.\n \nBOOLEAN\n, \nFLOAT\n, \nDOUBLE\n, \nINT\n, \nLONG\n, or \nSTRING\n.\n\n\ndefaultValue:\n\n Required.\n Must be provided in order to create a default action associated with the algorithm, where an action is a specific instance of an algorithm configured with a set of property values.\n\n\ndescription:\n\n Required.\n Description of the property. By convention, the default value for a property should be described in its description text.\n\n\n\n\n\n\n\n\n\n\n\n\nactions\n\n\nOptional.\n\n\nActions are used in the development of pipelines. Provides a list of custom actions that will be added during component registration.\n\n\n\n\nNOTE:\n For convenience, a default action will be created upon component registration if this element is not provided in the descriptor file.\n\n\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the action\u2019s name. Must be unique among all actions, including those that already exist on the system and those specified in this descriptor.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the action.\n\n\n\n\n\n\nalgorithm:\n\n Required. Contains the name of the algorithm for this action. The algorithm must either already exist on the system or be defined in this descriptor.\n\n\n\n\n\n\nproperties:\n\n Optional. List of properties that will be passed to the algorithm. Each property has an associated name and value sub-field, which are both required. Name must be one of the properties specified in the algorithm definition for this action.\n\n\n\n\n\n\nExample:\n\n\n\"actions\": [\n {\n \"name\": \"SAMPLE COMPONENT FACE DETECTION ACTION\",\n \"description\": \"Executes the sample component face detection algorithm using the default parameters.\",\n \"algorithm\": \"SAMPLECOMPONENT\",\n \"properties\": []\n }\n]\n\n\n\ntasks\n\n\nOptional.\n\n\nA list of custom tasks that will be added during component registration.\n\n\n\n\nNOTE:\n For convenience, a default task will be created upon component registration if this element is not provided in the descriptor file.\n\n\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the task's name. Must be unique among all tasks, including those that already exist on the system and those specified in this descriptor.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the task.\n\n\n\n\n\n\nactions:\n\n Required. Minimum length is 1. Contains the names of the actions that this task uses. 
Actions must either already exist on the system or be defined in this descriptor.\n\n\n\n\n\n\nExample:\n\n\n\"tasks\": [\n {\n \"name\": \"SAMPLE COMPONENT FACE DETECTION TASK\",\n \"description\": \"Performs sample component face detection.\",\n \"actions\": [\n \"SAMPLE COMPONENT FACE DETECTION ACTION\"\n ]\n }\n]\n\n\n\npipelines\n\n\nOptional.\n\n\nA list of custom pipelines that will be added during component registration.\n\n\n\n\nNOTE:\n For convenience, a default pipeline will be created upon component registration if this element is not provided in the descriptor file.\n\n\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the pipeline's name. Must be unique among all pipelines, including those that already exist on the system and those specified in this descriptor.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the pipeline.\n\n\n\n\n\n\ntasks:\n\n Required. Minimum length is 1. Contains the names of the tasks that this pipeline uses. Tasks must either already exist on the system or be defined in this descriptor.\n\n\n\n\n\n\nExample:\n\n\n\"pipelines\": [\n {\n \"name\": \"SAMPLE COMPONENT FACE DETECTION PIPELINE\",\n \"description\": \"Performs sample component face detection.\",\n \"tasks\": [\n \"SAMPLE COMPONENT FACE DETECTION TASK\"\n ]\n }\n]", "title": "Component Descriptor Reference" }, { @@ -682,7 +682,7 @@ }, { "location": "/CPP-Batch-Component-API/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Batch Component API currently supports the development of \ndetection components\n, which are used to detect objects in image, video, audio, or other (generic) files that reside on disk.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\nTranscription (Detecting speech and transcribing it into text)\n\n\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executable\n. Developers create component libraries that encapsulate the component detection logic. Each instance of the Component Executable loads one of these libraries and uses it to service job requests sent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executable:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes functions on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executable is as follows:\n\n\ncomponent->SetRunDirectory(...)\ncomponent->Init()\nwhile (true) {\n job = ReceiveJob()\n if (component->Supports(job.data_type))\n component->GetDetections(...) 
// Component logic does the work here\n SendJobResponse()\n}\ncomponent->Close()\n\n\n\nEach instance of a Component Executable runs as a separate process.\n\n\nThe Component Executable receives and parses requests from the WFM, invokes functions on the Component Logic to get detection objects, and subsequently populates responses with the component output and sends them to the WFM.\n\n\nA component developer implements a detection component by extending \nMPFDetectionComponent\n.\n\n\nAs an alternative to extending \nMPFDetectionComponent\n directly, a developer may extend one of several convenience adapter classes provided by OpenMPF. See \nConvenience Adapters\n for more information.\n\n\nGetting Started\n\n\nThe quickest way to get started with the C++ Batch Component API is to first read the \nOpenMPF Component API Overview\n and then \nreview the source\n for example OpenMPF C++ detection components.\n\n\nDetection components are implemented by:\n\n\n\n\nExtending \nMPFDetectionComponent\n.\n\n\nBuilding the component into a shared object library. (See \nHelloWorldComponent CMakeLists.txt\n).\n\n\nCreating a component Docker image. (See the \nREADME\n).\n\n\n\n\nAPI Specification\n\n\nThe figure below presents a high-level component diagram of the C++ Batch Component API:\n\n\n\n\nThe Node Manager is only used in a non-Docker deployment. In a Docker deployment the Component Executor is started by the Docker container itself.\n\n\nThe API consists of \nComponent Interfaces\n, which provide interfaces and abstract classes for developing components; \nJob Definitions\n, which define the work to be performed by a component; \nJob Results\n, which define the results generated by the component; \nComponent Adapters\n, which provide default implementations of several of the \nMPFDetectionComponent\n interface functions; and \nComponent Utilities\n, which perform actions such as image rotation, and cropping.\n\n\nComponent Interface\n\n\n\n\nMPFComponent\n - Abstract base class for components.\n\n\n\n\nDetection Component Interface\n\n\n\n\nMPFDetectionComponent\n extends \nMPFComponent\n - Abstract class that should be extended by all OpenMPF C++ detection components that perform batch processing.\n\n\n\n\nJob Definitions\n\n\nThe following data structures contain details about a specific job (work unit):\n\n\n\n\nMPFImageJob\n extends \nMPFJob\n\n\nMPFVideoJob\n extends \nMPFJob\n\n\nMPFAudioJob\n extends \nMPFJob\n\n\nMPFGenericJob\n extends \nMPFJob\n\n\n\n\nJob Results\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\nMPFAudioTrack\n\n\nMPFGenericTrack\n\n\n\n\nComponents must also include two \nComponent Factory Functions\n.\n\n\nComponent Interface\n\n\nThe \nMPFComponent\n class is the abstract base class utilized by all OpenMPF C++ components that perform batch processing.\n\n\nSee the latest source here.\n\n\n\n\nIMPORTANT:\n This interface should not be directly implemented, because no mechanism exists for launching components based off of it. 
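For orientation, a minimal sketch of a complete batch detection component declaration follows. The header name and the MPF::COMPONENT namespace are taken from the OpenMPF C++ SDK samples and should be treated as assumptions for your build; the member functions and factory macros are the ones documented in this guide.

#include <vector>
#include <MPFDetectionComponent.h>

using namespace MPF::COMPONENT;

class SampleComponent : public MPFDetectionComponent {
public:
    bool Init();   // one-time setup, called after SetRunDirectory
    bool Close();  // shutdown, called before the instance is deleted
    bool Supports(MPFDetectionDataType data_type);
    std::vector<MPFImageLocation> GetDetections(const MPFImageJob &job);
    std::vector<MPFVideoTrack> GetDetections(const MPFVideoJob &job);
    std::vector<MPFAudioTrack> GetDetections(const MPFAudioJob &job);
    std::vector<MPFGenericTrack> GetDetections(const MPFGenericJob &job);
    MPFComponentType GetComponentType() { return MPF_DETECTION_COMPONENT; }
};

// Factory macros (described later in this guide), placed in the .cpp file.
MPF_COMPONENT_CREATOR(SampleComponent);
MPF_COMPONENT_DELETER();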
Currently, the only supported type of component is detection, and all batch detection components should instead extend \nMPFDetectionComponent\n.\n\n\n\n\nSetRunDirectory(string)\n\n\nSets the value of the private \nrun_directory\n data member which contains the full path of the parent folder above where the component is installed.\n\n\n\n\nFunction Definition:\n\n\n\n\nvoid SetRunDirectory(const string &run_dir)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nrun_dir\n\n\nconst string &\n\n\nFull path of the parent folder above where the component is installed.\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nIMPORTANT:\n \nSetRunDirectory\n is called by the Component Executable to set the correct path. This function should not be called within your implementation.\n\n\n\n\nGetRunDirectory()\n\n\nReturns the value of the private \nrun_directory\n data member which contains the full path of the parent folder above where the component is installed. This parent folder is also known as the plugin folder.\n\n\n\n\nFunction Definition:\n\n\n\n\nstring GetRunDirectory()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nstring\n) Full path of the parent folder above where the component is installed.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nstring run_dir = GetRunDirectory();\nstring plugin_path = run_dir + \"/SampleComponent\";\nstring config_path = plugin_path + \"/config\";\n\n\n\nInit()\n\n\nThe component should perform all initialization operations in the \nInit\n member function.\nThis will be executed once by the Component Executable, on component startup, before the first job, after \nSetRunDirectory\n.\n\n\n\n\nFunction Definition:\n\n\n\n\nbool Init()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nbool\n) Return true if initialization is successful, otherwise return false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nbool SampleComponent::Init() {\n // Get component paths\n string run_dir = GetRunDirectory();\n string plugin_path = run_dir + \"/SampleComponent\";\n string config_path = plugin_path + \"/config\";\n\n // Setup logger, load data models, etc.\n\n return true;\n}\n\n\n\nClose()\n\n\nThe component should perform all shutdown operations in the \nClose\n member function.\nThis will be executed once by the Component Executable, on component shutdown, usually after the last job.\n\n\nThis function is called before the component instance is deleted (see \nComponent Factory Functions\n).\n\n\n\n\nFunction Definition:\n\n\n\n\nbool Close()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nbool\n) Return true if successful, otherwise return false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nbool SampleComponent::Close() {\n // Free memory, etc.\n return true;\n}\n\n\n\nGetComponentType()\n\n\nThe GetComponentType() member function allows the C++ Batch Component API to determine the component \"type.\" Currently \nMPF_DETECTION_COMPONENT\n is the only supported component type. 
APIs for other component types may be developed in the future.\n\n\n\n\nFunction Definition:\n\n\n\n\nMPFComponentType GetComponentType()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nMPFComponentType\n) Currently, \nMPF_DETECTION_COMPONENT\n is the only supported return value.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nMPFComponentType SampleComponent::GetComponentType() {\n return MPF_DETECTION_COMPONENT;\n};\n\n\n\nComponent Factory Functions\n\n\nEvery detection component must include the following macros in its implementation:\n\n\nMPF_COMPONENT_CREATOR(TYPENAME);\n\n\n\nMPF_COMPONENT_DELETER();\n\n\n\nThe creator macro takes the \nTYPENAME\n of the detection component (for example, \u201cHelloWorld\u201d). This macro creates the factory function that the OpenMPF Component Executable will call in order to instantiate the detection component. The creation function is called once, to obtain an instance of the component, after the component library has been loaded into memory.\n\n\nThe deleter macro creates the factory function that the Component Executable will use to delete that instance of the detection component.\n\n\nThese macros must be used outside of a class declaration, preferably at the bottom or top of a component source (.cpp) file.\n\n\nExample:\n\n\n// Note: Do not put the TypeName/Class Name in quotes\nMPF_COMPONENT_CREATOR(HelloWorld);\nMPF_COMPONENT_DELETER();\n\n\n\nDetection Component Interface\n\n\nThe \nMPFDetectionComponent\n class is the abstract class utilized by all OpenMPF C++ detection components that perform batch processing. This class provides functions for developers to integrate detection logic into OpenMPF.\n\n\nSee the latest source here.\n\n\n\n\nIMPORTANT:\n Each batch detection component must implement all of the \nGetDetections()\n functions or extend from a superclass which provides implementations for them (see \nconvenience adapters\n).\n\n\nIf your component does not support a particular data type, it should simply:\n\nreturn MPF_UNSUPPORTED_DATA_TYPE;\n\n\n\n\nConvenience Adapters\n\n\nAs an alternative to extending \nMPFDetectionComponent\n directly, developers may extend one of several convenience adapter classes provided by OpenMPF.\n\n\nThese adapters provide default implementations of several functions in \nMPFDetectionComponent\n and ensure that the component's logic properly extends from the Component API. This enables developers to concentrate on implementation of the detection algorithm.\n\n\nThe following adapters are provided:\n\n\n\n\nImage Detection (\nsource\n)\n\n\nVideo Detection (\nsource\n)\n\n\nImage and Video Detection (\nsource\n)\n\n\nAudio Detection (\nsource\n)\n\n\nAudio and Video Detection (\nsource\n)\n\n\nGeneric Detection (\nsource\n)\n\n\n\n\n\n\nExample: Creating Adaptors to Perform Naive Tracking:\n\nA simple detector that operates on videos may simply go through the video frame-by-frame, extract each frame\u2019s data, and perform detections on that data as though it were processing a new unrelated image each time. As each frame is processed, one or more \nMPFImageLocations\n are generated.\n\n\nGenerally, it is preferred that a detection component that supports \nVIDEO\n data is able to perform tracking across video frames to appropriately correlate \nMPFImageLocation\n detections across frames.\n\n\nAn adapter could be developed to perform simple tracking. 
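For example, the overlap test at the heart of such a tracking adapter might look like the following sketch. This is an illustration only, not an actual OpenMPF adapter; Box is a simplified stand-in for MPFImageLocation, and the 50% threshold mirrors the example given here.

#include <algorithm>

struct Box { int x, y, w, h; };  // simplified stand-in for MPFImageLocation

// Intersection-over-union of two axis-aligned boxes, in [0.0, 1.0].
double Overlap(const Box &a, const Box &b) {
    int ix = std::max(0, std::min(a.x + a.w, b.x + b.w) - std::max(a.x, b.x));
    int iy = std::max(0, std::min(a.y + a.h, b.y + b.h) - std::max(a.y, b.y));
    double inter = static_cast<double>(ix) * iy;
    double uni = static_cast<double>(a.w) * a.h
               + static_cast<double>(b.w) * b.h - inter;
    return uni > 0 ? inter / uni : 0.0;
}

// Detections in contiguous frames join the same track when IoU >= 0.5.
bool SameTrack(const Box &prev, const Box &curr) {
    return Overlap(prev, curr) >= 0.5;
}

A fuller adapter would also carry forward track ids and start/stop frames; the point here is only the frame-to-frame association rule.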
This would correlate \nMPFImageLocation\n detections across frames by na\u00efvely looking for bounding box regions in each contiguous frame that overlap by a given threshold such as 50%.\n\n\n\n\nSupports(MPFDetectionDataType)\n\n\nReturns true or false depending on whether the data type is supported.\n\n\n\n\nFunction Definition:\n\n\n\n\nbool Supports(MPFDetectionDataType data_type)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\ndata_type\n\n\nMPFDetectionDataType\n\n\nThe data type of the media. The component should return true if it supports IMAGE, VIDEO, AUDIO, and/or UNKNOWN (generic) processing.\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nbool\n) True if the component supports the data type, otherwise false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\n// Sample component that supports only image and video files\nbool SampleComponent::Supports(MPFDetectionDataType data_type) {\n return data_type == MPFDetectionDataType::IMAGE || data_type == MPFDetectionDataType::VIDEO;\n}\n\n\n\nGetDetections(MPFImageJob \u2026)\n\n\nUsed to detect objects in an image file. The MPFImageJob structure contains\nthe data_uri specifying the location of the image file.\n\n\nCurrently, the data_uri is always a local file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\".\nThis is because all media is copied to the OpenMPF server before the job is executed.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector<MPFImageLocation> GetDetections(const MPFImageJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFImageJob&\n\n\nStructure containing details about the work to be performed. See \nMPFImageJob\n\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector<MPFImageLocation>\n) The \nMPFImageLocation\n data for each detected object.\n\n\n\n\nGetDetections(MPFVideoJob \u2026)\n\n\nUsed to detect objects in a video file. Prior to being sent to the component, videos are split into logical \"segments\"\nof video data and each segment (containing a range of frames) is assigned to a different job. Components are not\nguaranteed to receive requests in any order. For example, the first request processed by a component might receive\na request for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector<MPFVideoTrack> GetDetections(const MPFVideoJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFVideoJob&\n\n\nStructure containing details about the work to be performed. See \nMPFVideoJob\n\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector<MPFVideoTrack>\n) The \nMPFVideoTrack\n data for each detected object.\n\n\n\n\nGetDetections(MPFAudioJob \u2026)\n\n\nUsed to detect objects in an audio file. Currently, audio files are not logically segmented, so a job will contain\nthe entirety of the audio file.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector<MPFAudioTrack> GetDetections(const MPFAudioJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFAudioJob &\n\n\nStructure containing details about the work to be performed. See \nMPFAudioJob\n\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector<MPFAudioTrack>\n) The \nMPFAudioTrack\n data for each detected object.\n\n\n\n\nGetDetections(MPFGenericJob \u2026)\n\n\nUsed to detect objects in files that aren't video, image, or audio files. Such files are of the UNKNOWN type and\nhandled generically.
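Tying the GetDetections overloads above together, here is a minimal sketch of an image implementation, continuing the hypothetical SampleComponent. It assumes OpenCV is available; DetectFaces is a hypothetical helper standing in for real detection logic, and error handling is omitted.

#include <vector>
#include <opencv2/opencv.hpp>
#include <MPFDetectionComponent.h>

using namespace MPF::COMPONENT;

// Hypothetical helper standing in for real detection logic.
std::vector<cv::Rect> DetectFaces(const cv::Mat &image);

std::vector<MPFImageLocation> SampleComponent::GetDetections(const MPFImageJob &job) {
    std::vector<MPFImageLocation> locations;
    // data_uri is currently always a local file path (see above).
    cv::Mat image = cv::imread(job.data_uri);
    for (const cv::Rect &face : DetectFaces(image)) {
        // MPFImageLocation(x, y, width, height, confidence, detection_properties);
        // the constructor arguments follow the structure documented below.
        locations.emplace_back(face.x, face.y, face.width, face.height, 0.9f,
                               Properties{{"CLASSIFICATION", "face"}});
    }
    return locations;
}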
These files are not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector<MPFGenericTrack> GetDetections(const MPFGenericJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFGenericJob &\n\n\nStructure containing details about the work to be performed. See \nMPFGenericJob\n\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector<MPFGenericTrack>\n) The \nMPFGenericTrack\n data for each detected object.\n\n\n\n\nDetection Job Data Structures\n\n\nThe following data structures contain details about a specific job (work unit):\n\n\n\n\nMPFImageJob\n extends \nMPFJob\n\n\nMPFVideoJob\n extends \nMPFJob\n\n\nMPFAudioJob\n extends \nMPFJob\n\n\nMPFGenericJob\n extends \nMPFJob\n\n\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\nMPFAudioTrack\n\n\nMPFGenericTrack\n\n\n\n\nMPFJob\n\n\nStructure containing information about a job to be performed on a piece of media.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFJob(\n const string &job_name,\n const string &data_uri,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob_name \n\n\nconst string &\n\n\nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n\n\n\n\n\ndata_uri \n\n\nconst string &\n\n\nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\".\n\n\n\n\n\n\njob_properties \n\n\nconst Properties &\n\n\nContains a map of \n<string, string>\n which represents the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job. \n Note: The job_properties map may not contain the full set of job properties. For properties not contained in the map, the component must use a default value.\n\n\n\n\n\n\nmedia_properties \n\n\nconst Properties &\n\n\nContains a map of \n<string, string>\n of metadata about the media associated with the job. The entries in the map vary depending on the type of media. Refer to the type-specific job structures below.\n\n\n\n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc.). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g.
Docker container).\n\n\nMPFImageJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in an image file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFImageJob(\n const string &job_name,\n const string &data_uri,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\nMPFImageJob(\n const string &job_name,\n const string &data_uri,\n const MPFImageLocation &location,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \nlocation\n\n \nconst MPFImageLocation &\n\n \nAn \nMPFImageLocation\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of the image in pixels\n\n \nFRAME_HEIGHT\n : the height of the image in pixels\n\n \n\n May include the following key-value pairs:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \nHORIZONTAL_FLIP\n : true if the image is mirrored across the Y-axis, otherwise false\n\n \nEXIF_ORIENTATION\n : the standard EXIF orientation tag; a value between 1 and 8\n\n \n\n \n\n \n\n \n\n\n\n\n\nMPFVideoJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in a video file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFVideoJob(\n const string &job_name,\n const string &data_uri,\n int start_frame,\n int stop_frame,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\nMPFVideoJob(\n const string &job_name,\n const string &data_uri,\n int start_frame,\n int stop_frame,\n const MPFVideoTrack &track,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \nstart_frame\n\n \nconst int\n\n \nThe first frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \nstop_frame\n\n \nconst int\n\n \nThe last frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \ntrack\n\n \nconst MPFVideoTrack &\n\n \nAn \nMPFVideoTrack\n from the previous pipeline stage. Provided when feed forward is enabled. 
See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of video in milliseconds\n\n \nFPS\n : frames per second (averaged for variable frame rate video)\n\n \nFRAME_COUNT\n : the number of frames in the video\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of a frame in pixels\n\n \nFRAME_HEIGHT\n : the height of a frame in pixels\n\n \nHAS_CONSTANT_FRAME_RATE\n : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined\n\n \n\n May include the following key-value pair:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \n\n \n\n \n\n \n\n\n\n\n\n\n\nIMPORTANT:\n \nFRAME_INTERVAL\n is a common job property that many components support. For frame intervals greater than 1, the component must look for detections starting with the first frame, and then skip frames as specified by the frame interval, until or before it reaches the stop frame. For example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component must look for objects in frames numbered 0, 2, 4, 6, ..., 98.\n\n\n\n\nMPFAudioJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in an audio file. Currently, audio files are not logically segmented, so a job will contain the entirety of the audio file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFAudioJob(\n const string &job_name,\n const string &data_uri,\n int start_time,\n int stop_time,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\nMPFAudioJob(\n const string &job_name,\n const string &data_uri,\n int start_time,\n int stop_time,\n const MPFAudioTrack &track,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \nstart_time\n\n \nconst int\n\n \nThe time (0-based index, in milliseconds) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstop_time\n\n \nconst int\n\n \nThe time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \ntrack\n\n \nconst MPFAudioTrack &\n\n \nAn \nMPFAudioTrack\n from the previous pipeline stage. Provided when feed forward is enabled. 
See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of audio file in milliseconds\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n\n\n\n\nMPFGenericJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in a file that isn't a video, image, or audio file. The file is of the UNKNOWN type and handled generically. The file is not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFGenericJob(\n const string &job_name,\n const string &data_uri,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\nMPFGenericJob(\n const string &job_name,\n const string &data_uri,\n const MPFGenericTrack &track,\n const Properties &job_properties,\n const Properties &media_properties)\n}\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \ntrack\n\n \nconst MPFGenericTrack &\n\n \nAn \nMPFGenericTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pair:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n\n\n\n\nDetection Job Result Classes\n\n\nMPFImageLocation\n\n\nStructure used to store the location of detected objects in a image file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFImageLocation()\nMPFImageLocation(\n int x_left_upper,\n int y_left_upper,\n int width,\n int height,\n float confidence = -1,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nx_left_upper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\ny_left_upper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS. 
See the \nsection\n for \nROTATION\n and \nHORIZONTAL_FLIP\n below,\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is \nCLASSIFICATION\n and the value is the type of object detected.\n\n\n\nMPFImageLocation {\n x_left_upper = 0, y_left_upper = 0, width = 100, height = 50, confidence = 1.0,\n { {\"CLASSIFICATION\", \"backpack\"} }\n}\n\n\n\n\n\n\nRotation and Horizontal Flip\n\n\nWhen the \ndetection_properties\n map contains a \nROTATION\n key, it should be a floating point value in the interval\n\n[0.0, 360.0)\n indicating the orientation of the detection in degrees in the counter-clockwise direction.\nIn order to view the detection in the upright orientation, it must be rotated the given number of degrees in the\nclockwise direction.\n\n\nThe \ndetection_properties\n map can also contain a \nHORIZONTAL_FLIP\n property that will either be \n\"true\"\n or \n\"false\"\n.\nThe \ndetection_properties\n map may have both \nHORIZONTAL_FLIP\n and \nROTATION\n keys.\n\n\nThe Workflow Manager performs the following algorithm to draw the bounding box when generating markup:\n\n\n\n\n\n Draw the rectangle ignoring rotation and flip.\n\n\n\n\n\n Rotate the rectangle counter-clockwise the given number of degrees around its top left corner.\n\n\n\n\n\n If the rectangle is flipped, flip horizontally around the top left corner.\n\n\n\n\n\n\n\n\nIn the image above you can see the three steps required to properly draw a bounding box.\nStep 1 is drawn in red. Step 2 is drawn in blue. Step 3 and the final result is drawn in green.\nThe detection for the image above is:\n\n\n\nMPFImageLocation {\n x_left_upper = 210, y_left_upper = 189, width = 177, height = 41, confidence = 1.0,\n { {\"ROTATION\", \"15\"}, { \"HORIZONTAL_FLIP\", \"true\" } }\n}\n\n\n\n\nNote that the \nx_left_upper\n, \ny_left_upper\n, \nwidth\n, and \nheight\n values describe the red rectangle. The addition\nof the \nROTATION\n property results in the blue rectangle, and the addition of the \nHORIZONTAL_FLIP\n property results\nin the green rectangle.\n\n\nOne way to think about the process is \"draw the unrotated and unflipped rectangle, stick a pin in the upper left corner,\nand then rotate and flip around the pin\".\n\n\nRotation-Only Example\n\n\n\n\nThe Workflow Manager generated the above image by performing markup on the original image with the following\ndetection:\n\n\n\nMPFImageLocation {\n x_left_upper = 156, y_left_upper = 339, width = 194, height = 243, confidence = 1.0,\n { {\"ROTATION\", \"90.0\"} }\n}\n\n\n\n\nThe markup process followed steps 1 and 2 in the previous section, skipping step 3 because there is no\n\nHORIZONTAL_FLIP\n.\n\n\nIn order to properly extract the detection region from the original image, such as when generating an artifact, you\nwould need to rotate the region in the above image 90 degrees clockwise around the cyan dot currently shown in the\nbottom-left corner so that the face is in the proper upright position.\n\n\nWhen the rotation is properly corrected in this way, the cyan dot will appear in the top-left corner of the bounding\nbox. That is why its position is described using the \nx_left_upper\n, and \ny_left_upper\n variables. 
They refer to the\ntop-left corner of the correctly oriented region.\n\n\nMPFVideoTrack\n\n\nStructure used to store the location of detected objects in a video file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFVideoTrack()\nMPFVideoTrack(\n int start_frame,\n int stop_frame,\n float confidence = -1,\n map frame_locations,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstop_frame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nframe_locations\n\n\nmap\n\n\nA map of individual detections. The key for each map entry is the frame number where the detection was generated, and the value is a \nMPFImageLocation\n calculated as if that frame was a still image. Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFVideoTrack.detection_properties\n do not show up in the JSON output object or are used by the WFM in any way.\n\n\n\n\nA component that detects text can add an entry to \ndetection_properties\n where the key is \nTRANSCRIPT\n and the value is a string representing the text found in the video segment.\n\n\nMPFVideoTrack track;\ntrack.start_frame = 0;\ntrack.stop_frame = 5;\ntrack.confidence = 1.0;\ntrack.frame_locations = frame_locations;\ntrack.detection_properties[\"TRANSCRIPT\"] = \"RE5ULTS FR0M A TEXT DETECTER\";\n\n\n\nMPFAudioTrack\n\n\nStructure used to store the location of detected objects in an audio file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFAudioTrack()\nMPFAudioTrack(\n int start_time,\n int stop_time,\n float confidence = -1,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event started.\n\n\n\n\n\n\nstop_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event stopped.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detection. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. 
For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFAudioTrack.detection_properties\n do not show up in the JSON output object or are used by the WFM in any way.\n\n\n\n\nMPFGenericTrack\n\n\nStructure used to store the location of detected objects in a file that is not a video, image, or audio file. The file is of the UNKNOWN type and handled generically.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFGenericTrack()\nMPFGenericTrack(\n float confidence = -1,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detection. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nException Types\n\n\nMPFDetectionException\n\n\nException that should be thrown by the \nGetDetections()\n methods when an error occurs.\nThe content of the \nerror_code\n and \nwhat()\n members will appear in the JSON output object.\n\n\n\n\nConstructors:\n\n\n\n\nMPFDetectionException(MPFDetectionError error_code, const std::string &what = \"\")\nMPFDetectionException(const std::string &what)\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nerror_code\n\n\nMPFDetectionError\n\n\nSpecifies the error type. See \nMPFDetectionError\n.\n\n\n\n\n\n\nwhat()\n\n\nconst char*\n\n\nTextual description of the specific error. (Inherited from \nstd::exception\n)\n\n\n\n\n\n\n\n\nEnumeration Types\n\n\nMPFDetectionError\n\n\nEnum used to indicate the type of error that occurred in a \nGetDetections()\n method. It is used as a parameter to\nthe \nMPFDetectionException\n constructor. A component is not required to support all error types.\n\n\n\n\n\n\n\n\nENUM\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nMPF_DETECTION_SUCCESS\n\n\nThe component function completed successfully.\n\n\n\n\n\n\nMPF_OTHER_DETECTION_ERROR_TYPE\n\n\nThe component function has failed for a reason that is not captured by any of the other error codes.\n\n\n\n\n\n\nMPF_DETECTION_NOT_INITIALIZED\n\n\nThe initialization of the component, or the initialization of any of its dependencies, has failed for any reason.\n\n\n\n\n\n\nMPF_UNSUPPORTED_DATA_TYPE\n\n\nThe job passed to a component requests processing of a job of an unsupported type. For instance, a component that is only capable of processing audio files should return this error code if a video or image job request is received.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_DATAFILE\n\n\nThe data file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI. \nUse MPF_COULD_NOT_OPEN_MEDIA for media files.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_DATAFILE\n\n\nThere is a failure reading data from a successfully opened input data file. \nUse MPF_COULD_NOT_READ_MEDIA for media files.\n\n\n\n\n\n\nMPF_FILE_WRITE_ERROR\n\n\nThe component received a failure for any reason when attempting to write to a file.\n\n\n\n\n\n\nMPF_BAD_FRAME_SIZE\n\n\nThe frame data retrieved has an incorrect or invalid frame size. 
For example, if a call to \ncv::imread()\n returns a frame of data with either the number of rows or columns less than or equal to 0.\n\n\n\n\n\n\nMPF_DETECTION_FAILED\n\n\nGeneral failure of a detection algorithm. This does not indicate a lack of detections found in the media, but rather a break down in the algorithm that makes it impossible to continue to try to detect objects.\n\n\n\n\n\n\nMPF_INVALID_PROPERTY\n\n\nThe component received a property that is unrecognized or has an invalid/out-of-bounds value.\n\n\n\n\n\n\nMPF_MISSING_PROPERTY\n\n\nThe component received a job that is missing a required property.\n\n\n\n\n\n\nMPF_MEMORY_ALLOCATION_FAILED\n\n\nThe component failed to allocate memory for any reason.\n\n\n\n\n\n\nMPF_GPU_ERROR\n\n\nThe job was configured to execute on a GPU, but there was an issue with the GPU or no GPU was detected.\n\n\n\n\n\n\nMPF_NETWORK_ERROR\n\n\nThe component failed to communicate with an external system over the network. The system may not be available or there may have been a timeout.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_MEDIA\n\n\nThe media file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_MEDIA\n\n\nThere is a failure reading data from a successfully opened media file.\n\n\n\n\n\n\n\n\nUtility Classes\n\n\nFor convenience, the OpenMPF provides the \nMPFImageReader\n (\nsource\n) and \nMPFVideoCapture\n (\nsource\n) utility classes to perform horizontal flipping, rotation, and cropping to a region of interest. Note, that when using these classes, the component will also need to utilize the class to perform a reverse transform to convert the transformed pixel coordinates back to the original (e.g. pre-flipped, pre-rotated, and pre-cropped) coordinate space.\n\n\nC++ Component Build Environment\n\n\nA C++ component library must be built for the same C++ compiler and Linux\nversion that is used by the OpenMPF Component Executable. This is to ensure\ncompatibility between the executable and the library functions at the\nApplication Binary Interface (ABI) level. At this writing, the OpenMPF runs on\nUbuntu 20.04 (kernel version 5.13.0-30), and the OpenMPF C++ Component\nExecutable is built with g++ (GCC) 9.3.0-17.\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and C++ libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through multiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input (i.e. when processing the same \nMPFJob\n).\n\n\nGPU Support\n\n\nFor components that want to take advantage of NVIDA GPU processors, please read the \nGPU Support Guide\n. 
Also ensure that your build environment has the NVIDIA CUDA Toolkit installed, as described in the \nBuild Environment Setup Guide\n.\n\n\nComponent Structure for non-Docker Deployments\n\n\nIt is recommended that C++ components are organized according to the following directory structure:\n\n\ncomponentName\n\u251c\u2500\u2500 config - Optional component-specific configuration files\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 lib\n \u2514\u2500\u2500libComponentName.so - Compiled component library\n\n\n\nOnce built, components should be packaged into a .tar.gz containing the contents of the directory shown above.\n\n\nLogging\n\n\nIt is recommended to use \nApache log4cxx\n for\nOpenMPF Component logging. Components using log4cxx should not configure logging themselves.\nThe Component Executor will configure log4cxx globally. Components should call\n\nlog4cxx::Logger::getLogger(\"\")\n to a get a reference to the logger. If you\nare using a different logging framework, you should make sure its behavior is similar to how\nthe Component Executor configures log4cxx as described below.\n\n\nThe following log LEVELs are supported: \nFATAL, ERROR, WARN, INFO, DEBUG, TRACE\n.\nThe \nLOG_LEVEL\n environment variable can be set to one of the log levels to change the logging\nverbosity. When \nLOG_LEVEL\n is absent, \nINFO\n is used.\n\n\nNote that multiple instances of the same component can log to the same file.\nAlso, logging content can span multiple lines.\n\n\nThe logger will write to both standard error and\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log\n.\n\n\nEach log statement will take the form:\n\nDATE TIME LEVEL CONTENT\n\n\nFor example:\n\n2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Batch Component API currently supports the development of \ndetection components\n, which are used to detect objects in image, video, audio, or other (generic) files that reside on disk.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\nTranscription (Detecting speech and transcribing it into text)\n\n\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executable\n. Developers create component libraries that encapsulate the component detection logic. 
Each instance of the Component Executable loads one of these libraries and uses it to service job requests sent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executable:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes functions on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executable is as follows:\n\n\ncomponent->SetRunDirectory(...)\ncomponent->Init()\nwhile (true) {\n job = ReceiveJob()\n if (component->Supports(job.data_type))\n component->GetDetections(...) // Component logic does the work here\n SendJobResponse()\n}\ncomponent->Close()\n\n\n\nEach instance of a Component Executable runs as a separate process.\n\n\nThe Component Executable receives and parses requests from the WFM, invokes functions on the Component Logic to get detection objects, and subsequently populates responses with the component output and sends them to the WFM.\n\n\nA component developer implements a detection component by extending \nMPFDetectionComponent\n.\n\n\nAs an alternative to extending \nMPFDetectionComponent\n directly, a developer may extend one of several convenience adapter classes provided by OpenMPF. See \nConvenience Adapters\n for more information.\n\n\nGetting Started\n\n\nThe quickest way to get started with the C++ Batch Component API is to first read the \nOpenMPF Component API Overview\n and then \nreview the source\n for example OpenMPF C++ detection components.\n\n\nDetection components are implemented by:\n\n\n\n\nExtending \nMPFDetectionComponent\n.\n\n\nBuilding the component into a shared object library. (See \nHelloWorldComponent CMakeLists.txt\n).\n\n\nCreating a component Docker image. (See the \nREADME\n).\n\n\n\n\nAPI Specification\n\n\nThe figure below presents a high-level component diagram of the C++ Batch Component API:\n\n\n\n\nThe Node Manager is only used in a non-Docker deployment. 
In a Docker deployment the Component Executor is started by the Docker container itself.\n\n\nThe API consists of \nComponent Interfaces\n, which provide interfaces and abstract classes for developing components; \nJob Definitions\n, which define the work to be performed by a component; \nJob Results\n, which define the results generated by the component; \nComponent Adapters\n, which provide default implementations of several of the \nMPFDetectionComponent\n interface functions; and \nComponent Utilities\n, which perform actions such as image rotation, and cropping.\n\n\nComponent Interface\n\n\n\n\nMPFComponent\n - Abstract base class for components.\n\n\n\n\nDetection Component Interface\n\n\n\n\nMPFDetectionComponent\n extends \nMPFComponent\n - Abstract class that should be extended by all OpenMPF C++ detection components that perform batch processing.\n\n\n\n\nJob Definitions\n\n\nThe following data structures contain details about a specific job (work unit):\n\n\n\n\nMPFImageJob\n extends \nMPFJob\n\n\nMPFVideoJob\n extends \nMPFJob\n\n\nMPFAudioJob\n extends \nMPFJob\n\n\nMPFGenericJob\n extends \nMPFJob\n\n\n\n\nJob Results\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\nMPFAudioTrack\n\n\nMPFGenericTrack\n\n\n\n\nComponents must also include two \nComponent Factory Functions\n.\n\n\nComponent Interface\n\n\nThe \nMPFComponent\n class is the abstract base class utilized by all OpenMPF C++ components that perform batch processing.\n\n\nSee the latest source here.\n\n\n\n\nIMPORTANT:\n This interface should not be directly implemented, because no mechanism exists for launching components based off of it. Currently, the only supported type of component is detection, and all batch detection components should instead extend \nMPFDetectionComponent\n.\n\n\n\n\nSetRunDirectory(string)\n\n\nSets the value of the private \nrun_directory\n data member which contains the full path of the parent folder above where the component is installed.\n\n\n\n\nFunction Definition:\n\n\n\n\nvoid SetRunDirectory(const string &run_dir)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nrun_dir\n\n\nconst string &\n\n\nFull path of the parent folder above where the component is installed.\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nIMPORTANT:\n \nSetRunDirectory\n is called by the Component Executable to set the correct path. This function should not be called within your implementation.\n\n\n\n\nGetRunDirectory()\n\n\nReturns the value of the private \nrun_directory\n data member which contains the full path of the parent folder above where the component is installed. 
This parent folder is also known as the plugin folder.\n\n\n\n\nFunction Definition:\n\n\n\n\nstring GetRunDirectory()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nstring\n) Full path of the parent folder above where the component is installed.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nstring run_dir = GetRunDirectory();\nstring plugin_path = run_dir + \"/SampleComponent\";\nstring config_path = plugin_path + \"/config\";\n\n\n\nInit()\n\n\nThe component should perform all initialization operations in the \nInit\n member function.\nThis will be executed once by the Component Executable, on component startup, before the first job, after \nSetRunDirectory\n.\n\n\n\n\nFunction Definition:\n\n\n\n\nbool Init()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nbool\n) Return true if initialization is successful, otherwise return false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nbool SampleComponent::Init() {\n // Get component paths\n string run_dir = GetRunDirectory();\n string plugin_path = run_dir + \"/SampleComponent\";\n string config_path = plugin_path + \"/config\";\n\n // Setup logger, load data models, etc.\n\n return true;\n}\n\n\n\nClose()\n\n\nThe component should perform all shutdown operations in the \nClose\n member function.\nThis will be executed once by the Component Executable, on component shutdown, usually after the last job.\n\n\nThis function is called before the component instance is deleted (see \nComponent Factory Functions\n).\n\n\n\n\nFunction Definition:\n\n\n\n\nbool Close()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nbool\n) Return true if successful, otherwise return false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nbool SampleComponent::Close() {\n // Free memory, etc.\n return true;\n}\n\n\n\nGetComponentType()\n\n\nThe GetComponentType() member function allows the C++ Batch Component API to determine the component \"type.\" Currently \nMPF_DETECTION_COMPONENT\n is the only supported component type. APIs for other component types may be developed in the future.\n\n\n\n\nFunction Definition:\n\n\n\n\nMPFComponentType GetComponentType()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nMPFComponentType\n) Currently, \nMPF_DETECTION_COMPONENT\n is the only supported return value.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nMPFComponentType SampleComponent::GetComponentType() {\n return MPF_DETECTION_COMPONENT;\n};\n\n\n\nComponent Factory Functions\n\n\nEvery detection component must include the following macros in its implementation:\n\n\nMPF_COMPONENT_CREATOR(TYPENAME);\n\n\n\nMPF_COMPONENT_DELETER();\n\n\n\nThe creator macro takes the \nTYPENAME\n of the detection component (for example, \u201cHelloWorld\u201d). This macro creates the factory function that the OpenMPF Component Executable will call in order to instantiate the detection component. 
The creation function is called once, to obtain an instance of the component, after the component library has been loaded into memory.\n\n\nThe deleter macro creates the factory function that the Component Executable will use to delete that instance of the detection component.\n\n\nThese macros must be used outside of a class declaration, preferably at the bottom or top of a component source (.cpp) file.\n\n\nExample:\n\n\n// Note: Do not put the TypeName/Class Name in quotes\nMPF_COMPONENT_CREATOR(HelloWorld);\nMPF_COMPONENT_DELETER();\n\n\n\nDetection Component Interface\n\n\nThe \nMPFDetectionComponent\n class is the abstract class utilized by all OpenMPF C++ detection components that perform batch processing. This class provides functions for developers to integrate detection logic into OpenMPF.\n\n\nSee the latest source here.\n\n\n\n\nIMPORTANT:\n Each batch detection component must implement all of the \nGetDetections()\n functions or extend from a superclass which provides implementations for them (see \nconvenience adapters\n).\n\n\nIf your component does not support a particular data type, it should simply:\n\nthrow MPFDetectionException(MPF_UNSUPPORTED_DATA_TYPE);\n\n\n\n\nConvenience Adapters\n\n\nAs an alternative to extending \nMPFDetectionComponent\n directly, developers may extend one of several convenience adapter classes provided by OpenMPF.\n\n\nThese adapters provide default implementations of several functions in \nMPFDetectionComponent\n and ensure that the component's logic properly extends from the Component API. This enables developers to concentrate on implementing the detection algorithm.\n\n\nThe following adapters are provided:\n\n\n\n\nImage Detection (\nsource\n)\n\n\nVideo Detection (\nsource\n)\n\n\nImage and Video Detection (\nsource\n)\n\n\nAudio Detection (\nsource\n)\n\n\nAudio and Video Detection (\nsource\n)\n\n\nGeneric Detection (\nsource\n)\n\n\n\n\n\n\nExample: Creating Adapters to Perform Naive Tracking:\n\nA simple detector that operates on videos may simply go through the video frame-by-frame, extract each frame\u2019s data, and perform detections on that data as though it were processing a new unrelated image each time. As each frame is processed, one or more \nMPFImageLocations\n are generated.\n\n\nGenerally, it is preferred that a detection component that supports \nVIDEO\n data is able to perform tracking across video frames to appropriately correlate \nMPFImageLocation\n detections across frames.\n\n\nAn adapter could be developed to perform simple tracking. This would correlate \nMPFImageLocation\n detections across frames by na\u00efvely looking for bounding box regions in each contiguous frame that overlap by a given threshold, such as 50%. A minimal sketch of that matching logic follows. 
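For illustration only, here is a minimal sketch of such overlap-based matching. This is not one of the official adapters; the helper names (\nOverlapRatio\n, \nAppendToTracks\n) are hypothetical, and the sketch assumes the \nMPFImageLocation\n and \nMPFVideoTrack\n structures described later in this document:\n\n\n#include <algorithm>\n#include <vector>\n\n// Intersection-over-union of two axis-aligned bounding boxes.\ndouble OverlapRatio(const MPFImageLocation &a, const MPFImageLocation &b) {\n    int x1 = std::max(a.x_left_upper, b.x_left_upper);\n    int y1 = std::max(a.y_left_upper, b.y_left_upper);\n    int x2 = std::min(a.x_left_upper + a.width, b.x_left_upper + b.width);\n    int y2 = std::min(a.y_left_upper + a.height, b.y_left_upper + b.height);\n    double intersection = (double) std::max(0, x2 - x1) * std::max(0, y2 - y1);\n    double union_area = (double) a.width * a.height + (double) b.width * b.height - intersection;\n    return union_area > 0 ? intersection / union_area : 0;\n}\n\n// Extend the track whose detection in the immediately preceding frame overlaps\n// the new detection by at least 50%; otherwise start a new single-frame track.\nvoid AppendToTracks(std::vector<MPFVideoTrack> &tracks, int frame, const MPFImageLocation &loc) {\n    for (auto &track : tracks) {\n        if (track.stop_frame == frame - 1\n                && OverlapRatio(track.frame_locations.at(track.stop_frame), loc) >= 0.5) {\n            track.frame_locations[frame] = loc;\n            track.stop_frame = frame;\n            return;\n        }\n    }\n    MPFVideoTrack track;\n    track.start_frame = frame;\n    track.stop_frame = frame;\n    track.frame_locations[frame] = loc;\n    tracks.push_back(track);\n}\n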
Supports(MPFDetectionDataType)\n\n\nReturns true or false depending on whether the data type is supported.\n\n\n\n\nFunction Definition:\n\n\n\n\nbool Supports(MPFDetectionDataType data_type)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\ndata_type\n\n\nMPFDetectionDataType\n\n\nThe type of media to be processed: IMAGE, VIDEO, AUDIO, or UNKNOWN (generic).\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nbool\n) True if the component supports the data type, otherwise false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\n// Sample component that supports only image and video files\nbool SampleComponent::Supports(MPFDetectionDataType data_type) {\n    return data_type == MPFDetectionDataType::IMAGE || data_type == MPFDetectionDataType::VIDEO;\n}\n\n\n\nGetDetections(MPFImageJob \u2026)\n\n\nUsed to detect objects in an image file. The MPFImageJob structure contains\nthe data_uri specifying the location of the image file.\n\n\nCurrently, the data_uri is always a local file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\".\nThis is because all media is copied to the OpenMPF server before the job is executed.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector<MPFImageLocation> GetDetections(const MPFImageJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFImageJob&\n\n\nStructure containing details about the work to be performed. See \nMPFImageJob\n.\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector<MPFImageLocation>\n) The \nMPFImageLocation\n data for each detected object. 
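Example (a minimal sketch, assuming OpenCV and the \nMPFImageReader\n utility class described under \nUtility Classes\n below; the single full-frame detection is a stand-in for real detection logic):\n\n\nstd::vector<MPFImageLocation> SampleComponent::GetDetections(const MPFImageJob &job) {\n    // MPFImageReader applies any requested rotation, flip, and cropping.\n    MPFImageReader image_reader(job);\n    cv::Mat image = image_reader.GetImage();\n\n    std::vector<MPFImageLocation> locations;\n    // Stand-in for real detection logic: report one detection covering the frame.\n    locations.emplace_back(0, 0, image.cols, image.rows, 0.9);\n\n    // Map pixel coordinates back to the original, pre-transform coordinate space.\n    for (auto &location : locations) {\n        image_reader.ReverseTransform(location);\n    }\n    return locations;\n}\n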
GetDetections(MPFVideoJob \u2026)\n\n\nUsed to detect objects in a video file. Prior to being sent to the component, videos are split into logical \"segments\"\nof video data and each segment (containing a range of frames) is assigned to a different job. Components are not\nguaranteed to receive requests in any order. For example, the first request processed by a component might cover\nframes 300-399 of Video A, while the next request may cover frames 900-999 of Video B.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector<MPFVideoTrack> GetDetections(const MPFVideoJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFVideoJob&\n\n\nStructure containing details about the work to be performed. See \nMPFVideoJob\n.\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector<MPFVideoTrack>\n) The \nMPFVideoTrack\n data for each detected object.\n\n\n\n\nGetDetections(MPFAudioJob \u2026)\n\n\nUsed to detect objects in an audio file. Currently, audio files are not logically segmented, so a job will contain\nthe entirety of the audio file.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector<MPFAudioTrack> GetDetections(const MPFAudioJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFAudioJob &\n\n\nStructure containing details about the work to be performed. See \nMPFAudioJob\n.\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector<MPFAudioTrack>\n) The \nMPFAudioTrack\n data for each detected object.\n\n\n\n\nGetDetections(MPFGenericJob \u2026)\n\n\nUsed to detect objects in files that aren't video, image, or audio files. Such files are of the UNKNOWN type and\nhandled generically. These files are not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector<MPFGenericTrack> GetDetections(const MPFGenericJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFGenericJob &\n\n\nStructure containing details about the work to be performed. See \nMPFGenericJob\n.\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector<MPFGenericTrack>\n) The \nMPFGenericTrack\n data for each detected object.\n\n\n\n\nDetection Job Data Structures\n\n\nThe following data structures contain details about a specific job (work unit):\n\n\n\n\nMPFImageJob\n extends \nMPFJob\n\n\nMPFVideoJob\n extends \nMPFJob\n\n\nMPFAudioJob\n extends \nMPFJob\n\n\nMPFGenericJob\n extends \nMPFJob\n\n\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\nMPFAudioTrack\n\n\nMPFGenericTrack\n\n\n\n\nMPFJob\n\n\nStructure containing information about a job to be performed on a piece of media.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFJob(\n    const string &job_name,\n    const string &data_uri,\n    const Properties &job_properties,\n    const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob_name \n\n\nconst string &\n\n\nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n\n\n\n\n\ndata_uri \n\n\nconst string &\n\n\nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\".\n\n\n\n\n\n\njob_properties \n\n\nconst Properties &\n\n\nContains a map of <string, string> which represents the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job. \n Note: The job_properties map may not contain the full set of job properties. For properties not contained in the map, the component must use a default value.\n\n\n\n\n\n\nmedia_properties \n\n\nconst Properties &\n\n\nContains a map of <string, string> of metadata about the media associated with the job. The entries in the map vary depending on the type of media. Refer to the type-specific job structures below.\n\n\n\n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. 
Docker container).\n\n\nMPFImageJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in an image file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFImageJob(\n const string &job_name,\n const string &data_uri,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\nMPFImageJob(\n const string &job_name,\n const string &data_uri,\n const MPFImageLocation &location,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \nlocation\n\n \nconst MPFImageLocation &\n\n \nAn \nMPFImageLocation\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of the image in pixels\n\n \nFRAME_HEIGHT\n : the height of the image in pixels\n\n \n\n May include the following key-value pairs:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \nHORIZONTAL_FLIP\n : true if the image is mirrored across the Y-axis, otherwise false\n\n \nEXIF_ORIENTATION\n : the standard EXIF orientation tag; a value between 1 and 8\n\n \n\n \n\n \n\n \n\n\n\n\n\nMPFVideoJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in a video file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFVideoJob(\n const string &job_name,\n const string &data_uri,\n int start_frame,\n int stop_frame,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\nMPFVideoJob(\n const string &job_name,\n const string &data_uri,\n int start_frame,\n int stop_frame,\n const MPFVideoTrack &track,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \nstart_frame\n\n \nconst int\n\n \nThe first frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \nstop_frame\n\n \nconst int\n\n \nThe last frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \ntrack\n\n \nconst MPFVideoTrack &\n\n \nAn \nMPFVideoTrack\n from the previous pipeline stage. Provided when feed forward is enabled. 
See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of video in milliseconds\n\n \nFPS\n : frames per second (averaged for variable frame rate video)\n\n \nFRAME_COUNT\n : the number of frames in the video\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of a frame in pixels\n\n \nFRAME_HEIGHT\n : the height of a frame in pixels\n\n \nHAS_CONSTANT_FRAME_RATE\n : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined\n\n \n\n May include the following key-value pair:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \n\n \n\n \n\n \n\n\n\n\n\n\n\nIMPORTANT:\n \nFRAME_INTERVAL\n is a common job property that many components support. For frame intervals greater than 1, the component must look for detections starting with the first frame, and then skip frames as specified by the frame interval, stopping at or before the stop frame. For example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, the detection component must look for objects in frames numbered 0, 2, 4, 6, ..., 98. A sketch of this frame selection appears below. 
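For illustration, a minimal sketch of that frame selection. The \nGetFrameInterval\n and \nFramesToProcess\n helpers are hypothetical, and \nProperties\n is shown here as a map of <string, string>, matching the \nMPFJob\n description above; the default of 1 is applied when the property is absent, as \nMPFJob.job_properties\n requires:\n\n\n#include <map>\n#include <string>\n#include <vector>\n\nusing Properties = std::map<std::string, std::string>;\n\n// Default to 1 when FRAME_INTERVAL is not present in the job properties.\nint GetFrameInterval(const Properties &job_properties) {\n    auto iter = job_properties.find(\"FRAME_INTERVAL\");\n    return iter == job_properties.end() ? 1 : std::stoi(iter->second);\n}\n\n// E.g. start_frame 0, stop_frame 99, interval 2 -> 0, 2, 4, ..., 98.\nstd::vector<int> FramesToProcess(int start_frame, int stop_frame, int frame_interval) {\n    std::vector<int> frames;\n    for (int f = start_frame; f <= stop_frame; f += frame_interval) {\n        frames.push_back(f);\n    }\n    return frames;\n}\n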
MPFAudioJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in an audio file. Currently, audio files are not logically segmented, so a job will contain the entirety of the audio file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFAudioJob(\n    const string &job_name,\n    const string &data_uri,\n    int start_time,\n    int stop_time,\n    const Properties &job_properties,\n    const Properties &media_properties)\n\n\n\nMPFAudioJob(\n    const string &job_name,\n    const string &data_uri,\n    int start_time,\n    int stop_time,\n    const MPFAudioTrack &track,\n    const Properties &job_properties,\n    const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \nstart_time\n\n \nconst int\n\n \nThe time (0-based index, in milliseconds) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstop_time\n\n \nconst int\n\n \nThe time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \ntrack\n\n \nconst MPFAudioTrack &\n\n \nAn \nMPFAudioTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of audio file in milliseconds\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n\n\n\n\nMPFGenericJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in a file that isn't a video, image, or audio file. The file is of the UNKNOWN type and handled generically. The file is not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFGenericJob(\n    const string &job_name,\n    const string &data_uri,\n    const Properties &job_properties,\n    const Properties &media_properties)\n\n\n\nMPFGenericJob(\n    const string &job_name,\n    const string &data_uri,\n    const MPFGenericTrack &track,\n    const Properties &job_properties,\n    const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \ntrack\n\n \nconst MPFGenericTrack &\n\n \nAn \nMPFGenericTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pair:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n\n\n\n\nDetection Job Result Classes\n\n\nMPFImageLocation\n\n\nStructure used to store the location of detected objects in an image file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFImageLocation()\nMPFImageLocation(\n    int x_left_upper,\n    int y_left_upper,\n    int width,\n    int height,\n    float confidence = -1,\n    const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nx_left_upper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\ny_left_upper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS. 
See the \nRotation and Horizontal Flip\n section below.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is \nCLASSIFICATION\n and the value is the type of object detected.\n\n\n\nMPFImageLocation {\n    x_left_upper = 0, y_left_upper = 0, width = 100, height = 50, confidence = 1.0,\n    { {\"CLASSIFICATION\", \"backpack\"} }\n}\n\n\n\n\n\n\nRotation and Horizontal Flip\n\n\nWhen the \ndetection_properties\n map contains a \nROTATION\n key, it should be a floating point value in the interval\n\n[0.0, 360.0)\n indicating the orientation of the detection in degrees in the counter-clockwise direction.\nIn order to view the detection in the upright orientation, it must be rotated the given number of degrees in the\nclockwise direction.\n\n\nThe \ndetection_properties\n map can also contain a \nHORIZONTAL_FLIP\n property that will either be \n\"true\"\n or \n\"false\"\n.\nThe \ndetection_properties\n map may have both \nHORIZONTAL_FLIP\n and \nROTATION\n keys.\n\n\nThe Workflow Manager performs the following algorithm to draw the bounding box when generating markup:\n\n\n\n\n\n Draw the rectangle ignoring rotation and flip.\n\n\n\n\n\n Rotate the rectangle counter-clockwise the given number of degrees around its top left corner.\n\n\n\n\n\n If the rectangle is flipped, flip horizontally around the top left corner.\n\n\n\n\n\n\n\n\nIn the image above you can see the three steps required to properly draw a bounding box.\nStep 1 is drawn in red. Step 2 is drawn in blue. Step 3, which produces the final result, is drawn in green.\nThe detection for the image above is:\n\n\n\nMPFImageLocation {\n    x_left_upper = 210, y_left_upper = 189, width = 177, height = 41, confidence = 1.0,\n    { {\"ROTATION\", \"15\"}, { \"HORIZONTAL_FLIP\", \"true\" } }\n}\n\n\n\n\nNote that the \nx_left_upper\n, \ny_left_upper\n, \nwidth\n, and \nheight\n values describe the red rectangle. The addition\nof the \nROTATION\n property results in the blue rectangle, and the addition of the \nHORIZONTAL_FLIP\n property results\nin the green rectangle.\n\n\nOne way to think about the process is \"draw the unrotated and unflipped rectangle, stick a pin in the upper left corner,\nand then rotate and flip around the pin\".\n\n\nRotation-Only Example\n\n\n\n\nThe Workflow Manager generated the above image by performing markup on the original image with the following\ndetection:\n\n\n\nMPFImageLocation {\n    x_left_upper = 156, y_left_upper = 339, width = 194, height = 243, confidence = 1.0,\n    { {\"ROTATION\", \"90.0\"} }\n}\n\n\n\n\nThe markup process followed steps 1 and 2 in the previous section, skipping step 3 because there is no\n\nHORIZONTAL_FLIP\n.\n\n\nIn order to properly extract the detection region from the original image, such as when generating an artifact, you\nwould need to rotate the region in the above image 90 degrees clockwise around the cyan dot currently shown in the\nbottom-left corner so that the face is in the proper upright position.\n\n\nWhen the rotation is properly corrected in this way, the cyan dot will appear in the top-left corner of the bounding\nbox. That is why its position is described using the \nx_left_upper\n and \ny_left_upper\n variables. 
They refer to the\ntop-left corner of the correctly oriented region.\n\n\nMPFVideoTrack\n\n\nStructure used to store the location of detected objects in a video file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFVideoTrack()\nMPFVideoTrack(\n    int start_frame,\n    int stop_frame,\n    float confidence = -1,\n    map<int, MPFImageLocation> frame_locations,\n    const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstop_frame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nframe_locations\n\n\nmap<int, MPFImageLocation>\n\n\nA map of individual detections. The key for each map entry is the frame number where the detection was generated, and the value is an \nMPFImageLocation\n calculated as if that frame were a still image. Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFVideoTrack.detection_properties\n do not show up in the JSON output object, nor are they used by the WFM in any way.\n\n\n\n\nA component that detects text can add an entry to \ndetection_properties\n where the key is \nTRANSCRIPT\n and the value is a string representing the text found in the video segment.\n\n\nMPFVideoTrack track;\ntrack.start_frame = 0;\ntrack.stop_frame = 5;\ntrack.confidence = 1.0;\ntrack.frame_locations = frame_locations;\ntrack.detection_properties[\"TRANSCRIPT\"] = \"RE5ULTS FR0M A TEXT DETECTER\";\n
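The \nframe_locations\n variable in the example above is left undefined; a minimal sketch of how it might be built, using the documented \nMPFImageLocation\n constructor (the frame numbers and boxes are invented for illustration):\n\n\nstd::map<int, MPFImageLocation> frame_locations;\n// Key: frame number. Value: the detection in that frame, as in a still image.\nframe_locations[0] = MPFImageLocation(10, 20, 100, 50, 0.9);\n// Frames may be skipped; an entry is not required for every frame in the track.\nframe_locations[3] = MPFImageLocation(14, 22, 100, 50, 0.8);\n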
MPFAudioTrack\n\n\nStructure used to store the location of detected objects in an audio file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFAudioTrack()\nMPFAudioTrack(\n    int start_time,\n    int stop_time,\n    float confidence = -1,\n    const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event started.\n\n\n\n\n\n\nstop_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event stopped.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detection. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFAudioTrack.detection_properties\n do not show up in the JSON output object, nor are they used by the WFM in any way.\n\n\n\n\nMPFGenericTrack\n\n\nStructure used to store the location of detected objects in a file that is not a video, image, or audio file. The file is of the UNKNOWN type and handled generically.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFGenericTrack()\nMPFGenericTrack(\n    float confidence = -1,\n    const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detection. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nException Types\n\n\nMPFDetectionException\n\n\nException that should be thrown by the \nGetDetections()\n methods when an error occurs.\nThe content of the \nerror_code\n and \nwhat()\n members will appear in the JSON output object.\n\n\n\n\nConstructors:\n\n\n\n\nMPFDetectionException(MPFDetectionError error_code, const std::string &what = \"\")\nMPFDetectionException(const std::string &what)\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nerror_code\n\n\nMPFDetectionError\n\n\nSpecifies the error type. See \nMPFDetectionError\n.\n\n\n\n\n\n\nwhat()\n\n\nconst char*\n\n\nTextual description of the specific error. (Inherited from \nstd::exception\n)\n\n\n\n\n\n\n
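A usage sketch (the \nmodel.Process()\n call is a hypothetical third-party inference call, shown only to illustrate wrapping lower-level failures so that the error details reach the JSON output object):\n\n\ntry {\n    model.Process(frame);  // hypothetical third-party detection call\n}\ncatch (const std::exception &e) {\n    // Surface the underlying error through the component API.\n    throw MPFDetectionException(\n            MPF_DETECTION_FAILED, std::string(\"Inference failed: \") + e.what());\n}\n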
Enumeration Types\n\n\nMPFDetectionError\n\n\nEnum used to indicate the type of error that occurred in a \nGetDetections()\n method. It is used as a parameter to\nthe \nMPFDetectionException\n constructor. A component is not required to support all error types.\n\n\n\n\n\n\n\n\nENUM\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nMPF_DETECTION_SUCCESS\n\n\nThe component function completed successfully.\n\n\n\n\n\n\nMPF_OTHER_DETECTION_ERROR_TYPE\n\n\nThe component function has failed for a reason that is not captured by any of the other error codes.\n\n\n\n\n\n\nMPF_DETECTION_NOT_INITIALIZED\n\n\nThe initialization of the component, or the initialization of any of its dependencies, has failed for any reason.\n\n\n\n\n\n\nMPF_UNSUPPORTED_DATA_TYPE\n\n\nThe job passed to a component requests processing of a job of an unsupported type. For instance, a component that is only capable of processing audio files should return this error code if a video or image job request is received.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_DATAFILE\n\n\nThe data file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI. \nUse MPF_COULD_NOT_OPEN_MEDIA for media files.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_DATAFILE\n\n\nThere is a failure reading data from a successfully opened input data file. \nUse MPF_COULD_NOT_READ_MEDIA for media files.\n\n\n\n\n\n\nMPF_FILE_WRITE_ERROR\n\n\nThe component received a failure for any reason when attempting to write to a file.\n\n\n\n\n\n\nMPF_BAD_FRAME_SIZE\n\n\nThe frame data retrieved has an incorrect or invalid frame size. For example, a call to \ncv::imread()\n returns a frame of data with either the number of rows or columns less than or equal to 0.\n\n\n\n\n\n\nMPF_DETECTION_FAILED\n\n\nGeneral failure of a detection algorithm. This does not indicate a lack of detections found in the media, but rather a breakdown in the algorithm that makes it impossible to continue trying to detect objects.\n\n\n\n\n\n\nMPF_INVALID_PROPERTY\n\n\nThe component received a property that is unrecognized or has an invalid/out-of-bounds value.\n\n\n\n\n\n\nMPF_MISSING_PROPERTY\n\n\nThe component received a job that is missing a required property.\n\n\n\n\n\n\nMPF_MEMORY_ALLOCATION_FAILED\n\n\nThe component failed to allocate memory for any reason.\n\n\n\n\n\n\nMPF_GPU_ERROR\n\n\nThe job was configured to execute on a GPU, but there was an issue with the GPU or no GPU was detected.\n\n\n\n\n\n\nMPF_NETWORK_ERROR\n\n\nThe component failed to communicate with an external system over the network. The system may not be available or there may have been a timeout.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_MEDIA\n\n\nThe media file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_MEDIA\n\n\nThere is a failure reading data from a successfully opened media file.\n\n\n\n\n\n\n\n\nUtility Classes\n\n\nFor convenience, OpenMPF provides the \nMPFImageReader\n (\nsource\n) and \nMPFVideoCapture\n (\nsource\n) utility classes to perform horizontal flipping, rotation, and cropping to a region of interest. Note that when using these classes, the component will also need to use the class to perform a reverse transform to convert the transformed pixel coordinates back to the original (e.g. pre-flipped, pre-rotated, and pre-cropped) coordinate space. 
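Example (a hedged sketch of this pattern; the \nRead\n and \nReverseTransform\n method names follow the utility class sources linked above, and the per-frame detection logic is omitted):\n\n\nstd::vector<MPFVideoTrack> SampleComponent::GetDetections(const MPFVideoJob &job) {\n    // MPFVideoCapture handles the segment's frame range as well as any\n    // rotation, flip, and region-of-interest cropping requested by the job.\n    MPFVideoCapture video_capture(job);\n\n    std::vector<MPFVideoTrack> tracks;\n    cv::Mat frame;\n    while (video_capture.Read(frame)) {\n        // Run detection on the transformed frame and update tracks here.\n    }\n\n    // Convert track coordinates back to the original coordinate space.\n    for (auto &track : tracks) {\n        video_capture.ReverseTransform(track);\n    }\n    return tracks;\n}\n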
Also ensure that your build environment has the NVIDIA CUDA Toolkit installed, as described in the \nBuild Environment Setup Guide\n.\n\n\nComponent Structure for non-Docker Deployments\n\n\nIt is recommended that C++ components are organized according to the following directory structure:\n\n\ncomponentName\n\u251c\u2500\u2500 config - Optional component-specific configuration files\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 lib\n \u2514\u2500\u2500 libComponentName.so - Compiled component library\n\n\n\nOnce built, components should be packaged into a .tar.gz containing the contents of the directory shown above.\n\n\nLogging\n\n\nIt is recommended to use \nApache log4cxx\n for\nOpenMPF Component logging. Components using log4cxx should not configure logging themselves.\nThe Component Executor will configure log4cxx globally. Components should call\n\nlog4cxx::Logger::getLogger(\"\")\n to get a reference to the logger. If you\nare using a different logging framework, you should make sure its behavior is similar to how\nthe Component Executor configures log4cxx as described below.\n\n\nThe following log LEVELs are supported: \nFATAL, ERROR, WARN, INFO, DEBUG, TRACE\n.\nThe \nLOG_LEVEL\n environment variable can be set to one of the log levels to change the logging\nverbosity. When \nLOG_LEVEL\n is absent, \nINFO\n is used.\n\n\nNote that multiple instances of the same component can log to the same file.\nAlso, logging content can span multiple lines.\n\n\nThe logger will write to both standard error and\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log\n.\n\n\nEach log statement will take the form:\n\nDATE TIME LEVEL CONTENT\n\n\nFor example:\n\n2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]", "title": "C++ Batch Component API" }, { @@ -902,7 +902,7 @@ }, { "location": "/Python-Batch-Component-API/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation.
All Rights Reserved.\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Batch Component API currently supports the development of \ndetection components\n, which are used detect\nobjects in image, video, audio, or other (generic) files that reside on disk.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\nTranscription (Detecting speech and transcribing it into text)\n\n\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executable\n.\nDevelopers create component libraries that encapsulate the component detection logic.\nEach instance of the Component Executable loads one of these libraries and uses it to service job requests\nsent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executable:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes methods on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executable is as follows:\n\n\ncomponent_cls = locate_component_class()\ncomponent = component_cls()\n\nwhile True:\n job = receive_job()\n\n if is_image_job(job) and hasattr(component, 'get_detections_from_image'):\n detections = component.get_detections_from_image(job)\n send_job_response(detections)\n\n elif is_video_job(job) and hasattr(component, 'get_detections_from_video'):\n detections = component.get_detections_from_video(job)\n send_job_response(detections)\n\n elif is_audio_job(job) and hasattr(component, 'get_detections_from_audio'):\n detections = component.get_detections_from_audio(job)\n send_job_response(detections)\n\n elif is_generic_job(job) and hasattr(component, 'get_detections_from_generic'):\n detections = component.get_detections_from_generic(job)\n send_job_response(detections)\n\n\n\nEach instance of a Component Executable runs as a separate process.\n\n\nThe Component Executable receives and parses requests from the WFM, invokes methods on the Component Logic to get\ndetection objects, and subsequently populates responses with the component output and sends them to the WFM.\n\n\nA component developer implements a detection component by creating a class that defines one or more of the\nget_detections_from_* methods. See the \nAPI Specification\n for more information.\n\n\nThe figures below present high-level component diagrams of the Python Batch Component API.\nThis figure shows the basic structure:\n\n\n\n\nThe Node Manager is only used in a non-Docker deployment. In a Docker deployment the Component Executor is started by the Docker container itself.\n\n\nThe Component Executor determines that it is running a Python component so it creates an instance of the\n\nPythonComponentHandle\n\nclass. The \nPythonComponentHandle\n class creates an instance of the component class and calls one of the\n\nget_detections_from_*\n methods on the component instance. The example\nabove is an image component, so \nPythonComponentHandle\n calls \nExampleImageFaceDetection.get_detections_from_image\n\non the component instance. The component instance creates an instance of\n\nmpf_component_util.ImageReader\n to access the image. 
Components that support video\nwould implement \nget_detections_from_video\n and use\n\nmpf_component_util.VideoCapture\n instead.\n\n\nThis figure show the structure when the mixin classes are used:\n\n\n\n\nThe figure above shows a video component, \nExampleVideoFaceDetection\n, that extends the\n\nmpf_component_util.VideoCaptureMixin\n class. \nPythonComponentHandle\n will\ncall \nget_detections_from_video\n on an instance of \nExampleVideoFaceDetection\n. \nExampleVideoFaceDetection\n does not\nimplement \nget_detections_from_video\n, so the implementation inherited from \nmpf_component_util.VideoCaptureMixin\n\ngets called. \nmpf_component_util.VideoCaptureMixin.get_detections_from_video\n creates an instance of\n\nmpf_component_util.VideoCapture\n and calls\n\nExampleVideoFaceDetection.get_detections_from_video_capture\n, passing in the \nmpf_component_util.VideoCapture\n it\njust created. \nExampleVideoFaceDetection.get_detections_from_video_capture\n is where the component reads the video\nusing the passed-in \nmpf_component_util.VideoCapture\n and attempts to find detections. Components that support images\nwould extend \nmpf_component_util.ImageReaderMixin\n, implement\n\nget_detections_from_image_reader\n, and access the image using the passed-in\n\nmpf_component_util.ImageReader\n.\n\n\nDuring component registration a \nvirtualenv\n is created for each component.\nThe virtualenv has access to the built-in Python libraries, but does not have access to any third party packages\nthat might be installed on the system. When creating the virtualenv for a setuptools-based component the only packages\nthat get installed are the component itself and any dependencies specified in the setup.cfg\nfile (including their transitive dependencies). When creating the virtualenv for a basic Python component the only\npackage that gets installed is \nmpf_component_api\n. \nmpf_component_api\n is the package containing the job classes\n(e.g. \nmpf_component_api.ImageJob\n,\n\nmpf_component_api.VideoJob\n) and detection result classes\n(e.g. \nmpf_component_api.ImageLocation\n,\n\nmpf_component_api.VideoTrack\n).\n\n\nHow to Create a Python Component\n\n\nThere are two types of Python components that are supported, setuptools-based components and basic Python components.\nBasic Python components are quicker to set up, but have no built-in support for dependency management.\nAll dependencies must be handled by the developer. Setuptools-based components are recommended since they use\nsetuptools and pip for dependency management.\n\n\nEither way, the end goal is to create a Docker image. This document describes the steps for developing a component\noutside of Docker. Many developers prefer to do that first and then focus on building and running their component\nwithin Docker after they are confident it works in a local environment. Alternatively, some developers feel confident\ndeveloping their component entirely within Docker. When you're ready for the Docker steps, refer to the\n\nREADME\n.\n\n\nGet openmpf-python-component-sdk\n\n\nIn order to create a Python component you will need to clone the\n\nopenmpf-python-component-sdk repository\n if you don't\nalready have it. While not technically required, it is recommended to also clone the\n\nopenmpf-build-tools repository\n.\nThe rest of the steps assume you cloned openmpf-python-component-sdk to\n\n~/openmpf-projects/openmpf-python-component-sdk\n. 
The rest of the steps also assume that if you cloned the\nopenmpf-build-tools repository, you cloned it to \n~/openmpf-projects/openmpf-build-tools\n.\n\n\nSetup Python Component Libraries\n\n\nThe component packaging steps require that wheel files for \nmpf_component_api\n, \nmpf_component_util\n, and\ntheir dependencies are available in the \n~/mpf-sdk-install/python/wheelhouse\n directory.\n\n\nIf you have openmpf-build-tools, then you can run:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -psdk ~/openmpf-projects/openmpf-python-component-sdk\n\n\n\nTo setup the libraries manually you can run:\n\n\npip3 wheel -w ~/mpf-sdk-install/python/wheelhouse ~/openmpf-projects/openmpf-python-component-sdk/detection/api\npip3 wheel -w ~/mpf-sdk-install/python/wheelhouse ~/openmpf-projects/openmpf-python-component-sdk/detection/component_util\n\n\n\nHow to Create a Setuptools-based Python Component\n\n\nIn this example we create a setuptools-based video component named \"MyComponent\". An example of a setuptools-based\nPython component can be found\n\nhere\n.\n\n\nThis is the recommended project structure:\n\n\nComponentName\n\u251c\u2500\u2500 pyproject.toml\n\u251c\u2500\u2500 setup.cfg\n\u251c\u2500\u2500 component_name\n\u2502 \u251c\u2500\u2500 __init__.py\n\u2502 \u2514\u2500\u2500 component_name.py\n\u2514\u2500\u2500 plugin-files\n \u251c\u2500\u2500 descriptor\n \u2502 \u2514\u2500\u2500 descriptor.json\n \u2514\u2500\u2500 wheelhouse # optional\n \u2514\u2500\u2500 my_prebuilt_lib-0.1-py3-none-any.whl\n\n\n\n1. Create directory structure:\n\n\nmkdir MyComponent\nmkdir MyComponent/my_component\nmkdir -p MyComponent/plugin-files/descriptor\ntouch MyComponent/pyproject.toml\ntouch MyComponent/setup.cfg\ntouch MyComponent/my_component/__init__.py\ntouch MyComponent/my_component/my_component.py\ntouch MyComponent/plugin-files/descriptor/descriptor.json\n\n\n\n2. Create pyproject.toml file in project's top-level directory:\n\n\npyproject.toml\n should contain the following content:\n\n\n[build-system]\nrequires = [\"setuptools\"]\nbuild-backend = \"setuptools.build_meta\"\n\n\n\n3. Create setup.cfg file in project's top-level directory:\n\n\nExample of a minimal setup.cfg file:\n\n\n[metadata]\nname = MyComponent\nversion = 0.1\n\n[options]\npackages = my_component\ninstall_requires =\n mpf_component_api>=0.1\n mpf_component_util>=0.1\n\n[options.entry_points]\nmpf.exported_component =\n component = my_component.my_component:MyComponent\n\n[options.package_data]\nmy_component=models/*\n\n\n\nThe \nname\n parameter defines the distribution name. Typically the distribution name matches the component name.\n\n\nAny dependencies that component requires should be listed in the \ninstall_requires\n field.\n\n\nThe Component Executor looks in the \nentry_points\n element and uses the \nmpf.exported_component\n field to determine\nthe component class. The right hand side of \ncomponent =\n should be the dotted module name, followed by a \n:\n,\nfollowed by the name of the class. The general pattern is\n\n'mpf.exported_component': 'component = .:'\n. In the above example,\n\nMyComponent\n is the class name. The module is listed as \nmy_component.my_component\n because the \nmy_component\n\npackage contains the \nmy_component.py\n file and the \nmy_component.py\n file contains the \nMyComponent\n class.\n\n\nThe \n[options.package_data]\n section is optional. 
It should be used when there are non-Python files\nin a package directory that should be included when the component is installed.\n\n\n4. Create descriptor.json file in MyComponent/plugin-files/descriptor:\n\n\nThe \nbatchLibrary\n field should match the distribution name from the setup.cfg file. In this example the\nfield should be: \n\"batchLibrary\" : \"MyComponent\"\n.\nSee the \nComponent Descriptor Reference\n for details about\nthe descriptor format.\n\n\n5. Implement your component class:\n\n\nBelow is an example of the structure of a simple component. This component extends\n\nmpf_component_util.VideoCaptureMixin\n to simplify the use of\n\nmpf_component_util.VideoCapture\n. You would replace the call to\n\nrun_detection_algorithm_on_frame\n with your component-specific logic.\n\n\nimport logging\n\nimport mpf_component_api as mpf\nimport mpf_component_util as mpf_util\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent(mpf_util.VideoCaptureMixin):\n\n @staticmethod\n def get_detections_from_video_capture(video_job, video_capture):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n for result_track in run_detection_algorithm_on_frame(frame_index, frame):\n # Alternatively, while iterating through the video, add tracks to a list. When done, return that list.\n yield result_track\n\n\n\n6. Optional: Add prebuilt wheel files if not available on PyPi:\n\n\nIf your component depends on Python libraries that are not available on PyPi, the libraries can be manually added to\nyour project. The prebuilt libraries must be placed in your project's \nplugin-files/wheelhouse\n directory.\nThe prebuilt library names must be listed in your \nsetup.cfg\n file's \ninstall_requires\n field.\nIf any of the prebuilt libraries have transitive dependencies that are not available on PyPi, then those libraries\nmust also be added to your project's \nplugin-files/wheelhouse\n directory.\n\n\n7. Optional: Create the plugin package for non-Docker deployments:\n\n\nThe directory structure of the .tar.gz file will be:\n\n\nMyComponent\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 wheelhouse\n \u251c\u2500\u2500 MyComponent-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_api-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_util-0.1-py3-none-any.whl\n \u251c\u2500\u2500 numpy-1.18.4-cp38-cp38-manylinux1_x86_64.whl\n \u2514\u2500\u2500 opencv_python-4.2.0.34-cp38-cp38-manylinux1_x86_64.whl\n\n\n\nTo create the plugin packages you can run the build script as follows:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -psdk ~/openmpf-projects/openmpf-python-component-sdk -c MyComponent\n\n\n\nThe plugin package can also be built manually using the following commands:\n\n\nmkdir -p plugin-packages/MyComponent/wheelhouse\ncp -r MyComponent/plugin-files/* plugin-packages/MyComponent/\npip3 wheel -w plugin-packages/MyComponent/wheelhouse -f ~/mpf-sdk-install/python/wheelhouse -f plugin-packages/MyComponent/wheelhouse ./MyComponent/\ncd plugin-packages\ntar -zcf MyComponent.tar.gz MyComponent\n\n\n\n8. Create the component Docker image:\n\n\nSee the \nREADME\n.\n\n\nHow to Create a Basic Python Component\n\n\nIn this example we create a basic Python component that supports video. 
An example of a basic Python component can be\nfound\n\nhere\n.\n\n\nThis is the recommended project structure:\n\n\nComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json\n\n\n\n1. Create directory structure:\n\n\nmkdir MyComponent\nmkdir MyComponent/descriptor\ntouch MyComponent/descriptor/descriptor.json\ntouch MyComponent/my_component.py\n\n\n\n2. Create descriptor.json file in MyComponent/descriptor:\n\n\nThe \nbatchLibrary\n field should be the full path to the Python file containing your component class.\nIn this example the field should be: \n\"batchLibrary\" : \"${MPF_HOME}/plugins/MyComponent/my_component.py\"\n.\nSee the \nComponent Descriptor Reference\n for details about\nthe descriptor format.\n\n\n3. Implement your component class:\n\n\nBelow is an example of the structure of a simple component that does not use\n\nmpf_component_util.VideoCaptureMixin\n. You would replace the call to\n\nrun_detection_algorithm\n with your component-specific logic.\n\n\nimport logging\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_video(video_job):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n return run_detection_algorithm(video_job)\n\nEXPORT_MPF_COMPONENT = MyComponent\n\n\n\nThe Component Executor looks for a module-level variable named \nEXPORT_MPF_COMPONENT\n to specify which class\nis the component.\n\n\n4. Optional: Create the plugin package for non-Docker deployments:\n\n\nThe directory structure of the .tar.gz file will be:\n\n\nComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json\n\n\n\nTo create the plugin packages you can run the build script as follows:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -c MyComponent\n\n\n\nThe plugin package can also be built manually using the following command:\n\n\ntar -zcf MyComponent.tar.gz MyComponent\n\n\n\n5. Create the component Docker image:\n\n\nSee the \nREADME\n.\n\n\nAPI Specification\n\n\nAn OpenMPF Python component is a class that defines one or more of the get_detections_from_* methods.\n\n\ncomponent.get_detections_from_* methods\n\n\nAll get_detections_from_* methods are invoked through an instance of the component class. The only parameter passed\nin is an appropriate job object (e.g. \nmpf_component_api.ImageJob\n, \nmpf_component_api.VideoJob\n). Since the methods\nare invoked through an instance, instance methods and class methods end up with two arguments, the first is either the\ninstance or the class, respectively. All get_detections_from_* methods can be implemented either as an instance method,\na static method, or a class method.\nFor example:\n\n\ninstance method:\n\n\nclass MyComponent:\n def get_detections_from_image(self, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nstatic method:\n\n\nclass MyComponent:\n @staticmethod\n def get_detections_from_image(image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nclass method:\n\n\nclass MyComponent:\n @classmethod\n def get_detections_from_image(cls, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nAll get_detections_from_* methods must return an iterable of the appropriate detection type\n(e.g. \nmpf_component_api.ImageLocation\n, \nmpf_component_api.VideoTrack\n). 
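For instance, a minimal sketch of a generator-based implementation (detect_objects is a hypothetical placeholder for a component's real detection logic):

import mpf_component_api

class MyComponent:
    def get_detections_from_image(self, image_job):
        # Yielding results makes this method return a generator, which is a
        # valid iterable of mpf_component_api.ImageLocation objects.
        # detect_objects is a hypothetical helper, not part of the API.
        for x, y, width, height, confidence in detect_objects(image_job.data_uri):
            yield mpf_component_api.ImageLocation(x, y, width, height, confidence)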
The return value is normally a list or generator,\nbut any iterable can be used.\n\n\nImage API\n\n\ncomponent.get_detections_from_image(image_job)\n\n\nUsed to detect objects in an image file.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_image(self, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nget_detections_from_image\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nimage_job\n\n\nmpf_component_api.ImageJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.ImageLocation\n\n\n\n\nmpf_component_api.ImageJob\n\n\nClass containing data used for detection of objects in an image file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\".\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of the image in pixels\n\n \nFRAME_HEIGHT\n : the height of the image in pixels\n\n \n\n May include the following key-value pairs:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \nHORIZONTAL_FLIP\n : true if the image is mirrored across the Y-axis, otherwise false\n\n \nEXIF_ORIENTATION\n : the standard EXIF orientation tag; a value between 1 and 8\n\n \n\n \n\n \n\n \n\n \nfeed_forward_location\n\n \nNone\n or \nmpf_component_api.ImageLocation\n\n \nAn \nmpf_component_api.ImageLocation\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). 
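As an illustrative sketch, a property set through the environment is read from job_properties exactly like any other property; CONFIDENCE_THRESHOLD is a hypothetical property name, and the component supplies the fallback default:

# Setting MPF_PROP_CONFIDENCE_THRESHOLD=0.75 on the service makes the value
# appear as the CONFIDENCE_THRESHOLD entry in job_properties. Property values
# are always strings, so convert as needed and default when absent.
threshold = float(image_job.job_properties.get('CONFIDENCE_THRESHOLD', '0.5'))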
It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.ImageLocation\n\n\nClass used to store the location of detected objects in an image file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, x_left_upper, y_left_upper, width, height, confidence=-1.0, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nx_left_upper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\ny_left_upper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nSee here for information about rotation and horizontal flipping.\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is\n\nCLASSIFICATION\n and the value is the type of object detected.\n\n\nmpf_component_api.ImageLocation(0, 0, 100, 100, 1.0, {'CLASSIFICATION': 'backpack'})\n\n\n\nmpf_component_util.ImageReader\n\n\nmpf_component_util.ImageReader\n is a utility class for accessing images. It is the image equivalent to\n\nmpf_component_util.VideoCapture\n. Like \nmpf_component_util.VideoCapture\n,\nit may modify the read-in frame data based on job_properties. From the point of view of someone using\n\nmpf_component_util.ImageReader\n, these modifications are mostly transparent. \nmpf_component_util.ImageReader\n makes\nit look like you are reading the original image file as though it has already been rotated, flipped, cropped, etc.\n\n\nOne issue with this approach is that the detection bounding boxes will be relative to the\nmodified frame data, not the original. To make the detections relative to the original image\nthe \nmpf_component_util.ImageReader.reverse_transform(image_location)\n method must be called on each\n\nmpf_component_api.ImageLocation\n.
Since the use of \nmpf_component_util.ImageReader\n is optional, the framework\ncannot automatically perform the reverse transform for the developer.\n\n\nThe general pattern for using \nmpf_component_util.ImageReader\n is as follows:\n\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_image(image_job):\n image_reader = mpf_component_util.ImageReader(image_job)\n image = image_reader.get_image()\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_image_locations = run_component_specific_algorithm(image)\n for result in result_image_locations:\n image_reader.reverse_transform(result)\n yield result\n\n\n\nAlternatively, see the documentation for \nmpf_component_util.ImageReaderMixin\n for a more concise way to use\n\nmpf_component_util.ImageReader\n below.\n\n\nmpf_component_util.ImageReaderMixin\n\n\nA mixin class that can be used to simplify the usage of \nmpf_component_util.ImageReader\n.\n\nmpf_component_util.ImageReaderMixin\n takes care of initializing a \nmpf_component_util.ImageReader\n and\nperforming the reverse transform.\n\n\nThere are some requirements to properly use \nmpf_component_util.ImageReaderMixin\n:\n\n\n\n\nThe component must extend \nmpf_component_util.ImageReaderMixin\n.\n\n\nThe component must implement \nget_detections_from_image_reader(image_job, image_reader)\n.\n\n\nThe component must read the image using the \nmpf_component_util.ImageReader\n\n that is passed in to \nget_detections_from_image_reader(image_job, image_reader)\n.\n\n\nThe component must NOT implement \nget_detections_from_image(image_job)\n.\n\n\nThe component must NOT call \nmpf_component_util.ImageReader.reverse_transform\n.\n\n\n\n\nThe general pattern for using \nmpf_component_util.ImageReaderMixin\n is as follows:\n\n\nclass MyComponent(mpf_component_util.ImageReaderMixin):\n\n @staticmethod # Can also be a regular instance method or a class method\n def get_detections_from_image_reader(image_job, image_reader):\n image = image_reader.get_image()\n\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n return run_component_specific_algorithm(image)\n\n\n\nmpf_component_util.ImageReaderMixin\n is a mixin class so it is designed in a way that does not prevent the subclass\nfrom extending other classes. If a component supports both videos and images, and it uses\n\nmpf_component_util.VideoCaptureMixin\n, it should also use\n\nmpf_component_util.ImageReaderMixin\n.\n\n\nVideo API\n\n\ncomponent.get_detections_from_video(video_job)\n\n\nUsed to detect objects in a video file. Prior to being sent to the component, videos are split into logical \"segments\"\nof video data and each segment (containing a range of frames) is assigned to a different job. Components are not\nguaranteed to receive requests in any order. 
For example, the first request processed by a component might receive a\nrequest for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_video(self, video_job):\n return [mpf_component_api.VideoTrack(...), ...]\n\n\n\nget_detections_from_video\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nvideo_job\n\n\nmpf_component_api.VideoJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.VideoTrack\n\n\n\n\nmpf_component_api.VideoJob\n\n\nClass containing data used for detection of objects in a video file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\".\n\n \n\n \n\n \nstart_frame\n\n \nint\n\n \nThe first frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \nstop_frame\n\n \nint\n\n \nThe last frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of video in milliseconds\n\n \nFPS\n : frames per second (averaged for variable frame rate video)\n\n \nFRAME_COUNT\n : the number of frames in the video\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of a frame in pixels\n\n \nFRAME_HEIGHT\n : the height of a frame in pixels\n\n \nHAS_CONSTANT_FRAME_RATE\n : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined\n\n \n\n May include the following key-value pair:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.VideoTrack\n\n \nAn \nmpf_component_api.VideoTrack\n from the previous pipeline stage. Provided when feed forward is enabled. 
See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\n\n\nIMPORTANT:\n \nFRAME_INTERVAL\n is a common job property that many components support.\nFor frame intervals greater than 1, the component must look for detections starting with the first\nframe, and then skip frames as specified by the frame interval, until or before it reaches the stop frame.\nFor example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component\nmust look for objects in frames numbered 0, 2, 4, 6, ..., 98.\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.VideoTrack\n\n\nClass used to store the location of detected objects in a video file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, start_frame, stop_frame, confidence=-1.0, frame_locations=None, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstop_frame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\nframe_locations\n\n\ndict[int, mpf_component_api.ImageLocation]\n\n\nA dict of individual detections. The key for each entry is the frame number where the detection was generated, and the value is a \nmpf_component_api.ImageLocation\n calculated as if that frame was a still image. Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nmpf_component_api.VideoTrack.detection_properties\n do not show up in the JSON output object or\nare used by the WFM in any way.\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is\n\nCLASSIFICATION\n and the value is the type of object detected.\n\n\ntrack = mpf_component_api.VideoTrack(0, 1)\ntrack.frame_locations[0] = mpf_component_api.ImageLocation(0, 0, 100, 100, 0.75, {'CLASSIFICATION': 'backpack'})\ntrack.frame_locations[1] = mpf_component_api.ImageLocation(10, 10, 110, 110, 0.95, {'CLASSIFICATION': 'backpack'})\ntrack.confidence = max(il.confidence for il in track.frame_locations.values())\n\n\n\nmpf_component_util.VideoCapture\n\n\nmpf_component_util.VideoCapture\n is a utility class for reading videos.
\nmpf_component_util.VideoCapture\n works very\nsimilarly to \ncv2.VideoCapture\n, except that it might modify the video frames based on job properties. From the point\nof view of someone using \nmpf_component_util.VideoCapture\n, these modifications are mostly transparent.\n\nmpf_component_util.VideoCapture\n makes it look like you are reading the original video file as though it has already\nbeen rotated, flipped, cropped, etc. Also, if frame skipping is enabled, such as by setting the value of the\n\nFRAME_INTERVAL\n job property, it makes it look like you are reading the video as though it never contained the\nskipped frames.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the\nmodified video, not the original. To make the detections relative to the original video\nthe \nmpf_component_util.VideoCapture.reverse_transform(video_track)\n method must be called on each\n\nmpf_component_api.VideoTrack\n. Since the use of \nmpf_component_util.VideoCapture\n is optional, the framework\ncannot automatically perform the reverse transform for the developer.\n\n\nThe general pattern for using \nmpf_component_util.VideoCapture\n is as follows:\n\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_video(video_job):\n video_capture = mpf_component_util.VideoCapture(video_job)\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_tracks = run_component_specific_algorithm(frame_index, frame)\n for track in result_tracks:\n video_capture.reverse_transform(track)\n yield track\n\n\n\nAlternatively, see the documentation for \nmpf_component_util.VideoCaptureMixin\n for a more concise way to use\n\nmpf_component_util.VideoCapture\n below.\n\n\nmpf_component_util.VideoCaptureMixin\n\n\nA mixin class that can be used to simplify the usage of \nmpf_component_util.VideoCapture\n.\n\nmpf_component_util.VideoCaptureMixin\n takes care of initializing a \nmpf_component_util.VideoCapture\n and\nperforming the reverse transform.\n\n\nThere are some requirements to properly use \nmpf_component_util.VideoCaptureMixin\n:\n\n\n\n\nThe component must extend \nmpf_component_util.VideoCaptureMixin\n.\n\n\nThe component must implement \nget_detections_from_video_capture(video_job, video_capture)\n.\n\n\nThe component must read the video using the \nmpf_component_util.VideoCapture\n\n that is passed in to \nget_detections_from_video_capture(video_job, video_capture)\n.\n\n\nThe component must NOT implement \nget_detections_from_video(video_job)\n.\n\n\nThe component must NOT call \nmpf_component_util.VideoCapture.reverse_transform\n.\n\n\n\n\nThe general pattern for using \nmpf_component_util.VideoCaptureMixin\n is as follows:\n\n\nclass MyComponent(mpf_component_util.VideoCaptureMixin):\n\n @staticmethod # Can also be a regular instance method or a class method\n def get_detections_from_video_capture(video_job, video_capture):\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_tracks = run_component_specific_algorithm(frame_index, frame)\n for track in 
result_tracks:\n # Alternatively, while iterating through the video, add tracks to a list. When done, return that list.\n yield track\n\n\n\nmpf_component_util.VideoCaptureMixin\n is a mixin class so it is designed in a way that does not prevent the subclass\nfrom extending other classes. If a component supports both videos and images, and it uses\n\nmpf_component_util.VideoCaptureMixin\n, it should also use\n\nmpf_component_util.ImageReaderMixin\n.\nFor example:\n\n\nclass MyComponent(mpf_component_util.VideoCaptureMixin, mpf_component_util.ImageReaderMixin):\n\n @staticmethod\n def get_detections_from_video_capture(video_job, video_capture):\n ...\n\n @staticmethod\n def get_detections_from_image_reader(image_job, image_reader):\n ...\n\n\n\nAudio API\n\n\ncomponent.get_detections_from_audio(audio_job)\n\n\nUsed to detect objects in an audio file.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_audio(self, audio_job):\n return [mpf_component_api.AudioTrack(...), ...]\n\n\n\nget_detections_from_audio\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\naudio_job\n\n\nmpf_component_api.AudioJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.AudioTrack\n\n\n\n\nmpf_component_api.AudioJob\n\n\nClass containing data used for detection of objects in an audio file.\nCurrently, audio files are not logically segmented, so a job will contain the entirety of the audio file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.mp3\".\n\n \n\n \n\n \nstart_time\n\n \nint\n\n \nThe time (0-based index, in milliseconds) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstop_time\n\n \nint\n\n \nThe time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. 
For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of audio file in milliseconds\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.AudioTrack\n\n \nAn \nmpf_component_api.AudioTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.AudioTrack\n\n\nClass used to store the location of detected objects in an audio file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, start_time, stop_time, confidence, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event started.\n\n\n\n\n\n\nstop_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event stopped.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nmpf_component_api.AudioTrack.detection_properties\n do not show up in the JSON output object or\nare used by the WFM in any way.\n\n\n\n\nGeneric API\n\n\ncomponent.get_detections_from_generic(generic_job)\n\n\nUsed to detect objects in files that are not video, image, or audio files. Such files are of the UNKNOWN type and\nhandled generically.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_generic(self, generic_job):\n return [mpf_component_api.GenericTrack(...), ...]\n\n\n\nget_detections_from_generic\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\ngeneric_job\n\n\nmpf_component_api.GenericJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.GenericTrack\n\n\n\n\nmpf_component_api.GenericJob\n\n\nClass containing data used for detection of objects in a file that isn't a video, image, or audio file. 
The file is not\nlogically segmented, so a job will contain the entirety of the file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.txt\".\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pair:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.GenericTrack\n\n \nAn \nmpf_component_api.GenericTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.GenericTrack\n\n\nClass used to store the location of detected objects in a file that is not a video, image, or audio file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, confidence=-1.0, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nHow to Report Errors\n\n\nThe following is an example of how to throw an exception:\n\n\nimport mpf_component_api as mpf\n\n...\nraise mpf.DetectionError.MISSING_PROPERTY.exception(\n 'The REALLY_IMPORTANT property must be provided as a job property.')\n\n\n\nThe Python Batch Component API supports all of the same error types\nlisted \nhere\n for the C++ Batch Component API. Be sure to omit\nthe \nMPF_\n prefix. You can replace the \nMISSING_PROPERTY\n part in the above code with any other error type. 
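For instance, a minimal sketch (CONFIDENCE_THRESHOLD is a hypothetical property name) of validating a job property and raising INVALID_PROPERTY when its value is out of bounds:

import mpf_component_api as mpf

def get_confidence_threshold(job_properties):
    # CONFIDENCE_THRESHOLD is a hypothetical property used only for illustration.
    threshold = float(job_properties.get('CONFIDENCE_THRESHOLD', '0.5'))
    if not 0.0 <= threshold <= 1.0:
        raise mpf.DetectionError.INVALID_PROPERTY.exception(
            'CONFIDENCE_THRESHOLD must be in the range [0.0, 1.0].')
    return threshold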
When\ngenerating an exception, choose the type that best describes your error.\n\n\nPython Component Build Environment\n\n\nAll Python components must work with CPython 3.8.10. Also, Python components\nmust work with the Linux version that is used by the OpenMPF Component\nExecutable. At this writing, OpenMPF runs on\nUbuntu 20.04 (kernel version 5.13.0-30). Pure Python code should work on any\nOS, but incompatibility issues can arise when using Python libraries that\ninclude compiled extension modules. Python libraries are typically distributed\nas wheel files. The wheel format requires that the file name follows the pattern\nof \n----.whl\n.\n\n--\n are called\n\ncompatibility tags\n. For example,\n\nmpf_component_api\n is pure Python, so the name of its wheel file is\n\nmpf_component_api-0.1-py3-none-any.whl\n. \npy3\n means it will work with any\nPython 3 implementation because it does not use any implementation-specific\nfeatures. \nnone\n means that it does not use the Python ABI. \nany\n means it will\nwork on any platform.\n\n\nThe following combinations of compatibility tags are supported:\n\n\n\n\ncp38-cp38-manylinux2014_x86_64\n\n\ncp38-cp38-manylinux2010_x86_64\n\n\ncp38-cp38-manylinux1_x86_64\n\n\ncp38-cp38-linux_x86_64\n\n\ncp38-abi3-manylinux2014_x86_64\n\n\ncp38-abi3-manylinux2010_x86_64\n\n\ncp38-abi3-manylinux1_x86_64\n\n\ncp38-abi3-linux_x86_64\n\n\ncp38-none-manylinux2014_x86_64\n\n\ncp38-none-manylinux2010_x86_64\n\n\ncp38-none-manylinux1_x86_64\n\n\ncp38-none-linux_x86_64\n\n\ncp37-abi3-manylinux2014_x86_64\n\n\ncp37-abi3-manylinux2010_x86_64\n\n\ncp37-abi3-manylinux1_x86_64\n\n\ncp37-abi3-linux_x86_64\n\n\ncp36-abi3-manylinux2014_x86_64\n\n\ncp36-abi3-manylinux2010_x86_64\n\n\ncp36-abi3-manylinux1_x86_64\n\n\ncp36-abi3-linux_x86_64\n\n\ncp35-abi3-manylinux2014_x86_64\n\n\ncp35-abi3-manylinux2010_x86_64\n\n\ncp35-abi3-manylinux1_x86_64\n\n\ncp35-abi3-linux_x86_64\n\n\ncp34-abi3-manylinux2014_x86_64\n\n\ncp34-abi3-manylinux2010_x86_64\n\n\ncp34-abi3-manylinux1_x86_64\n\n\ncp34-abi3-linux_x86_64\n\n\ncp33-abi3-manylinux2014_x86_64\n\n\ncp33-abi3-manylinux2010_x86_64\n\n\ncp33-abi3-manylinux1_x86_64\n\n\ncp33-abi3-linux_x86_64\n\n\ncp32-abi3-manylinux2014_x86_64\n\n\ncp32-abi3-manylinux2010_x86_64\n\n\ncp32-abi3-manylinux1_x86_64\n\n\ncp32-abi3-linux_x86_64\n\n\npy38-none-manylinux2014_x86_64\n\n\npy38-none-manylinux2010_x86_64\n\n\npy38-none-manylinux1_x86_64\n\n\npy38-none-linux_x86_64\n\n\npy3-none-manylinux2014_x86_64\n\n\npy3-none-manylinux2010_x86_64\n\n\npy3-none-manylinux1_x86_64\n\n\npy3-none-linux_x86_64\n\n\npy37-none-manylinux2014_x86_64\n\n\npy37-none-manylinux2010_x86_64\n\n\npy37-none-manylinux1_x86_64\n\n\npy37-none-linux_x86_64\n\n\npy36-none-manylinux2014_x86_64\n\n\npy36-none-manylinux2010_x86_64\n\n\npy36-none-manylinux1_x86_64\n\n\npy36-none-linux_x86_64\n\n\npy35-none-manylinux2014_x86_64\n\n\npy35-none-manylinux2010_x86_64\n\n\npy35-none-manylinux1_x86_64\n\n\npy35-none-linux_x86_64\n\n\npy34-none-manylinux2014_x86_64\n\n\npy34-none-manylinux2010_x86_64\n\n\npy34-none-manylinux1_x86_64\n\n\npy34-none-linux_x86_64\n\n\npy33-none-manylinux2014_x86_64\n\n\npy33-none-manylinux2010_x86_64\n\n\npy33-none-manylinux1_x86_64\n\n\npy33-none-linux_x86_64\n\n\npy32-none-manylinux2014_x86_64\n\n\npy32-none-manylinux2010_x86_64\n\n\npy32-none-manylinux1_x86_64\n\n\npy32-none-linux_x86_64\n\n\npy31-none-manylinux2014_x86_64\n\n\npy31-none-manylinux2010_x86_64\n\n\npy31-none-manylinux1_x86_64\n\n\npy31-none-linux_x86_64\n\n\npy30-none-manylinux2014_x86_64\n\n\npy30
-none-manylinux2010_x86_64\n\n\npy30-none-manylinux1_x86_64\n\n\npy30-none-linux_x86_64\n\n\ncp38-none-any\n\n\npy38-none-any\n\n\npy3-none-any\n\n\npy37-none-any\n\n\npy36-none-any\n\n\npy35-none-any\n\n\npy34-none-any\n\n\npy33-none-any\n\n\npy32-none-any\n\n\npy31-none-any\n\n\npy30-none-any\n\n\n\n\nThe list above was generated with the following command:\n\npython3 -c 'import pip._internal.pep425tags as tags; print(\"\\n\".join(str(t) for t in tags.get_supported()))'\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or\nfiles needed for execution. This includes all other non-standard libraries used by the component\n(aside from the standard Python libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through\nmultiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input\n(i.e. when processing the same job).\n\n\nLogging\n\n\nIt is recommended that components use Python's built-in\n\nlogging\n module.\n The component should\n\nimport logging\n and call \nlogging.getLogger('')\n to get a logger instance.\nThe component should not configure logging itself. The Component Executor will configure the\n\nlogging\n module for the component. The logger will write log messages to standard error and\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log\n. Note that multiple instances of the\nsame component can log to the same file. Also, logging content can span multiple lines.\n\n\nThe following log levels are supported: \nFATAL, ERROR, WARN, INFO, DEBUG\n.\nThe \nLOG_LEVEL\n environment variable can be set to one of the log levels to change the logging\nverbosity. When \nLOG_LEVEL\n is absent, \nINFO\n is used.\n\n\nThe format of the log messages is:\n\n\nDATE TIME LEVEL [SOURCE_FILE:LINE_NUMBER] - MESSAGE\n\n\n\nFor example:\n\n\n2018-05-03 14:41:11,703 INFO [test_component.py:44] - Logged message", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation.
All Rights Reserved.\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Batch Component API currently supports the development of \ndetection components\n, which are used to detect\nobjects in image, video, audio, or other (generic) files that reside on disk.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\nTranscription (Detecting speech and transcribing it into text)\n\n\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executable\n.\nDevelopers create component libraries that encapsulate the component detection logic.\nEach instance of the Component Executable loads one of these libraries and uses it to service job requests\nsent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executable:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes methods on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executable is as follows:\n\n\ncomponent_cls = locate_component_class()\ncomponent = component_cls()\n\nwhile True:\n job = receive_job()\n\n if is_image_job(job) and hasattr(component, 'get_detections_from_image'):\n detections = component.get_detections_from_image(job)\n send_job_response(detections)\n\n elif is_video_job(job) and hasattr(component, 'get_detections_from_video'):\n detections = component.get_detections_from_video(job)\n send_job_response(detections)\n\n elif is_audio_job(job) and hasattr(component, 'get_detections_from_audio'):\n detections = component.get_detections_from_audio(job)\n send_job_response(detections)\n\n elif is_generic_job(job) and hasattr(component, 'get_detections_from_generic'):\n detections = component.get_detections_from_generic(job)\n send_job_response(detections)\n\n\n\nEach instance of a Component Executable runs as a separate process.\n\n\nThe Component Executable receives and parses requests from the WFM, invokes methods on the Component Logic to get\ndetection objects, and subsequently populates responses with the component output and sends them to the WFM.\n\n\nA component developer implements a detection component by creating a class that defines one or more of the\nget_detections_from_* methods. See the \nAPI Specification\n for more information.\n\n\nThe figures below present high-level component diagrams of the Python Batch Component API.\nThis figure shows the basic structure:\n\n\n\n\nThe Node Manager is only used in a non-Docker deployment. In a Docker deployment the Component Executor is started by the Docker container itself.\n\n\nThe Component Executor determines that it is running a Python component so it creates an instance of the\n\nPythonComponentHandle\n\nclass. The \nPythonComponentHandle\n class creates an instance of the component class and calls one of the\n\nget_detections_from_*\n methods on the component instance. The example\nabove is an image component, so \nPythonComponentHandle\n calls \nExampleImageFaceDetection.get_detections_from_image\n\non the component instance. The component instance creates an instance of\n\nmpf_component_util.ImageReader\n to access the image.
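A minimal sketch of that flow, assuming a hypothetical run_face_detection helper in place of real detection logic:

import mpf_component_util

class ExampleImageFaceDetection:
    @staticmethod
    def get_detections_from_image(image_job):
        # ImageReader applies any rotation, flipping, and cropping requested
        # by the job properties before the component sees the pixel data.
        image_reader = mpf_component_util.ImageReader(image_job)
        image = image_reader.get_image()
        # run_face_detection is a hypothetical placeholder for real logic.
        for image_location in run_face_detection(image):
            # Map coordinates back to the original, untransformed image.
            image_reader.reverse_transform(image_location)
            yield image_location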
Components that support video\nwould implement \nget_detections_from_video\n and use\n\nmpf_component_util.VideoCapture\n instead.\n\n\nThis figure shows the structure when the mixin classes are used:\n\n\n\n\nThe figure above shows a video component, \nExampleVideoFaceDetection\n, that extends the\n\nmpf_component_util.VideoCaptureMixin\n class. \nPythonComponentHandle\n will\ncall \nget_detections_from_video\n on an instance of \nExampleVideoFaceDetection\n. \nExampleVideoFaceDetection\n does not\nimplement \nget_detections_from_video\n, so the implementation inherited from \nmpf_component_util.VideoCaptureMixin\n\ngets called. \nmpf_component_util.VideoCaptureMixin.get_detections_from_video\n creates an instance of\n\nmpf_component_util.VideoCapture\n and calls\n\nExampleVideoFaceDetection.get_detections_from_video_capture\n, passing in the \nmpf_component_util.VideoCapture\n it\njust created. \nExampleVideoFaceDetection.get_detections_from_video_capture\n is where the component reads the video\nusing the passed-in \nmpf_component_util.VideoCapture\n and attempts to find detections. Components that support images\nwould extend \nmpf_component_util.ImageReaderMixin\n, implement\n\nget_detections_from_image_reader\n, and access the image using the passed-in\n\nmpf_component_util.ImageReader\n.\n\n\nDuring component registration a \nvirtualenv\n is created for each component.\nThe virtualenv has access to the built-in Python libraries, but does not have access to any third party packages\nthat might be installed on the system. When creating the virtualenv for a setuptools-based component the only packages\nthat get installed are the component itself and any dependencies specified in the setup.cfg\nfile (including their transitive dependencies). When creating the virtualenv for a basic Python component the only\npackage that gets installed is \nmpf_component_api\n. \nmpf_component_api\n is the package containing the job classes\n(e.g. \nmpf_component_api.ImageJob\n,\n\nmpf_component_api.VideoJob\n) and detection result classes\n(e.g. \nmpf_component_api.ImageLocation\n,\n\nmpf_component_api.VideoTrack\n).\n\n\nHow to Create a Python Component\n\n\nThere are two types of Python components that are supported: setuptools-based components and basic Python components.\nBasic Python components are quicker to set up, but have no built-in support for dependency management.\nAll dependencies must be handled by the developer. Setuptools-based components are recommended since they use\nsetuptools and pip for dependency management.\n\n\nEither way, the end goal is to create a Docker image. This document describes the steps for developing a component\noutside of Docker. Many developers prefer to do that first and then focus on building and running their component\nwithin Docker after they are confident it works in a local environment. Alternatively, some developers feel confident\ndeveloping their component entirely within Docker. When you're ready for the Docker steps, refer to the\n\nREADME\n.\n\n\nGet openmpf-python-component-sdk\n\n\nIn order to create a Python component you will need to clone the\n\nopenmpf-python-component-sdk repository\n if you don't\nalready have it. While not technically required, it is recommended to also clone the\n\nopenmpf-build-tools repository\n.\nThe rest of the steps assume you cloned openmpf-python-component-sdk to\n\n~/openmpf-projects/openmpf-python-component-sdk\n.
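For example, assuming the repository is hosted under the openmpf organization on GitHub (adjust the URL if your copy lives elsewhere), the clone might look like:\n\n\ngit clone https://github.com/openmpf/openmpf-python-component-sdk.git ~/openmpf-projects/openmpf-python-component-sdk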
The rest of the steps also assume that if you cloned the\nopenmpf-build-tools repository, you cloned it to \n~/openmpf-projects/openmpf-build-tools\n.\n\n\nSetup Python Component Libraries\n\n\nThe component packaging steps require that wheel files for \nmpf_component_api\n, \nmpf_component_util\n, and\ntheir dependencies are available in the \n~/mpf-sdk-install/python/wheelhouse\n directory.\n\n\nIf you have openmpf-build-tools, then you can run:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -psdk ~/openmpf-projects/openmpf-python-component-sdk\n\n\n\nTo set up the libraries manually you can run:\n\n\npip3 wheel -w ~/mpf-sdk-install/python/wheelhouse ~/openmpf-projects/openmpf-python-component-sdk/detection/api\npip3 wheel -w ~/mpf-sdk-install/python/wheelhouse ~/openmpf-projects/openmpf-python-component-sdk/detection/component_util\n\n\n\nHow to Create a Setuptools-based Python Component\n\n\nIn this example we create a setuptools-based video component named \"MyComponent\". An example of a setuptools-based\nPython component can be found\n\nhere\n.\n\n\nThis is the recommended project structure:\n\n\nComponentName\n\u251c\u2500\u2500 pyproject.toml\n\u251c\u2500\u2500 setup.cfg\n\u251c\u2500\u2500 component_name\n\u2502 \u251c\u2500\u2500 __init__.py\n\u2502 \u2514\u2500\u2500 component_name.py\n\u2514\u2500\u2500 plugin-files\n \u251c\u2500\u2500 descriptor\n \u2502 \u2514\u2500\u2500 descriptor.json\n \u2514\u2500\u2500 wheelhouse # optional\n \u2514\u2500\u2500 my_prebuilt_lib-0.1-py3-none-any.whl\n\n\n\n1. Create directory structure:\n\n\nmkdir MyComponent\nmkdir MyComponent/my_component\nmkdir -p MyComponent/plugin-files/descriptor\ntouch MyComponent/pyproject.toml\ntouch MyComponent/setup.cfg\ntouch MyComponent/my_component/__init__.py\ntouch MyComponent/my_component/my_component.py\ntouch MyComponent/plugin-files/descriptor/descriptor.json\n\n\n\n2. Create pyproject.toml file in project's top-level directory:\n\n\npyproject.toml\n should contain the following content:\n\n\n[build-system]\nrequires = [\"setuptools\"]\nbuild-backend = \"setuptools.build_meta\"\n\n\n\n3. Create setup.cfg file in project's top-level directory:\n\n\nExample of a minimal setup.cfg file:\n\n\n[metadata]\nname = MyComponent\nversion = 0.1\n\n[options]\npackages = my_component\ninstall_requires =\n mpf_component_api>=0.1\n mpf_component_util>=0.1\n\n[options.entry_points]\nmpf.exported_component =\n component = my_component.my_component:MyComponent\n\n[options.package_data]\nmy_component=models/*\n\n\n\nThe \nname\n parameter defines the distribution name. Typically the distribution name matches the component name.\n\n\nAny dependencies that the component requires should be listed in the \ninstall_requires\n field.\n\n\nThe Component Executor looks in the \nentry_points\n element and uses the \nmpf.exported_component\n field to determine\nthe component class. The right hand side of \ncomponent =\n should be the dotted module name, followed by a \n:\n,\nfollowed by the name of the class. The general pattern is\n\n'mpf.exported_component': 'component = <dotted module name>:<class name>'\n. In the above example,\n\nMyComponent\n is the class name. The module is listed as \nmy_component.my_component\n because the \nmy_component\n\npackage contains the \nmy_component.py\n file and the \nmy_component.py\n file contains the \nMyComponent\n class.\n\n\nThe \n[options.package_data]\n section is optional.
It should be used when there are non-Python files\nin a package directory that should be included when the component is installed.\n\n\n4. Create descriptor.json file in MyComponent/plugin-files/descriptor:\n\n\nThe \nbatchLibrary\n field should match the distribution name from the setup.cfg file. In this example the\nfield should be: \n\"batchLibrary\" : \"MyComponent\"\n.\nSee the \nComponent Descriptor Reference\n for details about\nthe descriptor format.\n\n\n5. Implement your component class:\n\n\nBelow is an example of the structure of a simple component. This component extends\n\nmpf_component_util.VideoCaptureMixin\n to simplify the use of\n\nmpf_component_util.VideoCapture\n. You would replace the call to\n\nrun_detection_algorithm_on_frame\n with your component-specific logic.\n\n\nimport logging\n\nimport mpf_component_api as mpf\nimport mpf_component_util as mpf_util\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent(mpf_util.VideoCaptureMixin):\n\n @staticmethod\n def get_detections_from_video_capture(video_job, video_capture):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n for result_track in run_detection_algorithm_on_frame(frame_index, frame):\n # Alternatively, while iterating through the video, add tracks to a list. When done, return that list.\n yield result_track\n\n\n\n6. Optional: Add prebuilt wheel files if not available on PyPi:\n\n\nIf your component depends on Python libraries that are not available on PyPi, the libraries can be manually added to\nyour project. The prebuilt libraries must be placed in your project's \nplugin-files/wheelhouse\n directory.\nThe prebuilt library names must be listed in your \nsetup.cfg\n file's \ninstall_requires\n field.\nIf any of the prebuilt libraries have transitive dependencies that are not available on PyPi, then those libraries\nmust also be added to your project's \nplugin-files/wheelhouse\n directory.\n\n\n7. Optional: Create the plugin package for non-Docker deployments:\n\n\nThe directory structure of the .tar.gz file will be:\n\n\nMyComponent\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 wheelhouse\n \u251c\u2500\u2500 MyComponent-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_api-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_util-0.1-py3-none-any.whl\n \u251c\u2500\u2500 numpy-1.18.4-cp38-cp38-manylinux1_x86_64.whl\n \u2514\u2500\u2500 opencv_python-4.2.0.34-cp38-cp38-manylinux1_x86_64.whl\n\n\n\nTo create the plugin packages you can run the build script as follows:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -psdk ~/openmpf-projects/openmpf-python-component-sdk -c MyComponent\n\n\n\nThe plugin package can also be built manually using the following commands:\n\n\nmkdir -p plugin-packages/MyComponent/wheelhouse\ncp -r MyComponent/plugin-files/* plugin-packages/MyComponent/\npip3 wheel -w plugin-packages/MyComponent/wheelhouse -f ~/mpf-sdk-install/python/wheelhouse -f plugin-packages/MyComponent/wheelhouse ./MyComponent/\ncd plugin-packages\ntar -zcf MyComponent.tar.gz MyComponent\n\n\n\n8. Create the component Docker image:\n\n\nSee the \nREADME\n.\n\n\nHow to Create a Basic Python Component\n\n\nIn this example we create a basic Python component that supports video. 
An example of a basic Python component can be\nfound\n\nhere\n.\n\n\nThis is the recommended project structure:\n\n\nComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json\n\n\n\n1. Create directory structure:\n\n\nmkdir MyComponent\nmkdir MyComponent/descriptor\ntouch MyComponent/descriptor/descriptor.json\ntouch MyComponent/my_component.py\n\n\n\n2. Create descriptor.json file in MyComponent/descriptor:\n\n\nThe \nbatchLibrary\n field should be the full path to the Python file containing your component class.\nIn this example the field should be: \n\"batchLibrary\" : \"${MPF_HOME}/plugins/MyComponent/my_component.py\"\n.\nSee the \nComponent Descriptor Reference\n for details about\nthe descriptor format.\n\n\n3. Implement your component class:\n\n\nBelow is an example of the structure of a simple component that does not use\n\nmpf_component_util.VideoCaptureMixin\n. You would replace the call to\n\nrun_detection_algorithm\n with your component-specific logic.\n\n\nimport logging\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_video(video_job):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n return run_detection_algorithm(video_job)\n\nEXPORT_MPF_COMPONENT = MyComponent\n\n\n\nThe Component Executor looks for a module-level variable named \nEXPORT_MPF_COMPONENT\n to specify which class\nis the component.\n\n\n4. Optional: Create the plugin package for non-Docker deployments:\n\n\nThe directory structure of the .tar.gz file will be:\n\n\nComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json\n\n\n\nTo create the plugin packages you can run the build script as follows:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -c MyComponent\n\n\n\nThe plugin package can also be built manually using the following command:\n\n\ntar -zcf MyComponent.tar.gz MyComponent\n\n\n\n5. Create the component Docker image:\n\n\nSee the \nREADME\n.\n\n\nAPI Specification\n\n\nAn OpenMPF Python component is a class that defines one or more of the get_detections_from_* methods.\n\n\ncomponent.get_detections_from_* methods\n\n\nAll get_detections_from_* methods are invoked through an instance of the component class. The only parameter passed\nin is an appropriate job object (e.g. \nmpf_component_api.ImageJob\n, \nmpf_component_api.VideoJob\n). Since the methods\nare invoked through an instance, instance methods and class methods end up with two arguments, the first is either the\ninstance or the class, respectively. All get_detections_from_* methods can be implemented either as an instance method,\na static method, or a class method.\nFor example:\n\n\ninstance method:\n\n\nclass MyComponent:\n def get_detections_from_image(self, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nstatic method:\n\n\nclass MyComponent:\n @staticmethod\n def get_detections_from_image(image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nclass method:\n\n\nclass MyComponent:\n @classmethod\n def get_detections_from_image(cls, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nAll get_detections_from_* methods must return an iterable of the appropriate detection type\n(e.g. \nmpf_component_api.ImageLocation\n, \nmpf_component_api.VideoTrack\n). 
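For example, a method can be implemented as a generator that yields detections as they are found (find_objects is a hypothetical placeholder for component-specific logic):\n\n\nclass MyComponent:\n def get_detections_from_image(self, image_job):\n # Yielding makes this method return a generator, which satisfies the iterable requirement.\n for detection in find_objects(image_job):\n yield detection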
The return value is normally a list or generator,\nbut any iterable can be used.\n\n\nImage API\n\n\ncomponent.get_detections_from_image(image_job)\n\n\nUsed to detect objects in an image file.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_image(self, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nget_detections_from_image\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nimage_job\n\n\nmpf_component_api.ImageJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.ImageLocation\n\n\n\n\nmpf_component_api.ImageJob\n\n\nClass containing data used for detection of objects in an image file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\".\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of the image in pixels\n\n \nFRAME_HEIGHT\n : the height of the image in pixels\n\n \n\n May include the following key-value pairs:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \nHORIZONTAL_FLIP\n : true if the image is mirrored across the Y-axis, otherwise false\n\n \nEXIF_ORIENTATION\n : the standard EXIF orientation tag; a value between 1 and 8\n\n \n\n \n\n \n\n \n\n \nfeed_forward_location\n\n \nNone\n or \nmpf_component_api.ImageLocation\n\n \nAn \nmpf_component_api.ImageLocation\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). 
It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.ImageLocation\n\n\nClass used to store the location of detected objects in an image file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, x_left_upper, y_left_upper, width, height, confidence=-1.0, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nx_left_upper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\ny_left_upper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nSee here for information about rotation and horizontal flipping.\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is\n\nCLASSIFICATION\n and the value is the type of object detected.\n\n\nmpf_component_api.ImageLocation(0, 0, 100, 100, 1.0, {'CLASSIFICATION': 'backpack'})\n\n\n\nmpf_component_util.ImageReader\n\n\nmpf_component_util.ImageReader\n is a utility class for accessing images. It is the image equivalent to\n\nmpf_component_util.VideoCapture\n. Like \nmpf_component_util.VideoCapture\n,\nit may modify the read-in frame data based on job_properties. From the point of view of someone using\n\nmpf_component_util.ImageReader\n, these modifications are mostly transparent. \nmpf_component_util.ImageReader\n makes\nit look like you are reading the original image file as though it has already been rotated, flipped, cropped, etc.\n\n\nOne issue with this approach is that the detection bounding boxes will be relative to the\nmodified frame data, not the original. To make the detections relative to the original image\nthe \nmpf_component_util.ImageReader.reverse_transform(image_location)\n method must be called on each\n\nmpf_component_api.ImageLocation\n.
Since the use of \nmpf_component_util.ImageReader\n is optional, the framework\ncannot automatically perform the reverse transform for the developer.\n\n\nThe general pattern for using \nmpf_component_util.ImageReader\n is as follows:\n\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_image(image_job):\n image_reader = mpf_component_util.ImageReader(image_job)\n image = image_reader.get_image()\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_image_locations = run_component_specific_algorithm(image)\n for result in result_image_locations:\n image_reader.reverse_transform(result)\n yield result\n\n\n\nAlternatively, see the documentation for \nmpf_component_util.ImageReaderMixin\n for a more concise way to use\n\nmpf_component_util.ImageReader\n below.\n\n\nmpf_component_util.ImageReaderMixin\n\n\nA mixin class that can be used to simplify the usage of \nmpf_component_util.ImageReader\n.\n\nmpf_component_util.ImageReaderMixin\n takes care of initializing a \nmpf_component_util.ImageReader\n and\nperforming the reverse transform.\n\n\nThere are some requirements to properly use \nmpf_component_util.ImageReaderMixin\n:\n\n\n\n\nThe component must extend \nmpf_component_util.ImageReaderMixin\n.\n\n\nThe component must implement \nget_detections_from_image_reader(image_job, image_reader)\n.\n\n\nThe component must read the image using the \nmpf_component_util.ImageReader\n\n that is passed in to \nget_detections_from_image_reader(image_job, image_reader)\n.\n\n\nThe component must NOT implement \nget_detections_from_image(image_job)\n.\n\n\nThe component must NOT call \nmpf_component_util.ImageReader.reverse_transform\n.\n\n\n\n\nThe general pattern for using \nmpf_component_util.ImageReaderMixin\n is as follows:\n\n\nclass MyComponent(mpf_component_util.ImageReaderMixin):\n\n @staticmethod # Can also be a regular instance method or a class method\n def get_detections_from_image_reader(image_job, image_reader):\n image = image_reader.get_image()\n\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n return run_component_specific_algorithm(image)\n\n\n\nmpf_component_util.ImageReaderMixin\n is a mixin class so it is designed in a way that does not prevent the subclass\nfrom extending other classes. If a component supports both videos and images, and it uses\n\nmpf_component_util.VideoCaptureMixin\n, it should also use\n\nmpf_component_util.ImageReaderMixin\n.\n\n\nVideo API\n\n\ncomponent.get_detections_from_video(video_job)\n\n\nUsed to detect objects in a video file. Prior to being sent to the component, videos are split into logical \"segments\"\nof video data and each segment (containing a range of frames) is assigned to a different job. Components are not\nguaranteed to receive requests in any order. 
For example, the first request processed by a component might receive a\nrequest for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_video(self, video_job):\n return [mpf_component_api.VideoTrack(...), ...]\n\n\n\nget_detections_from_video\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nvideo_job\n\n\nmpf_component_api.VideoJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.VideoTrack\n\n\n\n\nmpf_component_api.VideoJob\n\n\nClass containing data used for detection of objects in a video file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\".\n\n \n\n \n\n \nstart_frame\n\n \nint\n\n \nThe first frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \nstop_frame\n\n \nint\n\n \nThe last frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of video in milliseconds\n\n \nFPS\n : frames per second (averaged for variable frame rate video)\n\n \nFRAME_COUNT\n : the number of frames in the video\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of a frame in pixels\n\n \nFRAME_HEIGHT\n : the height of a frame in pixels\n\n \nHAS_CONSTANT_FRAME_RATE\n : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined\n\n \n\n May include the following key-value pair:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.VideoTrack\n\n \nAn \nmpf_component_api.VideoTrack\n from the previous pipeline stage. Provided when feed forward is enabled. 
See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\n\n\nIMPORTANT:\n \nFRAME_INTERVAL\n is a common job property that many components support.\nFor frame intervals greater than 1, the component must look for detections starting with the first\nframe, and then skip frames as specified by the frame interval, until or before it reaches the stop frame.\nFor example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component\nmust look for objects in frames numbered 0, 2, 4, 6, ..., 98.\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.VideoTrack\n\n\nClass used to store the location of detected objects in a video file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, start_frame, stop_frame, confidence=-1.0, frame_locations=None, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstop_frame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\nframe_locations\n\n\ndict[int, mpf_component_api.ImageLocation]\n\n\nA dict of individual detections. The key for each entry is the frame number where the detection was generated, and the value is a \nmpf_component_api.ImageLocation\n calculated as if that frame was a still image. Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nmpf_component_api.VideoTrack.detection_properties\n do not show up in the JSON output object, nor are they\nused by the WFM in any way.\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is\n\nCLASSIFICATION\n and the value is the type of object detected.\n\n\ntrack = mpf_component_api.VideoTrack(0, 1)\ntrack.frame_locations[0] = mpf_component_api.ImageLocation(0, 0, 100, 100, 0.75, {'CLASSIFICATION': 'backpack'})\ntrack.frame_locations[1] = mpf_component_api.ImageLocation(10, 10, 110, 110, 0.95, {'CLASSIFICATION': 'backpack'})\ntrack.confidence = max(il.confidence for il in track.frame_locations.values())\n\n\n\nmpf_component_util.VideoCapture\n\n\nmpf_component_util.VideoCapture\n is a utility class for reading videos.
\nmpf_component_util.VideoCapture\n works very\nsimilarly to \ncv2.VideoCapture\n, except that it might modify the video frames based on job properties. From the point\nof view of someone using \nmpf_component_util.VideoCapture\n, these modifications are mostly transparent.\n\nmpf_component_util.VideoCapture\n makes it look like you are reading the original video file as though it has already\nbeen rotated, flipped, cropped, etc. Also, if frame skipping is enabled, such as by setting the value of the\n\nFRAME_INTERVAL\n job property, it makes it look like you are reading the video as though it never contained the\nskipped frames.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the\nmodified video, not the original. To make the detections relative to the original video\nthe \nmpf_component_util.VideoCapture.reverse_transform(video_track)\n method must be called on each\n\nmpf_component_api.VideoTrack\n. Since the use of \nmpf_component_util.VideoCapture\n is optional, the framework\ncannot automatically perform the reverse transform for the developer.\n\n\nThe general pattern for using \nmpf_component_util.VideoCapture\n is as follows:\n\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_video(video_job):\n video_capture = mpf_component_util.VideoCapture(video_job)\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_tracks = run_component_specific_algorithm(frame_index, frame)\n for track in result_tracks:\n video_capture.reverse_transform(track)\n yield track\n\n\n\nAlternatively, see the documentation for \nmpf_component_util.VideoCaptureMixin\n for a more concise way to use\n\nmpf_component_util.VideoCapture\n below.\n\n\nmpf_component_util.VideoCaptureMixin\n\n\nA mixin class that can be used to simplify the usage of \nmpf_component_util.VideoCapture\n.\n\nmpf_component_util.VideoCaptureMixin\n takes care of initializing a \nmpf_component_util.VideoCapture\n and\nperforming the reverse transform.\n\n\nThere are some requirements to properly use \nmpf_component_util.VideoCaptureMixin\n:\n\n\n\n\nThe component must extend \nmpf_component_util.VideoCaptureMixin\n.\n\n\nThe component must implement \nget_detections_from_video_capture(video_job, video_capture)\n.\n\n\nThe component must read the video using the \nmpf_component_util.VideoCapture\n\n that is passed in to \nget_detections_from_video_capture(video_job, video_capture)\n.\n\n\nThe component must NOT implement \nget_detections_from_video(video_job)\n.\n\n\nThe component must NOT call \nmpf_component_util.VideoCapture.reverse_transform\n.\n\n\n\n\nThe general pattern for using \nmpf_component_util.VideoCaptureMixin\n is as follows:\n\n\nclass MyComponent(mpf_component_util.VideoCaptureMixin):\n\n @staticmethod # Can also be a regular instance method or a class method\n def get_detections_from_video_capture(video_job, video_capture):\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_tracks = run_component_specific_algorithm(frame_index, frame)\n for track in 
result_tracks:\n # Alternatively, while iterating through the video, add tracks to a list. When done, return that list.\n yield track\n\n\n\nmpf_component_util.VideoCaptureMixin\n is a mixin class so it is designed in a way that does not prevent the subclass\nfrom extending other classes. If a component supports both videos and images, and it uses\n\nmpf_component_util.VideoCaptureMixin\n, it should also use\n\nmpf_component_util.ImageReaderMixin\n.\nFor example:\n\n\nclass MyComponent(mpf_component_util.VideoCaptureMixin, mpf_component_util.ImageReaderMixin):\n\n @staticmethod\n def get_detections_from_video_capture(video_job, video_capture):\n ...\n\n @staticmethod\n def get_detections_from_image_reader(image_job, image_reader):\n ...\n\n\n\nAudio API\n\n\ncomponent.get_detections_from_audio(audio_job)\n\n\nUsed to detect objects in an audio file.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_audio(self, audio_job):\n return [mpf_component_api.AudioTrack(...), ...]\n\n\n\nget_detections_from_audio\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\naudio_job\n\n\nmpf_component_api.AudioJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.AudioTrack\n\n\n\n\nmpf_component_api.AudioJob\n\n\nClass containing data used for detection of objects in an audio file.\nCurrently, audio files are not logically segmented, so a job will contain the entirety of the audio file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.mp3\".\n\n \n\n \n\n \nstart_time\n\n \nint\n\n \nThe time (0-based index, in milliseconds) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstop_time\n\n \nint\n\n \nThe time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. 
For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of audio file in milliseconds\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.AudioTrack\n\n \nAn \nmpf_component_api.AudioTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.AudioTrack\n\n\nClass used to store the location of detected objects in an audio file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, start_time, stop_time, confidence, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event started.\n\n\n\n\n\n\nstop_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event stopped.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nmpf_component_api.AudioTrack.detection_properties\n do not show up in the JSON output object, nor are they\nused by the WFM in any way.\n\n\n\n\nGeneric API\n\n\ncomponent.get_detections_from_generic(generic_job)\n\n\nUsed to detect objects in files that are not video, image, or audio files. Such files are of the UNKNOWN type and\nhandled generically.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_generic(self, generic_job):\n return [mpf_component_api.GenericTrack(...), ...]\n\n\n\nget_detections_from_generic\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\ngeneric_job\n\n\nmpf_component_api.GenericJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.GenericTrack\n\n\n\n\nmpf_component_api.GenericJob\n\n\nClass containing data used for detection of objects in a file that isn't a video, image, or audio file.
The file is not\nlogically segmented, so a job will contain the entirety of the file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.txt\".\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pair:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.GenericTrack\n\n \nAn \nmpf_component_api.GenericTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.GenericTrack\n\n\nClass used to store the location of detected objects in a file that is not a video, image, or audio file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, confidence=-1.0, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nHow to Report Errors\n\n\nThe following is an example of how to throw an exception:\n\n\nimport mpf_component_api as mpf\n\n...\nraise mpf.DetectionError.MISSING_PROPERTY.exception(\n 'The REALLY_IMPORTANT property must be provided as a job property.')\n\n\n\nThe Python Batch Component API supports all of the same error types\nlisted \nhere\n for the C++ Batch Component API. Be sure to omit\nthe \nMPF_\n prefix. You can replace the \nMISSING_PROPERTY\n part in the above code with any other error type. 
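For instance, assuming COULD_NOT_READ_DATAFILE is one of the supported error types carried over from the C++ list, a component that cannot read its input media might raise:\n\n\nimport mpf_component_api as mpf\n\n...\nraise mpf.DetectionError.COULD_NOT_READ_DATAFILE.exception(\n 'Failed to read the input media file. The file may be corrupt.')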
When\ngenerating an exception, choose the type that best describes your error.\n\n\nPython Component Build Environment\n\n\nAll Python components must work with CPython 3.8.10. Also, Python components\nmust work with the Linux version that is used by the OpenMPF Component\nExecutable. At this writing, OpenMPF runs on\nUbuntu 20.04 (kernel version 5.13.0-30). Pure Python code should work on any\nOS, but incompatibility issues can arise when using Python libraries that\ninclude compiled extension modules. Python libraries are typically distributed\nas wheel files. The wheel format requires that the file name follows the pattern\nof \n<distribution>-<version>-<python tag>-<abi tag>-<platform tag>.whl\n.\n\n<python tag>-<abi tag>-<platform tag>\n are called\n\ncompatibility tags\n. For example,\n\nmpf_component_api\n is pure Python, so the name of its wheel file is\n\nmpf_component_api-0.1-py3-none-any.whl\n. \npy3\n means it will work with any\nPython 3 implementation because it does not use any implementation-specific\nfeatures. \nnone\n means that it does not use the Python ABI. \nany\n means it will\nwork on any platform.\n\n\nThe following combinations of compatibility tags are supported:\n\n\n\n\ncp38-cp38-manylinux2014_x86_64\n\n\ncp38-cp38-manylinux2010_x86_64\n\n\ncp38-cp38-manylinux1_x86_64\n\n\ncp38-cp38-linux_x86_64\n\n\ncp38-abi3-manylinux2014_x86_64\n\n\ncp38-abi3-manylinux2010_x86_64\n\n\ncp38-abi3-manylinux1_x86_64\n\n\ncp38-abi3-linux_x86_64\n\n\ncp38-none-manylinux2014_x86_64\n\n\ncp38-none-manylinux2010_x86_64\n\n\ncp38-none-manylinux1_x86_64\n\n\ncp38-none-linux_x86_64\n\n\ncp37-abi3-manylinux2014_x86_64\n\n\ncp37-abi3-manylinux2010_x86_64\n\n\ncp37-abi3-manylinux1_x86_64\n\n\ncp37-abi3-linux_x86_64\n\n\ncp36-abi3-manylinux2014_x86_64\n\n\ncp36-abi3-manylinux2010_x86_64\n\n\ncp36-abi3-manylinux1_x86_64\n\n\ncp36-abi3-linux_x86_64\n\n\ncp35-abi3-manylinux2014_x86_64\n\n\ncp35-abi3-manylinux2010_x86_64\n\n\ncp35-abi3-manylinux1_x86_64\n\n\ncp35-abi3-linux_x86_64\n\n\ncp34-abi3-manylinux2014_x86_64\n\n\ncp34-abi3-manylinux2010_x86_64\n\n\ncp34-abi3-manylinux1_x86_64\n\n\ncp34-abi3-linux_x86_64\n\n\ncp33-abi3-manylinux2014_x86_64\n\n\ncp33-abi3-manylinux2010_x86_64\n\n\ncp33-abi3-manylinux1_x86_64\n\n\ncp33-abi3-linux_x86_64\n\n\ncp32-abi3-manylinux2014_x86_64\n\n\ncp32-abi3-manylinux2010_x86_64\n\n\ncp32-abi3-manylinux1_x86_64\n\n\ncp32-abi3-linux_x86_64\n\n\npy38-none-manylinux2014_x86_64\n\n\npy38-none-manylinux2010_x86_64\n\n\npy38-none-manylinux1_x86_64\n\n\npy38-none-linux_x86_64\n\n\npy3-none-manylinux2014_x86_64\n\n\npy3-none-manylinux2010_x86_64\n\n\npy3-none-manylinux1_x86_64\n\n\npy3-none-linux_x86_64\n\n\npy37-none-manylinux2014_x86_64\n\n\npy37-none-manylinux2010_x86_64\n\n\npy37-none-manylinux1_x86_64\n\n\npy37-none-linux_x86_64\n\n\npy36-none-manylinux2014_x86_64\n\n\npy36-none-manylinux2010_x86_64\n\n\npy36-none-manylinux1_x86_64\n\n\npy36-none-linux_x86_64\n\n\npy35-none-manylinux2014_x86_64\n\n\npy35-none-manylinux2010_x86_64\n\n\npy35-none-manylinux1_x86_64\n\n\npy35-none-linux_x86_64\n\n\npy34-none-manylinux2014_x86_64\n\n\npy34-none-manylinux2010_x86_64\n\n\npy34-none-manylinux1_x86_64\n\n\npy34-none-linux_x86_64\n\n\npy33-none-manylinux2014_x86_64\n\n\npy33-none-manylinux2010_x86_64\n\n\npy33-none-manylinux1_x86_64\n\n\npy33-none-linux_x86_64\n\n\npy32-none-manylinux2014_x86_64\n\n\npy32-none-manylinux2010_x86_64\n\n\npy32-none-manylinux1_x86_64\n\n\npy32-none-linux_x86_64\n\n\npy31-none-manylinux2014_x86_64\n\n\npy31-none-manylinux2010_x86_64\n\n\npy31-none-manylinux1_x86_64\n\n\npy31-none-linux_x86_64\n\n\npy30-none-manylinux2014_x86_64\n\n\npy30
-none-manylinux2010_x86_64\n\n\npy30-none-manylinux1_x86_64\n\n\npy30-none-linux_x86_64\n\n\ncp38-none-any\n\n\npy38-none-any\n\n\npy3-none-any\n\n\npy37-none-any\n\n\npy36-none-any\n\n\npy35-none-any\n\n\npy34-none-any\n\n\npy33-none-any\n\n\npy32-none-any\n\n\npy31-none-any\n\n\npy30-none-any\n\n\n\n\nThe list above was generated with the following command:\n\npython3 -c 'import pip._internal.pep425tags as tags; print(\"\\n\".join(str(t) for t in tags.get_supported()))'\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or\nfiles needed for execution. This includes all other non-standard libraries used by the component\n(aside from the standard Python libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through\nmultiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input\n(i.e. when processing the same job).\n\n\nLogging\n\n\nIt is recommended that components use Python's built-in\n\nlogging\n module.\n The component should\n\nimport logging\n and call \nlogging.getLogger('<component_name>')\n to get a logger instance.\nThe component should not configure logging itself. The Component Executor will configure the\n\nlogging\n module for the component. The logger will write log messages to standard error and\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/<component_name>.log\n. Note that multiple instances of the\nsame component can log to the same file. Also, logging content can span multiple lines.\n\n\nThe following log levels are supported: \nFATAL, ERROR, WARN, INFO, DEBUG\n.\nThe \nLOG_LEVEL\n environment variable can be set to one of the log levels to change the logging\nverbosity. When \nLOG_LEVEL\n is absent, \nINFO\n is used.\n\n\nThe format of the log messages is:\n\n\nDATE TIME LEVEL [SOURCE_FILE:LINE_NUMBER] - MESSAGE\n\n\n\nFor example:\n\n\n2018-05-03 14:41:11,703 INFO [test_component.py:44] - Logged message", "title": "Python Batch Component API" }, { @@ -1082,7 +1082,7 @@ }, { "location": "/Java-Batch-Component-API/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Batch Component API currently supports the development of \ndetection components\n, which are used to detect objects in image, video, audio, or other (generic) files that reside on disk.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\nTranscription (Detecting speech and transcribing it into text)\n\n\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executor\n. Developers create component libraries that encapsulate the component detection logic.
Each instance of the Component Executor loads one of these libraries and uses it to service job requests sent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executor:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes methods on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executor is as follows:\n\n\ncomponent.setRunDirectory(...)\ncomponent.init()\nwhile (true) {\n job = ReceiveJob()\n if (component.supports(job.dataType))\n component.getDetections(...) // Component does the work here\n }\ncomponent.close()\n\n\n\nEach instance of a Component Executor runs as a separate process.\n\n\nThe Component Executor receives and parses requests from the WFM, invokes methods on the Component Logic to get detection objects, and subsequently populates responses with the component output and sends them to the WFM.\n\n\nA component developer implements a detection component by extending \nMPFDetectionComponentBase\n.\n\n\nAs an alternative to extending \nMPFDetectionComponentBase\n directly, a developer may extend one of several convenience adapter classes provided by OpenMPF. See \nConvenience Adapters\n for more information.\n\n\nGetting Started\n\n\nThe quickest way to get started with the Java Batch Component API is to first read the \nOpenMPF Component API Overview\n and then \nreview the source\n for example OpenMPF Java detection components.\n\n\nDetection components are implemented by:\n\n\n\n\nExtending \nMPFDetectionComponentBase\n.\n\n\nBuilding the component into a jar. (See \nHelloWorldComponent pom.xml\n).\n\n\nCreating a component Docker image. (See the \nREADME\n).\n\n\n\n\nAPI Specification\n\n\nThe figure below presents a high-level component diagram of the Java Batch Component API:\n\n\n\n\nThe Node Manager is only used in a non-Docker deployment. In a Docker deployment the Component Executor is started by the Docker container itself.\n\n\nThe API consists of \nComponent Interfaces\n, which provide interfaces and abstract classes for developing components; \nJob Definitions\n, which define the work to be performed by a component; \nJob Results\n, which define the results generated by the component; and \nComponent Adapters\n, which provide default implementations of several of the \nMPFDetectionComponentInterface\n methods (See the \nMPFAudioAndVideoDetectionComponentAdapter\n for an example; \nTODO: implement those shown in the diagram\n). In the future, the API will also include \nComponent Utilities\n, which perform actions such as image flipping, rotation, and cropping.\n\n\nComponent Interfaces\n\n\n\n\nMPFComponentInterface\n - Interface for all Java components that perform batch processing.\n\n\nMPFComponentBase\n - An abstract baseline for components. Provides default implementations for \nMPFComponentInterface\n.\n\n\n\n\nDetection Component Interfaces\n\n\n\n\nMPFDetectionComponentInterface\n - Baseline interface for detection components.\n\n\nMPFDetectionComponentBase\n - An abstract baseline for detection components. 
Provides default implementations for \nMPFDetectionComponentInterface\n.\n\n\n\n\nJob Definitions\n\n\nThe following classes define the details about a specific job (work unit):\n\n\n\n\nMPFImageJob\n extends \nMPFJob\n\n\nMPFVideoJob\n extends \nMPFJob\n\n\nMPFAudioJob\n extends \nMPFJob\n\n\nMPFGenericJob\n extends \nMPFJob\n\n\n\n\nJob Results\n\n\nThe following classes define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\nMPFAudioTrack\n\n\nMPFGenericTrack\n\n\n\n\nComponent Interface\n\n\nThe OpenMPF Component class structure consists of:\n\n\n\n\nMPFComponentInterface\n - Interface for all OpenMPF Java components that perform batch processing.\n\n\nMPFComponentBase\n - An abstract baseline for components. Provides default implementations for \nMPFComponentInterface\n.\n\n\n\n\n\n\nIMPORTANT:\n This interface and abstract class should not be directly implemented because no mechanism exists for launching components based off of it. Instead, it defines the contract that components must follow. Currently, the only supported type of batch component is \"DETECTION\". Those components should extend \nMPFDetectionComponentBase\n\n\n\n\nSee the latest source here.\n\n\nsetRunDirectory(String)\n\n\nSets the value to the full path of the parent folder above where the component is installed.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic void setRunDirectory(String runDirectory);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nrunDirectory\n\n\nString\n\n\nFull path of the parent folder above where the component is installed.\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nIMPORTANT:\n \nsetRunDirectory\n is called by the Component Executor to set the correct path. It is not necessary to call this method in your component implementation.\n\n\n\n\ngetRunDirectory()\n\n\nReturns the full path of the parent folder above where the component is installed.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic String getRunDirectory()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nString\n) Full path of the parent folder above where the component is installed.\n\n\n\n\n\n\ninit()\n\n\nPerforms any necessary startup tasks for the component. This will be executed once by the Component Executor, on component startup, before the first job, after \nsetRunDirectory\n.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic void init()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic void init() {\n // Setup logger, Load data models, etc.\n}\n\n\n\nclose()\n\n\nPerforms any necessary shutdown tasks for the component. 
This will be executed once by the Component Executor, on component shutdown, usually after the last job.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic void close()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic void close() {\n // Close file handlers, etc.\n}\n\n\n\ngetComponentType()\n\n\nAllows the Component API to determine the component \"type.\" Currently \nDETECTION\n is the only supported component type.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic MPFComponentType getComponentType()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nMPFComponentType\n) Currently, \nDETECTION\n is the only supported return value.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic MPFComponentType getComponentType() {\n return MPFComponentType.DETECTION;\n}\n\n\n\nDetection Component Interface\n\n\nThe \nMPFDetectionComponentInterface\n must be utilized by all OpenMPF Java detection components that perform batch processing.\n\n\nEvery batch detection component must define a \ncomponent\n class which implements the MPFComponentInterface. This is typically performed by extending \nMPFDetectionComponentBase\n, which extends \nMPFComponentBase\n and implements \nMPFDetectionComponentInterface\n.\n\n\nTo designate the component class, every batch detection component should include an applicationContext.xml which defines the \ncomponent\n bean. The \ncomponent\n bean class must implement \nMPFDetectionComponentInterface\n.\n\n\n\n\nIMPORTANT:\n Each batch detection component must implement all of the \ngetDetections()\n methods or extend from a superclass which provides implementations for them (see \nconvenience adapters\n).\n\n\nIf your component does not support a particular data type, it should simply:\n\n\nthrow new MPFComponentDetectionError(MPFDetectionError.MPF_UNSUPPORTED_DATA_TYPE);\n\n\n\n\nConvenience Adapters\n\n\nAs an alternative to extending \nMPFDetectionComponentBase\n directly, developers may extend a convenience adapter classes provided by OpenMPF.\n\n\nThese adapters provide default implementations of several methods in \nMPFDetectionComponentInterface\n and ensure that the component's logic properly extends from the Component API. This enables developers to concentrate on implementation of the detection algorithm.\n\n\nThe following adapter is provided:\n\n\n\n\nAudio And Video Detection Component Adapter (\nsource\n)\n\n\n\n\n\n\nExample: Using Adaptors to Provide Simple AudioVisual Handling:\n\nMany components designed to work on audio files, such as speech detection, are relevant to video files as well. Some of the tools for these components, however, only function on audio files (such as .wav, .mp3) and not video files (.avi, .mov, etc).\n\n\nThe \nMPFAudioAndVideoDetectionComponentAdapter\n adapter class implements the \ngetDetections(MPFVideoJob)\n method by translating the video request into an audio request. It builds a temporary audio file by ripping the audio from the video media input, translates the \nMPFVideoJob\n into an \nMPFAudioJob\n, and invokes \ngetDetections(MPFAudioJob)\n on the generated file. 
Once processing is done, the adapter translates the \nMPFAudioTrack\n list into an \nMPFVideoTrack\n list.\n\n\nSince only audio and video files are relevant to this adapter, it provides a default implementation of the \ngetDetections(MPFImageJob)\n method which throws \nnew MPFComponentDetectionError(MPFDetectionError.MPF_UNSUPPORTED_DATA_TYPE)\n.\n\n\nThe Sphinx speech detection component uses this adapter to run Sphinx speech detection on video files. Other components that need to process video files as audio may also use the adapter.\n\n\n\n\nsupports(MPFDataType)\n\n\nIndicates whether the component supports a given data type.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic boolean supports(MPFDataType dataType)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\ndataType\n\n\nMPFDataType\n\n\nThe data type of the media to be processed: IMAGE, VIDEO, AUDIO, or UNKNOWN (generic).\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nboolean\n) True if the component supports the data type, otherwise false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\n// Sample Component that supports only image and video files\npublic boolean supports(MPFDataType dataType) {\n return dataType == MPFDataType.IMAGE || dataType == MPFDataType.VIDEO;\n}\n\n\n\ngetDetections(MPFImageJob)\n\n\nUsed to detect objects in image files. The \nMPFImageJob\n class contains the URI specifying the location of the image file.\n\n\nCurrently, the dataUri is always a local file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\". This is because all media is copied to the OpenMPF server before the job is executed.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List<MPFImageLocation> getDetections(MPFImageJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFImageJob\n\n\nClass containing details about the work to be performed. See \nMPFImageJob\n.\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList<MPFImageLocation>\n) The \nMPFImageLocation\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List<MPFImageLocation> getDetections(MPFImageJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate image locations\n}\n\n
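For illustration only, a more complete \ngetDetections(MPFImageJob)\n might look like the sketch below. The \nMyDetector\n class, its \ndetect\n method, and the \nBox\n type are hypothetical stand-ins for a real detection algorithm; \nList\n, \nMap\n, \nArrayList\n, and \nHashMap\n come from java.util, and the job is assumed to expose its data URI through a \ngetDataUri()\n accessor:\n\n\npublic List<MPFImageLocation> getDetections(MPFImageJob job)\n throws MPFComponentDetectionError {\n List<MPFImageLocation> locations = new ArrayList<>();\n for (Box box : MyDetector.detect(job.getDataUri())) { // hypothetical detector\n Map<String, String> props = new HashMap<>();\n props.put(\"CLASSIFICATION\", box.getLabel());\n locations.add(new MPFImageLocation(box.getX(), box.getY(),\n box.getWidth(), box.getHeight(), box.getScore(), props));\n }\n return locations;\n}\n\n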
getDetections(MPFVideoJob)\n\n\nUsed to detect objects in a video.\n\n\nPrior to being sent to the component, videos are split into logical \"segments\" of video data and each segment (containing a range of frames) is assigned to a different job. Components are not guaranteed to receive requests in any particular order. For example, the first request processed by a component might cover frames 300-399 of Video A, while the next request may cover frames 900-999 of Video B.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List<MPFVideoTrack> getDetections(MPFVideoJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFVideoJob\n\n\nClass containing details about the work to be performed. See \nMPFVideoJob\n.\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList<MPFVideoTrack>\n) The \nMPFVideoTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List<MPFVideoTrack> getDetections(MPFVideoJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate video tracks\n}\n\n\n\ngetDetections(MPFAudioJob)\n\n\nUsed to detect objects in audio files. Currently, audio files are not logically segmented, so a job will contain the entirety of the audio file.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List<MPFAudioTrack> getDetections(MPFAudioJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFAudioJob\n\n\nClass containing details about the work to be performed. See \nMPFAudioJob\n.\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList<MPFAudioTrack>\n) The \nMPFAudioTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List<MPFAudioTrack> getDetections(MPFAudioJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate audio tracks\n}\n\n\n\ngetDetections(MPFGenericJob)\n\n\nUsed to detect objects in files that aren't video, image, or audio files. Such files are of the UNKNOWN type and handled generically. These files are not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List<MPFGenericTrack> getDetections(MPFGenericJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFGenericJob\n\n\nClass containing details about the work to be performed. See \nMPFGenericJob\n.\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList<MPFGenericTrack>\n) The \nMPFGenericTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List<MPFGenericTrack> getDetections(MPFGenericJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate generic tracks\n}\n\n\n\nMPFComponentDetectionError\n\n\nAn exception that occurs in a component. The exception must contain a reference to a valid \nMPFDetectionError\n.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFComponentDetectionError (\n MPFDetectionError error,\n String msg,\n Exception e\n)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nerror\n\n\nMPFDetectionError\n\n\nThe type of error generated by the component. See \nMPFDetectionError\n.\n\n\n\n\n\n\nmsg\n\n\nString\n\n\nThe detail message (which is saved for later retrieval by the \nThrowable.getMessage()\n method).\n\n\n\n\n\n\ne\n\n\nException\n\n\nThe cause (which is saved for later retrieval by the \nThrowable.getCause()\n method). A null value is permitted.\n\n
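For illustration, a component might wrap a lower-level failure in an \nMPFComponentDetectionError\n as sketched below; the \nreadMedia\n helper is hypothetical, \nIOException\n comes from java.io, and the job is assumed to expose its data URI through a \ngetDataUri()\n accessor:\n\n\ntry {\n byte[] data = readMedia(job.getDataUri()); // hypothetical helper\n // ... run detection on the data ...\n} catch (IOException e) {\n throw new MPFComponentDetectionError(\n MPFDetectionError.MPF_COULD_NOT_READ_MEDIA,\n \"Failed to read \" + job.getDataUri(), e);\n}\n\n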
For example, \"/opt/mpf/share/remote-media/test-file.avi\".\n\n\n\n\n\n\njobProperties \n\n\nMap\n\n\nThe key corresponds to the property name specified in the component descriptor file described in \"Installing and Registering a Component\". Values are determined by an end user when creating a pipeline. \n Note: Only those property values specified by the user will be in the jobProperties map; for properties not contained in the map, the component must use a default value.\n\n\n\n\n\n\nmediaProperties \n\n\nMap\n\n\nMetadata about the media associated with the job. The key is the property name and value is the property value. The entries in the map vary depend on the job type. They are defined in the specific Job's API description.\n\n\n\n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nMPFImageJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in image files.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFImageJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties)\n\n\n\npublic MPFImageJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n MPFImageLocation location)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \njobProperties\n\n \nMap\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of the image in pixels\n\n \nFRAME_HEIGHT\n : the height of the image in pixels\n\n \n\n May include the following key-value pairs:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \nHORIZONTAL_FLIP\n : true if the image is mirrored across the Y-axis, otherwise false\n\n \nEXIF_ORIENTATION\n : the standard EXIF orientation tag; a value between 1 and 8\n\n \n\n \n\n \n\n \n\n \nlocation\n\n \nMPFImageLocation\n\n \nAn \nMPFImageLocation\n from the previous pipeline stage. Provided when feed forward is enabled. 
MPFImageJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in image files.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFImageJob(\n String jobName,\n String dataUri,\n final Map<String, String> jobProperties,\n final Map<String, String> mediaProperties)\n\n\n\npublic MPFImageJob(\n String jobName,\n String dataUri,\n final Map<String, String> jobProperties,\n final Map<String, String> mediaProperties,\n MPFImageLocation location)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \njobProperties\n\n \nMap<String, String>\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap<String, String>\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of the image in pixels\n\n \nFRAME_HEIGHT\n : the height of the image in pixels\n\n \n\n May include the following key-value pairs:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \nHORIZONTAL_FLIP\n : true if the image is mirrored across the Y-axis, otherwise false\n\n \nEXIF_ORIENTATION\n : the standard EXIF orientation tag; a value between 1 and 8\n\n \n\n \n\n \n\n \n\n \nlocation\n\n \nMPFImageLocation\n\n \nAn \nMPFImageLocation\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nMPFVideoJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in video files.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFVideoJob(\n String jobName,\n String dataUri,\n final Map<String, String> jobProperties,\n final Map<String, String> mediaProperties,\n int startFrame,\n int stopFrame)\n\n\n\npublic MPFVideoJob(\n String jobName,\n String dataUri,\n final Map<String, String> jobProperties,\n final Map<String, String> mediaProperties,\n int startFrame,\n int stopFrame,\n MPFVideoTrack track)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \nstartFrame\n\n \nint\n\n \nThe first frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \nstopFrame\n\n \nint\n\n \nThe last frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \njobProperties\n\n \nMap<String, String>\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap<String, String>\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of video in milliseconds\n\n \nFPS\n : frames per second (averaged for variable frame rate video)\n\n \nFRAME_COUNT\n : the number of frames in the video\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of a frame in pixels\n\n \nFRAME_HEIGHT\n : the height of a frame in pixels\n\n \nHAS_CONSTANT_FRAME_RATE\n : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined\n\n \n\n May include the following key-value pair:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \n\n \n\n \n\n \n\n \ntrack\n\n \nMPFVideoTrack\n\n \nAn \nMPFVideoTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\n\n\nIMPORTANT:\n \nFRAME_INTERVAL\n is a common job property that many components support. For frame intervals greater than 1, the component must look for detections starting with the first frame, and then skip frames as specified by the frame interval, until or before it reaches the stop frame. For example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, the detection component must look for objects in frames numbered 0, 2, 4, 6, ..., 98.\n\n
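To make the FRAME_INTERVAL contract concrete, here is a sketch of the frame loop a video component might use; the getter-style accessors and the \nprocessFrame\n helper are assumptions for illustration, not part of the documented API:\n\n\nint interval = Integer.parseInt(\n job.getJobProperties().getOrDefault(\"FRAME_INTERVAL\", \"1\"));\nfor (int frame = job.getStartFrame(); frame <= job.getStopFrame(); frame += interval) {\n processFrame(frame); // hypothetical per-frame detection\n}\n\n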
MPFAudioJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in audio files.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFAudioJob(\n String jobName,\n String dataUri,\n final Map<String, String> jobProperties,\n final Map<String, String> mediaProperties,\n int startTime,\n int stopTime)\n\n\n\npublic MPFAudioJob(\n String jobName,\n String dataUri,\n final Map<String, String> jobProperties,\n final Map<String, String> mediaProperties,\n int startTime,\n int stopTime,\n MPFAudioTrack track)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \nstartTime\n\n \nint\n\n \nThe time (0-based index, in ms) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstopTime\n\n \nint\n\n \nThe time (0-based index, in ms) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \njobProperties\n\n \nMap<String, String>\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap<String, String>\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of audio file in milliseconds\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \ntrack\n\n \nMPFAudioTrack\n\n \nAn \nMPFAudioTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nMPFGenericJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in a file that isn't a video, image, or audio file. The file is of the UNKNOWN type and handled generically. The file is not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFGenericJob(\n String jobName,\n String dataUri,\n final Map<String, String> jobProperties,\n final Map<String, String> mediaProperties)\n\n\n\npublic MPFGenericJob(\n String jobName,\n String dataUri,\n final Map<String, String> jobProperties,\n final Map<String, String> mediaProperties,\n MPFGenericTrack track)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \njobProperties\n\n \nMap<String, String>\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap<String, String>\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pair:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \ntrack\n\n \nMPFGenericTrack\n\n \nAn \nMPFGenericTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n
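As a hedged example, a component that scans an UNKNOWN-type file end to end and reports a single summary track might look like the following sketch; the \nscanFile\n helper and the \nMATCH_COUNT\n key are hypothetical, and \nCollections\n and \nHashMap\n come from java.util:\n\n\npublic List<MPFGenericTrack> getDetections(MPFGenericJob job)\n throws MPFComponentDetectionError {\n Map<String, String> props = new HashMap<>();\n props.put(\"MATCH_COUNT\", String.valueOf(scanFile(job.getDataUri()))); // hypothetical helper\n // A confidence of -1.0f indicates no meaningful confidence value.\n return Collections.singletonList(new MPFGenericTrack(-1.0f, props));\n}\n\n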
Detection Job Result Classes\n\n\nMPFImageLocation\n\n\nClass used to store the location of detected objects in an image.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFImageLocation(\n int xLeftUpper,\n int yLeftUpper,\n int width,\n int height,\n float confidence,\n Map<String, String> detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nxLeftUpper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\nyLeftUpper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetectionProperties\n\n\nMap<String, String>\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetectionProperties\n where the key is \nCLASSIFICATION\n and the value is the type of object detected.\n\n\nMap<String, String> detectionProperties = new HashMap<>();\ndetectionProperties.put(\"CLASSIFICATION\", \"backpack\");\nMPFImageLocation imageLocation = new MPFImageLocation(0, 0, 100, 100, 1.0f, detectionProperties);\n\n\n\nMPFVideoTrack\n\n\nClass used to store the location of detected objects in a video.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFVideoTrack(\n int startFrame,\n int stopFrame,\n Map<Integer, MPFImageLocation> frameLocations,\n float confidence,\n Map<String, String> detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstartFrame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstopFrame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nframeLocations\n\n\nMap<Integer, MPFImageLocation>\n\n\nA map of individual detections. The key for each map entry is the frame number where the detection was generated, and the value is an \nMPFImageLocation\n calculated as if that frame were a still image. Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame. In some cases, frames are deliberately skipped, as when a FRAME_INTERVAL > 1 is specified.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetectionProperties\n\n\nMap<String, String>\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the properties map. For best practice, keys should be in all CAPS.\n\n
\n\n\nExample:\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFVideoTrack.detectionProperties\n are not included in the JSON output object, nor are they used by the WFM in any way.\n\n\n\n\nA component that detects text could add an entry to \ndetectionProperties\n where the key is \nTRANSCRIPT\n and the value is a string representing the text found in the video segment.\n\n\nMap<String, String> detectionProperties = new HashMap<>();\ndetectionProperties.put(\"TRANSCRIPT\", \"RE5ULTS FR0M A TEXT DETECTER\");\nMPFVideoTrack videoTrack = new MPFVideoTrack(0, 5, frameLocations, 1.0f, detectionProperties);\n\n\n\nMPFAudioTrack\n\n\nClass used to store detections found in an audio file.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFAudioTrack(\n int startTime,\n int stopTime,\n float confidence,\n Map<String, String> detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstartTime\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event started.\n\n\n\n\n\n\nstopTime\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event stopped.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetectionProperties\n\n\nMap<String, String>\n\n\nOptional additional information about the detection. There is no restriction on the keys or the number of entries that can be added to the properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFAudioTrack.detectionProperties\n are not included in the JSON output object, nor are they used by the WFM in any way.\n\n
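For symmetry with the image and video examples, an \nMPFAudioTrack\n might be constructed as follows; the \nSPEAKER_ID\n key and the millisecond times are illustrative only:\n\n\nMap<String, String> detectionProperties = new HashMap<>();\ndetectionProperties.put(\"SPEAKER_ID\", \"speaker-1\");\nMPFAudioTrack audioTrack = new MPFAudioTrack(1200, 5800, 0.9f, detectionProperties);\n\n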
MPFGenericTrack\n\n\nClass used to store the location of detected objects in a file that is not a video, image, or audio file. The file is of the UNKNOWN type and handled generically.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFGenericTrack(\n float confidence,\n Map<String, String> detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetectionProperties\n\n\nMap<String, String>\n\n\nOptional additional information about the detection. There is no restriction on the keys or the number of entries that can be added to the properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nEnumeration Types\n\n\nMPFDetectionError\n\n\nEnum used to indicate the status of \ngetDetections\n in an \nMPFComponentDetectionError\n. A component is not required to support all error types.\n\n\n\n\n\n\n\n\nENUM\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nMPF_DETECTION_SUCCESS\n\n\nThe component function completed successfully.\n\n\n\n\n\n\nMPF_OTHER_DETECTION_ERROR_TYPE\n\n\nThe component method has failed for a reason that is not captured by any of the other error codes.\n\n\n\n\n\n\nMPF_DETECTION_NOT_INITIALIZED\n\n\nThe initialization of the component, or the initialization of any of its dependencies, has failed for any reason.\n\n\n\n\n\n\nMPF_UNSUPPORTED_DATA_TYPE\n\n\nThe job passed to a component requests processing of a job of an unsupported type. For instance, a component that is only capable of processing audio files should return this error code if a video or image job request is received.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_DATAFILE\n\n\nThe data file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI. \nUse MPF_COULD_NOT_OPEN_MEDIA for media files.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_DATAFILE\n\n\nThere is a failure reading data from a successfully opened input data file. \nUse MPF_COULD_NOT_READ_MEDIA for media files.\n\n\n\n\n\n\nMPF_FILE_WRITE_ERROR\n\n\nThe component received a failure for any reason when attempting to write to a file.\n\n\n\n\n\n\nMPF_BAD_FRAME_SIZE\n\n\nThe frame data retrieved has an incorrect or invalid frame size.\n\n\n\n\n\n\nMPF_DETECTION_FAILED\n\n\nGeneral failure of a detection algorithm. This does not indicate a lack of detections found in the media, but rather a breakdown in the algorithm that makes it impossible to continue trying to detect objects.\n\n\n\n\n\n\nMPF_INVALID_PROPERTY\n\n\nThe component received a property that is unrecognized or has an invalid/out-of-bounds value.\n\n\n\n\n\n\nMPF_MISSING_PROPERTY\n\n\nThe component received a job that is missing a required property.\n\n\n\n\n\n\nMPF_MEMORY_ALLOCATION_FAILED\n\n\nThe component failed to allocate memory for any reason.\n\n\n\n\n\n\nMPF_GPU_ERROR\n\n\nThe job was configured to execute on a GPU, but there was an issue with the GPU or no GPU was detected.\n\n\n\n\n\n\nMPF_NETWORK_ERROR\n\n\nThe component failed to communicate with an external system over the network. The system may not be available or there may have been a timeout.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_MEDIA\n\n\nThe media file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_MEDIA\n\n\nThere is a failure reading data from a successfully opened media file.\n\n\n\n\n\n\n\n\nUtility Classes\n\n\nTODO: Implement Java utility classes\n\n\nJava Component Build Environment\n\n\nA Java component must be built using a version of the Java SDK that is compatible with the one used to build the Java Component Executor. The OpenMPF Java Component Executor is currently built using OpenJDK 11.0.11. In general, the Java SDK is backwards compatible.\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and Java SDK libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode.
OpenMPF will parallelize components through multiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input (i.e. when processing the same \nMPFJob\n).\n\n\nComponent Structure for non-Docker Deployments\n\n\nIt is recommended that Java components are organized according to the following directory structure:\n\n\ncomponentName\n\u251c\u2500\u2500 config - Other component-specific configuration\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 lib - All libraries required by the component\n \u2514\u2500\u2500 libComponentName.jar - Compiled component library\n\n\n\nOnce built, components should be packaged into a .tar.gz containing the contents of the directory shown above.\n\n\nLogging\n\n\nIt is recommended to use \nslf4j\n with \nlog4j2\n for OpenMPF Java Component logging. Multiple instances of the same component can log to the same file. Logging content can span multiple lines.\n\n\nLog files should be output to:\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/<component-name>.log\n\n\nEach log statement must take the form:\n\nDATE TIME LEVEL CONTENT\n\n\nThe following log LEVELs are supported:\n \nFATAL, ERROR, WARN, INFO, DEBUG, TRACE\n.\n\n\nFor example:\n\n2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]\n\n\nA log4j2 configuration along the following lines can be used to match the format of other OpenMPF logs:\n\n\n<Configuration>\n <Properties>\n <Property name=\"logFile\">${env:MPF_LOG_PATH}/${env:THIS_MPF_NODE}/log/sample-component-detection.log</Property>\n <Property name=\"layoutPattern\">%date %level [%thread] %logger{1.} - %msg%n</Property>\n </Properties>\n <Appenders>\n <Console name=\"STDOUT\" target=\"SYSTEM_OUT\">\n <PatternLayout pattern=\"${layoutPattern}\"/>\n </Console>\n <RollingFile name=\"FILE\" fileName=\"${logFile}\" filePattern=\"${logFile}.%i\">\n <PatternLayout pattern=\"${layoutPattern}\"/>\n <Policies>\n <SizeBasedTriggeringPolicy size=\"50 MB\"/>\n </Policies>\n <DefaultRolloverStrategy max=\"10\"/>\n </RollingFile>\n </Appenders>\n <Loggers>\n <Root level=\"INFO\">\n <AppenderRef ref=\"STDOUT\"/>\n <AppenderRef ref=\"FILE\"/>\n </Root>\n </Loggers>\n</Configuration>
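As a brief, hedged illustration of the recommended slf4j usage (the class name and log message are arbitrary):\n\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\npublic class SampleComponent {\n private static final Logger LOG = LoggerFactory.getLogger(SampleComponent.class);\n\n public void init() {\n LOG.info(\"Starting sample-component: [ OK ]\");\n }\n}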
", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Batch Component API currently supports the development of \ndetection components\n, which are used to detect objects in image, video, audio, or other (generic) files that reside on disk.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\nTranscription (Detecting speech and transcribing it into text)\n\n\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executor\n. Developers create component libraries that encapsulate the component detection logic. Each instance of the Component Executor loads one of these libraries and uses it to service job requests sent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executor:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes methods on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executor is as follows:\n\n\ncomponent.setRunDirectory(...)\ncomponent.init()\nwhile (true) {\n job = ReceiveJob()\n if (component.supports(job.dataType))\n component.getDetections(...) // Component does the work here\n}\ncomponent.close()\n\n\n\nEach instance of a Component Executor runs as a separate process.\n\n\nThe Component Executor receives and parses requests from the WFM, invokes methods on the Component Logic to get detection objects, and subsequently populates responses with the component output and sends them to the WFM.\n\n\nA component developer implements a detection component by extending \nMPFDetectionComponentBase\n.\n\n\nAs an alternative to extending \nMPFDetectionComponentBase\n directly, a developer may extend one of several convenience adapter classes provided by OpenMPF. See \nConvenience Adapters\n for more information.\n\n\nGetting Started\n\n\nThe quickest way to get started with the Java Batch Component API is to first read the \nOpenMPF Component API Overview\n and then \nreview the source\n for example OpenMPF Java detection components.\n\n\nDetection components are implemented by:\n\n\n\n\nExtending \nMPFDetectionComponentBase\n.\n\n\nBuilding the component into a jar. (See \nHelloWorldComponent pom.xml\n.)\n\n\nCreating a component Docker image. (See the \nREADME\n.)\n\n\n\n\nAPI Specification\n\n\nThe figure below presents a high-level component diagram of the Java Batch Component API:\n\n\n\n\nThe Node Manager is only used in a non-Docker deployment. In a Docker deployment the Component Executor is started by the Docker container itself.\n\n\nThe API consists of \nComponent Interfaces\n, which provide interfaces and abstract classes for developing components; \nJob Definitions\n, which define the work to be performed by a component; \nJob Results\n, which define the results generated by the component; and \nComponent Adapters\n, which provide default implementations of several of the \nMPFDetectionComponentInterface\n methods (see the \nMPFAudioAndVideoDetectionComponentAdapter\n for an example; \nTODO: implement those shown in the diagram\n). In the future, the API will also include \nComponent Utilities\n, which perform actions such as image flipping, rotation, and cropping.\n\n\nComponent Interfaces\n\n\n\n\nMPFComponentInterface\n - Interface for all Java components that perform batch processing.\n\n\nMPFComponentBase\n - An abstract baseline for components. Provides default implementations for \nMPFComponentInterface\n.\n\n\n\n\nDetection Component Interfaces\n\n\n\n\nMPFDetectionComponentInterface\n - Baseline interface for detection components.\n\n\nMPFDetectionComponentBase\n - An abstract baseline for detection components. Provides default implementations for \nMPFDetectionComponentInterface\n.
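To tie the overview together, a minimal detection component skeleton is sketched below. It is illustrative only: the class name is arbitrary, only the image methods are shown in full, and the remaining \ngetDetections\n overloads are assumed to reject unsupported data types as described above.\n\n\npublic class SampleDetectionComponent extends MPFDetectionComponentBase {\n\n @Override\n public boolean supports(MPFDataType dataType) {\n return dataType == MPFDataType.IMAGE;\n }\n\n @Override\n public List<MPFImageLocation> getDetections(MPFImageJob job)\n throws MPFComponentDetectionError {\n List<MPFImageLocation> locations = new ArrayList<>();\n // Detection logic goes here.\n return locations;\n }\n\n @Override\n public List<MPFVideoTrack> getDetections(MPFVideoJob job)\n throws MPFComponentDetectionError {\n throw new MPFComponentDetectionError(\n MPFDetectionError.MPF_UNSUPPORTED_DATA_TYPE, \"Video is not supported\", null);\n }\n\n // getDetections(MPFAudioJob) and getDetections(MPFGenericJob) would\n // throw MPF_UNSUPPORTED_DATA_TYPE in the same way.\n}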
Instead, it defines the contract that components must follow. Currently, the only supported type of batch component is \"DETECTION\". Those components should extend \nMPFDetectionComponentBase\n\n\n\n\nSee the latest source here.\n\n\nsetRunDirectory(String)\n\n\nSets the value to the full path of the parent folder above where the component is installed.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic void setRunDirectory(String runDirectory);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nrunDirectory\n\n\nString\n\n\nFull path of the parent folder above where the component is installed.\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nIMPORTANT:\n \nsetRunDirectory\n is called by the Component Executor to set the correct path. It is not necessary to call this method in your component implementation.\n\n\n\n\ngetRunDirectory()\n\n\nReturns the full path of the parent folder above where the component is installed.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic String getRunDirectory()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nString\n) Full path of the parent folder above where the component is installed.\n\n\n\n\n\n\ninit()\n\n\nPerforms any necessary startup tasks for the component. This will be executed once by the Component Executor, on component startup, before the first job, after \nsetRunDirectory\n.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic void init()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic void init() {\n // Setup logger, Load data models, etc.\n}\n\n\n\nclose()\n\n\nPerforms any necessary shutdown tasks for the component. This will be executed once by the Component Executor, on component shutdown, usually after the last job.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic void close()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic void close() {\n // Close file handlers, etc.\n}\n\n\n\ngetComponentType()\n\n\nAllows the Component API to determine the component \"type.\" Currently \nDETECTION\n is the only supported component type.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic MPFComponentType getComponentType()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nMPFComponentType\n) Currently, \nDETECTION\n is the only supported return value.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic MPFComponentType getComponentType() {\n return MPFComponentType.DETECTION;\n}\n\n\n\nDetection Component Interface\n\n\nThe \nMPFDetectionComponentInterface\n must be utilized by all OpenMPF Java detection components that perform batch processing.\n\n\nEvery batch detection component must define a \ncomponent\n class which implements the MPFComponentInterface. This is typically performed by extending \nMPFDetectionComponentBase\n, which extends \nMPFComponentBase\n and implements \nMPFDetectionComponentInterface\n.\n\n\nTo designate the component class, every batch detection component should include an applicationContext.xml which defines the \ncomponent\n bean. 
The \ncomponent\n bean class must implement \nMPFDetectionComponentInterface\n.\n\n\n\n\nIMPORTANT:\n Each batch detection component must implement all of the \ngetDetections()\n methods or extend from a superclass which provides implementations for them (see \nconvenience adapters\n).\n\n\nIf your component does not support a particular data type, it should simply:\n\n\nthrow new MPFComponentDetectionError(MPFDetectionError.MPF_UNSUPPORTED_DATA_TYPE);\n\n\n\n\nConvenience Adapters\n\n\nAs an alternative to extending \nMPFDetectionComponentBase\n directly, developers may extend a convenience adapter classes provided by OpenMPF.\n\n\nThese adapters provide default implementations of several methods in \nMPFDetectionComponentInterface\n and ensure that the component's logic properly extends from the Component API. This enables developers to concentrate on implementation of the detection algorithm.\n\n\nThe following adapter is provided:\n\n\n\n\nAudio And Video Detection Component Adapter (\nsource\n)\n\n\n\n\n\n\nExample: Using Adaptors to Provide Simple AudioVisual Handling:\n\nMany components designed to work on audio files, such as speech detection, are relevant to video files as well. Some of the tools for these components, however, only function on audio files (such as .wav, .mp3) and not video files (.avi, .mov, etc).\n\n\nThe \nMPFAudioAndVideoDetectionComponentAdapter\n adapter class implements the \ngetDetections(MPFVideoJob)\n method by translating the video request into an audio request. It builds a temporary audio file by ripping the audio from the video media input, translates the \nMPFVideoJob\n into an \nMPFAudioJob\n, and invokes \ngetDetections(MPFAudioJob)\n on the generated file. Once processing is done, the adapter translates the \nMPFAudioTrack\n list into an \nMPFVideoTrack\n list.\n\n\nSince only audio and video files are relevant to this adapter, it provides a default implementation of the \ngetDetections(MPFImageJob)\n method which throws \nnew MPFComponentDetectionError(MPFDetectionError.MPF_UNSUPPORTED_DATA_TYPE)\n.\n\n\nThe Sphinx speech detection component uses this adapter to run Sphinx speech detection on video files. Other components that need to process video files as audio may also use the adapter.\n\n\n\n\nsupports(MPFDataType)\n\n\nReturns the supported data types of the component.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic boolean supports(MPFDataType dataType)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\ndataType\n\n\nMPFDataType\n\n\nReturn true if the component supports IMAGE, VIDEO, AUDIO, and/or UNKNOWN (generic) processing.\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nboolean\n) True if the component supports the data type, otherwise false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\n// Sample Component that supports only image and video files\npublic boolean supports(MPFDataType dataType) {\n return dataType == MPFDataType.IMAGE || dataType == MPFDataType.VIDEO;\n}\n\n\n\ngetDetections(MPFImageJob)\n\n\nUsed to detect objects in image files. The MPFImageJob class contains the URI specifying the location of the image file.\n\n\nCurrently, the dataUri is always a local file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\". 
This is because all media is copied to the OpenMPF server before the job is executed.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List getDetections(MPFImageJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFImageJob\n\n\nClass containing details about the work to be performed. See \nMPFImageJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList\n) The \nMPFImageLocation\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List getDetections(MPFImageJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate image locations\n}\n\n\n\ngetDetections(MPFVideoJob)\n\n\nUsed to detect objects in a video.\n\n\nPrior to being sent to the component, videos are split into logical \"segments\" of video data and each segment (containing a range of frames) is assigned to a different job. Components are not guaranteed to receive requests in any order. For example, the first request processed by a component might receive a request for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List getDetections(MPFVideoJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFVideoJob\n\n\nClass containing details about the work to be performed. See \nMPFVideoJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList\n) The \nMPFVideoTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List getDetections(MPFVideoJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate video tracks\n}\n\n\n\ngetDetections(MPFAudioJob)\n\n\nUsed to detect objects in audio files. Currently, audio files are not logically segmented, so a job will contain the entirety of the audio file.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List getDetections(MPFAudioJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFAudioJob\n\n\nClass containing details about the work to be performed. See \nMPFAudioJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList\n) The \nMPFAudioTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List getDetections(MPFAudioJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate audio tracks\n}\n\n\n\ngetDetections(MPFGenericJob)\n\n\nUsed to detect objects in files that aren't video, image, or audio files. Such files are of the UNKNOWN type and handled generically. These files are not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List getDetections(MPFGenericJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFGenericJob\n\n\nClass containing details about the work to be performed. See \nMPFGenericJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList\n) The \nMPFGenericTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List getDetections(MPFGenericJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate generic tracks\n}\n\n\n\nMPFComponentDetectionError\n\n\nAn exception that occurs in a component. 
The exception must contain a reference to a valid \nMPFDetectionError\n.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFComponentDetectionError (\n MPFDetectionError error,\n String msg,\n Exception e\n)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nerror\n\n\nMPFDetectionError\n\n\nThe type of error generated by the component. See \nMPFDetectionError\n.\n\n\n\n\n\n\nmsg\n\n\nString\n\n\nThe detail message (which is saved for later retrieval by the \nThrowable.getMessage()\n method).\n\n\n\n\n\n\ne\n\n\nException\n\n\nThe cause (which is saved for later retrieval by the \nThrowable.getCause()\n method). A null value is permitted.\n\n\n\n\n\n\n\n\nDetection Job Classes\n\n\nThe following classes contain details about a specific job (work unit):\n\n\n\n\nMPFImageJob\n extends \nMPFJob\n\n\nMPFVideoJob\n extends \nMPFJob\n\n\nMPFAudioJob\n extends \nMPFJob\n\n\nMPFGenericJob\n extends \nMPFJob\n\n\n\n\nThe following classes define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\nMPFAudioTrack\n\n\nMPFGenericTrack\n\n\n\n\nMPFJob\n\n\nClass containing data used for detection of objects.\n\n\n\n\nConstructor(s):\n\n\n\n\nprotected MPFJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njobName \n\n\nString\n\n\nA specific name given to the job by the OpenMPF Framework. This value may be used, for example, for logging and debugging purposes.\n\n\n\n\n\n\ndataUri \n\n\nString\n\n\nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\".\n\n\n\n\n\n\njobProperties \n\n\nMap\n\n\nThe key corresponds to the property name specified in the component descriptor file described in \"Installing and Registering a Component\". Values are determined by an end user when creating a pipeline. \n Note: Only those property values specified by the user will be in the jobProperties map; for properties not contained in the map, the component must use a default value.\n\n\n\n\n\n\nmediaProperties \n\n\nMap\n\n\nMetadata about the media associated with the job. The key is the property name and value is the property value. The entries in the map vary depend on the job type. They are defined in the specific Job's API description.\n\n\n\n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. 
Docker container).\n\n\nMPFImageJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in image files.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFImageJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties)\n\n\n\npublic MPFImageJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n MPFImageLocation location)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \njobProperties\n\n \nMap\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of the image in pixels\n\n \nFRAME_HEIGHT\n : the height of the image in pixels\n\n \n\n May include the following key-value pairs:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \nHORIZONTAL_FLIP\n : true if the image is mirrored across the Y-axis, otherwise false\n\n \nEXIF_ORIENTATION\n : the standard EXIF orientation tag; a value between 1 and 8\n\n \n\n \n\n \n\n \n\n \nlocation\n\n \nMPFImageLocation\n\n \nAn \nMPFImageLocation\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nMPFVideoJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in video files.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFVideoJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startFrame,\n int stopFrame)\n\n\n\npublic MPFVideoJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startFrame,\n int stopFrame,\n MPFVideoTrack track)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \nstartFrame\n\n \nint\n\n \nThe first frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \nstopFrame\n\n \nint\n\n \nThe last frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \njobProperties\n\n \nMap\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of video in milliseconds\n\n \nFPS\n : frames per second (averaged for variable frame rate video)\n\n \nFRAME_COUNT\n : the number of frames in the video\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of a frame in pixels\n\n \nFRAME_HEIGHT\n : the height of a frame in pixels\n\n \nHAS_CONSTANT_FRAME_RATE\n : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable 
frame rate or the type of frame rate cannot be determined\n\n \n\n May include the following key-value pair:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \n\n \n\n \n\n \n\n \ntrack\n\n \nMPFVideoTrack\n\n \nAn \nMPFVideoTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\n\n\nIMPORTANT:\n \nFRAME_INTERVAL\n is a common job property that many components support. For frame intervals greater than 1, the component must look for detections starting with the first frame, and then skip frames as specified by the frame interval, until or before it reaches the stop frame. For example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component must look for objects in frames numbered 0, 2, 4, 6, ..., 98.\n\n\n\n\nMPFAudioJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in audio files.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFAudioJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startTime,\n int stopTime)\n\n\n\npublic MPFAudioJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startTime,\n int stopTime,\n MPFAudioTrack track)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \nstartTime\n\n \nint\n\n \nThe time (0-based index, in ms) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstopTime\n\n \nint\n\n \nThe time (0-based index, in ms) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \njobProperties\n\n \nMap\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of audio file in milliseconds\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \ntrack\n\n \nMPFAudioTrack\n\n \nAn \nMPFAudioTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nMPFGenericJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in a file that isn't a video, image, or audio file. The file is of the UNKNOWN type and handled generically. 
The file is not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPGenericJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties)\n\n\n\npublic MPFGenericJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n MPFGenericTrack track) {\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \nstartTime\n\n \nint\n\n \nThe time (0-based index, in ms) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstopTime\n\n \nint\n\n \nThe time (0-based index, in ms) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \njobProperties\n\n \nMap\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pair:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \ntrack\n\n \nMPFGenericTrack\n\n \nAn \nMPFGenericTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nDetection Job Result Classes\n\n\nMPFImageLocation\n\n\nClass used to store the location of detected objects in an image.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFImageLocation(\n int xLeftUpper,\n int yLeftUpper,\n int width,\n int height,\n float confidence,\n Map detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nxLeftUpper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\nyLeftUpper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetectionProperties\n\n\nMap\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the properties map. 
For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is \nCLASSIFICATION\n and the value is the type of object detected.\n\n\nMap detectionProperties = new HashMap();\ndetectionProperties.put(\"CLASSIFICATION\", \"backpack\");\nMPFImageLocation imageLocation = new MPFImageLocation(0, 0, 100, 100, 1.0, detectionProperties);\n\n\n\nMPFVideoTrack\n\n\nClass used to store the location of detected objects in an image.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFVideoTrack(\n int startFrame,\n int stopFrame,\n Map frameLocations,\n float confidence,\n Map detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstartFrame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstopFrame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nframeLocations\n\n\nMap\n\n\nA map of individual detections. The key for each map entry is the frame number where the detection was generated, and the value is a \nMPFImageLocation\n calculated as if that frame was a still image. Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame. In some cases, frames are deliberately skipped, as when a FRAME_INTERVAL > 1 is specified\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetectionProperties\n\n\nMap\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFVideoTrack.detectionProperties\n do not show up in the JSON output object or are used by the WFM in any way.\n\n\n\n\nA component that detects text could add an entry to \ndetectionProperties\n where the key is \nTRANSCRIPT\n and the value is a string representing the text found in the video segment.\n\n\nMap detectionProperties = new HashMap();\ndetectionProperties.put(\"TRANSCRIPT\", \"RE5ULTS FR0M A TEXT DETECTER\");\nMPFVideoTrack videoTrack = new MPFVideoTrack(0, 5, frameLocations, 1.0, detectionProperties);\n\n\n\nMPFAudioTrack\n\n\nClass used to store the location of detected objects in an image.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFAudioTrack(\n int startTime,\n int stopTime,\n float confidence,\n Map detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstartTime\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event started.\n\n\n\n\n\n\nstopTime\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event stopped.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. 
\nMPFGenericTrack\n\n\nClass used to store the location of detected objects in a file that is not a video, image, or audio file. The file is of the UNKNOWN type and handled generically.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFGenericTrack(\n float confidence,\n Map<String, String> detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetectionProperties\n\n\nMap<String, String>\n\n\nOptional additional information about the detection. There is no restriction on the keys or the number of entries that can be added to the properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n
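\nExample:\n\n\nA component that processes generic files could, for instance, return one track summarizing the whole file. A minimal sketch, assuming a hypothetical property key, with -1.0f used because no meaningful confidence value is available:\n\n\nMap<String, String> detectionProperties = new HashMap<>();\ndetectionProperties.put(\"PAGE_COUNT\", \"12\"); // hypothetical key and value\nMPFGenericTrack genericTrack = new MPFGenericTrack(-1.0f, detectionProperties);\n\n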
\nEnumeration Types\n\n\nMPFDetectionError\n\n\nEnum used to indicate the status of \ngetDetections\n in an \nMPFComponentDetectionError\n. A component is not required to support all error types.\n\n\n\n\n\n\n\n\nENUM\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nMPF_DETECTION_SUCCESS\n\n\nThe component function completed successfully.\n\n\n\n\n\n\nMPF_OTHER_DETECTION_ERROR_TYPE\n\n\nThe component method has failed for a reason that is not captured by any of the other error codes.\n\n\n\n\n\n\nMPF_DETECTION_NOT_INITIALIZED\n\n\nThe initialization of the component, or the initialization of any of its dependencies, has failed for any reason.\n\n\n\n\n\n\nMPF_UNSUPPORTED_DATA_TYPE\n\n\nThe job passed to a component requests processing of an unsupported data type. For instance, a component that is only capable of processing audio files should return this error code if a video or image job request is received.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_DATAFILE\n\n\nThe data file to be processed could not be opened for any reason, such as a permissions failure or an unreachable URI. \nUse MPF_COULD_NOT_OPEN_MEDIA for media files.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_DATAFILE\n\n\nThere is a failure reading data from a successfully opened input data file. \nUse MPF_COULD_NOT_READ_MEDIA for media files.\n\n\n\n\n\n\nMPF_FILE_WRITE_ERROR\n\n\nThe component received a failure for any reason when attempting to write to a file.\n\n\n\n\n\n\nMPF_BAD_FRAME_SIZE\n\n\nThe frame data retrieved has an incorrect or invalid frame size.\n\n\n\n\n\n\nMPF_DETECTION_FAILED\n\n\nGeneral failure of a detection algorithm. This does not indicate a lack of detections found in the media, but rather a breakdown in the algorithm that makes it impossible to continue trying to detect objects.\n\n\n\n\n\n\nMPF_INVALID_PROPERTY\n\n\nThe component received a property that is unrecognized or has an invalid/out-of-bounds value.\n\n\n\n\n\n\nMPF_MISSING_PROPERTY\n\n\nThe component received a job that is missing a required property.\n\n\n\n\n\n\nMPF_MEMORY_ALLOCATION_FAILED\n\n\nThe component failed to allocate memory for any reason.\n\n\n\n\n\n\nMPF_GPU_ERROR\n\n\nThe job was configured to execute on a GPU, but there was an issue with the GPU or no GPU was detected.\n\n\n\n\n\n\nMPF_NETWORK_ERROR\n\n\nThe component failed to communicate with an external system over the network. The system may not be available or there may have been a timeout.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_MEDIA\n\n\nThe media file to be processed could not be opened for any reason, such as a permissions failure or an unreachable URI.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_MEDIA\n\n\nThere is a failure reading data from a successfully opened media file.\n\n\n\n\n\n\n\n\nUtility Classes\n\n\nTODO: Implement Java utility classes\n\n\nJava Component Build Environment\n\n\nA Java Component must be built using a version of the Java SDK that is compatible with the one used to build the Java Component Executor. The OpenMPF Java Component Executor is currently built using OpenJDK 11.0.11. In general, the Java SDK is backwards compatible.\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and Java SDK libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through multiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input (i.e., when processing the same \nMPFJob\n).\n\n\nComponent Structure for non-Docker Deployments\n\n\nIt is recommended that Java components be organized according to the following directory structure:\n\n\ncomponentName\n\u251c\u2500\u2500 config - Other component-specific configuration\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 lib - All libraries required by the component\n \u2514\u2500\u2500 libComponentName.jar - Compiled component library\n\n\n\nOnce built, components should be packaged into a .tar.gz containing the contents of the directory shown above.\n\n\nLogging\n\n\nIt is recommended to use \nslf4j\n with \nlog4j2\n for OpenMPF Java Component logging. Multiple instances of the same component can log to the same file. Logging content can span multiple lines.\n\n\nLog files should be output to:\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/<componentName>.log\n\n\nEach log statement must take the form:\n\nDATE TIME LEVEL CONTENT\n\n\nThe following log LEVELs are supported:\n \nFATAL, ERROR, WARN, INFO, DEBUG, TRACE\n.\n\n\nFor example:\n\n2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]
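\n\n\nA log line like the one above would come from an ordinary slf4j call. A minimal sketch, assuming a hypothetical component class (the log4j2 configuration below supplies the date, time, and level formatting):\n\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\npublic class SampleComponent {\n private static final Logger LOG = LoggerFactory.getLogger(SampleComponent.class);\n\n public void start() {\n // Rendered as \"DATE TIME LEVEL CONTENT\" by the pattern layout.\n LOG.info(\"Starting sample-component: [ OK ]\");\n }\n}\n\n\n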
The following log4j2 configuration can be used to match the format of other OpenMPF logs:\n\n\n<Configuration>\n <Appenders>\n <File name=\"FILE\" fileName=\"${env:MPF_LOG_PATH}/${env:THIS_MPF_NODE}/log/sample-component-detection.log\">\n <PatternLayout pattern=\"%date %level [%thread] %logger{1.} - %msg%n\"/>\n </File>\n <Console name=\"STDOUT\" target=\"SYSTEM_OUT\">\n <PatternLayout pattern=\"%date %level [%thread] %logger{1.} - %msg%n\"/>\n </Console>\n </Appenders>\n <Loggers>\n <Root level=\"INFO\">\n <AppenderRef ref=\"STDOUT\"/>\n <AppenderRef ref=\"FILE\"/>\n </Root>\n </Loggers>\n</Configuration>\n", "title": "Java Batch Component API" }, { @@ -1277,7 +1277,7 @@ }, { "location": "/GPU-Support-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nA subset of OpenMPF components are capable of running on NVIDIA GPUs. GPU support is through the NVIDIA CUDA libraries \nand runtime. This guide provides information needed for new component developers that would like to use NVIDIA GPUs \nto accelerate their component processing, and for users of the existing components that provide GPU support.\n\n\nBuilding a Component\n\n\nOpenMPF components that use GPUs are built with the NVIDIA nvcc compiler. Information about the nvcc compiler can be \nfound \nhere\n. The compiler accepts a number of \nflags to optimize the code generated, and the output of the compiler is called a \"fatbin\", since it may contain \nversions of the CUDA code compiled for multiple GPU architectures. This section discusses the nvcc compiler flags that \nare used within OpenMPF to tell the nvcc compiler what to include in the compiled output.\n\n\nThe nvcc compiler can generate two types of code: ELF code for a specific GPU architecture, and PTX code, which is the \nNVIDIA virtual machine and instruction set architecture that is generated in the first phase of nvcc compilation. \nYou can learn more about PTX \nhere\n. A fatbin may \nhave one or the other type of code, or both, for one or a set of different architectures. \n\n\nBy default, the OpenMPF components are built for maximum portability across NVIDIA GPU architectures. The nvcc flags \nto accomplish this are described in this \n\ntable\n. \nOpenMPF uses the \n-gencode\n flag, with the \n-arch=compute_30\n and \n-code=compute_30\n flags. This generates PTX code \nfor the minimum compute capability; at runtime, the NVIDIA driver will just-in-time compile the PTX code for the \narchitecture the code is running on.\n\n\nCustomizing the GPU Compile Flags\n\n\nOpenMPF has several GPU components. Initially, we tested a GPU component on a variety of NVIDIA GPU architectures and\nfound an insignificant difference in the run time for different architectures using this approach, and so we have opted\nto provide maximum runtime portability. 
The nvcc compiler flags\nare configured by setting the \nCUDA_NVCC_FLAGS\n CMake variable in the individual component's CMakeLists.txt file, e.g.:\n\n\nset(CUDA_NVCC_FLAGS --compiler-options -fPIC -gencode arch=compute_30,code=compute_30)\n\n\n\nOpenCV GPU Support\n\n\nIn OpenMPF, OpenCV is built with CUDA support, including the CUDA Deep Neural Network library, cuDNN. C++ components\nthat use OpenCV CUDA support will have built-in access to it through the base C++ builder and executor Docker images, and\nthe above-mentioned GPU compile flags will have already been set when OpenCV was built.\n\n\n\n\nNOTE:\n Most OpenMPF GPU components are written so that they can run on the CPU only, as well as using GPU hardware. \nIf the component is built on a system that does not have the NVIDIA CUDA Toolkit installed, then the build will \ndefault to compiling for the CPU. It is recommended that developers of new GPU components make every attempt to \nfollow this model, so that other users are not burdened with installing the NVIDIA CUDA Toolkit when they have no \nplans to run on GPU hardware.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nA subset of OpenMPF components are capable of running on NVIDIA GPUs. GPU support is through the NVIDIA CUDA libraries \nand runtime. This guide provides information needed for new component developers that would like to use NVIDIA GPUs \nto accelerate their component processing, and for users of the existing components that provide GPU support.\n\n\nBuilding a Component\n\n\nOpenMPF components that use GPUs are built with the NVIDIA nvcc compiler. Information about the nvcc compiler can be \nfound \nhere\n. The compiler accepts a number of \nflags to optimize the code generated, and the output of the compiler is called a \"fatbin\", since it may contain \nversions of the CUDA code compiled for multiple GPU architectures. This section discusses the nvcc compiler flags that \nare used within OpenMPF to tell the nvcc compiler what to include in the compiled output.\n\n\nThe nvcc compiler can generate two types of code: ELF code for a specific GPU architecture, and PTX code, which is the \nNVIDIA virtual machine and instruction set architecture that is generated in the first phase of nvcc compilation. \nYou can learn more about PTX \nhere\n. A fatbin may \nhave one or the other type of code, or both, for one or a set of different architectures. \n\n\nBy default, the OpenMPF components are built for maximum portability across NVIDIA GPU architectures. The nvcc flags \nto accomplish this are described in this \n\ntable\n. \nOpenMPF uses the \n-gencode\n flag, with the \n-arch=compute_30\n and \n-code=compute_30\n flags. This generates PTX code \nfor the minimum compute capability; at runtime, the NVIDIA driver will just-in-time compile the PTX code for the \narchitecture the code is running on.\n\n\nCustomizing the GPU Compile Flags\n\n\nOpenMPF has several GPU components. Initially, we tested a GPU component on a variety of NVIDIA GPU architectures and\nfound an insignificant difference in the run time for different architectures using this approach, and so we have opted\nto provide maximum runtime portability. 
For any new components that may be developed, this may not be the case, and\nsimilar testing should be undertaken to determine the correct set of flags for that component. The nvcc compiler flags\nare configured by setting the \nCUDA_NVCC_FLAGS\n CMake variable in the individual component's CMakeLists.txt file, e.g.:\n\n\nset(CUDA_NVCC_FLAGS --compiler-options -fPIC -gencode arch=compute_30,code=compute_30)\n\n\n\nOpenCV GPU Support\n\n\nIn OpenMPF, OpenCV is built with CUDA support, including the CUDA Deep Neural Network library, cuDNN. C++ components\nthat use OpenCV CUDA support will have built-in access to it through the base C++ builder and executor Docker images, and\nthe above-mentioned GPU compile flags will have already been set when OpenCV was built.\n\n\n\n\nNOTE:\n Most OpenMPF GPU components are written so that they can run on the CPU only, as well as using GPU hardware. \nIf the component is built on a system that does not have the NVIDIA CUDA Toolkit installed, then the build will \ndefault to compiling for the CPU. It is recommended that developers of new GPU components make every attempt to \nfollow this model, so that other users are not burdened with installing the NVIDIA CUDA Toolkit when they have no \nplans to run on GPU hardware.", "title": "GPU Support Guide" }, { @@ -1302,7 +1302,7 @@ }, { "location": "/Contributor-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nHigh-level Overview\n\n\nWe're excited that you're considering contributing to the OpenMPF project! If you have any questions about the process or how to get involved, please feel free to send us an \ne-mail\n with your question.\n\n\nWe encourage you to read the remainder of the guide as well as review the project's \nLicense\n and other \nDocumentation\n.\n\n\nThe OpenMPF project consists of the following repositories:\n\n\n\n\nopenmpf/openmpf\n\n\nopenmpf/openmpf-components\n\n\nopenmpf/openmpf-contrib-components\n\n\nopenmpf/openmpf-build-tools\n\n\nopenmpf/openmpf-cpp-component-sdk\n\n\nopenmpf/openmpf-python-component-sdk\n \n\n\nopenmpf/openmpf-java-component-sdk\n\n\nopenmpf/openmpf-projects\n\n\nopenmpf/openmpf-docker\n\n\n\n\nWork across the project is tracked using our \nworkboard\n.\n\n\nContribution Guidelines\n\n\nWe welcome all contributions that are made in a good faith effort to meet the following criteria:\n\n\n\n\nIn line with the spirit of the project. Refer to the \nOpenMPF Overview\n.\n\n\nAddresses an issue in the issue tracker. If an issue doesn't exist yet, create one so that it can be discussed among the OpenMPF community.\n\n\nFunctionally correct and logically sound. All code must pass a code review and round of regression tests.\n\n\nDesigned to use existing interfaces, super classes, and utilities\n\n\nMakes use of well-known design patterns, polymorphism, and encapsulation where possible\n\n\nEmploys best practices for integrating with the OpenMPF architecture. 
Refer to the \nC++ Batch Component API\n, \nC++ Streaming Component API\n, \nPython Batch Component API\n, and \nJava Batch Component API\n.\n\n\nEmploys \nstandard coding style\n that is consistent with the rest of the project\n\n\nSufficiently commented and, if necessary, comes with appropriate documentation\n\n\nComes with sufficient test cases\n\n\nDoes not introduce software vulnerabilities\n\n\n\n\nCode Merging Workflow\n\n\nContributor Instructions\n\n\nPerform the following instructions to create a feature branch off of develop, commit your changes, push your branch, and create a pull request. Before the pull request is accepted a Jenkins build must pass and an OpenMPF project administrator must review the changes. We do not have a public Jenkins server so a project administrator will have to start the build for you.\n\n\n\n\nCreate a feature branch off of the latest version of develop\n\n\n\n\ncd /path/to/repo\ngit checkout develop\ngit pull\ngit checkout -b \n\n\n\n\n\nMake Commits\n\n\n\n\ngit add .\ngit commit\n\n\n\n\n\nPush your feature branch.\n\n\n\n\ngit push -u origin \n\n\n\n\n\nCreate a pull request.\n\n\nGo to GitHub page for repo.\n\n\nClick \"New pull request\"\n\n\nChange the dropdown that says \"base: master\" to develop\n\n\nChange the dropdown that says \"compare: master\" to your feature branch\n\n\nIf a message saying \"Can\u2019t automatically merge.\" appears to the right of the dropdowns, pull the latest version of develop, merge your feature branch with develop, and push it again:\n\n\n\n\ngit checkout develop\ngit pull\ngit checkout \ngit merge develop\n\n# Fix conflicts\ngit add .\ngit commit\ngit push\n\n\n\n\n\nClick on the gear next to \"Reviewers\" and select a reviewer\n\n\nClick \"Create pull request\"\n\n\nGet approval\n\n\nAfter creating the pull request you will see that the pull request says \"Review required\" and \"Some checks haven't completed yet.\"\n\n\nAn OpenMPF project administrator will start a Jenkins build. Once the build completes, Jenkins will post a status check to the pull request.\n\n\nIf the Jenkins build passes, the pull request page will say \"All checks have passed\"\n\n\nIf the Jenkins build fails, a project administrator will provide further guidance.\n\n\n\n\n\n\nAn OpenMPF project administrator will review the pull request.\n\n\nIf the reviewer approves the changes, the reviewer will merge the change in to develop and close the pull request.\n\n\nIf the reviewer requests changes, you will need to make changes to your feature branch and push them. After you push your changes, the Jenkins status check will be reset. A project administrator will run another Jenkins build that will contain your most recent changes.\n\n\n\n\n\n\n\n\nIn order to be accepted and merged, pull requests need to comply with the \nContribution Guidelines\n. In cases where an issue is found, please refer to the reviewer's comments for more information on how to update your code. This review and acceptance process applies to all of the OpenMPF repositories, including the OpenMPF core and all of the OpenMPF components.\n\n\nLarge pull requests should be split up into smaller pull requests where possible. This will make it easier to review the code. In general, each pull request should add new functionality, update an existing feature, or fix a bug. We strive to keep the develop branch stable. 
If merging a smaller pull request will break the system before additional pull requests can be merged, then it's generally a better idea to merge one larger pull request.\n\n\nNote that GitHub has a 100 MB file size limitation. There is currently no way to push files to any of the OpenMPF repositories that are larger than this size.\n\n\nReviewer Instructions\n\n\n\n\nGo to the GitHub page for the pull request\n\n\nClick on the \"Files changed\"\n\n\nReview the code before you start a Jenkins build. You don't need to post your review comments immediately, but the Jenkins machine is on an internal network so for security you must review the code before you start the Jenkins build.\n\n\nAfter you have looked at the code, start an instance of the openmpf-github-with-pull-request Jenkins build.\n\n\nIf the Jenkins build fails, you will need to work with the developer to get the tests to pass.\n\n\nCheckout their branch locally to test it\n\n\n\n\ngit fetch\ngit checkout \n\n\n\n\n\nOn the pull request page click \"Add your review\"\n\n\nAdd comments\n\n\nClick the green \"Review changes\" dropdown\n\n\nIf changes are necessary, click the radio button to \"Request changes\"\n\n\nAfter the developer makes the necessary changes, go back to the pull request page\n\n\nReview the changes\n\n\nStart another instance of the openmpf-github-with-pull-request Jenkins build.\n\n\nIf you are satisfied with the changes, click the \"Review changes\" dropdown\n\n\nSelect the \"Approve\" radio button, and click \"Submit review\"\n\n\nClick \"Squash and merge\" on the pull request page\n\n\nIf you don't see a \"Squash and merge\" button, find the button that says \"Merge pull request\", click the upside down triangle on the right side of the button, select \"Squash and merge\"\n\n\nA text box showing the commit message will appear above the \"Squash and merge button\". Edit message if necessary.\n\n\nClick \"Confirm squash and merge\"\n\n\nA message will pop up saying \"Pull request successfully merged and closed. You\u2019re all set\u2014the \n branch can be safely deleted.\"\n\n\nClick \"Delete branch\"\n\n\nUpdate the openmpf-projects' develop branch with the new changes:\n\n\n\n\ncd openmpf-projects\ngit checkout develop\ngit pull\ngit submodule foreach 'git checkout develop'\ngit submodule foreach 'git pull'\ngit add .\ngit commit\ngit push\n\n\n\nHotfix Workflow\n\n\nWhen an OpenMPF project administrator determines that a code change is urgently needed to fix a bug in previously released code, the pull request workflow is modified in the following ways.\n\n\n\n\nCreate your feature branch off of the master branch, not develop. The convention is to include the prefix \"hotfix/\" in the name of the branch.\n\n\nWhen creating your pull request on the web page for the repo you are modifying, leave the 'base:master' dropdown menu as is, and change the 'compare:' dropdown to the name of your hotfix branch.\n\n\nAfter the PR has been reviewed and accepted, land the PR as described above.\n\n\nNext, create a new branch off of the develop branch. 
The convention is to use the prefix \"hf-merge/\" in the name of the branch.\n\n\nMerge the master branch into your hf-merge branch.\n\n\n\n\ngit checkout master\ngit pull\ngit checkout develop\ngit pull\ngit checkout -b hf-merge/\ngit merge master\ngit push -u\n\n\n\n\n\nCreate a pull request for this branch as described above in the Contributor Instructions, using develop as the 'base:' branch, and your hf-merge branch as the 'compare:' branch.\n\n\nThe remainder of the process for reviewing and landing a PR to the develop branch must be followed at this point, with one exception. You should merge your branch to the develop branch on the command line, instead of through the GitHub UI, to preserve commits and not squash them into one.\n\n\n\n\ngit checkout develop\ngit pull\ngit merge hf-merge/\n\n\n\nMake sure that the merge is a fast-forward merge.\n\n\ngit push\n\n\n\nVersioning a New Release\n\n\nThe decision to version a new release is based on the following factors:\n\n\n\n\nChanges have been made to the API which break backwards compatibility. Refer to the \nSemantic Versioning Guide\n.\n\n\nThe system has been updated with major features and/or enhancements.\n\n\nThe system has been updated to work with new versions of critical system dependencies, such as OpenCV and Spring.\n\n\nThe packaging and/or deployment process has changed significantly.\n\n\nIt's been a long time since the last release and many small updates have been made to the system.\n\n\n\n\nWhen the OpenMPF team agrees that it's time to version a new release of the system, a project administrator will create a release branch in each repository off of the develop branch. The name of a release branch takes the form \nr..\n. For example, \nr0.10.0\n. Also, the first commit in the release branch will be tagged as release candidate 1. For example, \nr0.10.0-rc1\n. Beta testers will then have the opportunity to test the release candidate 1 code.\n\n\nIf a bug is found in the release candidate code, then developers should land the bug fix to the release branch via a pull request. Once it has landed, the most recent commit will be tagged as release candidate 2. For example, \nr0.10.0-rc2\n. Beta testers will then have the opportunity to test the release candidate 2 code. The release candidate number will increase by one each time bugs are fixed. The bug fix code should be merged into the develop branch after it lands to the release branch.\n\n\nIf no bugs are found in the release candidate code for a period of time (generally, a month) then the release candidate will be finalized. The release candidate branch for each repo will be merged into the master branch for that repo. That commit on the master branch will be tagged with the release number. For example, \nr0.10.0\n.\n\n\nIf a critical bug fix needs to be made to the master branch, this is known has a \"hot fix\". Developers should land a hot fix to the master branch via a pull request. Once the code lands, the commit will be tagged by incrementing the \n\n number. For example, \nr0.10.1\n. The bug fix code should be merged into the develop branch after it lands to the master branch.\n\n\nNote that you should not use the \n--no-ff\n option when merging one branch into another. Doing so will make the commit history more verbose and difficult to follow.\n\n\nThis process is based on \nGitFlow\n.\n\n\nAdding New Components\n\n\nIn general, a new component will initially go in the \nopenmpf-contrib-components\n repository. 
That is a holding ground until it can be transitioned to the \nopenmpf-components\n repository. To be a candidate for transition, it must meet the following criteria:\n\n\n\n\nIs strongly in line with the spirit of the project and there is a commitment to maintain and update the code as the project evolves\n\n\nFully licensed under Apache 2.0 or a compatible license. All source code must be provided\n\n\nComes with sufficient unit, system, and/or integration tests with a strong focus on regression testing\n\n\n\n\nNote that new components should have a README.md file, LICENSE file, COPYING file, and optionally a NOTICE file. The LICENSE file should contain information about all of the licenses in the code base, including those licenses for code you didn't write.\n\n\nCoding Style\n\n\nThe following list of style guides provide a comprehensive explanation of some of the best coding practices for the programming languages used in the OpenMPF project:\n\n\n\n\nGoogle C++ Style Guide\n\n\nGoogle Python Style Guide\n\n\nGoogle Java Style Guide\n\n\nGoogle JavaScript Style Guide\n\n\n\n\nGenerally speaking, when writing new code, please refer to existing code in the repositories and match the style. Most style issues boil down to inconsistency. Not all of our code adheres to these style guidelines, but we are striving to improve it.\n\n\nUpdating Online Documentation\n\n\nOur \nopenmpf.github.io repo\n \nrepo is forked from \nBeautiful Jekyll\n. \nIn general, everything within \nopenmpf.github.io/docs\n is part of a \nRead the Docs subsite within our overall Beautiful Jekyll site.\n\n\nTo build the site \nDocker\n must be \ninstalled. After making your changes run \n./build-site.sh\n from within the \ntop-level \nopenmpf.github.io\n directory. To view your changes locally, you can \nrun \n./build-site.sh serve\n and then browse to \nhttp://localhost:4000\n.\n\n\nCommitting Changes\n\n\n\nWhen your changes look good, make sure to run the \n./build-site.sh\n command \nexplained above to generate the HTML site content. Commit all of the generated \nfiles and generate a pull request to merge them into the develop branch. \nNote that \n_site\n is in \n.gitignore\n and should not be committed.\n\n\nWhen a commit is made to the master branch on GitHub, \nthe \nhttps://openmpf.github.io/docs/site/\n page will automatically update \n(often within 5 minutes).", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nHigh-level Overview\n\n\nWe're excited that you're considering contributing to the OpenMPF project! 
If you have any questions about the process or how to get involved, please feel free to send us an \ne-mail\n with your question.\n\n\nWe encourage you to read the remainder of the guide as well as review the project's \nLicense\n and other \nDocumentation\n.\n\n\nThe OpenMPF project consists of the following repositories:\n\n\n\n\nopenmpf/openmpf\n\n\nopenmpf/openmpf-components\n\n\nopenmpf/openmpf-contrib-components\n\n\nopenmpf/openmpf-build-tools\n\n\nopenmpf/openmpf-cpp-component-sdk\n\n\nopenmpf/openmpf-python-component-sdk\n \n\n\nopenmpf/openmpf-java-component-sdk\n\n\nopenmpf/openmpf-projects\n\n\nopenmpf/openmpf-docker\n\n\n\n\nWork across the project is tracked using our \nworkboard\n.\n\n\nContribution Guidelines\n\n\nWe welcome all contributions that are made in a good faith effort to meet the following criteria:\n\n\n\n\nIn line with the spirit of the project. Refer to the \nOpenMPF Overview\n.\n\n\nAddresses an issue in the issue tracker. If an issue doesn't exist yet, create one so that it can be discussed among the OpenMPF community.\n\n\nFunctionally correct and logically sound. All code must pass a code review and round of regression tests.\n\n\nDesigned to use existing interfaces, super classes, and utilities\n\n\nMakes use of well-known design patterns, polymorphism, and encapsulation where possible\n\n\nEmploys best practices for integrating with the OpenMPF architecture. Refer to the \nC++ Batch Component API\n, \nC++ Streaming Component API\n, \nPython Batch Component API\n, and \nJava Batch Component API\n.\n\n\nEmploys \nstandard coding style\n that is consistent with the rest of the project\n\n\nSufficiently commented and, if necessary, comes with appropriate documentation\n\n\nComes with sufficient test cases\n\n\nDoes not introduce software vulnerabilities\n\n\n\n\nCode Merging Workflow\n\n\nContributor Instructions\n\n\nPerform the following instructions to create a feature branch off of develop, commit your changes, push your branch, and create a pull request. Before the pull request is accepted a Jenkins build must pass and an OpenMPF project administrator must review the changes. We do not have a public Jenkins server so a project administrator will have to start the build for you.\n\n\n\n\nCreate a feature branch off of the latest version of develop\n\n\n\n\ncd /path/to/repo\ngit checkout develop\ngit pull\ngit checkout -b \n\n\n\n\n\nMake Commits\n\n\n\n\ngit add .\ngit commit\n\n\n\n\n\nPush your feature branch.\n\n\n\n\ngit push -u origin \n\n\n\n\n\nCreate a pull request.\n\n\nGo to GitHub page for repo.\n\n\nClick \"New pull request\"\n\n\nChange the dropdown that says \"base: master\" to develop\n\n\nChange the dropdown that says \"compare: master\" to your feature branch\n\n\nIf a message saying \"Can\u2019t automatically merge.\" appears to the right of the dropdowns, pull the latest version of develop, merge your feature branch with develop, and push it again:\n\n\n\n\ngit checkout develop\ngit pull\ngit checkout \ngit merge develop\n\n# Fix conflicts\ngit add .\ngit commit\ngit push\n\n\n\n\n\nClick on the gear next to \"Reviewers\" and select a reviewer\n\n\nClick \"Create pull request\"\n\n\nGet approval\n\n\nAfter creating the pull request you will see that the pull request says \"Review required\" and \"Some checks haven't completed yet.\"\n\n\nAn OpenMPF project administrator will start a Jenkins build. 
Once the build completes, Jenkins will post a status check to the pull request.\n\n\nIf the Jenkins build passes, the pull request page will say \"All checks have passed\"\n\n\nIf the Jenkins build fails, a project administrator will provide further guidance.\n\n\n\n\n\n\nAn OpenMPF project administrator will review the pull request.\n\n\nIf the reviewer approves the changes, the reviewer will merge the change into develop and close the pull request.\n\n\nIf the reviewer requests changes, you will need to make changes to your feature branch and push them. After you push your changes, the Jenkins status check will be reset. A project administrator will run another Jenkins build that will contain your most recent changes.\n\n\n\n\n\n\n\n\nIn order to be accepted and merged, pull requests need to comply with the \nContribution Guidelines\n. In cases where an issue is found, please refer to the reviewer's comments for more information on how to update your code. This review and acceptance process applies to all of the OpenMPF repositories, including the OpenMPF core and all of the OpenMPF components.\n\n\nLarge pull requests should be split up into smaller pull requests where possible. This will make it easier to review the code. In general, each pull request should add new functionality, update an existing feature, or fix a bug. We strive to keep the develop branch stable. If merging a smaller pull request will break the system before additional pull requests can be merged, then it's generally a better idea to merge one larger pull request.\n\n\nNote that GitHub has a 100 MB file size limitation. There is currently no way to push files to any of the OpenMPF repositories that are larger than this size.\n\n\nReviewer Instructions\n\n\n\n\nGo to the GitHub page for the pull request\n\n\nClick on the \"Files changed\" tab\n\n\nReview the code before you start a Jenkins build. You don't need to post your review comments immediately, but the Jenkins machine is on an internal network so for security you must review the code before you start the Jenkins build.\n\n\nAfter you have looked at the code, start an instance of the openmpf-github-with-pull-request Jenkins build.\n\n\nIf the Jenkins build fails, you will need to work with the developer to get the tests to pass.\n\n\nCheck out their branch locally to test it\n\n\n\n\ngit fetch\ngit checkout <branch name>\n\n\n\n\n\nOn the pull request page click \"Add your review\"\n\n\nAdd comments\n\n\nClick the green \"Review changes\" dropdown\n\n\nIf changes are necessary, click the radio button to \"Request changes\"\n\n\nAfter the developer makes the necessary changes, go back to the pull request page\n\n\nReview the changes\n\n\nStart another instance of the openmpf-github-with-pull-request Jenkins build.\n\n\nIf you are satisfied with the changes, click the \"Review changes\" dropdown\n\n\nSelect the \"Approve\" radio button, and click \"Submit review\"\n\n\nClick \"Squash and merge\" on the pull request page\n\n\nIf you don't see a \"Squash and merge\" button, find the button that says \"Merge pull request\", click the upside-down triangle on the right side of the button, select \"Squash and merge\"\n\n\nA text box showing the commit message will appear above the \"Squash and merge\" button. Edit the message if necessary.\n\n\nClick \"Confirm squash and merge\"\n\n\nA message will pop up saying \"Pull request successfully merged and closed. 
You\u2019re all set\u2014the \n<branch name>\n branch can be safely deleted.\"\n\n\nClick \"Delete branch\"\n\n\nUpdate the openmpf-projects' develop branch with the new changes:\n\n\n\n\ncd openmpf-projects\ngit checkout develop\ngit pull\ngit submodule foreach 'git checkout develop'\ngit submodule foreach 'git pull'\ngit add .\ngit commit\ngit push\n\n\n\nHotfix Workflow\n\n\nWhen an OpenMPF project administrator determines that a code change is urgently needed to fix a bug in previously released code, the pull request workflow is modified in the following ways.\n\n\n\n\nCreate your feature branch off of the master branch, not develop. The convention is to include the prefix \"hotfix/\" in the name of the branch.\n\n\nWhen creating your pull request on the web page for the repo you are modifying, leave the 'base:master' dropdown menu as is, and change the 'compare:' dropdown to the name of your hotfix branch.\n\n\nAfter the PR has been reviewed and accepted, land the PR as described above.\n\n\nNext, create a new branch off of the develop branch. The convention is to use the prefix \"hf-merge/\" in the name of the branch.\n\n\nMerge the master branch into your hf-merge branch.\n\n\n\n\ngit checkout master\ngit pull\ngit checkout develop\ngit pull\ngit checkout -b hf-merge/<branch name>\ngit merge master\ngit push -u\n\n\n\n\n\nCreate a pull request for this branch as described above in the Contributor Instructions, using develop as the 'base:' branch, and your hf-merge branch as the 'compare:' branch.\n\n\nThe remainder of the process for reviewing and landing a PR to the develop branch must be followed at this point, with one exception. You should merge your branch to the develop branch on the command line, instead of through the GitHub UI, to preserve commits and not squash them into one.\n\n\n\n\ngit checkout develop\ngit pull\ngit merge hf-merge/<branch name>\n\n\n\nMake sure that the merge is a fast-forward merge.\n\n\ngit push\n\n\n\nVersioning a New Release\n\n\nThe decision to version a new release is based on the following factors:\n\n\n\n\nChanges have been made to the API which break backwards compatibility. Refer to the \nSemantic Versioning Guide\n.\n\n\nThe system has been updated with major features and/or enhancements.\n\n\nThe system has been updated to work with new versions of critical system dependencies, such as OpenCV and Spring.\n\n\nThe packaging and/or deployment process has changed significantly.\n\n\nIt's been a long time since the last release and many small updates have been made to the system.\n\n\n\n\nWhen the OpenMPF team agrees that it's time to version a new release of the system, a project administrator will create a release branch in each repository off of the develop branch. The name of a release branch takes the form \nr<major>.<minor>.<patch>\n. For example, \nr0.10.0\n. Also, the first commit in the release branch will be tagged as release candidate 1. For example, \nr0.10.0-rc1\n. Beta testers will then have the opportunity to test the release candidate 1 code.\n\n\nIf a bug is found in the release candidate code, then developers should land the bug fix to the release branch via a pull request. Once it has landed, the most recent commit will be tagged as release candidate 2. For example, \nr0.10.0-rc2\n. Beta testers will then have the opportunity to test the release candidate 2 code. The release candidate number will increase by one each time bugs are fixed. 
The bug fix code should be merged into the develop branch after it lands in the release branch.\n\n\nIf no bugs are found in the release candidate code for a period of time (generally, a month) then the release candidate will be finalized. The release candidate branch for each repo will be merged into the master branch for that repo. That commit on the master branch will be tagged with the release number. For example, \nr0.10.0\n.\n\n\nIf a critical bug fix needs to be made to the master branch, this is known as a \"hot fix\". Developers should land a hot fix to the master branch via a pull request. Once the code lands, the commit will be tagged by incrementing the \n<patch>\n number. For example, \nr0.10.1\n. The bug fix code should be merged into the develop branch after it lands in the master branch.\n\n\nNote that you should not use the \n--no-ff\n option when merging one branch into another. Doing so will make the commit history more verbose and difficult to follow.\n\n\nThis process is based on \nGitFlow\n.\n\n\nAdding New Components\n\n\nIn general, a new component will initially go in the \nopenmpf-contrib-components\n repository. That is a holding ground until it can be transitioned to the \nopenmpf-components\n repository. To be a candidate for transition, it must meet the following criteria:\n\n\n\n\nIs strongly in line with the spirit of the project and there is a commitment to maintain and update the code as the project evolves\n\n\nFully licensed under Apache 2.0 or a compatible license. All source code must be provided\n\n\nComes with sufficient unit, system, and/or integration tests with a strong focus on regression testing\n\n\n\n\nNote that new components should have a README.md file, LICENSE file, COPYING file, and optionally a NOTICE file. The LICENSE file should contain information about all of the licenses in the code base, including those licenses for code you didn't write.\n\n\nCoding Style\n\n\nThe following list of style guides provides a comprehensive explanation of some of the best coding practices for the programming languages used in the OpenMPF project:\n\n\n\n\nGoogle C++ Style Guide\n\n\nGoogle Python Style Guide\n\n\nGoogle Java Style Guide\n\n\nGoogle JavaScript Style Guide\n\n\n\n\nGenerally speaking, when writing new code, please refer to existing code in the repositories and match the style. Most style issues boil down to inconsistency. Not all of our code adheres to these style guidelines, but we are striving to improve it.\n\n\nUpdating Online Documentation\n\n\nOur \nopenmpf.github.io repo\n \nis forked from \nBeautiful Jekyll\n. \nIn general, everything within \nopenmpf.github.io/docs\n is part of a \nRead the Docs subsite within our overall Beautiful Jekyll site.\n\n\nTo build the site, \nDocker\n must be \ninstalled. After making your changes run \n./build-site.sh\n from within the \ntop-level \nopenmpf.github.io\n directory. To view your changes locally, you can \nrun \n./build-site.sh serve\n and then browse to \nhttp://localhost:4000\n.\n\n\nCommitting Changes\n\n\n\nWhen your changes look good, make sure to run the \n./build-site.sh\n command \nexplained above to generate the HTML site content. Commit all of the generated \nfiles and generate a pull request to merge them into the develop branch. 
\nNote that \n_site\n is in \n.gitignore\n and should not be committed.\n\n\nWhen a commit is made to the master branch on GitHub, \nthe \nhttps://openmpf.github.io/docs/site/\n page will automatically update \n(often within 5 minutes).", "title": "Contributor Guide" }, { @@ -1357,7 +1357,7 @@ }, { "location": "/Development-Environment-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\n\n \nWARNING:\n\n For most component developers, these steps are not necessary. Instead,\n refer to the\n \nC++\n,\n \nPython\n, or\n \nJava\n\n README for developing a Docker component in your desired language.\n\n\n\n\n\n \nWARNING:\n This guide is a work in progress and may not be completely\n accurate or comprehensive.\n\n\n\n\nOverview\n\n\nThe following instructions are for setting up an environment for building and\nrunning OpenMPF outside of Docker. They serve as a reference for developers who\nwant to develop the Workflow Manager web application itself and perform end-to-\nend integration testing.\n\n\nSetup VM\n\n\n\n\n\n\nDownload the ISO for the desktop version of Ubuntu 20.04 from\n \nhttps://releases.ubuntu.com/20.04\n.\n\n\n\n\n\n\nCreate an Ubuntu VM using the downloaded iso. This part is different based on\n what VM software you are using.\n\n\n\n\nUse mpf as your username.\n\n\nDuring the initial install, the VM window was small and didn't stretch to\n fill up the screen, but this may be fixed automatically after the installation\n finishes, or there may be additional steps necessary to install tools or\n configure settings based on your VM software.\n\n\n\n\n\n\n\n\nAfter completing the installation, you will likely be prompted to update\n software. You should install the updates.\n\n\n\n\n\n\nOptionally, shutdown the VM and take a snapshot. This will enable you to revert back\n to a clean Ubuntu install in case anything goes wrong.\n\n\n\n\n\n\nOpen a terminal and run \nsudo apt update\n\n\n\n\n\n\nRun \nsudo apt install gnupg2 unzip xz-utils cmake make g++ libgtest-dev mediainfo libssl-dev liblog4cxx-dev libboost-dev file openjdk-17-jdk libprotobuf-dev protobuf-compiler libprotobuf-java python3.8-dev python3-pip python3.8-venv libde265-dev libopenblas-dev liblapacke-dev libavcodec-dev libavcodec-extra libavformat-dev libavutil-dev libswscale-dev libavresample-dev libharfbuzz-dev libfreetype-dev ffmpeg git git-lfs redis postgresql-12 curl ansible\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/include/x86_64-linux-gnu/openblas-pthread/cblas.h /usr/include/cblas.h\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/bin/cmake /usr/bin/cmake3\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/bin/protoc /usr/local/bin/protoc\n\n\n\n\n\n\nFollow instructions to install Docker:\n \nhttps://docs.docker.com/engine/install/ubuntu/#install-using-the-repository\n\n\n\n\n\n\nOptionally, configure Docker to use socket activation. 
The advantage of socket activation is\n that systemd will automatically start the Docker daemon when you use \ndocker\n commands:\n\n\n\n\n\n\nsudo systemctl disable docker.service;\nsudo systemctl stop docker.service;\nsudo systemctl enable docker.socket;\n\n\n\n\n\n\n\nFollow instructions so that you can run Docker without sudo:\n \nhttps://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user\n\n\n\n\n\n\nInstall Docker Compose:\n\n\n\n\n\n\nsudo apt update\nsudo apt install docker-compose-plugin\n\n\n\n\n\n\n\nOptionally, stop redis from starting automatically:\n \nsudo systemctl disable redis\n\n\n\n\n\n\nOptionally, stop postgresql from starting automatically:\n \nsudo systemctl disable postgresql\n\n\n\n\n\n\nInitialize Postgres (use \"password\" when prompted for a password):\n\n\n\n\n\n\nsudo -i -u postgres createuser -P mpf\nsudo -i -u postgres createdb -O mpf mpf\n\n\n\n\n\nBuild and install OpenCV:\n\n\n\n\nmkdir /tmp/opencv-contrib;\nwget -O- 'https://github.com/opencv/opencv_contrib/archive/4.9.0.tar.gz' \\\n | tar --extract --gzip --directory /tmp/opencv-contrib;\nmkdir /tmp/opencv;\ncd /tmp/opencv;\nwget -O- 'https://github.com/opencv/opencv/archive/4.9.0.tar.gz' \\\n | tar --extract --gzip;\ncd opencv-4.9.0;\nmkdir build;\ncd build;\nexport OpenBLAS_HOME=/usr/lib/x86_64-linux-gnu/openblas-pthread; \\\ncmake -DCMAKE_INSTALL_PREFIX:PATH='/opt/opencv-4.9.0' \\\n -DWITH_IPP=false \\\n -DBUILD_EXAMPLES=false \\\n -DBUILD_TESTS=false \\\n -DBUILD_PERF_TESTS=false \\\n -DOPENCV_EXTRA_MODULES_PATH=/tmp/opencv-contrib/opencv_contrib-4.9.0/modules \\\n ..;\nsudo make --jobs \"$(nproc)\" install;\nsudo ln --symbolic '/opt/opencv-4.9.0/include/opencv4/opencv2' /usr/local/include/opencv2;\nsudo sh -c 'echo /opt/opencv-4.9.0/lib > /etc/ld.so.conf.d/mpf.conf'\nsudo ldconfig;\nsudo rm -rf /tmp/opencv-contrib /tmp/opencv;\n\n\n\n\n\nBuild and install the ActiveMQ C++ library:\n\n\n\n\nmkdir /tmp/activemq-cpp;\ncd /tmp/activemq-cpp;\nwget -O- https://dlcdn.apache.org/activemq/activemq-cpp/3.9.5/activemq-cpp-library-3.9.5-src.tar.gz \\\n | tar --extract --gzip;\ncd activemq-cpp-library-3.9.5;\n./configure;\nsudo make --jobs \"$(nproc)\" install;\nsudo rm -rf /tmp/activemq-cpp;\n\n\n\n\n\nInstall NotoEmoji font for markup:\n\n\n\n\nmkdir /tmp/noto;\ncd /tmp/noto;\nwget https://noto-website-2.storage.googleapis.com/pkgs/NotoEmoji-unhinted.zip;\nunzip NotoEmoji-unhinted.zip;\nsudo mkdir --parents /usr/share/fonts/google-noto-emoji;\nsudo cp NotoEmoji-Regular.ttf /usr/share/fonts/google-noto-emoji/;\nsudo chmod a+r /usr/share/fonts/google-noto-emoji/NotoEmoji-Regular.ttf;\nrm -rf /tmp/noto;\n\n\n\n\n\nBuild and install PNG Defry:\n\n\n\n\nmkdir /tmp/pngdefry;\ncd /tmp/pngdefry;\nwget -O- 'https://github.com/openmpf/pngdefry/archive/v1.2.tar.gz' \\\n | tar --extract --gzip;\ncd pngdefry-1.2;\nsudo gcc pngdefry.c -o /usr/local/bin/pngdefry;\nrm -rf /tmp/pngdefry;\n\n\n\n\n\nInstall Maven:\n\n\n\n\nwget -O- 'https://archive.apache.org/dist/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz' \\\n | sudo tar --extract --gzip --directory /opt;\nsudo ln --symbolic /opt/apache-maven-3.3.3/bin/mvn /usr/local/bin;\n\n\n\n\n\nBuild and install libheif:\n\n\n\n\nmkdir /tmp/libheif;\ncd /tmp/libheif;\nwget -O- https://github.com/strukturag/libheif/archive/refs/tags/v1.12.0.tar.gz \\\n | tar --extract --gzip;\ncd libheif-1.12.0;\nmkdir build;\ncd build;\ncmake3 -DCMAKE_INSTALL_PREFIX=/usr -DWITH_EXAMPLES=false ..;\nsudo make --jobs \"$(nproc)\" install;\ncd;\nsudo rm -rf 
/tmp/libheif;\n\n\n\n\n\nFrom your home directory run:\n\n\n\n\ngit clone https://github.com/openmpf/openmpf-projects.git --recursive;\ncd openmpf-projects;\ngit checkout develop;\ngit submodule foreach git checkout develop;\n\n\n\n\n\n\n\nRun: \npip install openmpf-projects/openmpf/trunk/bin/mpf-scripts\n\n\n\n\n\n\nAdd \nPATH=\"$HOME/.local/bin:$PATH\"\n to \n~/.bashrc\n\n\n\n\n\n\nRun \nmkdir -p openmpf-projects/openmpf/trunk/install/share/logs\n\n\n\n\n\n\nRun \nsudo cp openmpf-projects/openmpf/trunk/mpf-install/src/main/scripts/mpf-profile.sh /etc/profile.d/mpf.sh\n\n\n\n\n\n\nRun \nsudo sh -c 'echo /home/mpf/mpf-sdk-install/lib >> /etc/ld.so.conf.d/mpf.conf'\n\n\n\n\n\n\nRun \nsudo cp openmpf-projects/openmpf/trunk/node-manager/src/scripts/node-manager.service /etc/systemd/system/node-manager.service\n\n\n\n\n\n\nRun \ncd ~/openmpf-projects/openmpf/trunk/workflow-manager/src/main/resources/properties/; cp mpf-private-example.properties mpf-private.properties\n\n\n\n\n\n\nRun \nsudo sh -c 'echo \"[mpf-child]\\nlocalhost\" >> /etc/ansible/hosts'\n\n\n\n\n\n\nRun \nmkdir -p ~/.m2/repository/; tar -f /home/mpf/openmpf-projects/openmpf-build-tools/mpf-maven-deps.tar.gz --extract --gzip --directory ~/.m2/repository/\n\n\n\n\n\n\nReboot the VM.\n\n\n\n\n\n\nAt this point you may wish to install additional dependencies so that you can\nbuild specific OpenMPF components. Refer to the commands in the \nDockerfile\n\nfor each component you're interested in.\n\n\nConfigure Users\n\n\nTo change the default user password settings, modify\n\nopenmpf-projects/openmpf/trunk/workflow-manager/src/main/resources/properties/user.properties\n.\nNote that the default settings are public knowledge, which could be a security\nrisk.\n\n\nNote that \nmpf remove-user\n and \nmpf add-user\n commands explained in the\n\nCommand Line Tools\n section do not modify the\n\nuser.properties\n file. If you remove a user using the \nmpf remove-user\n\ncommand, the changes will take effect at runtime, but an entry may still exist\nfor that user in the \nuser.properties\n file. If so, then the user account will\nbe recreated the next time the Workflow Manager is restarted.\n\n\nBuild and Run the OpenMPF Workflow Manager Web Application\n\n\n\n\nBuild OpenMPF:\n\n\n\n\ncd ~/openmpf-projects/openmpf;\nmvn clean install \\\n -DskipTests -Dmaven.test.skip=true \\\n -DskipITs \\\n -Dcomponents.build.components=openmpf-components/cpp/OcvFaceDetection \\\n -Dstartup.auto.registration.skip=false;\n\n\n\n\n\nStart OpenMPF with \nmpf start\n.\n\n\n\n\nLook for this log message in the terminal with a time value indicating the Workflow Manager has\nfinished starting:\n\n\n2022-10-11 12:21:16,447 INFO [main] o.m.m.Application - Started Application in 22.843 seconds (JVM running for 24.661)\n\n\n\nAfter startup, the Workflow Manager will be available at \nhttp://localhost:8080\n.\nBrowse to this URL using Firefox or Chrome.\n\n\nIf you want to test regular user capabilities, log in as the \"mpf\" user with\nthe \"mpf123\" password. Please see the\n\nOpenMPF User Guide\n for more information.\nAlternatively, if you want to test admin capabilities then log in as \"admin\"\nuser with the \"mpfadm\" password. Please see the\n\nOpenMPF Admin Guide\n for more information.\nWhen finished using OpenMPF, stop Workflow Manager with \nctrl-c\n and then run \nmpf stop\n to stop\nthe other system dependencies.\n\n\nThe preferred method to start and stop services for OpenMPF is with the\n\nmpf start\n and \nmpf stop\n commands. 
For additional information on these\ncommands, please see the\n\nCommand Line Tools\n section.\nThese will start and stop the PostgreSQL, Redis, Node Manager, and Workflow Manager processes.\n\n\nKnown Issues\n\n\no.m.m.m.c.JobController - Failure creating job. supplier.get()\n\n\nIf you see an error message similar to:\n\n\n2022-02-07 17:17:30,538 ERROR [http-nio-8080-exec-1] o.m.m.m.c.JobController - Failure creating job. supplier.get()\njava.lang.NullPointerException: supplier.get()\n at java.util.Objects.requireNonNull(Objects.java:246) ~[?:?]\n at java.util.Objects.requireNonNullElseGet(Objects.java:321) ~[?:?]\n at org.mitre.mpf.wfm.util.PropertiesUtil.getHostName(PropertiesUtil.java:267) ~[classes/:?]\n at org.mitre.mpf.wfm.util.PropertiesUtil.getExportedJobId(PropertiesUtil.java:285) ~[classes/:?]\n\n\n\nOpen \n/etc/profile.d/mpf.sh\n and change \nexport HOSTNAME\n to\n\nexport HOSTNAME=$(hostname)\n. Then, restart the VM.\n\n\nAppendices\n\n\nCommand Line Tools\n\n\nOpenMPF installs command line tools that can be accessed through a terminal\non the development machine. All of the tools take the form of actions:\n\nmpf [options ...]\n.\n\n\nExecute \nmpf --help\n for general documentation and \nmpf --help\n for\ndocumentation about a specific action.\n\n\n\n\nStart / Stop Actions\n: Actions for starting and stopping the OpenMPF\n system dependencies, including PostgreSQL, Redis, Workflow Manager, and the\n node managers on the various nodes in the OpenMPF cluster.\n\n\nmpf status\n: displays a message indicating whether each of the system\n dependencies is running or not\n\n\nmpf start\n: starts all of the system dependencies\n\n\nmpf stop\n: stops all of the system dependencies\n\n\nmpf restart\n : stops and then starts all of the system dependencies\n\n\n\n\n\n\nUser Actions\n: Actions for managing Workflow Manager user accounts. If\n changes are made to an existing user then that user will need to log off or\n the Workflow Manager will need to be restarted for the changes to take effect.\n\n\nmpf list-users\n : lists all of the existing user accounts and their role\n (non-admin or admin)\n\n\nmpf add-user \n: adds a new user account; will be\n prompted to enter the account password\n\n\nmpf remove-user \n : removes an existing user account\n\n\nmpf change-role \n : change the role (non-admin to admin\n or vice versa) for an existing user\n\n\nmpf change-password \n: change the password for an existing\n user; will be prompted to enter the new account password\n\n\n\n\n\n\nClean Actions\n: Actions to remove old data and revert the system to a\n new install state. User accounts, registered components, as well as custom\n actions, tasks, and pipelines, are preserved.\n\n\nmpf clean\n: cleans out old job information and results, pending job requests, and marked up\n media files, but preserves log files and uploaded media.\n\n\nmpf clean --delete-logs --delete-uploaded-media\n: the same as \nmpf clean\n\n but also deletes log files and uploaded media\n\n\n\n\n\n\nNode Action\n: Actions for managing node membership in the OpenMPF cluster.\n\n\nmpf list-nodes\n: If the Workflow Manager is running, get the current\n JGroups view; otherwise, list the core nodes\n\n\n\n\n\n\n\n\nPackaging a Component\n\n\nIn a non-Docker deployment, admin users can register component packages through\nthe web UI. 
Refer to \nComponent Registration\n.\n\n\nOnce the descriptor file is complete, as described in\n\nComponent Descriptor Reference\n,\nthe next step is to compile your component source code, and finally, create a\n.tar.gz package containing the descriptor file, component library, and all\nother necessary files.\n\n\nThe package should contain a top-level directory with a unique name that will\nnot conflict with existing component packages that have already been developed.\nThe top-level directory name should be the same as the \ncomponentName\n.\n\n\nWithin the top-level directory there must be a directory named \u201cdescriptor\u201d\nwith the descriptor JSON file in it. The name of the file must be\n\u201cdescriptor.json\u201d.\n\n\nExample:\n\n\n//sample-component-1.0.0-tar.gz contents\nSampleComponent/\n config/\n descriptor/\n descriptor.json\n lib/\n\n\n\nInstalling and registering a component\n\n\nThe Component Registration web page, located in the Admin section of the\nOpenMPF web user interface, can be used to upload and register the component.\n\n\nDrag and drop the .tar.gz file containing the component onto the dropzone area\nof that page. The component will automatically be uploaded and registered.\n\n\nUpon successful registration, the component will be available for deployment\nonto OpenMPF nodes via the Node Configuration web page and\n\n/rest/nodes/config\n end point.\n\n\nIf the descriptor contains custom actions, tasks, or pipelines, then they will\nbe automatically added to the system upon registration.\n\n\n\n\nNOTE:\n If the descriptor does not contain custom actions, tasks,\nor pipelines, then a default action, task, and pipeline will be generated\nand added to the system.\n\n\nThe default action will use the component\u2019s algorithm with its default\nproperty value settings.\nThe default task will use the default action.\nThe default pipeline will use the default task. This will only be generated\nif the algorithm does not specify any \nrequiresCollection\n states.\n\n\n\n\nUnregistering a component\n\n\nA component can be unregistered by using the remove button on the Component\nRegistration page.\n\n\nDuring unregistration, all services, algorithms, actions, tasks, and pipelines\nassociated with the component are deleted. Additionally, all actions, tasks,\nand pipelines that depend on these elements are removed.\n\n\nWeb UI\n\n\nThe following sections will cover some additional functionality permitted to\nadmin users in a non-Docker deployment.\n\n\nNode Configuration and Status\n\n\nThis page provides a list of all of the services that are configured to run on\nthe OpenMPF cluster:\n\n\n\n\nEach node shows information about the current status of each service, if it is\nunlaunchable due to an underlying error, and how many services are running for\neach node. If a service is unlaunchable, it will be indicated using a red\nstatus icon (not shown). Note that services are grouped by component type.\nClick the chevron \">\" to expand a service group to view the individual services.\n\n\nAn admin user can start, stop, or restart them on an individual basis. If a\nnon-admin user views this page, the \"Action(s)\" column is not displayed. This\npage also enables an admin user to edit the configuration for all nodes in the\nOpenMPF cluster. A non-admin user can only view the existing configuration.\n\n\nAn admin user can add a node by using the \"Add Node\" button and selecting a\nnode in the OpenMPF cluster from the drop-down list. 
You can also select to add\nall services at this time. A node and all if its configured services can be\nremoved by clicking the trash can to the right of the node's hostname.\n\n\nAn admin user can add services individually by selecting the node edit button\nat the bottom of the node. The number of service instances can be increased or\ndecreased by using the drop-down. Click the \"Submit\" button to save the changes.\n\n\nWhen making changes, please be aware of the following:\n\n\n\n\nIt may take a minute for the configuration to take effect on the server.\n\n\nIf you remove an existing service from a node, any job that service is\n processing will be stopped, and you will need to resubmit that job.\n\n\nIf you create a new node, its configuration will not take effect until the\n OpenMPF software is properly installed and started on the associated host.\n\n\nIf you delete a node, you will need to manually turn off the hardware running\n that node (deleting a node does not shut down the machine).\n\n\n\n\nComponent Registration\n\n\nThis page allows an admin user to add and remove non-default components to and\nfrom the system:\n\n\n\n\nA component package takes the form of a tar.gz file. An admin user can either\ndrag and drop the file onto the \"Upload a new component\" dropzone area or click\nthe dropzone area to open a file browser and select the file that way.\nIn either case, the component will begin to be uploaded to the system. If the\nadmin user dragged and dropped the file onto the dropzone area then the upload\nprogress will be shown in that area. Once uploaded, the Workflow Manager will\nautomatically attempt to register the component. Notification messages will\nappear in the upper right side of the screen to indicate success or failure if\nan error occurs. The \"Current Components\" table will display the component\nstatus.\n\n\n\n\nIf for some reason the component package upload succeeded but the component\nregistration failed then the admin user will be able to click the \"Register\"\nbutton again to try to another registration attempt. For example, the admin\nuser may do this after reviewing the Workflow Manager logs and resolving any\nissues that prevented the component from successfully registering the first\ntime. One reason may be that a component with the same name already exists on\nthe system. Note that an error will also occur if the top-level directory of\nthe component package, once extracted, already exists in the \n/opt/mpf/plugins\n\ndirectory on the system.\n\n\nOnce registered, an admin user has the option to remove the component. This\nwill unregister it and completely remove any configured services, as well as\nthe uploaded file and its extracted contents, from the system. Also, the\ncomponent algorithm as well as any actions, tasks, and pipelines specified in\nthe component's descriptor file will be removed when the component is removed.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\n\n \nWARNING:\n\n For most component developers, these steps are not necessary. 
Instead,\n refer to the\n \nC++\n,\n \nPython\n, or\n \nJava\n\n README for developing a Docker component in your desired language.\n\n\n\n\n\n \nWARNING:\n This guide is a work in progress and may not be completely\n accurate or comprehensive.\n\n\n\n\nOverview\n\n\nThe following instructions are for setting up an environment for building and\nrunning OpenMPF outside of Docker. They serve as a reference for developers who\nwant to develop the Workflow Manager web application itself and perform end-to-\nend integration testing.\n\n\nSetup VM\n\n\n\n\n\n\nDownload the ISO for the desktop version of Ubuntu 20.04 from\n \nhttps://releases.ubuntu.com/20.04\n.\n\n\n\n\n\n\nCreate an Ubuntu VM using the downloaded iso. This part is different based on\n what VM software you are using.\n\n\n\n\nUse mpf as your username.\n\n\nDuring the initial install, the VM window was small and didn't stretch to\n fill up the screen, but this may be fixed automatically after the installation\n finishes, or there may be additional steps necessary to install tools or\n configure settings based on your VM software.\n\n\n\n\n\n\n\n\nAfter completing the installation, you will likely be prompted to update\n software. You should install the updates.\n\n\n\n\n\n\nOptionally, shutdown the VM and take a snapshot. This will enable you to revert back\n to a clean Ubuntu install in case anything goes wrong.\n\n\n\n\n\n\nOpen a terminal and run \nsudo apt update\n\n\n\n\n\n\nRun \nsudo apt install gnupg2 unzip xz-utils cmake make g++ libgtest-dev mediainfo libssl-dev liblog4cxx-dev libboost-dev file openjdk-17-jdk libprotobuf-dev protobuf-compiler libprotobuf-java python3.8-dev python3-pip python3.8-venv libde265-dev libopenblas-dev liblapacke-dev libavcodec-dev libavcodec-extra libavformat-dev libavutil-dev libswscale-dev libavresample-dev libharfbuzz-dev libfreetype-dev ffmpeg git git-lfs redis postgresql-12 curl ansible\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/include/x86_64-linux-gnu/openblas-pthread/cblas.h /usr/include/cblas.h\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/bin/cmake /usr/bin/cmake3\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/bin/protoc /usr/local/bin/protoc\n\n\n\n\n\n\nFollow instructions to install Docker:\n \nhttps://docs.docker.com/engine/install/ubuntu/#install-using-the-repository\n\n\n\n\n\n\nOptionally, configure Docker to use socket activation. 
The advantage of socket activation is\n that systemd will automatically start the Docker daemon when you use \ndocker\n commands:\n\n\n\n\n\n\nsudo systemctl disable docker.service;\nsudo systemctl stop docker.service;\nsudo systemctl enable docker.socket;\n\n\n\n\n\n\n\nFollow instructions so that you can run Docker without sudo:\n \nhttps://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user\n\n\n\n\n\n\nInstall Docker Compose:\n\n\n\n\n\n\nsudo apt update\nsudo apt install docker-compose-plugin\n\n\n\n\n\n\n\nOptionally, stop redis from starting automatically:\n \nsudo systemctl disable redis\n\n\n\n\n\n\nOptionally, stop postgresql from starting automatically:\n \nsudo systemctl disable postgresql\n\n\n\n\n\n\nInitialize Postgres (use \"password\" when prompted for a password):\n\n\n\n\n\n\nsudo -i -u postgres createuser -P mpf\nsudo -i -u postgres createdb -O mpf mpf\n\n\n\n\n\nBuild and install OpenCV:\n\n\n\n\nmkdir /tmp/opencv-contrib;\nwget -O- 'https://github.com/opencv/opencv_contrib/archive/4.9.0.tar.gz' \\\n | tar --extract --gzip --directory /tmp/opencv-contrib;\nmkdir /tmp/opencv;\ncd /tmp/opencv;\nwget -O- 'https://github.com/opencv/opencv/archive/4.9.0.tar.gz' \\\n | tar --extract --gzip;\ncd opencv-4.9.0;\nmkdir build;\ncd build;\nexport OpenBLAS_HOME=/usr/lib/x86_64-linux-gnu/openblas-pthread; \\\ncmake -DCMAKE_INSTALL_PREFIX:PATH='/opt/opencv-4.9.0' \\\n -DWITH_IPP=false \\\n -DBUILD_EXAMPLES=false \\\n -DBUILD_TESTS=false \\\n -DBUILD_PERF_TESTS=false \\\n -DOPENCV_EXTRA_MODULES_PATH=/tmp/opencv-contrib/opencv_contrib-4.9.0/modules \\\n ..;\nsudo make --jobs \"$(nproc)\" install;\nsudo ln --symbolic '/opt/opencv-4.9.0/include/opencv4/opencv2' /usr/local/include/opencv2;\nsudo sh -c 'echo /opt/opencv-4.9.0/lib > /etc/ld.so.conf.d/mpf.conf'\nsudo ldconfig;\nsudo rm -rf /tmp/opencv-contrib /tmp/opencv;\n\n\n\n\n\nBuild and install the ActiveMQ C++ library:\n\n\n\n\nmkdir /tmp/activemq-cpp;\ncd /tmp/activemq-cpp;\nwget -O- https://dlcdn.apache.org/activemq/activemq-cpp/3.9.5/activemq-cpp-library-3.9.5-src.tar.gz \\\n | tar --extract --gzip;\ncd activemq-cpp-library-3.9.5;\n./configure;\nsudo make --jobs \"$(nproc)\" install;\nsudo rm -rf /tmp/activemq-cpp;\n\n\n\n\n\nInstall NotoEmoji font for markup:\n\n\n\n\nmkdir /tmp/noto;\ncd /tmp/noto;\nwget https://noto-website-2.storage.googleapis.com/pkgs/NotoEmoji-unhinted.zip;\nunzip NotoEmoji-unhinted.zip;\nsudo mkdir --parents /usr/share/fonts/google-noto-emoji;\nsudo cp NotoEmoji-Regular.ttf /usr/share/fonts/google-noto-emoji/;\nsudo chmod a+r /usr/share/fonts/google-noto-emoji/NotoEmoji-Regular.ttf;\nrm -rf /tmp/noto;\n\n\n\n\n\nBuild and install PNG Defry:\n\n\n\n\nmkdir /tmp/pngdefry;\ncd /tmp/pngdefry;\nwget -O- 'https://github.com/openmpf/pngdefry/archive/v1.2.tar.gz' \\\n | tar --extract --gzip;\ncd pngdefry-1.2;\nsudo gcc pngdefry.c -o /usr/local/bin/pngdefry;\nrm -rf /tmp/pngdefry;\n\n\n\n\n\nInstall Maven:\n\n\n\n\nwget -O- 'https://archive.apache.org/dist/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz' \\\n | sudo tar --extract --gzip --directory /opt;\nsudo ln --symbolic /opt/apache-maven-3.3.3/bin/mvn /usr/local/bin;\n\n\n\n\n\nBuild and install libheif:\n\n\n\n\nmkdir /tmp/libheif;\ncd /tmp/libheif;\nwget -O- https://github.com/strukturag/libheif/archive/refs/tags/v1.12.0.tar.gz \\\n | tar --extract --gzip;\ncd libheif-1.12.0;\nmkdir build;\ncd build;\ncmake3 -DCMAKE_INSTALL_PREFIX=/usr -DWITH_EXAMPLES=false ..;\nsudo make --jobs \"$(nproc)\" install;\ncd;\nsudo rm -rf 
/tmp/libheif;\n\n\n\n\n\nFrom your home directory run:\n\n\n\n\ngit clone https://github.com/openmpf/openmpf-projects.git --recursive;\ncd openmpf-projects;\ngit checkout develop;\ngit submodule foreach git checkout develop;\n\n\n\n\n\n\n\nRun: \npip install openmpf-projects/openmpf/trunk/bin/mpf-scripts\n\n\n\n\n\n\nAdd \nPATH=\"$HOME/.local/bin:$PATH\"\n to \n~/.bashrc\n\n\n\n\n\n\nRun \nmkdir -p openmpf-projects/openmpf/trunk/install/share/logs\n\n\n\n\n\n\nRun \nsudo cp openmpf-projects/openmpf/trunk/mpf-install/src/main/scripts/mpf-profile.sh /etc/profile.d/mpf.sh\n\n\n\n\n\n\nRun \nsudo sh -c 'echo /home/mpf/mpf-sdk-install/lib >> /etc/ld.so.conf.d/mpf.conf'\n\n\n\n\n\n\nRun \nsudo cp openmpf-projects/openmpf/trunk/node-manager/src/scripts/node-manager.service /etc/systemd/system/node-manager.service\n\n\n\n\n\n\nRun \ncd ~/openmpf-projects/openmpf/trunk/workflow-manager/src/main/resources/properties/; cp mpf-private-example.properties mpf-private.properties\n\n\n\n\n\n\nRun \nsudo sh -c 'echo \"[mpf-child]\\nlocalhost\" >> /etc/ansible/hosts'\n\n\n\n\n\n\nRun \nmkdir -p ~/.m2/repository/; tar -f /home/mpf/openmpf-projects/openmpf-build-tools/mpf-maven-deps.tar.gz --extract --gzip --directory ~/.m2/repository/\n\n\n\n\n\n\nReboot the VM.\n\n\n\n\n\n\nAt this point you may wish to install additional dependencies so that you can\nbuild specific OpenMPF components. Refer to the commands in the \nDockerfile\n\nfor each component you're interested in.\n\n\nConfigure Users\n\n\nTo change the default user password settings, modify\n\nopenmpf-projects/openmpf/trunk/workflow-manager/src/main/resources/properties/user.properties\n.\nNote that the default settings are public knowledge, which could be a security\nrisk.\n\n\nNote that \nmpf remove-user\n and \nmpf add-user\n commands explained in the\n\nCommand Line Tools\n section do not modify the\n\nuser.properties\n file. If you remove a user using the \nmpf remove-user\n\ncommand, the changes will take effect at runtime, but an entry may still exist\nfor that user in the \nuser.properties\n file. If so, then the user account will\nbe recreated the next time the Workflow Manager is restarted.\n\n\nBuild and Run the OpenMPF Workflow Manager Web Application\n\n\n\n\nBuild OpenMPF:\n\n\n\n\ncd ~/openmpf-projects/openmpf;\nmvn clean install \\\n -DskipTests -Dmaven.test.skip=true \\\n -DskipITs \\\n -Dcomponents.build.components=openmpf-components/cpp/OcvFaceDetection \\\n -Dstartup.auto.registration.skip=false;\n\n\n\n\n\nStart OpenMPF with \nmpf start\n.\n\n\n\n\nLook for this log message in the terminal with a time value indicating the Workflow Manager has\nfinished starting:\n\n\n2022-10-11 12:21:16,447 INFO [main] o.m.m.Application - Started Application in 22.843 seconds (JVM running for 24.661)\n\n\n\nAfter startup, the Workflow Manager will be available at \nhttp://localhost:8080\n.\nBrowse to this URL using Firefox or Chrome.\n\n\nIf you want to test regular user capabilities, log in as the \"mpf\" user with\nthe \"mpf123\" password. Please see the\n\nOpenMPF User Guide\n for more information.\nAlternatively, if you want to test admin capabilities then log in as \"admin\"\nuser with the \"mpfadm\" password. Please see the\n\nOpenMPF Admin Guide\n for more information.\nWhen finished using OpenMPF, stop Workflow Manager with \nctrl-c\n and then run \nmpf stop\n to stop\nthe other system dependencies.\n\n\nThe preferred method to start and stop services for OpenMPF is with the\n\nmpf start\n and \nmpf stop\n commands. 
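For example, a typical session looks like the following (a minimal sketch; the exact status output will vary with your configuration):\n\n\nmpf status; # check whether PostgreSQL, Redis, the node manager, and the Workflow Manager are running\nmpf start; # start all of the system dependencies\n# run and monitor jobs through the web UI or REST API\nmpf stop; # stop all of the system dependencies\n\n\n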
For additional information on these\ncommands, please see the\n\nCommand Line Tools\n section.\nThese will start and stop the PostgreSQL, Redis, Node Manager, and Workflow Manager processes.\n\n\nKnown Issues\n\n\no.m.m.m.c.JobController - Failure creating job. supplier.get()\n\n\nIf you see an error message similar to:\n\n\n2022-02-07 17:17:30,538 ERROR [http-nio-8080-exec-1] o.m.m.m.c.JobController - Failure creating job. supplier.get()\njava.lang.NullPointerException: supplier.get()\n at java.util.Objects.requireNonNull(Objects.java:246) ~[?:?]\n at java.util.Objects.requireNonNullElseGet(Objects.java:321) ~[?:?]\n at org.mitre.mpf.wfm.util.PropertiesUtil.getHostName(PropertiesUtil.java:267) ~[classes/:?]\n at org.mitre.mpf.wfm.util.PropertiesUtil.getExportedJobId(PropertiesUtil.java:285) ~[classes/:?]\n\n\n\nOpen \n/etc/profile.d/mpf.sh\n and change \nexport HOSTNAME\n to\n\nexport HOSTNAME=$(hostname)\n. Then, restart the VM.\n\n\nAppendices\n\n\nCommand Line Tools\n\n\nOpenMPF installs command line tools that can be accessed through a terminal\non the development machine. All of the tools take the form of actions:\n\nmpf [options ...]\n.\n\n\nExecute \nmpf --help\n for general documentation and \nmpf --help\n for\ndocumentation about a specific action.\n\n\n\n\nStart / Stop Actions\n: Actions for starting and stopping the OpenMPF\n system dependencies, including PostgreSQL, Redis, Workflow Manager, and the\n node managers on the various nodes in the OpenMPF cluster.\n\n\nmpf status\n: displays a message indicating whether each of the system\n dependencies is running or not\n\n\nmpf start\n: starts all of the system dependencies\n\n\nmpf stop\n: stops all of the system dependencies\n\n\nmpf restart\n : stops and then starts all of the system dependencies\n\n\n\n\n\n\nUser Actions\n: Actions for managing Workflow Manager user accounts. If\n changes are made to an existing user then that user will need to log off or\n the Workflow Manager will need to be restarted for the changes to take effect.\n\n\nmpf list-users\n : lists all of the existing user accounts and their role\n (non-admin or admin)\n\n\nmpf add-user \n: adds a new user account; will be\n prompted to enter the account password\n\n\nmpf remove-user \n : removes an existing user account\n\n\nmpf change-role \n : change the role (non-admin to admin\n or vice versa) for an existing user\n\n\nmpf change-password \n: change the password for an existing\n user; will be prompted to enter the new account password\n\n\n\n\n\n\nClean Actions\n: Actions to remove old data and revert the system to a\n new install state. User accounts, registered components, as well as custom\n actions, tasks, and pipelines, are preserved.\n\n\nmpf clean\n: cleans out old job information and results, pending job requests, and marked up\n media files, but preserves log files and uploaded media.\n\n\nmpf clean --delete-logs --delete-uploaded-media\n: the same as \nmpf clean\n\n but also deletes log files and uploaded media\n\n\n\n\n\n\nNode Action\n: Actions for managing node membership in the OpenMPF cluster.\n\n\nmpf list-nodes\n: If the Workflow Manager is running, get the current\n JGroups view; otherwise, list the core nodes\n\n\n\n\n\n\n\n\nPackaging a Component\n\n\nIn a non-Docker deployment, admin users can register component packages through\nthe web UI. 
Refer to \nComponent Registration\n.\n\n\nOnce the descriptor file is complete, as described in\n\nComponent Descriptor Reference\n,\nthe next step is to compile your component source code, and finally, create a\n.tar.gz package containing the descriptor file, component library, and all\nother necessary files.\n\n\nThe package should contain a top-level directory with a unique name that will\nnot conflict with existing component packages that have already been developed.\nThe top-level directory name should be the same as the \ncomponentName\n.\n\n\nWithin the top-level directory there must be a directory named \u201cdescriptor\u201d\nwith the descriptor JSON file in it. The name of the file must be\n\u201cdescriptor.json\u201d.\n\n\nExample:\n\n\n//sample-component-1.0.0-tar.gz contents\nSampleComponent/\n config/\n descriptor/\n descriptor.json\n lib/\n\n\n\nInstalling and registering a component\n\n\nThe Component Registration web page, located in the Admin section of the\nOpenMPF web user interface, can be used to upload and register the component.\n\n\nDrag and drop the .tar.gz file containing the component onto the dropzone area\nof that page. The component will automatically be uploaded and registered.\n\n\nUpon successful registration, the component will be available for deployment\nonto OpenMPF nodes via the Node Configuration web page and\n\n/rest/nodes/config\n end point.\n\n\nIf the descriptor contains custom actions, tasks, or pipelines, then they will\nbe automatically added to the system upon registration.\n\n\n\n\nNOTE:\n If the descriptor does not contain custom actions, tasks,\nor pipelines, then a default action, task, and pipeline will be generated\nand added to the system.\n\n\nThe default action will use the component\u2019s algorithm with its default\nproperty value settings.\nThe default task will use the default action.\nThe default pipeline will use the default task. This will only be generated\nif the algorithm does not specify any \nrequiresCollection\n states.\n\n\n\n\nUnregistering a component\n\n\nA component can be unregistered by using the remove button on the Component\nRegistration page.\n\n\nDuring unregistration, all services, algorithms, actions, tasks, and pipelines\nassociated with the component are deleted. Additionally, all actions, tasks,\nand pipelines that depend on these elements are removed.\n\n\nWeb UI\n\n\nThe following sections will cover some additional functionality permitted to\nadmin users in a non-Docker deployment.\n\n\nNode Configuration and Status\n\n\nThis page provides a list of all of the services that are configured to run on\nthe OpenMPF cluster:\n\n\n\n\nEach node shows information about the current status of each service, if it is\nunlaunchable due to an underlying error, and how many services are running for\neach node. If a service is unlaunchable, it will be indicated using a red\nstatus icon (not shown). Note that services are grouped by component type.\nClick the chevron \">\" to expand a service group to view the individual services.\n\n\nAn admin user can start, stop, or restart them on an individual basis. If a\nnon-admin user views this page, the \"Action(s)\" column is not displayed. This\npage also enables an admin user to edit the configuration for all nodes in the\nOpenMPF cluster. A non-admin user can only view the existing configuration.\n\n\nAn admin user can add a node by using the \"Add Node\" button and selecting a\nnode in the OpenMPF cluster from the drop-down list. 
You can also select to add\nall services at this time. A node and all if its configured services can be\nremoved by clicking the trash can to the right of the node's hostname.\n\n\nAn admin user can add services individually by selecting the node edit button\nat the bottom of the node. The number of service instances can be increased or\ndecreased by using the drop-down. Click the \"Submit\" button to save the changes.\n\n\nWhen making changes, please be aware of the following:\n\n\n\n\nIt may take a minute for the configuration to take effect on the server.\n\n\nIf you remove an existing service from a node, any job that service is\n processing will be stopped, and you will need to resubmit that job.\n\n\nIf you create a new node, its configuration will not take effect until the\n OpenMPF software is properly installed and started on the associated host.\n\n\nIf you delete a node, you will need to manually turn off the hardware running\n that node (deleting a node does not shut down the machine).\n\n\n\n\nComponent Registration\n\n\nThis page allows an admin user to add and remove non-default components to and\nfrom the system:\n\n\n\n\nA component package takes the form of a tar.gz file. An admin user can either\ndrag and drop the file onto the \"Upload a new component\" dropzone area or click\nthe dropzone area to open a file browser and select the file that way.\nIn either case, the component will begin to be uploaded to the system. If the\nadmin user dragged and dropped the file onto the dropzone area then the upload\nprogress will be shown in that area. Once uploaded, the Workflow Manager will\nautomatically attempt to register the component. Notification messages will\nappear in the upper right side of the screen to indicate success or failure if\nan error occurs. The \"Current Components\" table will display the component\nstatus.\n\n\n\n\nIf for some reason the component package upload succeeded but the component\nregistration failed then the admin user will be able to click the \"Register\"\nbutton again to try to another registration attempt. For example, the admin\nuser may do this after reviewing the Workflow Manager logs and resolving any\nissues that prevented the component from successfully registering the first\ntime. One reason may be that a component with the same name already exists on\nthe system. Note that an error will also occur if the top-level directory of\nthe component package, once extracted, already exists in the \n/opt/mpf/plugins\n\ndirectory on the system.\n\n\nOnce registered, an admin user has the option to remove the component. This\nwill unregister it and completely remove any configured services, as well as\nthe uploaded file and its extracted contents, from the system. Also, the\ncomponent algorithm as well as any actions, tasks, and pipelines specified in\nthe component's descriptor file will be removed when the component is removed.", "title": "Development Environment Guide" }, { @@ -1427,7 +1427,7 @@ }, { "location": "/Node-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nWARNING:\n This guide is for non-Docker deployments only, such as for local development environments. 
In a Docker deployment components are run as Docker-managed containers.\n\n\n\nJGroups Communication\n\n\nOpenMPF uses the \nJGroups\n toolkit for passing messages between node-manager processes. One process runs on each OpenMPF Node Manager Docker container, and one master node-manager runs as part of the Workflow Manager web application on the Workflow Manager Docker container. Each of these containers is effectively a \"node\" in the OpenMPF cluster.\n\n\nThere are two primary aspects of JGroups that an OpenMPF administrator needs to be concerned with:\n\n\n\n\n\n\nOpenMPF uses the JGroups FILE_PING protocol for peer discovery. Each node uses files stored in \n$MPF_HOME/share/nodes/MPF_Channel\n. A node will write a file to that directory when the node-manager starts up, and read files in that directory to determine what other nodes are in the OpenMPF cluster.\n\n\n\n\n\n\nEach OpenMPF node uses network port 7800 for JGroups TCP communication. Please ensure that this port is open in the network firewall on each OpenMPF node, or the firewall is disabled.\n\n\n\n\n\n\nIf for some reason port 7800 is reserved by another process, JGroups will try the next available port, starting at 7801, then 7802, and so on. Note that within an OpenMPF \ndevelopment environment\n the Workflow Manager web application and node-manager process will run on the same machine, resulting in the use of port 7800 and 7801.\n\n\nWhen a node first starts up it will be in its own JGroups cluster. Within a minute it will be merged into the cluster with the other OpenMPF nodes. At that time it will be recognized by the Workflow Manager and become available for running services and processing jobs.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nWARNING:\n This guide is for non-Docker deployments only, such as for local development environments. In a Docker deployment components are run as Docker-managed containers.\n\n\n\nJGroups Communication\n\n\nOpenMPF uses the \nJGroups\n toolkit for passing messages between node-manager processes. One process runs on each OpenMPF Node Manager Docker container, and one master node-manager runs as part of the Workflow Manager web application on the Workflow Manager Docker container. Each of these containers is effectively a \"node\" in the OpenMPF cluster.\n\n\nThere are two primary aspects of JGroups that an OpenMPF administrator needs to be concerned with:\n\n\n\n\n\n\nOpenMPF uses the JGroups FILE_PING protocol for peer discovery. Each node uses files stored in \n$MPF_HOME/share/nodes/MPF_Channel\n. A node will write a file to that directory when the node-manager starts up, and read files in that directory to determine what other nodes are in the OpenMPF cluster.\n\n\n\n\n\n\nEach OpenMPF node uses network port 7800 for JGroups TCP communication. Please ensure that this port is open in the network firewall on each OpenMPF node, or the firewall is disabled.\n\n\n\n\n\n\nIf for some reason port 7800 is reserved by another process, JGroups will try the next available port, starting at 7801, then 7802, and so on. Note that within an OpenMPF \ndevelopment environment\n the Workflow Manager web application and node-manager process will run on the same machine, resulting in the use of port 7800 and 7801.\n\n\nWhen a node first starts up it will be in its own JGroups cluster. 
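While waiting for the merge to complete, you can inspect the FILE_PING discovery files to see which node-managers have announced themselves (a minimal sketch, assuming the default \n$MPF_HOME\n layout described above):\n\n\n# each node-manager writes one discovery file to this shared directory when it starts up\nls $MPF_HOME/share/nodes/MPF_Channel\n\n\n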
Within a minute it will be merged into the cluster with the other OpenMPF nodes. At that time it will be recognized by the Workflow Manager and become available for running services and processing jobs.", "title": "Node Guide" }, { @@ -1437,7 +1437,7 @@ }, { "location": "/Workflow-Manager-Architecture/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nINFO:\n This document describes the Workflow Manager architecture for C++ and Java batch processing. The Python batch processing architecture and C++ stream processing architecture use many of the same elements and concepts.\n\n\n\nWorkflow Manager Overview\n\n\nOpenMPF consists of three major pieces:\n\n\n\n\nA collection of \nComponents\n which process media\n\n\nA \nNode Manager\n, which launches and monitors running components in the system in a non-Docker deployment\n\n\nThe \nWorkflow Manager\n (WFM), which allows for the creation of jobs and manages the flow through active components\n\n\n\n\nThese pieces are supported by a number of modules which provide shared functionality, as shown in the dependency diagram below:\n\n\n\n\nThere are three general functional areas in the WFM:\n\n\n\n\nThe \nControllers\n are the primary entry point, accepting REST requests which trigger actions by the WFM\n\n\nThe \nWFM Services\n, which handle administrative tasks such as pipeline creation, node management, and log retrieval\n\n\nJob Management\n, which uses Camel routes to pass a job through the levels of processing\n\n\n\n\nThere are two different databases used by the WFM:\n\n\n\n\nA \nSQL database\n stores persistent data about jobs. This data includes:\n\n\nThe job ID\n\n\nThe start and stop time of the job\n\n\nThe exit status of the job\n\n\nJob priority\n\n\nJob input/outputs\n\n\n\n\n\n\nA \nRedis database\n for storing track and detection data generated by components as they process parts of the job in various stages of the pipeline.\n\n\n\n\nThe diagram below shows the functional areas of the WFM, the databases used by the WFM, and communication with components:\n\n\n\n\nControllers / Services\n\n\nThe controllers are all located \nhere\n.\n\n\nEvery controller provides a collection of REST endpoints which allow access either to a WFM service or to the job management flow. 
Only the JobController enters the job management flow.\n\n\nBasic Controllers\n\n\nThe table below lists the controllers:\n\n\n\n\n\n\n\n\nController Class\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nAdminComponentRegistrationController\n\n\nHandles component registration\n\n\n\n\n\n\nAdminLogsController\n\n\nAccesses the log content via REST\n\n\n\n\n\n\nAdminPropertySettingsController\n\n\nAllows access and modification of system properties\n\n\n\n\n\n\nAdminStatisticsController\n\n\nGenerates job statistics\n\n\n\n\n\n\nAtmosphereController\n\n\nUses Atmosphere to manage server-side push\n\n\n\n\n\n\nHomeController\n\n\nManages index page and version information\n\n\n\n\n\n\nJobController\n\n\nManages job creation and interaction\n\n\n\n\n\n\nLoginController\n\n\nManages login/logout and authentication\n\n\n\n\n\n\nMarkupController\n\n\nHandles retrieving markup results\n\n\n\n\n\n\nMediaController\n\n\nHandles uploading and downloading media files\n\n\n\n\n\n\nNodeController\n\n\nManages component services across nodes in a non-Docker deployment\n\n\n\n\n\n\nPipelineController\n\n\nHandles the creation and deletion of actions, tasks, and pipelines\n\n\n\n\n\n\nServerMediaController\n\n\nEnables selection and deselection of files at a directory level\n\n\n\n\n\n\nSystemMessageController\n\n\nManages system level messages, such as notifying users that a server restart is needed\n\n\n\n\n\n\n\n\nThe following sections describe some of the controllers in more detail.\n\n\nAdminComponentRegistrationController\n\n\nIn a non-Docker deployment, components can be uploaded as tar.gz packages containing all necessary component data. For more information on components, read \nOpenMPF Component API Overview\n.\n\n\nThe \nAdminComponentRegistrationController\n provides endpoints which allow:\n\n\n\n\nAccess to current component information\n\n\nUpload of new components\n\n\nRegistration and unregistration of components (note that components must be registered to be included in pipelines)\n\n\nDeletion of components\n\n\n\n\nJobController\n\n\nA job is a specific pipeline's tasks and actions applied to a set of media. The \nJobController\n allows:\n\n\n\n\nAccess to information about jobs in the system\n\n\nCreation of new jobs\n\n\nCancellation of existing jobs\n\n\nDownload of job output data\n\n\nResubmission of jobs (regardless of initial job status)\n\n\n\n\nMarkupController\n\n\nMarkup files are copies of the initial media input to a job with detections visually highlighted in the image or video frames. The \nMarkupController\n can provide lists of available Markup files, or it can download a specific file.\n\n\nMediaController\n\n\nThe \nMediaController\n enables upload and organization of media files within the WFM. It provides endpoints for media upload, and also for creation of folders to organize media files in the system. At this time, there are no endpoints which allow for deletion or reorganization of media files, since all media is shared by all users.\n\n\nNodeController\n\n\nOpenMPF uses multiple hosts to enable scalability and parallel processing. The \nNodeController\n provides access to host information and allows components to be deployed on nodes in a non-Docker deployment. One or more components can be installed on a node. The same component can be installed on multiple nodes. Each node can manage one or more services for each component.\n\n\nThe \nNodeController\n provides host information and component service deployment status. 
It also provides an endpoint to deploy a service on a node and an endpoint to stop a service.\n\n\nFor more information on nodes, please read the \nNode Configuration and Status\n section in the Development Environment Guide.\n\n\nPipelineController\n\n\nThe \nPipeline Controller\n allows for the creation, retrieval, and deletion of pipelines or any of their constituent parts. While actions, tasks, and pipelines may not be directly modified, they may be deleted and recreated.\n\n\nFor more information on pipelines, please read the \nCreate Custom Pipelines\n section in the User Guide.\n\n\nJob Management\n\n\nThe request to create a job begins at the \nJobController\n. From there, it is transformed and passed through multiple flows on its way to the component services. These services process the job then return information to the WFM for JSON output generation.\n\n\nThe diagram below shows the sequence of WFM operations for a job executing a single-stage pipeline.\n\n\n\n\nAfter the job request is validated and saved to the SQL database, it passes through multiple Apache Camel routes, each of which checks that the job is still valid (with no fatal errors or cancellations), and then invokes a series of transformations and processors specific to the route.\n\n\nApache Camel\n is an open-source framework that allows developers to build rule-based routing engines. Within OpenMPF, we use a \nJava DSL\n to define the routes. Every route functions independently, and communication between the routes is URI-based. OpenMPF uses ActiveMQ to handle its message traffic.\n\n\nMedia Retriever Route\n\n\nThe \nMedia Retriever Route\n ensures that the media for the job can all be found and accessed. It stores the media information on the server to ensure continued access.\n\n\nMedia Inspection Route\n\n\nThe \nMedia Inspection Route\n splits a single job with multiple media inputs into separate messages, one for each piece of media. For each piece of media, it collects metadata about the media, including MIME type, duration, frame rate, and orientation data.\n\n\nJob Router Route\n\n\nBy the time the \nJob Router Route\n route is invoked, the job has been persisted in the permanent SQL database.\n\n\nThis route uses the pipeline's flow to create the messages that are sent to the components. For large media files, it splits the job into smaller sub-jobs by logically breaking the media up into segments. Each segment has a start point and an end point (specified as a frame or time offset).\n\n\nThis route compiles properties for the job, media, and algorithm, and determines the next component that needs to be invoked. It then marshals the job into a serialized protobuf format and sends the message off to the component for processing.\n\n\nThis route may be invoked multiple times as future routes redirect back to the Job Router so that the job can be processed by the next component in the pipeline.\n\n\nOnce the job is completed, this route converts the aggregated track and detection data in Redis into a JSON output format. It then clears out all data in Redis for the job, updates the final job status in the SQL database, optionally uploads the JSON output object to an object storage server, and optionally makes a callback to the endpoint listed in the job request.\n\n\nDetection Response Route\n\n\nThe \nDetection Response Route\n is the re-entry point to the WFM. It unmarshals the protobuf responses from components and converts them into the Track and Detection objects used within the WFM. 
It then optionally performs each of the following actions: track merging, detection padding, detecting moving objects, and artifact extraction. It stores the track and detection data in the Redis database and optionally uploads artifacts to an object storage server. Then, it invokes the Job Router Route to see if any additional processing needs to be done.\n\n\nMarkup Response Route \n(Not Shown)\n\n\nMarkup files are copies of the initial media with any detections visually highlighted in the frame. The \nMarkup Response Route\n optionally uploads the markup files generated by the Markup component to an object storage server and persists the locations of these markup files in the SQL database.\n\n\nOther Routes \n(Not Shown)\n\n\nAdditionally, there is a \nDetection Cancellation Route\n for cancelling detection requests sent to components, and a \nMarkup Cancellation Route\n for cancelling requests sent to the Markup component.\n\n\nAlso, there is a \nDLQ Route\n for handling messages that appear in the ActiveMQ Dead Letter Queue (DLQ), which usually indicates a component failure or inability to deliver a message. In these cases, the job is terminated with an error condition.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nINFO:\n This document describes the Workflow Manager architecture for C++ and Java batch processing. The Python batch processing architecture and C++ stream processing architecture use many of the same elements and concepts.\n\n\n\nWorkflow Manager Overview\n\n\nOpenMPF consists of three major pieces:\n\n\n\n\nA collection of \nComponents\n which process media\n\n\nA \nNode Manager\n, which launches and monitors running components in the system in a non-Docker deployment\n\n\nThe \nWorkflow Manager\n (WFM), which allows for the creation of jobs and manages the flow through active components\n\n\n\n\nThese pieces are supported by a number of modules which provide shared functionality, as shown in the dependency diagram below:\n\n\n\n\nThere are three general functional areas in the WFM:\n\n\n\n\nThe \nControllers\n are the primary entry point, accepting REST requests which trigger actions by the WFM\n\n\nThe \nWFM Services\n, which handle administrative tasks such as pipeline creation, node management, and log retrieval\n\n\nJob Management\n, which uses Camel routes to pass a job through the levels of processing\n\n\n\n\nThere are two different databases used by the WFM:\n\n\n\n\nA \nSQL database\n stores persistent data about jobs. This data includes:\n\n\nThe job ID\n\n\nThe start and stop time of the job\n\n\nThe exit status of the job\n\n\nJob priority\n\n\nJob input/outputs\n\n\n\n\n\n\nA \nRedis database\n for storing track and detection data generated by components as they process parts of the job in various stages of the pipeline.\n\n\n\n\nThe diagram below shows the functional areas of the WFM, the databases used by the WFM, and communication with components:\n\n\n\n\nControllers / Services\n\n\nThe controllers are all located \nhere\n.\n\n\nEvery controller provides a collection of REST endpoints which allow access either to a WFM service or to the job management flow. 
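For example, job creation requests enter the flow through a REST endpoint that can be exercised directly with curl (a rough sketch; the pipeline name and media URI below are placeholders, and the full request format is described in the REST API documentation):\n\n\n# submit a job to a Workflow Manager running at localhost:8080 with the default credentials\ncurl -u mpf:mpf123 -H 'Content-Type: application/json' \\\n -d '{ \"pipelineName\": \"OCV FACE DETECTION PIPELINE\", \"media\": [ { \"mediaUri\": \"file:///home/mpf/sample.jpg\" } ] }' \\\n http://localhost:8080/rest/jobs\n\n\n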
Only the JobController enters the job management flow.\n\n\nBasic Controllers\n\n\nThe table below lists the controllers:\n\n\n\n\n\n\n\n\nController Class\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nAdminComponentRegistrationController\n\n\nHandles component registration\n\n\n\n\n\n\nAdminLogsController\n\n\nAccesses the log content via REST\n\n\n\n\n\n\nAdminPropertySettingsController\n\n\nAllows access and modification of system properties\n\n\n\n\n\n\nAdminStatisticsController\n\n\nGenerates job statistics\n\n\n\n\n\n\nAtmosphereController\n\n\nUses Atmosphere to manage server-side push\n\n\n\n\n\n\nHomeController\n\n\nManages index page and version information\n\n\n\n\n\n\nJobController\n\n\nManages job creation and interaction\n\n\n\n\n\n\nLoginController\n\n\nManages login/logout and authentication\n\n\n\n\n\n\nMarkupController\n\n\nHandles retrieving markup results\n\n\n\n\n\n\nMediaController\n\n\nHandles uploading and downloading media files\n\n\n\n\n\n\nNodeController\n\n\nManages component services across nodes in a non-Docker deployment\n\n\n\n\n\n\nPipelineController\n\n\nHandles the creation and deletion of actions, tasks, and pipelines\n\n\n\n\n\n\nServerMediaController\n\n\nEnables selection and deselection of files at a directory level\n\n\n\n\n\n\nSystemMessageController\n\n\nManages system level messages, such as notifying users that a server restart is needed\n\n\n\n\n\n\n\n\nThe following sections describe some of the controllers in more detail.\n\n\nAdminComponentRegistrationController\n\n\nIn a non-Docker deployment, components can be uploaded as tar.gz packages containing all necessary component data. For more information on components, read \nOpenMPF Component API Overview\n.\n\n\nThe \nAdminComponentRegistrationController\n provides endpoints which allow:\n\n\n\n\nAccess to current component information\n\n\nUpload of new components\n\n\nRegistration and unregistration of components (note that components must be registered to be included in pipelines)\n\n\nDeletion of components\n\n\n\n\nJobController\n\n\nA job is a specific pipeline's tasks and actions applied to a set of media. The \nJobController\n allows:\n\n\n\n\nAccess to information about jobs in the system\n\n\nCreation of new jobs\n\n\nCancellation of existing jobs\n\n\nDownload of job output data\n\n\nResubmission of jobs (regardless of initial job status)\n\n\n\n\nMarkupController\n\n\nMarkup files are copies of the initial media input to a job with detections visually highlighted in the image or video frames. The \nMarkupController\n can provide lists of available Markup files, or it can download a specific file.\n\n\nMediaController\n\n\nThe \nMediaController\n enables upload and organization of media files within the WFM. It provides endpoints for media upload, and also for creation of folders to organize media files in the system. At this time, there are no endpoints which allow for deletion or reorganization of media files, since all media is shared by all users.\n\n\nNodeController\n\n\nOpenMPF uses multiple hosts to enable scalability and parallel processing. The \nNodeController\n provides access to host information and allows components to be deployed on nodes in a non-Docker deployment. One or more components can be installed on a node. The same component can be installed on multiple nodes. Each node can manage one or more services for each component.\n\n\nThe \nNodeController\n provides host information and component service deployment status. 
It also provides an endpoint to deploy a service on a node and an endpoint to stop a service.\n\n\nFor more information on nodes, please read the \nNode Configuration and Status\n section in the Development Environment Guide.\n\n\nPipelineController\n\n\nThe \nPipeline Controller\n allows for the creation, retrieval, and deletion of pipelines or any of their constituent parts. While actions, tasks, and pipelines may not be directly modified, they may be deleted and recreated.\n\n\nFor more information on pipelines, please read the \nCreate Custom Pipelines\n section in the User Guide.\n\n\nJob Management\n\n\nThe request to create a job begins at the \nJobController\n. From there, it is transformed and passed through multiple flows on its way to the component services. These services process the job then return information to the WFM for JSON output generation.\n\n\nThe diagram below shows the sequence of WFM operations for a job executing a single-stage pipeline.\n\n\n\n\nAfter the job request is validated and saved to the SQL database, it passes through multiple Apache Camel routes, each of which checks that the job is still valid (with no fatal errors or cancellations), and then invokes a series of transformations and processors specific to the route.\n\n\nApache Camel\n is an open-source framework that allows developers to build rule-based routing engines. Within OpenMPF, we use a \nJava DSL\n to define the routes. Every route functions independently, and communication between the routes is URI-based. OpenMPF uses ActiveMQ to handle its message traffic.\n\n\nMedia Retriever Route\n\n\nThe \nMedia Retriever Route\n ensures that the media for the job can all be found and accessed. It stores the media information on the server to ensure continued access.\n\n\nMedia Inspection Route\n\n\nThe \nMedia Inspection Route\n splits a single job with multiple media inputs into separate messages, one for each piece of media. For each piece of media, it collects metadata about the media, including MIME type, duration, frame rate, and orientation data.\n\n\nJob Router Route\n\n\nBy the time the \nJob Router Route\n route is invoked, the job has been persisted in the permanent SQL database.\n\n\nThis route uses the pipeline's flow to create the messages that are sent to the components. For large media files, it splits the job into smaller sub-jobs by logically breaking the media up into segments. Each segment has a start point and an end point (specified as a frame or time offset).\n\n\nThis route compiles properties for the job, media, and algorithm, and determines the next component that needs to be invoked. It then marshals the job into a serialized protobuf format and sends the message off to the component for processing.\n\n\nThis route may be invoked multiple times as future routes redirect back to the Job Router so that the job can be processed by the next component in the pipeline.\n\n\nOnce the job is completed, this route converts the aggregated track and detection data in Redis into a JSON output format. It then clears out all data in Redis for the job, updates the final job status in the SQL database, optionally uploads the JSON output object to an object storage server, and optionally makes a callback to the endpoint listed in the job request.\n\n\nDetection Response Route\n\n\nThe \nDetection Response Route\n is the re-entry point to the WFM. It unmarshals the protobuf responses from components and converts them into the Track and Detection objects used within the WFM. 
It then optionally performs each of the following actions: track merging, detection padding, detecting moving objects, and artifact extraction. It stores the track and detection data in the Redis database and optionally uploads artifacts to an object storage server. Then, it invokes the Job Router Route to see if any additional processing needs to be done.\n\n\nMarkup Response Route \n(Not Shown)\n\n\nMarkup files are copies of the initial media with any detections visually highlighted in the frame. The \nMarkup Response Route\n optionally uploads the markup files generated by the Markup component to an object storage server and persists the locations of these markup files in the SQL database.\n\n\nOther Routes \n(Not Shown)\n\n\nAdditionally, there is a \nDetection Cancellation Route\n for cancelling detection requests sent to components, and a \nMarkup Cancellation Route\n for cancelling requests sent to the Markup component.\n\n\nAlso, there is a \nDLQ Route\n for handling messages that appear in the ActiveMQ Dead Letter Queue (DLQ), which usually indicates a component failure or inability to deliver a message. In these cases, the job is terminated with an error condition.", "title": "Workflow Manager Architecture" }, { @@ -1522,7 +1522,7 @@ }, { "location": "/CPP-Streaming-Component-API/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nWARNING:\n The C++ Streaming API is not complete, and there are no future development plans. Use at your own risk. The only way to make use of the functionality is through the REST API. It requires the Node Manager and does not work in a Docker deployment.\n\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Streaming Component API currently supports the development of \ndetection components\n, which are used detect objects in live RTSP or HTTP video streams.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\n\n\nEach frame of the video is processed as it is read from the stream. After processing enough frames to form a segment (for example, 100 frames), the component starts processing the next segment. Like with batch processing, each segment read from the stream is processed independently of the rest. No detection or track information is carried over between segments. Tracks are not merged across segments.\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executable\n. Developers create component libraries that encapsulate the component detection logic. 
Each instance of the Component Executable loads one of these libraries and uses it to service job requests sent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executable:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes functions on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executable is as follows:\n\n\nwhile (has_next_frame) {\n if (is_new_segment) {\n component->BeginSegment(video_segment_info)\n }\n activity_found = component->ProcessFrame(frame, frame_number) // Component logic does the work here\n if (activity_found && !already_sent_new_activity_alert_for_this_segment) {\n SendActivityAlert(frame_number)\n }\n if (is_end_of_segment) {\n streaming_video_tracks = component->EndSegment()\n SendSummaryReport(frame_number, streaming_video_tracks)\n }\n}\n\n\n\nEach instance of a Component Executable runs as a separate process. Generally, each process will execute a different detection algorithm that corresponds to a single stage in a detection pipeline. Each instance is started by the Node Manager as needed in order to execute a streaming video job. The Node Manager will monitor the process status and eventually stop it.\n\n\nThe Component Executable invokes functions on the Component Logic to get detection objects, and subsequently generates new track alerts and segment summary reports based on the output. These alerts and reports are sent to the WFM.\n\n\nA component developer implements a detection component by extending \nMPFStreamingDetectionComponent\n.\n\n\nGetting Started\n\n\nThe quickest way to get started with the C++ Streaming Component API is to first read the \nOpenMPF Component API Overview\n and then \nreview the source\n of an example OpenMPF C++ detection component that supports stream processing.\n\n\nDetection components are implemented by:\n\n\n\n\nExtending \nMPFStreamingDetectionComponent\n.\n\n\nBuilding the component into a shared object library. (See \nHelloWorldComponent CMakeLists.txt\n).\n\n\nPackaging the component into an OpenMPF-compliant .tar.gz file. (See \nComponent Packaging\n).\n\n\nRegistering the component with OpenMPF. (See \nComponent Registration\n).\n\n\n\n\nAPI Specification\n\n\nThe figure below presents a high-level component diagram of the C++ Streaming Component API:\n\n\n\n\nThe API consists of a \nDetection Component Interface\n and related input and output structures.\n\n\nDetection Component Interface\n\n\n\n\nMPFStreamingDetectionComponent\n - Abstract class that should be extended by all OpenMPF C++ detection components that perform stream processing.\n\n\n\n\nInputs\n\n\nThe following data structures contain details about a specific job, and a video segment (work unit) associated with that job:\n\n\n\n\nMPFStreamingVideoJob\n\n\nVideoSegmentInfo\n\n\n\n\nOutputs\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\n\n\nComponent Factory Functions\n\n\nEvery detection component must include the following macro in its implementation:\n\n\nEXPORT_MPF_STREAMING_COMPONENT(TYPENAME);\n\n\n\nThis creator macro takes the \nTYPENAME\n of the detection component (for example, \u201cStreamingHelloWorld\u201d). This macro creates the factory function that the OpenMPF Component Executable will call in order to instantiate the detection component. 
The creation function is called once, to obtain an instance of the component, after the component library has been loaded into memory.\n\n\nThis macro also creates the factory function that the Component Executable will use to delete that instance of the detection component.\n\n\nThis macro must be used outside of a class declaration, preferably at the bottom or top of a component source (.cpp) file.\n\n\nExample:\n\n\n// Note: Do not put the TypeName/Class Name in quotes\nEXPORT_MPF_STREAMING_COMPONENT(StreamingHelloWorld);\n\n\n\nDetection Component Interface\n\n\nThe \nMPFStreamingDetectionComponent\n class is the abstract class utilized by all OpenMPF C++ detection components that perform stream processing. This class provides functions for developers to integrate detection logic into OpenMPF.\n\n\nSee the latest source here.\n\n\nConstructor\n\n\nSuperclass constructor that must be invoked by the constructor of the component subclass.\n\n\n\n\nFunction Definition:\n\n\n\n\nMPFStreamingDetectionComponent(const MPFStreamingVideoJob &job)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFStreamingVideoJob &\n\n\nStructure containing details about the work to be performed. See \nMPFStreamingVideoJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nSampleComponent::SampleComponent(const MPFStreamingVideoJob &job)\n : MPFStreamingDetectionComponent(job)\n , hw_logger_(log4cxx::Logger::getLogger(\"SampleComponent\"))\n , job_name_(job.job_name) {\n\n LOG4CXX_INFO(hw_logger_, \"[\" << job_name_ << \"] Initialized SampleComponent component.\")\n}\n\n\n\nBeginSegment(VideoSegmentInfo)\n\n\nIndicate the beginning of a new video segment. The next call to \nProcessFrame()\n will be the first frame of the new segment. \nProcessFrame()\n will never be called before this function.\n\n\n\n\nFunction Definition:\n\n\n\n\nvoid BeginSegment(const VideoSegmentInfo &segment_info)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nsegment_info\n\n\nconst VideoSegmentInfo &\n\n\nStructure containing details about next video segment to process. See \nVideoSegmentInfo\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nvoid SampleComponent::BeginSegment(const VideoSegmentInfo &segment_info) {\n // Prepare for next segment\n}\n\n\n\nProcessFrame(Mat ...)\n\n\nProcess a single video frame for the current segment.\n\n\nMust return true when the component begins generating the first track for the current segment. After it returns true, the Component Executable will ignore the return value until the component begins processing the next segment.\n\n\nIf the \njob_properties\n map contained in the \nMPFStreamingVideoJob\n struct passed to the component constructor contains a \nCONFIDENCE_THRESHOLD\n entry, then this function should only return true for a detection with a quality value that meets or exceeds that threshold. After the Component Executable invokes \nEndSegment()\n to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded. [NOTE: In the future the C++ Streaming Component API may be updated to support \nQUALITY_SELECTION_THRESHOLD\n instead of \nCONFIDENCE_THRESHOLD\n.]\n\n\nNote that this function may not be invoked for every frame in the current segment. 
For example, if \nFRAME_INTERVAL = 2\n, then this function will only be invoked for every other frame since those are the only ones that need to be processed.\n\n\nAlso, it may not be invoked for the first or the last frame in the segment. For example, if \nFRAME_INTERVAL = 3\n and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment.\n\n\n\n\nFunction Definition:\n\n\n\n\nbool ProcessFrame(const cv::Mat &frame, int frame_number)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nframe\n\n\nconst cv::Mat &\n\n\nOpenCV class containing frame data. See \ncv::Mat\n\n\n\n\n\n\nframe_number\n\n\nint\n\n\nA unique frame number (0-based index). Guaranteed to be greater than the frame number passed to the last invocation of this function.\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nbool\n) True when the component begins generating the first track for the current segment; false otherwise.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nbool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {\n // Look for detections. Generate tracks and store them until the end of the segment.\n if (started_first_track_in_current_segment) {\n return true;\n } else {\n return false;\n }\n}\n\n\n\nEndSegment()\n\n\nIndicate the end of the current video segment. This will always be called after \nBeginSegment()\n. Generally, \nProcessFrame()\n will be called one or more times before this function, depending on the number of frames in the segment and the number of frames actually read from the stream.\n\n\nNote that the next time \nBeginSegment()\n is called, this component should start generating new tracks. Each time \nEndSegment()\n is called, it should return only the most recent track data for that segment. Tracks should not be carried over between segments. Do not append new detections to a preexisting track from the previous segment and return that cumulative track when this function is called.\n\n\n\n\nFunction Definition:\n\n\n\n\nvector<MPFVideoTrack> EndSegment()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nvector<MPFVideoTrack>\n) The \nMPFVideoTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nvector<MPFVideoTrack> SampleComponent::EndSegment() {\n // Perform any necessary cleanup before processing the next segment.\n // Return the collection of tracks generated for this segment only.\n}\n\n\n\nDetection Job Data Structures\n\n\nThe following data structures contain details about a specific job, and a video segment (work unit) associated with that job:\n\n\n\n\nMPFStreamingVideoJob\n\n\nVideoSegmentInfo\n\n\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\n\n\nMPFStreamingVideoJob\n\n\nStructure containing information about a job to be performed on a video stream.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFStreamingVideoJob(\n const string &job_name,\n const string &run_directory,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob_name\n\n\nconst string &\n\n\nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n\n\n\n\n\nrun_directory \n\n\nconst string &\n\n\nContains the full path of the parent folder above where the component is installed.
This parent folder is also known as the plugin folder.\n\n\n\n\n\n\njob_properties \n\n\nconst Properties &\n\n\nContains a map of \n\n which represents the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job. \n Note: The job_properties map may not contain the full set of job properties. For properties not contained in the map, the component must use a default value.\n\n\n\n\n\n\nmedia_properties \n\n\nconst Properties &\n\n\nContains a map of \n\n of metadata about the media associated with the job. The entries in the map vary depending on the type of media. Refer to the type-specific job structures below.\n\n\n\n\n\n\n\n\nVideoSegmentInfo\n\n\nStructure containing information about a segment of a video stream to be processed. A segment is a subset of contiguous video frames.\n\n\n\n\nConstructor(s):\n\n\n\n\nVideoSegmentInfo(\n int segment_number,\n int start_frame,\n int end_frame,\n int frame_width,\n int frame_height\n}\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nsegment_number\n\n\nint\n\n\nA unique segment number (0-based index).\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe frame number (0-based index) corresponding to the first frame in this segment.\n\n\n\n\n\n\nend_frame\n\n\nint\n\n\nThe frame number (0-based index) corresponding to the last frame in this segment.\n\n\n\n\n\n\nframe_width\n\n\nint\n\n\nThe height of each frame in this segment.\n\n\n\n\n\n\nframe_height\n\n\nint\n\n\nThe width of each frame in this segment.\n\n\n\n\n\n\n\n\nDetection Job Result Classes\n\n\nMPFImageLocation\n\n\nStructure used to store the location of detected objects in a single video frame (image).\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFImageLocation()\nMPFImageLocation(\n int x_left_upper,\n int y_left_upper,\n int width,\n int height,\n float confidence = -1,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nx_left_upper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\ny_left_upper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. 
For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is \nCLASSIFICATION\n and the value is the type of object detected.\n\n\nMPFImageLocation detection;\ndetection.x_left_upper = 0;\ndetection.y_left_upper = 0;\ndetection.width = 100;\ndetection.height = 100;\ndetection.confidence = 1.0;\ndetection.detection_properties[\"CLASSIFICATION\"] = \"backpack\";\n\n\n\nMPFVideoTrack\n\n\nStructure used to store the location of detected objects in a video file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFVideoTrack()\nMPFVideoTrack(\n int start_frame,\n int stop_frame,\n float confidence = -1,\n map frame_locations,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstop_frame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nframe_locations\n\n\nmap\n\n\nA map of individual detections. The key for each map entry is the frame number where the detection was generated, and the value is a \nMPFImageLocation\n calculated as if that frame was a still image. Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFVideoTrack.detection_properties\n do not show up in the JSON output object or are used by the WFM in any way.\n\n\n\n\nA component that detects text can add an entry to \ndetection_properties\n where the key is \nTRANSCRIPT\n and the value is a string representing the text found in the video segment.\n\n\nMPFVideoTrack track;\ntrack.start_frame = 0;\ntrack.stop_frame = 5;\ntrack.confidence = 1.0;\ntrack.frame_locations = frame_locations;\ntrack.detection_properties[\"TRANSCRIPT\"] = \"RE5ULTS FR0M A TEXT DETECTER\";\n\n\n\nC++ Component Build Environment\n\n\nA C++ component library must be built for the same C++ compiler and Linux\nversion that is used by the OpenMPF Component Executable. This is to ensure\ncompatibility between the executable and the library functions at the\nApplication Binary Interface (ABI) level. At this writing, the OpenMPF runs on\nUbuntu 20.04 (kernel version 5.13.0-30), and the OpenMPF C++ Component\nExecutable is built with g++ (GCC) 9.3.0-17.\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. 
This includes all other non-standard libraries used by the component (aside from the standard Linux and C++ libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nThrow Exceptions\n\n\nUnlike the \nC++ Batch Component API\n, none of the the C++ Streaming Component API functions return an \nMPFDetectionError\n. Instead, streaming components should throw an exception when a non-recoverable error occurs. The exception should be an instantiation or subclass of \nstd::exception\n and provide a descriptive error message that can be retrieved using \nwhat()\n. For example:\n\n\nbool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {\n // Something bad happened\n throw std::exception(\"Error: Cannot do X with value Y.\");\n}\n\n\n\nThe exception will be handled by the Component Executable. It will immediately invoke \nEndSegment()\n to retrieve the current tracks. Then the component process and streaming job will be terminated.\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through multiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input (i.e. when processing a segment with the same \nVideoSegmentInfo\n).\n\n\nGPU Support\n\n\nFor components that want to take advantage of NVIDA GPU processors, please read the \nGPU Support Guide\n. Also ensure that your build environment has the NVIDIA CUDA Toolkit installed, as described in the \nBuild Environment Setup Guide\n.\n\n\nComponent Structure\n\n\nIt is recommended that C++ components are organized according to the following directory structure:\n\n\ncomponentName\n\u251c\u2500\u2500 config - Component-specific configuration files\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 lib\n \u2514\u2500\u2500libComponentName.so - Compiled component library\n\n\n\nOnce built, components should be packaged into a .tar.gz containing the contents of the directory shown above.\n\n\nLogging\n\n\nIt is recommended to use \nApache log4cxx\n for\nOpenMPF Component logging. Components using log4cxx should not configure logging themselves.\nThe Component Executor will configure log4cxx globally. Components should call\n\nlog4cxx::Logger::getLogger(\"\")\n to a get a reference to the logger. If you\nare using a different logging framework, you should make sure its behavior is similar to how\nthe Component Executor configures log4cxx as described below.\n\n\nThe following log LEVELs are supported: \nFATAL, ERROR, WARN, INFO, DEBUG, TRACE\n.\nThe \nLOG_LEVEL\n environment variable can be set to one of the log levels to change the logging\nverbosity. When \nLOG_LEVEL\n is absent, \nINFO\n is used.\n\n\nNote that multiple instances of the same component can log to the same file.\nAlso, logging content can span multiple lines.\n\n\nThe logger will write to both standard error and\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log\n.\n\n\nEach log statement will take the form:\n\nDATE TIME LEVEL CONTENT\n\n\nFor example:\n\n2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. 
All Rights Reserved.

**WARNING:** The C++ Streaming API is not complete, and there are no future development plans. Use at your own risk. The only way to make use of the functionality is through the REST API. It requires the Node Manager and does not work in a Docker deployment.

# API Overview

In OpenMPF, a **component** is a plugin that receives jobs (containing media), processes that media, and returns results.

The OpenMPF Streaming Component API currently supports the development of **detection components**, which are used to detect objects in live RTSP or HTTP video streams.

Using this API, detection components can be built to provide:

* Detection (Localizing an object)
* Tracking (Localizing an object across multiple frames)
* Classification (Detecting the type of object and optionally localizing that object)

Each frame of the video is processed as it is read from the stream. After processing enough frames to form a segment (for example, 100 frames), the component starts processing the next segment. As with batch processing, each segment read from the stream is processed independently of the rest. No detection or track information is carried over between segments. Tracks are not merged across segments.

## How Components Integrate into OpenMPF

Components are integrated into OpenMPF through the use of OpenMPF's **Component Executable**. Developers create component libraries that encapsulate the component detection logic. Each instance of the Component Executable loads one of these libraries and uses it to service job requests sent by the OpenMPF Workflow Manager (WFM).

The Component Executable:

1. Receives and parses job requests from the WFM
2. Invokes functions on the component library to obtain detection results
3. Populates and sends the respective responses to the WFM

The basic pseudocode for the Component Executable is as follows:

```
while (has_next_frame) {
    if (is_new_segment) {
        component->BeginSegment(video_segment_info)
    }
    activity_found = component->ProcessFrame(frame, frame_number) // Component logic does the work here
    if (activity_found && !already_sent_new_activity_alert_for_this_segment) {
        SendActivityAlert(frame_number)
    }
    if (is_end_of_segment) {
        streaming_video_tracks = component->EndSegment()
        SendSummaryReport(frame_number, streaming_video_tracks)
    }
}
```

Each instance of a Component Executable runs as a separate process. Generally, each process will execute a different detection algorithm that corresponds to a single stage in a detection pipeline. Each instance is started by the Node Manager as needed in order to execute a streaming video job. The Node Manager will monitor the process status and eventually stop it.

The Component Executable invokes functions on the Component Logic to get detection objects, and subsequently generates new track alerts and segment summary reports based on the output. These alerts and reports are sent to the WFM.

A component developer implements a detection component by extending `MPFStreamingDetectionComponent`.

## Getting Started

The quickest way to get started with the C++ Streaming Component API is to first read the OpenMPF Component API Overview and then review the source of an example OpenMPF C++ detection component that supports stream processing.

Detection components are implemented by:

1. Extending `MPFStreamingDetectionComponent`.
2. Building the component into a shared object library.
   (See the `HelloWorldComponent` CMakeLists.txt).
3. Packaging the component into an OpenMPF-compliant .tar.gz file. (See Component Packaging).
4. Registering the component with OpenMPF. (See Component Registration).

# API Specification

The figure below presents a high-level component diagram of the C++ Streaming Component API:

The API consists of a *Detection Component Interface* and related input and output structures.

## Detection Component Interface

* `MPFStreamingDetectionComponent` - Abstract class that should be extended by all OpenMPF C++ detection components that perform stream processing.

## Inputs

The following data structures contain details about a specific job, and a video segment (work unit) associated with that job:

* `MPFStreamingVideoJob`
* `VideoSegmentInfo`

## Outputs

The following data structures define detection results:

* `MPFImageLocation`
* `MPFVideoTrack`

## Component Factory Functions

Every detection component must include the following macro in its implementation:

```c++
EXPORT_MPF_STREAMING_COMPONENT(TYPENAME);
```

This creator macro takes the `TYPENAME` of the detection component (for example, "StreamingHelloWorld"). This macro creates the factory function that the OpenMPF Component Executable will call in order to instantiate the detection component. The creation function is called once, to obtain an instance of the component, after the component library has been loaded into memory.

This macro also creates the factory function that the Component Executable will use to delete that instance of the detection component.

This macro must be used outside of a class declaration, preferably at the top or bottom of a component source (.cpp) file.

Example:

```c++
// Note: Do not put the TypeName/Class Name in quotes
EXPORT_MPF_STREAMING_COMPONENT(StreamingHelloWorld);
```

# Detection Component Interface

The `MPFStreamingDetectionComponent` class is the abstract class utilized by all OpenMPF C++ detection components that perform stream processing. This class provides functions for developers to integrate detection logic into OpenMPF.

See the latest source here.

## Constructor

Superclass constructor that must be invoked by the constructor of the component subclass.

* Function Definition:

```c++
MPFStreamingDetectionComponent(const MPFStreamingVideoJob &job)
```

* Parameters:

| Parameter | Data Type | Description |
|---|---|---|
| job | `const MPFStreamingVideoJob &` | Structure containing details about the work to be performed. See `MPFStreamingVideoJob`. |

* Returns: none

* Example:

```c++
SampleComponent::SampleComponent(const MPFStreamingVideoJob &job)
        : MPFStreamingDetectionComponent(job)
        , hw_logger_(log4cxx::Logger::getLogger("SampleComponent"))
        , job_name_(job.job_name) {

    LOG4CXX_INFO(hw_logger_, "[" << job_name_ << "] Initialized SampleComponent component.")
}
```
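To see how the factory macro and the constructor fit together, the following header sketch shows one possible shape for a component subclass. This is a minimal sketch, not a definitive implementation: the `MPFStreamingDetectionComponent.h` header name and the `MPF::COMPONENT` namespace are assumed from OpenMPF C++ SDK conventions, and `SampleComponent` and all of its members are hypothetical names used only for illustration.

```c++
// SampleComponent.h - hypothetical subclass sketch; only the API types,
// the overridden functions, and the macro come from this API.
#pragma once

#include <string>
#include <vector>
#include <log4cxx/logger.h>
#include <opencv2/core.hpp>
#include <MPFStreamingDetectionComponent.h>  // assumed SDK header name

class SampleComponent : public MPF::COMPONENT::MPFStreamingDetectionComponent {
public:
    explicit SampleComponent(const MPF::COMPONENT::MPFStreamingVideoJob &job);

    void BeginSegment(const MPF::COMPONENT::VideoSegmentInfo &segment_info) override;

    bool ProcessFrame(const cv::Mat &frame, int frame_number) override;

    std::vector<MPF::COMPONENT::MPFVideoTrack> EndSegment() override;

private:
    log4cxx::LoggerPtr hw_logger_;
    std::string job_name_;
    double confidence_threshold_ = -1;            // parsed from job.job_properties
    bool reported_activity_in_segment_ = false;   // reset in BeginSegment()
    int segment_start_frame_ = 0;                 // cached from VideoSegmentInfo
    int segment_end_frame_ = 0;
    std::vector<MPF::COMPONENT::MPFVideoTrack> tracks_;  // current segment only
};

// In SampleComponent.cpp, outside any class declaration:
// EXPORT_MPF_STREAMING_COMPONENT(SampleComponent);
```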
## BeginSegment(VideoSegmentInfo)

Indicate the beginning of a new video segment. The next call to `ProcessFrame()` will be the first frame of the new segment. `ProcessFrame()` will never be called before this function.

* Function Definition:

```c++
void BeginSegment(const VideoSegmentInfo &segment_info)
```

* Parameters:

| Parameter | Data Type | Description |
|---|---|---|
| segment_info | `const VideoSegmentInfo &` | Structure containing details about the next video segment to process. See `VideoSegmentInfo`. |

* Returns: none

* Example:

```c++
void SampleComponent::BeginSegment(const VideoSegmentInfo &segment_info) {
    // Prepare for next segment
}
```

## ProcessFrame(Mat ...)

Process a single video frame for the current segment.

Must return true when the component begins generating the first track for the current segment. After it returns true, the Component Executable will ignore the return value until the component begins processing the next segment.

If the `job_properties` map contained in the `MPFStreamingVideoJob` struct passed to the component constructor contains a `CONFIDENCE_THRESHOLD` entry, then this function should only return true for a detection with a quality value that meets or exceeds that threshold. After the Component Executable invokes `EndSegment()` to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded. [NOTE: In the future the C++ Streaming Component API may be updated to support `QUALITY_SELECTION_THRESHOLD` instead of `CONFIDENCE_THRESHOLD`.]

Note that this function may not be invoked for every frame in the current segment. For example, if `FRAME_INTERVAL = 2`, then this function will only be invoked for every other frame since those are the only ones that need to be processed.

Also, it may not be invoked for the first or last frame in the segment. For example, if `FRAME_INTERVAL = 3` and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment.

* Function Definition:

```c++
bool ProcessFrame(const cv::Mat &frame, int frame_number)
```

* Parameters:

| Parameter | Data Type | Description |
|---|---|---|
| frame | `const cv::Mat &` | OpenCV class containing frame data. See `cv::Mat`. |
| frame_number | `int` | A unique frame number (0-based index). Guaranteed to be greater than the frame number passed to the last invocation of this function. |

* Returns: (`bool`) True when the component begins generating the first track for the current segment; false otherwise.

* Example:

```c++
bool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {
    // Look for detections. Generate tracks and store them until the end of the segment.
    if (started_first_track_in_current_segment) {
        return true;
    } else {
        return false;
    }
}
```
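To make the return-value and threshold contract concrete, here is a sketch of a fuller `ProcessFrame()` implementation, continuing the hypothetical `SampleComponent` from the earlier header sketch. `DetectObjects()` and `AddToCurrentTracks()` stand in for real detection logic and are not part of the API; `confidence_threshold_` is assumed to have been parsed from `job_properties` in the constructor.

```c++
// A minimal sketch, assuming hypothetical helpers; not a definitive implementation.
bool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {
    for (const MPFImageLocation &loc : DetectObjects(frame)) {
        AddToCurrentTracks(loc, frame_number);  // hypothetical per-segment bookkeeping

        // Only report activity for detections that meet the confidence threshold.
        // Once true has been returned for this segment, the Component Executable
        // ignores the return value until the next segment anyway.
        if (loc.confidence >= confidence_threshold_) {
            reported_activity_in_segment_ = true;
        }
    }
    return reported_activity_in_segment_;
}
```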
## EndSegment()

Indicate the end of the current video segment. This will always be called after `BeginSegment()`. Generally, `ProcessFrame()` will be called one or more times before this function, depending on the number of frames in the segment and the number of frames actually read from the stream.

Note that the next time `BeginSegment()` is called, this component should start generating new tracks. Each time `EndSegment()` is called, it should return only the most recent track data for that segment. Tracks should not be carried over between segments. Do not append new detections to a preexisting track from the previous segment and return that cumulative track when this function is called.

* Function Definition:

```c++
vector<MPFVideoTrack> EndSegment()
```

* Parameters: none

* Returns: (`vector<MPFVideoTrack>`) The `MPFVideoTrack` data for each detected object.

* Example:

```c++
vector<MPFVideoTrack> SampleComponent::EndSegment() {
    // Perform any necessary cleanup before processing the next segment.
    // Return the collection of tracks generated for this segment only.
}
```

# Detection Job Data Structures

The following data structures contain details about a specific job, and a video segment (work unit) associated with that job:

* `MPFStreamingVideoJob`
* `VideoSegmentInfo`

The following data structures define detection results:

* `MPFImageLocation`
* `MPFVideoTrack`

## MPFStreamingVideoJob

Structure containing information about a job to be performed on a video stream.

* Constructor(s):

```c++
MPFStreamingVideoJob(
    const string &job_name,
    const string &run_directory,
    const Properties &job_properties,
    const Properties &media_properties)
```

* Members:

| Member | Data Type | Description |
|---|---|---|
| job_name | `const string &` | A specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes. |
| run_directory | `const string &` | Contains the full path of the parent folder above where the component is installed. This parent folder is also known as the plugin folder. |
| job_properties | `const Properties &` | Contains a map of `<string, string>` which represents the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the Component Descriptor Reference. Values are determined when creating a pipeline or when submitting a job. <br> Note: The job_properties map may not contain the full set of job properties. For properties not contained in the map, the component must use a default value. |
| media_properties | `const Properties &` | Contains a map of `<string, string>` of metadata about the media associated with the job. The entries in the map vary depending on the type of media. Refer to the type-specific job structures below. |
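Because the `job_properties` map may omit properties, a component needs a fallback to a default when reading it. The helper below is a hypothetical sketch, not part of the API; it assumes `Properties` is a map of `<string, string>` as described in the table above.

```c++
#include <map>
#include <string>

// Hypothetical helper: read a numeric job property, falling back to a default
// when the property is absent from the map.
double GetDoubleProperty(const std::map<std::string, std::string> &props,
                         const std::string &key, double default_value) {
    auto iter = props.find(key);
    return iter == props.end() ? default_value : std::stod(iter->second);
}

// e.g., in the component constructor:
// confidence_threshold_ = GetDoubleProperty(job.job_properties, "CONFIDENCE_THRESHOLD", -1.0);
```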
## VideoSegmentInfo

Structure containing information about a segment of a video stream to be processed. A segment is a subset of contiguous video frames.

* Constructor(s):

```c++
VideoSegmentInfo(
    int segment_number,
    int start_frame,
    int end_frame,
    int frame_width,
    int frame_height)
```

* Members:

| Member | Data Type | Description |
|---|---|---|
| segment_number | `int` | A unique segment number (0-based index). |
| start_frame | `int` | The frame number (0-based index) corresponding to the first frame in this segment. |
| end_frame | `int` | The frame number (0-based index) corresponding to the last frame in this segment. |
| frame_width | `int` | The width of each frame in this segment. |
| frame_height | `int` | The height of each frame in this segment. |
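Continuing the hypothetical `SampleComponent` sketch, a `BeginSegment()` implementation might use `VideoSegmentInfo` to reset per-segment state and cache the segment bounds; all member names here are hypothetical.

```c++
// A minimal sketch; not a definitive implementation.
void SampleComponent::BeginSegment(const VideoSegmentInfo &segment_info) {
    // Reset per-segment state; tracks must never carry over between segments.
    tracks_.clear();
    reported_activity_in_segment_ = false;

    // Cache segment bounds, e.g. to label tracks or to pre-allocate buffers
    // sized for segment_info.frame_width x segment_info.frame_height frames.
    segment_start_frame_ = segment_info.start_frame;
    segment_end_frame_ = segment_info.end_frame;
}
```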
# Detection Job Result Classes

## MPFImageLocation

Structure used to store the location of detected objects in a single video frame (image).

* Constructor(s):

```c++
MPFImageLocation()
MPFImageLocation(
    int x_left_upper,
    int y_left_upper,
    int width,
    int height,
    float confidence = -1,
    const Properties &detection_properties = {})
```

* Members:

| Member | Data Type | Description |
|---|---|---|
| x_left_upper | `int` | Upper left X coordinate of the detected object. |
| y_left_upper | `int` | Upper left Y coordinate of the detected object. |
| width | `int` | The width of the detected object. |
| height | `int` | The height of the detected object. |
| confidence | `float` | Represents the "quality" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0. |
| detection_properties | `Properties &` | Optional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS. |

* Example:

A component that performs generic object classification can add an entry to `detection_properties` where the key is `CLASSIFICATION` and the value is the type of object detected.

```c++
MPFImageLocation detection;
detection.x_left_upper = 0;
detection.y_left_upper = 0;
detection.width = 100;
detection.height = 100;
detection.confidence = 1.0;
detection.detection_properties["CLASSIFICATION"] = "backpack";
```

## MPFVideoTrack

Structure used to store the location of detected objects in a video file.

* Constructor(s):

```c++
MPFVideoTrack()
MPFVideoTrack(
    int start_frame,
    int stop_frame,
    float confidence = -1,
    map<int, MPFImageLocation> frame_locations = {},
    const Properties &detection_properties = {})
```

* Members:

| Member | Data Type | Description |
|---|---|---|
| start_frame | `int` | The first frame number (0-based index) that contained the detected object. |
| stop_frame | `int` | The last frame number (0-based index) that contained the detected object. |
| frame_locations | `map<int, MPFImageLocation>` | A map of individual detections. The key for each map entry is the frame number where the detection was generated, and the value is a `MPFImageLocation` calculated as if that frame was a still image. Note that a key-value pair is *not* required for every frame between the track start frame and track stop frame. |
| confidence | `float` | Represents the "quality" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0. |
| detection_properties | `Properties &` | Optional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS. |

* Example:

> **NOTE:** Currently, `MPFVideoTrack.detection_properties` do not show up in the JSON output object, nor are they used by the WFM in any way.

A component that detects text can add an entry to `detection_properties` where the key is `TRANSCRIPT` and the value is a string representing the text found in the video segment.

```c++
MPFVideoTrack track;
track.start_frame = 0;
track.stop_frame = 5;
track.confidence = 1.0;
track.frame_locations = frame_locations;
track.detection_properties["TRANSCRIPT"] = "RE5ULTS FR0M A TEXT DETECTER";
```

# C++ Component Build Environment

A C++ component library must be built for the same C++ compiler and Linux version that is used by the OpenMPF Component Executable. This is to ensure compatibility between the executable and the library functions at the Application Binary Interface (ABI) level. At this writing, OpenMPF runs on Ubuntu 20.04 (kernel version 5.13.0-30), and the OpenMPF C++ Component Executable is built with g++ (GCC) 9.3.0-17.

Components should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and C++ libraries), and any configuration or data files.

# Component Development Best Practices

## Throw Exceptions

Unlike the C++ Batch Component API, none of the C++ Streaming Component API functions returns an `MPFDetectionError`. Instead, streaming components should throw an exception when a non-recoverable error occurs. The exception should be an instantiation or subclass of `std::exception` and provide a descriptive error message that can be retrieved using `what()`. For example:

```c++
bool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {
    // Something bad happened. std::runtime_error derives from std::exception
    // and, unlike std::exception itself, accepts a descriptive message.
    throw std::runtime_error("Error: Cannot do X with value Y.");
}
```

The exception will be handled by the Component Executable. It will immediately invoke `EndSegment()` to retrieve the current tracks. Then the component process and streaming job will be terminated.

## Single-threaded Operation

Implementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through multiple instantiations of the component, each running as a separate service.

## Stateless Behavior

OpenMPF components should be stateless in operation and give identical output for a provided input (i.e., when processing a segment with the same `VideoSegmentInfo`).

## GPU Support

For components that want to take advantage of NVIDIA GPU processors, please read the GPU Support Guide.
Also ensure that your build environment has the NVIDIA CUDA Toolkit installed, as described in the Build Environment Setup Guide.

## Component Structure

It is recommended that C++ components are organized according to the following directory structure:

```
componentName
├── config - Component-specific configuration files
├── descriptor
│   └── descriptor.json
└── lib
    └── libComponentName.so - Compiled component library
```

Once built, components should be packaged into a .tar.gz file containing the contents of the directory shown above.

## Logging

It is recommended to use Apache log4cxx for OpenMPF Component logging. Components using log4cxx should not configure logging themselves. The Component Executor will configure log4cxx globally. Components should call `log4cxx::Logger::getLogger("<component_name>")` to get a reference to the logger. If you are using a different logging framework, you should make sure its behavior is similar to how the Component Executor configures log4cxx as described below.

The following log LEVELs are supported: `FATAL, ERROR, WARN, INFO, DEBUG, TRACE`. The `LOG_LEVEL` environment variable can be set to one of the log levels to change the logging verbosity. When `LOG_LEVEL` is absent, `INFO` is used.

Note that multiple instances of the same component can log to the same file. Also, logging content can span multiple lines.

The logger will write to both standard error and `${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/<component_name>.log`.

Each log statement will take the form: `DATE TIME LEVEL CONTENT`

For example: `2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]`
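As a minimal, hypothetical illustration of the logging conventions above (assuming the Component Executor has already configured log4cxx, so the component only looks up a logger by name; "SampleComponent" is a stand-in):

```c++
#include <log4cxx/logger.h>

int main() {
    // Look up a named logger; configuration (appenders, LOG_LEVEL) is
    // handled globally by the Component Executor, not by the component.
    log4cxx::LoggerPtr logger = log4cxx::Logger::getLogger("SampleComponent");
    LOG4CXX_INFO(logger, "Starting sample-component: [ OK ]");
    LOG4CXX_WARN(logger, "Frame " << 42 << " could not be read; skipping.");
    return 0;
}
```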