From 5329213d97822290f131ac00c9356037b91bb4d2 Mon Sep 17 00:00:00 2001
From: cnowacki
Date: Mon, 10 Feb 2025 10:21:49 -0500
Subject: [PATCH 1/6] added description of
FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST
---
docs/docs/Feed-Forward-Guide.md | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/docs/docs/Feed-Forward-Guide.md b/docs/docs/Feed-Forward-Guide.md
index 214b8550bb16..2b3ef1baa6dc 100644
--- a/docs/docs/Feed-Forward-Guide.md
+++ b/docs/docs/Feed-Forward-Guide.md
@@ -65,8 +65,8 @@ the first stage, the subsequent stages will inherit the effects of those propert
# Feed Forward Properties
-Components that support feed forward have two algorithm properties that control the feed forward behavior:
-`FEED_FORWARD_TYPE` and `FEED_FORWARD_TOP_QUALITY_COUNT`.
+Components that support feed forward have three algorithm properties that control the feed forward behavior:
+`FEED_FORWARD_TYPE`, `FEED_FORWARD_TOP_QUALITY_COUNT`, and `FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST`.
`FEED_FORWARD_TYPE` can be set to the following values:
@@ -96,6 +96,14 @@ the [Quality Selection Guide](Quality-Selection-Guide/index.html). If the track
of the detections in the track will be processed. If one or more detections have the same quality value, then the
detection(s) with the lower frame index take precedence.
+`FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST` allows you to include detections based on properties in addition to those
+from the `QUALITY_SELECTION_PROPERTY`. To do this, set the `BEST_DETECTION_PROPERTY_NAME_LIST` property in the
+previous stage of processing. That property is a semicolon-separated list of detection property names.
+For example, if you want to use something other than `CONFIDENCE` for the `QUALITY_SELECTION_PROPERTY`, but you also want
+to include the detection with the highest confidence in your feed forward track, then you can set the
+`BEST_DETECTION_PROPERTY_NAME_LIST` property to `"CONFIDENCE"` in the first stage, and then set the
+`FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST` property to `"BEST_CONFIDENCE"` in the second.
+
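+As an illustrative sketch (the action names below are hypothetical, and `QUALITY_SELECTION_PROPERTY` is assumed to be
+set to a detection property other than `CONFIDENCE` at the job level), the two stages might be configured as follows:
+
+```
+STAGE ONE DETECTION ACTION
++ BEST_DETECTION_PROPERTY_NAME_LIST: CONFIDENCE
+
+STAGE TWO DETECTION (WITH FEED FORWARD REGION) ACTION
++ FEED_FORWARD_TYPE: REGION
++ FEED_FORWARD_TOP_QUALITY_COUNT: 5
++ FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST: BEST_CONFIDENCE
+```
+
+With this configuration, the second stage would process the top 5 detections by quality from each feed forward track,
+plus the detection that had the highest confidence in the first stage.
+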
# Superset Region
From a68655c8e0801675b866dcd1fc5d5f14291a99be Mon Sep 17 00:00:00 2001
From: cnowacki
Date: Mon, 10 Feb 2025 10:23:22 -0500
Subject: [PATCH 2/6] built the docs
---
docs/site/Feed-Forward-Guide/index.html | 11 ++++-
docs/site/index.html | 2 +-
docs/site/search/search_index.json | 8 ++--
docs/site/sitemap.xml | 60 ++++++++++++-------------
4 files changed, 44 insertions(+), 37 deletions(-)
diff --git a/docs/site/Feed-Forward-Guide/index.html b/docs/site/Feed-Forward-Guide/index.html
index 133dc0a59588..9018077d13c2 100644
--- a/docs/site/Feed-Forward-Guide/index.html
+++ b/docs/site/Feed-Forward-Guide/index.html
@@ -324,8 +324,8 @@ First Stage and Combining Properti
stage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from
the first stage, the subsequent stages will inherit the effects of those properties set on the first stage.
-Components that support feed forward have two algorithm properties that control the feed forward behavior:
-FEED_FORWARD_TYPE and FEED_FORWARD_TOP_QUALITY_COUNT.
+Components that support feed forward have three algorithm properties that control the feed forward behavior:
+FEED_FORWARD_TYPE, FEED_FORWARD_TOP_QUALITY_COUNT, and FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST.
NONE: Feed forward is disabled (default setting).
@@ -353,6 +353,13 @@ Feed Forward Properties
the Quality Selection Guide. If the track contains less than 5 detections then all
of the detections in the track will be processed. If one or more detections have the same quality value, then the
detection(s) with the lower frame index take precedence.
+FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST allows you to include detections based on properties in addition to those
+from the QUALITY_SELECTION_PROPERTY. To do this, set the BEST_DETECTION_PROPERTY_NAME_LIST property in the
+previous stage of processing. That property is a semicolon-separated list of detection property names.
+For example, if you want to use something other than CONFIDENCE for the QUALITY_SELECTION_PROPERTY, but you also want
+to include the detection with the highest confidence in your feed forward track, then you can set the
+BEST_DETECTION_PROPERTY_NAME_LIST property to "CONFIDENCE" in the first stage, and then set the
+FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST property to "BEST_CONFIDENCE" in the second.
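+As an illustrative sketch (the action names below are hypothetical, and QUALITY_SELECTION_PROPERTY is assumed to be
+set to a detection property other than CONFIDENCE at the job level), the two stages might be configured as follows:
+STAGE ONE DETECTION ACTION
++ BEST_DETECTION_PROPERTY_NAME_LIST: CONFIDENCE
+STAGE TWO DETECTION (WITH FEED FORWARD REGION) ACTION
++ FEED_FORWARD_TYPE: REGION
++ FEED_FORWARD_TOP_QUALITY_COUNT: 5
++ FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST: BEST_CONFIDENCE
+With this configuration, the second stage would process the top 5 detections by quality from each feed forward track,
+plus the detection that had the highest confidence in the first stage.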
Superset Region
A “superset region” is the smallest region of interest that contains all of the detections for all of the frames in a
track. This is also known as a “union” or “minimum bounding
diff --git a/docs/site/index.html b/docs/site/index.html
index f5964c5c42d2..12093c5f19a5 100644
--- a/docs/site/index.html
+++ b/docs/site/index.html
@@ -395,5 +395,5 @@ Overview
diff --git a/docs/site/search/search_index.json b/docs/site/search/search_index.json
index a0449ad2e167..4332505d35b6 100644
--- a/docs/site/search/search_index.json
+++ b/docs/site/search/search_index.json
@@ -2,12 +2,12 @@
"docs": [
{
"location": "/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nOverview\n\n\nThere are numerous video and image exploitation capabilities available today. The Open Media Processing Framework (OpenMPF) provides a framework for chaining, combining, or replacing individual components for the purpose of experimentation and comparison.\n\n\nOpenMPF is a non-proprietary, scalable framework that permits practitioners and researchers to construct video, imagery, and audio exploitation capabilities using the available third-party components. Using OpenMPF, one can extract targeted entities in large-scale data environments, such as face and object detection.\n\n\nFor those developing new exploitation capabilities, OpenMPF exposes a set of Application Program Interfaces (APIs) for extending media analytics functionality. The APIs allow integrators to introduce new algorithms capable of detecting new targeted entity types. For example, a backpack detection algorithm could be integrated into an OpenMPF instance. OpenMPF does not restrict the number of algorithms that can operate on a given media file, permitting researchers, practitioners, and developers to explore arbitrarily complex composites of exploitation algorithms.\n\n\nA list of algorithms currently integrated into the OpenMPF as distributed processing components is shown here:\n\n\n\n\n\n\n\n\nOperation\n\n\nObject Type\n\n\nFramework\n\n\n\n\n\n\n\n\n\n\nDetection/Tracking\n\n\nFace\n\n\nLBP-Based OpenCV\n\n\n\n\n\n\nDetection/Tracking\n\n\nMotion\n\n\nMOG w/ STRUCK\n\n\n\n\n\n\nDetection/Tracking\n\n\nMotion\n\n\nSuBSENSE w/ STRUCK\n\n\n\n\n\n\nDetection/Tracking\n\n\nLicense Plate\n\n\nOpenALPR\n\n\n\n\n\n\nDetection\n\n\nSpeech\n\n\nSphinx\n\n\n\n\n\n\nDetection\n\n\nSpeech\n\n\nAzure Cognitive Services Batch Transcription API\n\n\n\n\n\n\nDetection\n\n\nScene\n\n\nOpenCV\n\n\n\n\n\n\nDetection\n\n\nClassification\n\n\nOpenCV DNN (GoogLeNet, Yahoo NSFW, vehicle color)\n\n\n\n\n\n\nDetection/Tracking\n\n\nClassification\n\n\nOpenCV DNN (YOLO)\n\n\n\n\n\n\nDetection/Tracking\n\n\nClassification/Features\n\n\nTensorRT (COCO classes)\n\n\n\n\n\n\nDetection\n\n\nText Region\n\n\nEAST\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nApache Tika\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nTesseract OCR\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nAzure Cognitive Services Computer Vision API (OCR endpoint)\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nAzure Cognitive Services Read API\n\n\n\n\n\n\nDetection\n\n\nForm Structure (with OCR)\n\n\nAzure Cognitive Services Form Recognizer API\n\n\n\n\n\n\nDetection\n\n\nKeywords\n\n\nBoost Regular Expressions\n\n\n\n\n\n\nDetection\n\n\nImage (from document)\n\n\nApache Tika\n\n\n\n\n\n\nTranslation\n\n\nLanguage\n\n\nAzure Cognitive Services Translate API\n\n\n\n\n\n\n\n\nThe OpenMPF exposes data processing and job management web services via a User Interface (UI). These services allow users to upload media, create media processing jobs, determine the status of jobs, and retrieve the artifacts associated with completed jobs. The web services give application developers flexibility to use the OpenMPF in their preferred environment and programming language.",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nOverview\n\n\nThere are numerous video and image exploitation capabilities available today. The Open Media Processing Framework (OpenMPF) provides a framework for chaining, combining, or replacing individual components for the purpose of experimentation and comparison.\n\n\nOpenMPF is a non-proprietary, scalable framework that permits practitioners and researchers to construct video, imagery, and audio exploitation capabilities using the available third-party components. Using OpenMPF, one can extract targeted entities in large-scale data environments, such as face and object detection.\n\n\nFor those developing new exploitation capabilities, OpenMPF exposes a set of Application Program Interfaces (APIs) for extending media analytics functionality. The APIs allow integrators to introduce new algorithms capable of detecting new targeted entity types. For example, a backpack detection algorithm could be integrated into an OpenMPF instance. OpenMPF does not restrict the number of algorithms that can operate on a given media file, permitting researchers, practitioners, and developers to explore arbitrarily complex composites of exploitation algorithms.\n\n\nA list of algorithms currently integrated into the OpenMPF as distributed processing components is shown here:\n\n\n\n\n\n\n\n\nOperation\n\n\nObject Type\n\n\nFramework\n\n\n\n\n\n\n\n\n\n\nDetection/Tracking\n\n\nFace\n\n\nLBP-Based OpenCV\n\n\n\n\n\n\nDetection/Tracking\n\n\nMotion\n\n\nMOG w/ STRUCK\n\n\n\n\n\n\nDetection/Tracking\n\n\nMotion\n\n\nSuBSENSE w/ STRUCK\n\n\n\n\n\n\nDetection/Tracking\n\n\nLicense Plate\n\n\nOpenALPR\n\n\n\n\n\n\nDetection\n\n\nSpeech\n\n\nSphinx\n\n\n\n\n\n\nDetection\n\n\nSpeech\n\n\nAzure Cognitive Services Batch Transcription API\n\n\n\n\n\n\nDetection\n\n\nScene\n\n\nOpenCV\n\n\n\n\n\n\nDetection\n\n\nClassification\n\n\nOpenCV DNN (GoogLeNet, Yahoo NSFW, vehicle color)\n\n\n\n\n\n\nDetection/Tracking\n\n\nClassification\n\n\nOpenCV DNN (YOLO)\n\n\n\n\n\n\nDetection/Tracking\n\n\nClassification/Features\n\n\nTensorRT (COCO classes)\n\n\n\n\n\n\nDetection\n\n\nText Region\n\n\nEAST\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nApache Tika\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nTesseract OCR\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nAzure Cognitive Services Read API\n\n\n\n\n\n\nDetection\n\n\nForm Structure (with OCR)\n\n\nAzure Cognitive Services Form Recognizer API\n\n\n\n\n\n\nDetection\n\n\nKeywords\n\n\nBoost Regular Expressions\n\n\n\n\n\n\nDetection\n\n\nImage (from document)\n\n\nApache Tika\n\n\n\n\n\n\nTranslation\n\n\nLanguage\n\n\nAzure Cognitive Services Translate API\n\n\n\n\n\n\n\n\nThe OpenMPF exposes data processing and job management web services via a User Interface (UI). These services allow users to upload media, create media processing jobs, determine the status of jobs, and retrieve the artifacts associated with completed jobs. The web services give application developers flexibility to use the OpenMPF in their preferred environment and programming language.",
"title": "Home"
},
{
"location": "/index.html#overview",
- "text": "There are numerous video and image exploitation capabilities available today. The Open Media Processing Framework (OpenMPF) provides a framework for chaining, combining, or replacing individual components for the purpose of experimentation and comparison. OpenMPF is a non-proprietary, scalable framework that permits practitioners and researchers to construct video, imagery, and audio exploitation capabilities using the available third-party components. Using OpenMPF, one can extract targeted entities in large-scale data environments, such as face and object detection. For those developing new exploitation capabilities, OpenMPF exposes a set of Application Program Interfaces (APIs) for extending media analytics functionality. The APIs allow integrators to introduce new algorithms capable of detecting new targeted entity types. For example, a backpack detection algorithm could be integrated into an OpenMPF instance. OpenMPF does not restrict the number of algorithms that can operate on a given media file, permitting researchers, practitioners, and developers to explore arbitrarily complex composites of exploitation algorithms. A list of algorithms currently integrated into the OpenMPF as distributed processing components is shown here: Operation Object Type Framework Detection/Tracking Face LBP-Based OpenCV Detection/Tracking Motion MOG w/ STRUCK Detection/Tracking Motion SuBSENSE w/ STRUCK Detection/Tracking License Plate OpenALPR Detection Speech Sphinx Detection Speech Azure Cognitive Services Batch Transcription API Detection Scene OpenCV Detection Classification OpenCV DNN (GoogLeNet, Yahoo NSFW, vehicle color) Detection/Tracking Classification OpenCV DNN (YOLO) Detection/Tracking Classification/Features TensorRT (COCO classes) Detection Text Region EAST Detection Text (OCR) Apache Tika Detection Text (OCR) Tesseract OCR Detection Text (OCR) Azure Cognitive Services Computer Vision API (OCR endpoint) Detection Text (OCR) Azure Cognitive Services Read API Detection Form Structure (with OCR) Azure Cognitive Services Form Recognizer API Detection Keywords Boost Regular Expressions Detection Image (from document) Apache Tika Translation Language Azure Cognitive Services Translate API The OpenMPF exposes data processing and job management web services via a User Interface (UI). These services allow users to upload media, create media processing jobs, determine the status of jobs, and retrieve the artifacts associated with completed jobs. The web services give application developers flexibility to use the OpenMPF in their preferred environment and programming language.",
+ "text": "There are numerous video and image exploitation capabilities available today. The Open Media Processing Framework (OpenMPF) provides a framework for chaining, combining, or replacing individual components for the purpose of experimentation and comparison. OpenMPF is a non-proprietary, scalable framework that permits practitioners and researchers to construct video, imagery, and audio exploitation capabilities using the available third-party components. Using OpenMPF, one can extract targeted entities in large-scale data environments, such as face and object detection. For those developing new exploitation capabilities, OpenMPF exposes a set of Application Program Interfaces (APIs) for extending media analytics functionality. The APIs allow integrators to introduce new algorithms capable of detecting new targeted entity types. For example, a backpack detection algorithm could be integrated into an OpenMPF instance. OpenMPF does not restrict the number of algorithms that can operate on a given media file, permitting researchers, practitioners, and developers to explore arbitrarily complex composites of exploitation algorithms. A list of algorithms currently integrated into the OpenMPF as distributed processing components is shown here: Operation Object Type Framework Detection/Tracking Face LBP-Based OpenCV Detection/Tracking Motion MOG w/ STRUCK Detection/Tracking Motion SuBSENSE w/ STRUCK Detection/Tracking License Plate OpenALPR Detection Speech Sphinx Detection Speech Azure Cognitive Services Batch Transcription API Detection Scene OpenCV Detection Classification OpenCV DNN (GoogLeNet, Yahoo NSFW, vehicle color) Detection/Tracking Classification OpenCV DNN (YOLO) Detection/Tracking Classification/Features TensorRT (COCO classes) Detection Text Region EAST Detection Text (OCR) Apache Tika Detection Text (OCR) Tesseract OCR Detection Text (OCR) Azure Cognitive Services Read API Detection Form Structure (with OCR) Azure Cognitive Services Form Recognizer API Detection Keywords Boost Regular Expressions Detection Image (from document) Apache Tika Translation Language Azure Cognitive Services Translate API The OpenMPF exposes data processing and job management web services via a User Interface (UI). These services allow users to upload media, create media processing jobs, determine the status of jobs, and retrieve the artifacts associated with completed jobs. The web services give application developers flexibility to use the OpenMPF in their preferred environment and programming language.",
"title": "Overview"
},
{
@@ -367,7 +367,7 @@
},
{
"location": "/Feed-Forward-Guide/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nFeed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be\ndirectly \u201cfed into\u201d the next stage. It differs from the default segmenting behavior in the following major ways:\n\n\n\n\n\n\nThe next stage will only look at the frames that had detections in the previous stage. The default segmenting\n behavior results in \u201cfilling the gaps\u201d so that the next stage looks at all the frames between the start and end\n frames of the feed forward track, regardless of whether a detection was actually found in those frames.\n\n\n\n\n\n\nThe next stage can be configured to only look at the detection regions for the frames in the feed forward track. The\n default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks\n at the whole frame region for every frame in the segment.\n\n\n\n\n\n\nThe next stage will process one sub-job per track generated in the previous stage. If the previous stage generated\n more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed\n forward can be configured such that only the detection regions for those tracks are processed. If they are\n non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that\n captures the frame associated with all 3 tracks.\n\n\n\n\n\n\nMotivation\n\n\nConsider using feed forward for the following reasons:\n\n\n\n\n\n\nYou have an algorithm that isn\u2019t capable of breaking down a frame into regions of interest. For example, face\n detection can take a whole frame and generate a separate detection region for each face in the frame. On the other\n hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and\n generate a single detection that\u2019s the size of the frame\u2019s width and height. The OpenCV DNN component will produce\n better results if it operates on smaller regions that only capture the desired object to be classified. Using feed\n forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them.\n\n\n\n\n\n\nYou wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest.\n For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which\n may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will\n speed up run times.\n\n\n\n\n\n\n\n\nNOTE:\n Enabling feed forward results in more sub-jobs and more message passing between the Workflow Manager and\ncomponents than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the\noverhead cost. The cost may be outweighed by how feed forward can \u201cfilter out\u201d pixel data that doesn\u2019t need to be\nprocessed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.\n\n\n\n\nThe output of a feed forward pipeline is the intersection of each stage's output. 
For example, running a feed forward\npipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected\nin the first stage and a face was detected in the second stage.\n\n\nFirst Stage and Combining Properties\n\n\nWhen feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there\nis no track to feed in. In other words, the first stage will process the media file as though feed forward was not\nenabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take\nadvantage of the feed forward behavior.\n\n\n\n\nNOTE:\n When \nFEED_FORWARD_TYPE\n is set to anything other than \nNONE\n, the following properties will be ignored:\n\nFRAME_INTERVAL\n, \nUSE_KEY_FRAMES\n, \nSEARCH_REGION_*\n.\n\n\n\n\nIf you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure\nthat \nFEED_FORWARD_TYPE\n is set to \nNONE\n, or not specified, for the first stage. You can then configure each subsequent\nstage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from\nthe first stage, the subsequent stages will inherit the effects of those properties set on the first stage. \n\n\nFeed Forward Properties\n\n\nComponents that support feed forward have two algorithm properties that control the feed forward behavior:\n\nFEED_FORWARD_TYPE\n and \nFEED_FORWARD_TOP_QUALITY_COUNT\n.\n\n\nFEED_FORWARD_TYPE\n can be set to the following values:\n\n\n\n\nNONE\n: Feed forward is disabled (default setting).\n\n\nFRAME\n: For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored.\n\n\nSUPERSET_REGION\n: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the \nSuperset\n Region\n section for more details. For each detection in the feed forward track, search the superset\n region.\n\n\nREGION\n: For each detection in the feed forward track, search the exact detection region.\n\n\n\n\n\n\nNOTE:\n When using \nREGION\n, the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, \nREGION\n should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use\n\nSUPERSET_REGION\n instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track.\n\n\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed.\n\n\nWhen \nFEED_FORWARD_TOP_QUALITY_COUNT\n is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property\n\nQUALITY_SELECTION_PROPERTY\n, which defaults to \nCONFIDENCE\n, but may be set to a different detection property. Refer to\nthe \nQuality Selection Guide\n. If the track contains less than 5 detections then all\nof the detections in the track will be processed. 
If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence.\n\n\nSuperset Region\n\n\nA \u201csuperset region\u201d is the smallest region of interest that contains all of the detections for all of the frames in a\ntrack. This is also known as a \u201cunion\u201d or \n\u201cminimum bounding\nrectangle\"\n.\n\n\n\n\nFor example, consider a track representing a person moving from the upper left to the lower right. The track consists of\n3 frames that have the following detection regions:\n\n\n\n\nFrame 0: \n(x = 10, y = 10, width = 10, height = 10)\n\n\nFrame 1: \n(x = 15, y = 15, width = 10, height = 10)\n\n\nFrame 2: \n(x = 20, y = 20, width = 10, height = 10)\n\n\n\n\nEach detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame\nregion. The superset region for the track is \n(x = 10, y = 10, width = 20, height = 20)\n, and is drawn with a dotted red\nline.\n\n\nThe major advantage of using a superset region is constant size. Some algorithms require the search space in each frame\nto be a constant size in order to successfully track objects.\n\n\nA disadvantage is that the superset region will often be larger than any specific detection region, so the search space\nis not restricted to the smallest possible size in each frame; however, in many cases the search space will be\nsignificantly smaller than the whole frame.\n\n\nIn the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a\nvideo to the lower right corner. In that case the superset region will be the entire width and height of the frame, so\n\nSUPERSET_REGION\n devolves into \nFRAME\n.\n\n\nIn a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In\nthat case \nSUPERSET_REGION\n is able to filter out 75% of the rest of the frame data. In the example shown in the above\ndiagram, \nSUPERSET_REGION\n is able to filter out 83% of the rest of the frame data.\n\n\n\n \n\n \n\n \nYour browser does not support the embedded video tag.\n\n \nClick here to download the video.\n\n \n\n\n\n\n\nThe above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box\nthat does not. The inner bounding box represents the face detection in that frame, while the outer bounding box\nrepresents the superset region for the track associated with that face. Note that the bounding box for each face uses a\ndifferent color. The colors are not related to those used in the above diagram.\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\nWhen developing a component, the \nC++ Batch Component API\n and \nPython Batch\nComponent API\n include utilities that make it easier to support feed forward in\nyour components. They work similarly, but only the C++ tools will be discussed here. The \nMPFVideoCapture\n class is a\nwrapper around OpenCV's \ncv::VideoCapture\n class. \nMPFVideoCapture\n works very similarly to \ncv::VideoCapture\n, except\nthat it might modify the video frames based on job properties. From the point of view of someone using\n\nMPFVideoCapture\n, these modifications are mostly transparent. \nMPFVideoCapture\n makes it look like you are reading the\noriginal video file.\n\n\nConceptually, consider generating a new video from a feed forward track. 
The new video would have fewer frames (unless\nthere was a detection in every frame) and possibly a smaller frame size.\n\n\nFor example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found\ndetections in frames 4, 7, and 10, then \nMPFVideoCapture\n will make it look like the video only has those 3 frames. If\nthe feed forward type is \nSUPERSET_REGION\n or \nREGION,\n and each detection is 30x50 pixels, then \nMPFVideoCapture\n will\nmake it look like the video's original resolution was 30x50 pixels.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the modified\nvideo, not the original. To make the detections relative to the original video the\n\nMPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack)\n function must be used.\n\n\nThe general pattern for using \nMPFVideoCapture\n is as follows:\n\n\nstd::vector OcvDnnDetection::GetDetections(const MPFVideoJob &job) {\n\nstd::vector tracks;\n MPFVideoCapture video_cap(job);\n\n cv::Mat frame;\n while (video_cap.Read(frame)) {\n // Process frames and detections to tracks vector\n }\n\n for (MPFVideoTrack &track : tracks) {\n video_cap.ReverseTransform(track);\n }\n\n return tracks;\n}\n\n\n\nMPFVideoCapture\n makes it look like the user is processing the original video, when in reality they are processing a\nmodified version. To avoid confusion, this means that \nMPFVideoCapture\n should always be returning frames that are the\nsame size because most users expect each frame of a video to be the same size.\n\n\nWhen using \nSUPERSET_REGION\n this is not an issue, since one bounding box is used for the entire track. However, when\nusing \nREGION\n, each detection can be a different size, so it is not possible for \nMPFVideoCapture\n to return frames\nthat are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of\n\nMPFVideoCapture\n, \nSUPERSET_REGION\n should usually be preferred over \nREGION\n. The \nREGION\n setting should only be used\nwith components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region\ntracking, so processing frames of various sizes is not a problem.\n\n\nThe \nMPFImageReader\n class is similar to \nMPFVideoCapture\n, but it works on images instead of videos. \nMPFImageReader\n\nmakes it look like the user is processing an original image, when in reality they are processing a modified version\nwhere the frame region is generated based on a detection (\nMPFImageLocation\n) fed forward from the previous stage of a\npipeline. Note that \nSUPERSET_REGION\n and \nREGION\n have the same effect when working with images. \nMPFImageReader\n also\nhas a reverse transform function.\n\n\nOpenCV DNN Component Tracking\n\n\nThe OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking\nbehavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process\nthe entire region of each frame of a video. If one or more consecutive frames has the same highest confidence\nclassification, then a new track is generated that contains those frames.\n\n\nWhen feed forward is enabled, the OpenCV DNN component will process the region of each frame of feed forward track\naccording to the \nFEED_FORWARD_TYPE\n. It will generate one track that contains the same frames as the feed forward\ntrack. 
If \nFEED_FORWARD_TYPE\n is set to \nREGION\n then the OpenCV DNN track will contain (inherit) the same detection\nregions as the feed forward track. In any case, the \ndetectionProperties\n map for the detections in the OpenCV DNN track\nwill include the \nCLASSIFICATION\n entries and possibly other OpenCV DNN component properties.\n\n\nFeed Forward Pipeline Examples\n\n\nGoogLeNet Classification with MOG Motion Detection and Feed Forward Region\n\n\nFirst, create the following action:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n+ Algorithm: DNNCV\n+ MODEL_NAME: googlenet\n+ SUBTRACT_BLUE_VALUE: 104.0\n+ SUBTRACT_GREEN_VALUE: 117.0\n+ SUBTRACT_RED_VALUE: 123.0\n+ FEED_FORWARD_TYPE: REGION\n\n\n\nThen create the following task:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nCAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n\n\n\nRunning this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each\ndetection in each track will have an OpenCV DNN \nCLASSIFICATION\n entry. Each track has a 1-to-1 correspondence with a\nMOG motion track.\n\n\nRefer to \nrunMogThenCaffeFeedForwardExactRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior. Refer to \nrunMogThenCaffeFeedForwardSupersetRegionTest()\n in\nthat class for a system test that uses \nSUPERSET_REGION\n instead. Refer to \nrunMogThenCaffeFeedForwardFullFrameTest()\n\nfor a system test that uses \nFRAME\n instead.\n\n\n\n\nNOTE:\n Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To\nmitigate this, consider setting the \nMERGE_TRACKS\n, \nMIN_GAP_BETWEEN_TRACKS\n, and \nMIN_TRACK_LENGTH\n properties to\ngenerate longer motion tracks and discard short and/or spurious motion tracks.\n\n\nNOTE:\n It doesn\u2019t make sense to use \nFEED_FORWARD_TOP_QUALITY_COUNT\n on a pipeline stage that follows a MOG or\nSuBSENSE motion detection stage. That\u2019s because those motion detectors don\u2019t generate tracks with confidence values\n(\nCONFIDENCE\n being the default value for the \nQUALITY_SELECTION_PROPERTY\n job property). Instead,\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n could potentially be used when feeding person tracks into a face detector, for\nexample, if the detections in those person tracks have the requested \nQUALITY_SELECTION_PROPERTY\n set.\n\n\n\n\nOCV Face Detection with MOG Motion Detection and Feed Forward Superset Region\n\n\nFirst, create the following action:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n+ Algorithm: FACECV\n+ FEED_FORWARD_TYPE: SUPERSET_REGION\n\n\n\nThen create the following task:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nOCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n\n\n\nRunning this pipeline will result in OCV face tracks that contain detections where there was MOG motion. 
Each track has\na 1-to-1 correspondence with a MOG motion track.\n\n\nRefer to \nrunMogThenOcvFaceFeedForwardRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior.",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nFeed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be\ndirectly \u201cfed into\u201d the next stage. It differs from the default segmenting behavior in the following major ways:\n\n\n\n\n\n\nThe next stage will only look at the frames that had detections in the previous stage. The default segmenting\n behavior results in \u201cfilling the gaps\u201d so that the next stage looks at all the frames between the start and end\n frames of the feed forward track, regardless of whether a detection was actually found in those frames.\n\n\n\n\n\n\nThe next stage can be configured to only look at the detection regions for the frames in the feed forward track. The\n default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks\n at the whole frame region for every frame in the segment.\n\n\n\n\n\n\nThe next stage will process one sub-job per track generated in the previous stage. If the previous stage generated\n more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed\n forward can be configured such that only the detection regions for those tracks are processed. If they are\n non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that\n captures the frame associated with all 3 tracks.\n\n\n\n\n\n\nMotivation\n\n\nConsider using feed forward for the following reasons:\n\n\n\n\n\n\nYou have an algorithm that isn\u2019t capable of breaking down a frame into regions of interest. For example, face\n detection can take a whole frame and generate a separate detection region for each face in the frame. On the other\n hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and\n generate a single detection that\u2019s the size of the frame\u2019s width and height. The OpenCV DNN component will produce\n better results if it operates on smaller regions that only capture the desired object to be classified. Using feed\n forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them.\n\n\n\n\n\n\nYou wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest.\n For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which\n may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will\n speed up run times.\n\n\n\n\n\n\n\n\nNOTE:\n Enabling feed forward results in more sub-jobs and more message passing between the Workflow Manager and\ncomponents than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the\noverhead cost. The cost may be outweighed by how feed forward can \u201cfilter out\u201d pixel data that doesn\u2019t need to be\nprocessed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.\n\n\n\n\nThe output of a feed forward pipeline is the intersection of each stage's output. 
For example, running a feed forward\npipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected\nin the first stage and a face was detected in the second stage.\n\n\nFirst Stage and Combining Properties\n\n\nWhen feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there\nis no track to feed in. In other words, the first stage will process the media file as though feed forward was not\nenabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take\nadvantage of the feed forward behavior.\n\n\n\n\nNOTE:\n When \nFEED_FORWARD_TYPE\n is set to anything other than \nNONE\n, the following properties will be ignored:\n\nFRAME_INTERVAL\n, \nUSE_KEY_FRAMES\n, \nSEARCH_REGION_*\n.\n\n\n\n\nIf you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure\nthat \nFEED_FORWARD_TYPE\n is set to \nNONE\n, or not specified, for the first stage. You can then configure each subsequent\nstage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from\nthe first stage, the subsequent stages will inherit the effects of those properties set on the first stage. \n\n\nFeed Forward Properties\n\n\nComponents that support feed forward have three algorithm properties that control the feed forward behavior:\n\nFEED_FORWARD_TYPE\n, \nFEED_FORWARD_TOP_QUALITY_COUNT\n, and \nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n.\n\n\nFEED_FORWARD_TYPE\n can be set to the following values:\n\n\n\n\nNONE\n: Feed forward is disabled (default setting).\n\n\nFRAME\n: For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored.\n\n\nSUPERSET_REGION\n: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the \nSuperset\n Region\n section for more details. For each detection in the feed forward track, search the superset\n region.\n\n\nREGION\n: For each detection in the feed forward track, search the exact detection region.\n\n\n\n\n\n\nNOTE:\n When using \nREGION\n, the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, \nREGION\n should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use\n\nSUPERSET_REGION\n instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track.\n\n\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed.\n\n\nWhen \nFEED_FORWARD_TOP_QUALITY_COUNT\n is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property\n\nQUALITY_SELECTION_PROPERTY\n, which defaults to \nCONFIDENCE\n, but may be set to a different detection property. Refer to\nthe \nQuality Selection Guide\n. 
If the track contains less than 5 detections then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence.\n\n\nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n allows you to include detections based on properties in addition to those\nfrom the \nQUALITY_SELECTION_PROPERTY\n. To do this, you need to set the property \nBEST_DETECTION_PROPERTY_NAME_LIST\n in the\nprevious stage of processing. This property is a string composed of a semi-colon separated list of detection properties.\nFor example, if you want to use something other than \nCONFIDENCE\n for the \nQUALITY_SELECTION_PROPERTY\n, but you also want\nto include the detection with the highest confidence in your feed-forward track, then you can set the\n\nBEST_DETECTION_PROPERTY_NAME_LIST\n property to \n\"CONFIDENCE\"\n in the first stage, and then set the\n\nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n property to \n\"BEST_CONFIDENCE\"\n.\n\n\nSuperset Region\n\n\nA \u201csuperset region\u201d is the smallest region of interest that contains all of the detections for all of the frames in a\ntrack. This is also known as a \u201cunion\u201d or \n\u201cminimum bounding\nrectangle\"\n.\n\n\n\n\nFor example, consider a track representing a person moving from the upper left to the lower right. The track consists of\n3 frames that have the following detection regions:\n\n\n\n\nFrame 0: \n(x = 10, y = 10, width = 10, height = 10)\n\n\nFrame 1: \n(x = 15, y = 15, width = 10, height = 10)\n\n\nFrame 2: \n(x = 20, y = 20, width = 10, height = 10)\n\n\n\n\nEach detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame\nregion. The superset region for the track is \n(x = 10, y = 10, width = 20, height = 20)\n, and is drawn with a dotted red\nline.\n\n\nThe major advantage of using a superset region is constant size. Some algorithms require the search space in each frame\nto be a constant size in order to successfully track objects.\n\n\nA disadvantage is that the superset region will often be larger than any specific detection region, so the search space\nis not restricted to the smallest possible size in each frame; however, in many cases the search space will be\nsignificantly smaller than the whole frame.\n\n\nIn the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a\nvideo to the lower right corner. In that case the superset region will be the entire width and height of the frame, so\n\nSUPERSET_REGION\n devolves into \nFRAME\n.\n\n\nIn a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In\nthat case \nSUPERSET_REGION\n is able to filter out 75% of the rest of the frame data. In the example shown in the above\ndiagram, \nSUPERSET_REGION\n is able to filter out 83% of the rest of the frame data.\n\n\n\n \n\n \n\n \nYour browser does not support the embedded video tag.\n\n \nClick here to download the video.\n\n \n\n\n\n\n\nThe above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box\nthat does not. The inner bounding box represents the face detection in that frame, while the outer bounding box\nrepresents the superset region for the track associated with that face. Note that the bounding box for each face uses a\ndifferent color. 
The colors are not related to those used in the above diagram.\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\nWhen developing a component, the \nC++ Batch Component API\n and \nPython Batch\nComponent API\n include utilities that make it easier to support feed forward in\nyour components. They work similarly, but only the C++ tools will be discussed here. The \nMPFVideoCapture\n class is a\nwrapper around OpenCV's \ncv::VideoCapture\n class. \nMPFVideoCapture\n works very similarly to \ncv::VideoCapture\n, except\nthat it might modify the video frames based on job properties. From the point of view of someone using\n\nMPFVideoCapture\n, these modifications are mostly transparent. \nMPFVideoCapture\n makes it look like you are reading the\noriginal video file.\n\n\nConceptually, consider generating a new video from a feed forward track. The new video would have fewer frames (unless\nthere was a detection in every frame) and possibly a smaller frame size.\n\n\nFor example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found\ndetections in frames 4, 7, and 10, then \nMPFVideoCapture\n will make it look like the video only has those 3 frames. If\nthe feed forward type is \nSUPERSET_REGION\n or \nREGION,\n and each detection is 30x50 pixels, then \nMPFVideoCapture\n will\nmake it look like the video's original resolution was 30x50 pixels.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the modified\nvideo, not the original. To make the detections relative to the original video the\n\nMPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack)\n function must be used.\n\n\nThe general pattern for using \nMPFVideoCapture\n is as follows:\n\n\nstd::vector OcvDnnDetection::GetDetections(const MPFVideoJob &job) {\n\nstd::vector tracks;\n MPFVideoCapture video_cap(job);\n\n cv::Mat frame;\n while (video_cap.Read(frame)) {\n // Process frames and detections to tracks vector\n }\n\n for (MPFVideoTrack &track : tracks) {\n video_cap.ReverseTransform(track);\n }\n\n return tracks;\n}\n\n\n\nMPFVideoCapture\n makes it look like the user is processing the original video, when in reality they are processing a\nmodified version. To avoid confusion, this means that \nMPFVideoCapture\n should always be returning frames that are the\nsame size because most users expect each frame of a video to be the same size.\n\n\nWhen using \nSUPERSET_REGION\n this is not an issue, since one bounding box is used for the entire track. However, when\nusing \nREGION\n, each detection can be a different size, so it is not possible for \nMPFVideoCapture\n to return frames\nthat are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of\n\nMPFVideoCapture\n, \nSUPERSET_REGION\n should usually be preferred over \nREGION\n. The \nREGION\n setting should only be used\nwith components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region\ntracking, so processing frames of various sizes is not a problem.\n\n\nThe \nMPFImageReader\n class is similar to \nMPFVideoCapture\n, but it works on images instead of videos. \nMPFImageReader\n\nmakes it look like the user is processing an original image, when in reality they are processing a modified version\nwhere the frame region is generated based on a detection (\nMPFImageLocation\n) fed forward from the previous stage of a\npipeline. 
Note that \nSUPERSET_REGION\n and \nREGION\n have the same effect when working with images. \nMPFImageReader\n also\nhas a reverse transform function.\n\n\nOpenCV DNN Component Tracking\n\n\nThe OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking\nbehavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process\nthe entire region of each frame of a video. If one or more consecutive frames has the same highest confidence\nclassification, then a new track is generated that contains those frames.\n\n\nWhen feed forward is enabled, the OpenCV DNN component will process the region of each frame of feed forward track\naccording to the \nFEED_FORWARD_TYPE\n. It will generate one track that contains the same frames as the feed forward\ntrack. If \nFEED_FORWARD_TYPE\n is set to \nREGION\n then the OpenCV DNN track will contain (inherit) the same detection\nregions as the feed forward track. In any case, the \ndetectionProperties\n map for the detections in the OpenCV DNN track\nwill include the \nCLASSIFICATION\n entries and possibly other OpenCV DNN component properties.\n\n\nFeed Forward Pipeline Examples\n\n\nGoogLeNet Classification with MOG Motion Detection and Feed Forward Region\n\n\nFirst, create the following action:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n+ Algorithm: DNNCV\n+ MODEL_NAME: googlenet\n+ SUBTRACT_BLUE_VALUE: 104.0\n+ SUBTRACT_GREEN_VALUE: 117.0\n+ SUBTRACT_RED_VALUE: 123.0\n+ FEED_FORWARD_TYPE: REGION\n\n\n\nThen create the following task:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nCAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n\n\n\nRunning this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each\ndetection in each track will have an OpenCV DNN \nCLASSIFICATION\n entry. Each track has a 1-to-1 correspondence with a\nMOG motion track.\n\n\nRefer to \nrunMogThenCaffeFeedForwardExactRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior. Refer to \nrunMogThenCaffeFeedForwardSupersetRegionTest()\n in\nthat class for a system test that uses \nSUPERSET_REGION\n instead. Refer to \nrunMogThenCaffeFeedForwardFullFrameTest()\n\nfor a system test that uses \nFRAME\n instead.\n\n\n\n\nNOTE:\n Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To\nmitigate this, consider setting the \nMERGE_TRACKS\n, \nMIN_GAP_BETWEEN_TRACKS\n, and \nMIN_TRACK_LENGTH\n properties to\ngenerate longer motion tracks and discard short and/or spurious motion tracks.\n\n\nNOTE:\n It doesn\u2019t make sense to use \nFEED_FORWARD_TOP_QUALITY_COUNT\n on a pipeline stage that follows a MOG or\nSuBSENSE motion detection stage. That\u2019s because those motion detectors don\u2019t generate tracks with confidence values\n(\nCONFIDENCE\n being the default value for the \nQUALITY_SELECTION_PROPERTY\n job property). 
Instead,\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n could potentially be used when feeding person tracks into a face detector, for\nexample, if the detections in those person tracks have the requested \nQUALITY_SELECTION_PROPERTY\n set.\n\n\n\n\nOCV Face Detection with MOG Motion Detection and Feed Forward Superset Region\n\n\nFirst, create the following action:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n+ Algorithm: FACECV\n+ FEED_FORWARD_TYPE: SUPERSET_REGION\n\n\n\nThen create the following task:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nOCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n\n\n\nRunning this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has\na 1-to-1 correspondence with a MOG motion track.\n\n\nRefer to \nrunMogThenOcvFaceFeedForwardRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior.",
"title": "Feed Forward Guide"
},
{
@@ -387,7 +387,7 @@
},
{
"location": "/Feed-Forward-Guide/index.html#feed-forward-properties",
- "text": "Components that support feed forward have two algorithm properties that control the feed forward behavior: FEED_FORWARD_TYPE and FEED_FORWARD_TOP_QUALITY_COUNT . FEED_FORWARD_TYPE can be set to the following values: NONE : Feed forward is disabled (default setting). FRAME : For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored. SUPERSET_REGION : Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the Superset\n Region section for more details. For each detection in the feed forward track, search the superset\n region. REGION : For each detection in the feed forward track, search the exact detection region. NOTE: When using REGION , the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, REGION should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use SUPERSET_REGION instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track. FEED_FORWARD_TOP_QUALITY_COUNT allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed. When FEED_FORWARD_TOP_QUALITY_COUNT is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property QUALITY_SELECTION_PROPERTY , which defaults to CONFIDENCE , but may be set to a different detection property. Refer to\nthe Quality Selection Guide . If the track contains less than 5 detections then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence.",
+ "text": "Components that support feed forward have three algorithm properties that control the feed forward behavior: FEED_FORWARD_TYPE , FEED_FORWARD_TOP_QUALITY_COUNT , and FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST . FEED_FORWARD_TYPE can be set to the following values: NONE : Feed forward is disabled (default setting). FRAME : For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored. SUPERSET_REGION : Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the Superset\n Region section for more details. For each detection in the feed forward track, search the superset\n region. REGION : For each detection in the feed forward track, search the exact detection region. NOTE: When using REGION , the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, REGION should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use SUPERSET_REGION instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track. FEED_FORWARD_TOP_QUALITY_COUNT allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed. When FEED_FORWARD_TOP_QUALITY_COUNT is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property QUALITY_SELECTION_PROPERTY , which defaults to CONFIDENCE , but may be set to a different detection property. Refer to\nthe Quality Selection Guide . If the track contains less than 5 detections then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence. FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST allows you to include detections based on properties in addition to those\nfrom the QUALITY_SELECTION_PROPERTY . To do this, you need to set the property BEST_DETECTION_PROPERTY_NAME_LIST in the\nprevious stage of processing. This property is a string composed of a semi-colon separated list of detection properties.\nFor example, if you want to use something other than CONFIDENCE for the QUALITY_SELECTION_PROPERTY , but you also want\nto include the detection with the highest confidence in your feed-forward track, then you can set the BEST_DETECTION_PROPERTY_NAME_LIST property to \"CONFIDENCE\" in the first stage, and then set the FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST property to \"BEST_CONFIDENCE\" .",
"title": "Feed Forward Properties"
},
{
diff --git a/docs/site/sitemap.xml b/docs/site/sitemap.xml
index 81af8c35d1e8..181ee5ae5cad 100644
--- a/docs/site/sitemap.xml
+++ b/docs/site/sitemap.xml
@@ -2,152 +2,152 @@
 <loc>/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Release-Notes/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/License-And-Distribution/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Acknowledgements/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Install-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Admin-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/User-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/OpenID-Connect-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Media-Segmentation-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Feed-Forward-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Derivative-Media-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Object-Storage-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Markup-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/TiesDb-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Trigger-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Roll-Up-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Health-Check-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Quality-Selection-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/REST-API/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Component-API-Overview/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Component-Descriptor-Reference/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/CPP-Batch-Component-API/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Python-Batch-Component-API/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Java-Batch-Component-API/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/GPU-Support-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Contributor-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Development-Environment-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Node-Guide/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Workflow-Manager-Architecture/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
 <loc>/CPP-Streaming-Component-API/index.html</loc>
- <lastmod>2024-12-06</lastmod>
+ <lastmod>2025-02-10</lastmod>
 <changefreq>daily</changefreq>
\ No newline at end of file
From 314a1524fa3767ff2756f2a07944e2b08935d470 Mon Sep 17 00:00:00 2001
From: cnowacki
Date: Tue, 11 Feb 2025 15:02:41 -0500
Subject: [PATCH 3/6] changes based on review comment
---
docs/docs/Feed-Forward-Guide.md | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/docs/docs/Feed-Forward-Guide.md b/docs/docs/Feed-Forward-Guide.md
index 2b3ef1baa6dc..9989acd87aed 100644
--- a/docs/docs/Feed-Forward-Guide.md
+++ b/docs/docs/Feed-Forward-Guide.md
@@ -97,12 +97,11 @@ of the detections in the track will be processed. If one or more detections have
detection(s) with the lower frame index take precedence.
`FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST` allows you to include detections based on properties in addition to those
-from the `QUALITY_SELECTION_PROPERTY`. To do this, you need to set the property `BEST_DETECTION_PROPERTY_NAME_LIST` in the
-previous stage of processing. This property is a string composed of a semi-colon separated list of detection properties.
-For example, if you want to use something other than `CONFIDENCE` for the `QUALITY_SELECTION_PROPERTY`, but you also want
-to include the detection with the highest confidence in your feed-forward track, then you can set the
-`BEST_DETECTION_PROPERTY_NAME_LIST` property to `"CONFIDENCE"` in the first stage, and then set the
-`FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST` property to `"BEST_CONFIDENCE"`.
+chosen with the `QUALITY_SELECTION_PROPERTY`. For example, you may want to use something other than `CONFIDENCE` for the
+`QUALITY_SELECTION_PROPERTY`, but you also want to include the detection with the highest confidence in your feed-forward
+track. If the component executing in the first stage of the pipeline adds a `BEST_CONFIDENCE` property to the detection
+with highest confidence in each track, you can then set the `FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST` property to
+`"BEST_CONFIDENCE"`, and the detections with that property will be added to the feed-forward track.
# Superset Region
From 9d983630babbb61bb912f7568743524697b2b164 Mon Sep 17 00:00:00 2001
From: cnowacki
Date: Tue, 11 Feb 2025 15:03:20 -0500
Subject: [PATCH 4/6] rebuilt the docs
---
docs/site/Feed-Forward-Guide/index.html | 11 +++--
docs/site/index.html | 2 +-
docs/site/search/search_index.json | 4 +-
docs/site/sitemap.xml | 60 ++++++++++++-------------
4 files changed, 38 insertions(+), 39 deletions(-)
diff --git a/docs/site/Feed-Forward-Guide/index.html b/docs/site/Feed-Forward-Guide/index.html
index 9018077d13c2..0f4efcb0b5d1 100644
--- a/docs/site/Feed-Forward-Guide/index.html
+++ b/docs/site/Feed-Forward-Guide/index.html
@@ -354,12 +354,11 @@ Feed Forward Properties
of the detections in the track will be processed. If one or more detections have the same quality value, then the
detection(s) with the lower frame index take precedence.
FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST allows you to include detections based on properties in addition to those
-from the QUALITY_SELECTION_PROPERTY. To do this, you need to set the property BEST_DETECTION_PROPERTY_NAME_LIST in the
-previous stage of processing. This property is a string composed of a semi-colon separated list of detection properties.
-For example, if you want to use something other than CONFIDENCE for the QUALITY_SELECTION_PROPERTY, but you also want
-to include the detection with the highest confidence in your feed-forward track, then you can set the
-BEST_DETECTION_PROPERTY_NAME_LIST property to "CONFIDENCE" in the first stage, and then set the
-FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST property to "BEST_CONFIDENCE".
+chosen with the QUALITY_SELECTION_PROPERTY. For example, you may want to use something other than CONFIDENCE for the
+QUALITY_SELECTION_PROPERTY, but you also want to include the detection with the highest confidence in your feed-forward
+track. If the component executing in the first stage of the pipeline adds a BEST_CONFIDENCE property to the detection
+with highest confidence in each track, you can then set the FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST property to
+"BEST_CONFIDENCE", and the detections with that property will be added to the feed-forward track.
Superset Region
A “superset region” is the smallest region of interest that contains all of the detections for all of the frames in a
track. This is also known as a “union” or “minimum bounding
diff --git a/docs/site/index.html b/docs/site/index.html
index 12093c5f19a5..ca7f2b3d4cb0 100644
--- a/docs/site/index.html
+++ b/docs/site/index.html
@@ -395,5 +395,5 @@ Overview
diff --git a/docs/site/search/search_index.json b/docs/site/search/search_index.json
index 4332505d35b6..4e4da95614b0 100644
--- a/docs/site/search/search_index.json
+++ b/docs/site/search/search_index.json
@@ -367,7 +367,7 @@
},
{
"location": "/Feed-Forward-Guide/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nFeed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be\ndirectly \u201cfed into\u201d the next stage. It differs from the default segmenting behavior in the following major ways:\n\n\n\n\n\n\nThe next stage will only look at the frames that had detections in the previous stage. The default segmenting\n behavior results in \u201cfilling the gaps\u201d so that the next stage looks at all the frames between the start and end\n frames of the feed forward track, regardless of whether a detection was actually found in those frames.\n\n\n\n\n\n\nThe next stage can be configured to only look at the detection regions for the frames in the feed forward track. The\n default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks\n at the whole frame region for every frame in the segment.\n\n\n\n\n\n\nThe next stage will process one sub-job per track generated in the previous stage. If the previous stage generated\n more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed\n forward can be configured such that only the detection regions for those tracks are processed. If they are\n non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that\n captures the frame associated with all 3 tracks.\n\n\n\n\n\n\nMotivation\n\n\nConsider using feed forward for the following reasons:\n\n\n\n\n\n\nYou have an algorithm that isn\u2019t capable of breaking down a frame into regions of interest. For example, face\n detection can take a whole frame and generate a separate detection region for each face in the frame. On the other\n hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and\n generate a single detection that\u2019s the size of the frame\u2019s width and height. The OpenCV DNN component will produce\n better results if it operates on smaller regions that only capture the desired object to be classified. Using feed\n forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them.\n\n\n\n\n\n\nYou wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest.\n For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which\n may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will\n speed up run times.\n\n\n\n\n\n\n\n\nNOTE:\n Enabling feed forward results in more sub-jobs and more message passing between the Workflow Manager and\ncomponents than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the\noverhead cost. The cost may be outweighed by how feed forward can \u201cfilter out\u201d pixel data that doesn\u2019t need to be\nprocessed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.\n\n\n\n\nThe output of a feed forward pipeline is the intersection of each stage's output. 
For example, running a feed forward\npipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected\nin the first stage and a face was detected in the second stage.\n\n\nFirst Stage and Combining Properties\n\n\nWhen feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there\nis no track to feed in. In other words, the first stage will process the media file as though feed forward was not\nenabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take\nadvantage of the feed forward behavior.\n\n\n\n\nNOTE:\n When \nFEED_FORWARD_TYPE\n is set to anything other than \nNONE\n, the following properties will be ignored:\n\nFRAME_INTERVAL\n, \nUSE_KEY_FRAMES\n, \nSEARCH_REGION_*\n.\n\n\n\n\nIf you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure\nthat \nFEED_FORWARD_TYPE\n is set to \nNONE\n, or not specified, for the first stage. You can then configure each subsequent\nstage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from\nthe first stage, the subsequent stages will inherit the effects of those properties set on the first stage. \n\n\nFeed Forward Properties\n\n\nComponents that support feed forward have three algorithm properties that control the feed forward behavior:\n\nFEED_FORWARD_TYPE\n, \nFEED_FORWARD_TOP_QUALITY_COUNT\n, and \nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n.\n\n\nFEED_FORWARD_TYPE\n can be set to the following values:\n\n\n\n\nNONE\n: Feed forward is disabled (default setting).\n\n\nFRAME\n: For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored.\n\n\nSUPERSET_REGION\n: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the \nSuperset\n Region\n section for more details. For each detection in the feed forward track, search the superset\n region.\n\n\nREGION\n: For each detection in the feed forward track, search the exact detection region.\n\n\n\n\n\n\nNOTE:\n When using \nREGION\n, the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, \nREGION\n should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use\n\nSUPERSET_REGION\n instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track.\n\n\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed.\n\n\nWhen \nFEED_FORWARD_TOP_QUALITY_COUNT\n is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property\n\nQUALITY_SELECTION_PROPERTY\n, which defaults to \nCONFIDENCE\n, but may be set to a different detection property. Refer to\nthe \nQuality Selection Guide\n. 
If the track contains less than 5 detections then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence.\n\n\nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n allows you to include detections based on properties in addition to those\nfrom the \nQUALITY_SELECTION_PROPERTY\n. To do this, you need to set the property \nBEST_DETECTION_PROPERTY_NAME_LIST\n in the\nprevious stage of processing. This property is a string composed of a semi-colon separated list of detection properties.\nFor example, if you want to use something other than \nCONFIDENCE\n for the \nQUALITY_SELECTION_PROPERTY\n, but you also want\nto include the detection with the highest confidence in your feed-forward track, then you can set the\n\nBEST_DETECTION_PROPERTY_NAME_LIST\n property to \n\"CONFIDENCE\"\n in the first stage, and then set the\n\nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n property to \n\"BEST_CONFIDENCE\"\n.\n\n\nSuperset Region\n\n\nA \u201csuperset region\u201d is the smallest region of interest that contains all of the detections for all of the frames in a\ntrack. This is also known as a \u201cunion\u201d or \n\u201cminimum bounding\nrectangle\"\n.\n\n\n\n\nFor example, consider a track representing a person moving from the upper left to the lower right. The track consists of\n3 frames that have the following detection regions:\n\n\n\n\nFrame 0: \n(x = 10, y = 10, width = 10, height = 10)\n\n\nFrame 1: \n(x = 15, y = 15, width = 10, height = 10)\n\n\nFrame 2: \n(x = 20, y = 20, width = 10, height = 10)\n\n\n\n\nEach detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame\nregion. The superset region for the track is \n(x = 10, y = 10, width = 20, height = 20)\n, and is drawn with a dotted red\nline.\n\n\nThe major advantage of using a superset region is constant size. Some algorithms require the search space in each frame\nto be a constant size in order to successfully track objects.\n\n\nA disadvantage is that the superset region will often be larger than any specific detection region, so the search space\nis not restricted to the smallest possible size in each frame; however, in many cases the search space will be\nsignificantly smaller than the whole frame.\n\n\nIn the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a\nvideo to the lower right corner. In that case the superset region will be the entire width and height of the frame, so\n\nSUPERSET_REGION\n devolves into \nFRAME\n.\n\n\nIn a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In\nthat case \nSUPERSET_REGION\n is able to filter out 75% of the rest of the frame data. In the example shown in the above\ndiagram, \nSUPERSET_REGION\n is able to filter out 83% of the rest of the frame data.\n\n\n\n \n\n \n\n \nYour browser does not support the embedded video tag.\n\n \nClick here to download the video.\n\n \n\n\n\n\n\nThe above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box\nthat does not. The inner bounding box represents the face detection in that frame, while the outer bounding box\nrepresents the superset region for the track associated with that face. Note that the bounding box for each face uses a\ndifferent color. 
The colors are not related to those used in the above diagram.\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\nWhen developing a component, the \nC++ Batch Component API\n and \nPython Batch\nComponent API\n include utilities that make it easier to support feed forward in\nyour components. They work similarly, but only the C++ tools will be discussed here. The \nMPFVideoCapture\n class is a\nwrapper around OpenCV's \ncv::VideoCapture\n class. \nMPFVideoCapture\n works very similarly to \ncv::VideoCapture\n, except\nthat it might modify the video frames based on job properties. From the point of view of someone using\n\nMPFVideoCapture\n, these modifications are mostly transparent. \nMPFVideoCapture\n makes it look like you are reading the\noriginal video file.\n\n\nConceptually, consider generating a new video from a feed forward track. The new video would have fewer frames (unless\nthere was a detection in every frame) and possibly a smaller frame size.\n\n\nFor example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found\ndetections in frames 4, 7, and 10, then \nMPFVideoCapture\n will make it look like the video only has those 3 frames. If\nthe feed forward type is \nSUPERSET_REGION\n or \nREGION,\n and each detection is 30x50 pixels, then \nMPFVideoCapture\n will\nmake it look like the video's original resolution was 30x50 pixels.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the modified\nvideo, not the original. To make the detections relative to the original video the\n\nMPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack)\n function must be used.\n\n\nThe general pattern for using \nMPFVideoCapture\n is as follows:\n\n\nstd::vector OcvDnnDetection::GetDetections(const MPFVideoJob &job) {\n\nstd::vector tracks;\n MPFVideoCapture video_cap(job);\n\n cv::Mat frame;\n while (video_cap.Read(frame)) {\n // Process frames and detections to tracks vector\n }\n\n for (MPFVideoTrack &track : tracks) {\n video_cap.ReverseTransform(track);\n }\n\n return tracks;\n}\n\n\n\nMPFVideoCapture\n makes it look like the user is processing the original video, when in reality they are processing a\nmodified version. To avoid confusion, this means that \nMPFVideoCapture\n should always be returning frames that are the\nsame size because most users expect each frame of a video to be the same size.\n\n\nWhen using \nSUPERSET_REGION\n this is not an issue, since one bounding box is used for the entire track. However, when\nusing \nREGION\n, each detection can be a different size, so it is not possible for \nMPFVideoCapture\n to return frames\nthat are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of\n\nMPFVideoCapture\n, \nSUPERSET_REGION\n should usually be preferred over \nREGION\n. The \nREGION\n setting should only be used\nwith components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region\ntracking, so processing frames of various sizes is not a problem.\n\n\nThe \nMPFImageReader\n class is similar to \nMPFVideoCapture\n, but it works on images instead of videos. \nMPFImageReader\n\nmakes it look like the user is processing an original image, when in reality they are processing a modified version\nwhere the frame region is generated based on a detection (\nMPFImageLocation\n) fed forward from the previous stage of a\npipeline. 
Note that \nSUPERSET_REGION\n and \nREGION\n have the same effect when working with images. \nMPFImageReader\n also\nhas a reverse transform function.\n\n\nOpenCV DNN Component Tracking\n\n\nThe OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking\nbehavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process\nthe entire region of each frame of a video. If one or more consecutive frames has the same highest confidence\nclassification, then a new track is generated that contains those frames.\n\n\nWhen feed forward is enabled, the OpenCV DNN component will process the region of each frame of feed forward track\naccording to the \nFEED_FORWARD_TYPE\n. It will generate one track that contains the same frames as the feed forward\ntrack. If \nFEED_FORWARD_TYPE\n is set to \nREGION\n then the OpenCV DNN track will contain (inherit) the same detection\nregions as the feed forward track. In any case, the \ndetectionProperties\n map for the detections in the OpenCV DNN track\nwill include the \nCLASSIFICATION\n entries and possibly other OpenCV DNN component properties.\n\n\nFeed Forward Pipeline Examples\n\n\nGoogLeNet Classification with MOG Motion Detection and Feed Forward Region\n\n\nFirst, create the following action:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n+ Algorithm: DNNCV\n+ MODEL_NAME: googlenet\n+ SUBTRACT_BLUE_VALUE: 104.0\n+ SUBTRACT_GREEN_VALUE: 117.0\n+ SUBTRACT_RED_VALUE: 123.0\n+ FEED_FORWARD_TYPE: REGION\n\n\n\nThen create the following task:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nCAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n\n\n\nRunning this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each\ndetection in each track will have an OpenCV DNN \nCLASSIFICATION\n entry. Each track has a 1-to-1 correspondence with a\nMOG motion track.\n\n\nRefer to \nrunMogThenCaffeFeedForwardExactRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior. Refer to \nrunMogThenCaffeFeedForwardSupersetRegionTest()\n in\nthat class for a system test that uses \nSUPERSET_REGION\n instead. Refer to \nrunMogThenCaffeFeedForwardFullFrameTest()\n\nfor a system test that uses \nFRAME\n instead.\n\n\n\n\nNOTE:\n Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To\nmitigate this, consider setting the \nMERGE_TRACKS\n, \nMIN_GAP_BETWEEN_TRACKS\n, and \nMIN_TRACK_LENGTH\n properties to\ngenerate longer motion tracks and discard short and/or spurious motion tracks.\n\n\nNOTE:\n It doesn\u2019t make sense to use \nFEED_FORWARD_TOP_QUALITY_COUNT\n on a pipeline stage that follows a MOG or\nSuBSENSE motion detection stage. That\u2019s because those motion detectors don\u2019t generate tracks with confidence values\n(\nCONFIDENCE\n being the default value for the \nQUALITY_SELECTION_PROPERTY\n job property). 
Instead,\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n could potentially be used when feeding person tracks into a face detector, for\nexample, if the detections in those person tracks have the requested \nQUALITY_SELECTION_PROPERTY\n set.\n\n\n\n\nOCV Face Detection with MOG Motion Detection and Feed Forward Superset Region\n\n\nFirst, create the following action:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n+ Algorithm: FACECV\n+ FEED_FORWARD_TYPE: SUPERSET_REGION\n\n\n\nThen create the following task:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nOCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n\n\n\nRunning this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has\na 1-to-1 correspondence with a MOG motion track.\n\n\nRefer to \nrunMogThenOcvFaceFeedForwardRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior.",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nFeed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be\ndirectly \u201cfed into\u201d the next stage. It differs from the default segmenting behavior in the following major ways:\n\n\n\n\n\n\nThe next stage will only look at the frames that had detections in the previous stage. The default segmenting\n behavior results in \u201cfilling the gaps\u201d so that the next stage looks at all the frames between the start and end\n frames of the feed forward track, regardless of whether a detection was actually found in those frames.\n\n\n\n\n\n\nThe next stage can be configured to only look at the detection regions for the frames in the feed forward track. The\n default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks\n at the whole frame region for every frame in the segment.\n\n\n\n\n\n\nThe next stage will process one sub-job per track generated in the previous stage. If the previous stage generated\n more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed\n forward can be configured such that only the detection regions for those tracks are processed. If they are\n non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that\n captures the frame associated with all 3 tracks.\n\n\n\n\n\n\nMotivation\n\n\nConsider using feed forward for the following reasons:\n\n\n\n\n\n\nYou have an algorithm that isn\u2019t capable of breaking down a frame into regions of interest. For example, face\n detection can take a whole frame and generate a separate detection region for each face in the frame. On the other\n hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and\n generate a single detection that\u2019s the size of the frame\u2019s width and height. The OpenCV DNN component will produce\n better results if it operates on smaller regions that only capture the desired object to be classified. Using feed\n forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them.\n\n\n\n\n\n\nYou wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest.\n For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which\n may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will\n speed up run times.\n\n\n\n\n\n\n\n\nNOTE:\n Enabling feed forward results in more sub-jobs and more message passing between the Workflow Manager and\ncomponents than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the\noverhead cost. The cost may be outweighed by how feed forward can \u201cfilter out\u201d pixel data that doesn\u2019t need to be\nprocessed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.\n\n\n\n\nThe output of a feed forward pipeline is the intersection of each stage's output. 
For example, running a feed forward\npipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected\nin the first stage and a face was detected in the second stage.\n\n\nFirst Stage and Combining Properties\n\n\nWhen feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there\nis no track to feed in. In other words, the first stage will process the media file as though feed forward was not\nenabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take\nadvantage of the feed forward behavior.\n\n\n\n\nNOTE:\n When \nFEED_FORWARD_TYPE\n is set to anything other than \nNONE\n, the following properties will be ignored:\n\nFRAME_INTERVAL\n, \nUSE_KEY_FRAMES\n, \nSEARCH_REGION_*\n.\n\n\n\n\nIf you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure\nthat \nFEED_FORWARD_TYPE\n is set to \nNONE\n, or not specified, for the first stage. You can then configure each subsequent\nstage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from\nthe first stage, the subsequent stages will inherit the effects of those properties set on the first stage. \n\n\nFeed Forward Properties\n\n\nComponents that support feed forward have three algorithm properties that control the feed forward behavior:\n\nFEED_FORWARD_TYPE\n, \nFEED_FORWARD_TOP_QUALITY_COUNT\n, and \nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n.\n\n\nFEED_FORWARD_TYPE\n can be set to the following values:\n\n\n\n\nNONE\n: Feed forward is disabled (default setting).\n\n\nFRAME\n: For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored.\n\n\nSUPERSET_REGION\n: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the \nSuperset\n Region\n section for more details. For each detection in the feed forward track, search the superset\n region.\n\n\nREGION\n: For each detection in the feed forward track, search the exact detection region.\n\n\n\n\n\n\nNOTE:\n When using \nREGION\n, the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, \nREGION\n should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use\n\nSUPERSET_REGION\n instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track.\n\n\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed.\n\n\nWhen \nFEED_FORWARD_TOP_QUALITY_COUNT\n is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property\n\nQUALITY_SELECTION_PROPERTY\n, which defaults to \nCONFIDENCE\n, but may be set to a different detection property. Refer to\nthe \nQuality Selection Guide\n. 
If the track contains less than 5 detections then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence.\n\n\nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n allows you to include detections based on properties in addition to those\nchosen with the \nQUALITY_SELECTION_PROPERTY\n. For example, you may want to use something other than \nCONFIDENCE\n for the\n\nQUALITY_SELECTION_PROPERTY\n, but you also want to include the detection with the highest confidence in your feed-forward\ntrack. If the component executing in the first stage of the pipeline adds a \nBEST_CONFIDENCE\n property to the detection\nwith highest confidence in each track, you can then set the \nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n property to\n\n\"BEST_CONFIDENCE\"\n, and the detections with that property will be added to the feed-forward track.\n\n\nSuperset Region\n\n\nA \u201csuperset region\u201d is the smallest region of interest that contains all of the detections for all of the frames in a\ntrack. This is also known as a \u201cunion\u201d or \n\u201cminimum bounding\nrectangle\"\n.\n\n\n\n\nFor example, consider a track representing a person moving from the upper left to the lower right. The track consists of\n3 frames that have the following detection regions:\n\n\n\n\nFrame 0: \n(x = 10, y = 10, width = 10, height = 10)\n\n\nFrame 1: \n(x = 15, y = 15, width = 10, height = 10)\n\n\nFrame 2: \n(x = 20, y = 20, width = 10, height = 10)\n\n\n\n\nEach detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame\nregion. The superset region for the track is \n(x = 10, y = 10, width = 20, height = 20)\n, and is drawn with a dotted red\nline.\n\n\nThe major advantage of using a superset region is constant size. Some algorithms require the search space in each frame\nto be a constant size in order to successfully track objects.\n\n\nA disadvantage is that the superset region will often be larger than any specific detection region, so the search space\nis not restricted to the smallest possible size in each frame; however, in many cases the search space will be\nsignificantly smaller than the whole frame.\n\n\nIn the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a\nvideo to the lower right corner. In that case the superset region will be the entire width and height of the frame, so\n\nSUPERSET_REGION\n devolves into \nFRAME\n.\n\n\nIn a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In\nthat case \nSUPERSET_REGION\n is able to filter out 75% of the rest of the frame data. In the example shown in the above\ndiagram, \nSUPERSET_REGION\n is able to filter out 83% of the rest of the frame data.\n\n\n\n \n\n \n\n \nYour browser does not support the embedded video tag.\n\n \nClick here to download the video.\n\n \n\n\n\n\n\nThe above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box\nthat does not. The inner bounding box represents the face detection in that frame, while the outer bounding box\nrepresents the superset region for the track associated with that face. Note that the bounding box for each face uses a\ndifferent color. 
The colors are not related to those used in the above diagram.\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\nWhen developing a component, the \nC++ Batch Component API\n and \nPython Batch\nComponent API\n include utilities that make it easier to support feed forward in\nyour components. They work similarly, but only the C++ tools will be discussed here. The \nMPFVideoCapture\n class is a\nwrapper around OpenCV's \ncv::VideoCapture\n class. \nMPFVideoCapture\n works very similarly to \ncv::VideoCapture\n, except\nthat it might modify the video frames based on job properties. From the point of view of someone using\n\nMPFVideoCapture\n, these modifications are mostly transparent. \nMPFVideoCapture\n makes it look like you are reading the\noriginal video file.\n\n\nConceptually, consider generating a new video from a feed forward track. The new video would have fewer frames (unless\nthere was a detection in every frame) and possibly a smaller frame size.\n\n\nFor example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found\ndetections in frames 4, 7, and 10, then \nMPFVideoCapture\n will make it look like the video only has those 3 frames. If\nthe feed forward type is \nSUPERSET_REGION\n or \nREGION,\n and each detection is 30x50 pixels, then \nMPFVideoCapture\n will\nmake it look like the video's original resolution was 30x50 pixels.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the modified\nvideo, not the original. To make the detections relative to the original video the\n\nMPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack)\n function must be used.\n\n\nThe general pattern for using \nMPFVideoCapture\n is as follows:\n\n\nstd::vector OcvDnnDetection::GetDetections(const MPFVideoJob &job) {\n\nstd::vector tracks;\n MPFVideoCapture video_cap(job);\n\n cv::Mat frame;\n while (video_cap.Read(frame)) {\n // Process frames and detections to tracks vector\n }\n\n for (MPFVideoTrack &track : tracks) {\n video_cap.ReverseTransform(track);\n }\n\n return tracks;\n}\n\n\n\nMPFVideoCapture\n makes it look like the user is processing the original video, when in reality they are processing a\nmodified version. To avoid confusion, this means that \nMPFVideoCapture\n should always be returning frames that are the\nsame size because most users expect each frame of a video to be the same size.\n\n\nWhen using \nSUPERSET_REGION\n this is not an issue, since one bounding box is used for the entire track. However, when\nusing \nREGION\n, each detection can be a different size, so it is not possible for \nMPFVideoCapture\n to return frames\nthat are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of\n\nMPFVideoCapture\n, \nSUPERSET_REGION\n should usually be preferred over \nREGION\n. The \nREGION\n setting should only be used\nwith components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region\ntracking, so processing frames of various sizes is not a problem.\n\n\nThe \nMPFImageReader\n class is similar to \nMPFVideoCapture\n, but it works on images instead of videos. \nMPFImageReader\n\nmakes it look like the user is processing an original image, when in reality they are processing a modified version\nwhere the frame region is generated based on a detection (\nMPFImageLocation\n) fed forward from the previous stage of a\npipeline. 
Note that \nSUPERSET_REGION\n and \nREGION\n have the same effect when working with images. \nMPFImageReader\n also\nhas a reverse transform function.\n\n\nOpenCV DNN Component Tracking\n\n\nThe OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking\nbehavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process\nthe entire region of each frame of a video. If one or more consecutive frames has the same highest confidence\nclassification, then a new track is generated that contains those frames.\n\n\nWhen feed forward is enabled, the OpenCV DNN component will process the region of each frame of feed forward track\naccording to the \nFEED_FORWARD_TYPE\n. It will generate one track that contains the same frames as the feed forward\ntrack. If \nFEED_FORWARD_TYPE\n is set to \nREGION\n then the OpenCV DNN track will contain (inherit) the same detection\nregions as the feed forward track. In any case, the \ndetectionProperties\n map for the detections in the OpenCV DNN track\nwill include the \nCLASSIFICATION\n entries and possibly other OpenCV DNN component properties.\n\n\nFeed Forward Pipeline Examples\n\n\nGoogLeNet Classification with MOG Motion Detection and Feed Forward Region\n\n\nFirst, create the following action:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n+ Algorithm: DNNCV\n+ MODEL_NAME: googlenet\n+ SUBTRACT_BLUE_VALUE: 104.0\n+ SUBTRACT_GREEN_VALUE: 117.0\n+ SUBTRACT_RED_VALUE: 123.0\n+ FEED_FORWARD_TYPE: REGION\n\n\n\nThen create the following task:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nCAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n\n\n\nRunning this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each\ndetection in each track will have an OpenCV DNN \nCLASSIFICATION\n entry. Each track has a 1-to-1 correspondence with a\nMOG motion track.\n\n\nRefer to \nrunMogThenCaffeFeedForwardExactRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior. Refer to \nrunMogThenCaffeFeedForwardSupersetRegionTest()\n in\nthat class for a system test that uses \nSUPERSET_REGION\n instead. Refer to \nrunMogThenCaffeFeedForwardFullFrameTest()\n\nfor a system test that uses \nFRAME\n instead.\n\n\n\n\nNOTE:\n Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To\nmitigate this, consider setting the \nMERGE_TRACKS\n, \nMIN_GAP_BETWEEN_TRACKS\n, and \nMIN_TRACK_LENGTH\n properties to\ngenerate longer motion tracks and discard short and/or spurious motion tracks.\n\n\nNOTE:\n It doesn\u2019t make sense to use \nFEED_FORWARD_TOP_QUALITY_COUNT\n on a pipeline stage that follows a MOG or\nSuBSENSE motion detection stage. That\u2019s because those motion detectors don\u2019t generate tracks with confidence values\n(\nCONFIDENCE\n being the default value for the \nQUALITY_SELECTION_PROPERTY\n job property). 
Instead,\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n could potentially be used when feeding person tracks into a face detector, for\nexample, if the detections in those person tracks have the requested \nQUALITY_SELECTION_PROPERTY\n set.\n\n\n\n\nOCV Face Detection with MOG Motion Detection and Feed Forward Superset Region\n\n\nFirst, create the following action:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n+ Algorithm: FACECV\n+ FEED_FORWARD_TYPE: SUPERSET_REGION\n\n\n\nThen create the following task:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nOCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n\n\n\nRunning this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has\na 1-to-1 correspondence with a MOG motion track.\n\n\nRefer to \nrunMogThenOcvFaceFeedForwardRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior.",
"title": "Feed Forward Guide"
},
{
@@ -387,7 +387,7 @@
},
{
"location": "/Feed-Forward-Guide/index.html#feed-forward-properties",
- "text": "Components that support feed forward have three algorithm properties that control the feed forward behavior: FEED_FORWARD_TYPE , FEED_FORWARD_TOP_QUALITY_COUNT , and FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST . FEED_FORWARD_TYPE can be set to the following values: NONE : Feed forward is disabled (default setting). FRAME : For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored. SUPERSET_REGION : Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the Superset\n Region section for more details. For each detection in the feed forward track, search the superset\n region. REGION : For each detection in the feed forward track, search the exact detection region. NOTE: When using REGION , the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, REGION should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use SUPERSET_REGION instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track. FEED_FORWARD_TOP_QUALITY_COUNT allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed. When FEED_FORWARD_TOP_QUALITY_COUNT is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property QUALITY_SELECTION_PROPERTY , which defaults to CONFIDENCE , but may be set to a different detection property. Refer to\nthe Quality Selection Guide . If the track contains less than 5 detections then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence. FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST allows you to include detections based on properties in addition to those\nfrom the QUALITY_SELECTION_PROPERTY . To do this, you need to set the property BEST_DETECTION_PROPERTY_NAME_LIST in the\nprevious stage of processing. This property is a string composed of a semi-colon separated list of detection properties.\nFor example, if you want to use something other than CONFIDENCE for the QUALITY_SELECTION_PROPERTY , but you also want\nto include the detection with the highest confidence in your feed-forward track, then you can set the BEST_DETECTION_PROPERTY_NAME_LIST property to \"CONFIDENCE\" in the first stage, and then set the FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST property to \"BEST_CONFIDENCE\" .",
+ "text": "Components that support feed forward have three algorithm properties that control the feed forward behavior: FEED_FORWARD_TYPE , FEED_FORWARD_TOP_QUALITY_COUNT , and FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST . FEED_FORWARD_TYPE can be set to the following values: NONE : Feed forward is disabled (default setting). FRAME : For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored. SUPERSET_REGION : Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the Superset\n Region section for more details. For each detection in the feed forward track, search the superset\n region. REGION : For each detection in the feed forward track, search the exact detection region. NOTE: When using REGION , the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, REGION should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use SUPERSET_REGION instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track. FEED_FORWARD_TOP_QUALITY_COUNT allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed. When FEED_FORWARD_TOP_QUALITY_COUNT is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property QUALITY_SELECTION_PROPERTY , which defaults to CONFIDENCE , but may be set to a different detection property. Refer to\nthe Quality Selection Guide . If the track contains less than 5 detections then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence. FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST allows you to include detections based on properties in addition to those\nchosen with the QUALITY_SELECTION_PROPERTY . For example, you may want to use something other than CONFIDENCE for the QUALITY_SELECTION_PROPERTY , but you also want to include the detection with the highest confidence in your feed-forward\ntrack. If the component executing in the first stage of the pipeline adds a BEST_CONFIDENCE property to the detection\nwith highest confidence in each track, you can then set the FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST property to \"BEST_CONFIDENCE\" , and the detections with that property will be added to the feed-forward track.",
"title": "Feed Forward Properties"
},
{
diff --git a/docs/site/sitemap.xml b/docs/site/sitemap.xml
index 181ee5ae5cad..547770807f2f 100644
--- a/docs/site/sitemap.xml
+++ b/docs/site/sitemap.xml
@@ -2,152 +2,152 @@
 <loc>/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Release-Notes/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/License-And-Distribution/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Acknowledgements/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Install-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Admin-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/User-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/OpenID-Connect-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Media-Segmentation-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Feed-Forward-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Derivative-Media-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Object-Storage-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Markup-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/TiesDb-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Trigger-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Roll-Up-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Health-Check-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Quality-Selection-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/REST-API/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Component-API-Overview/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Component-Descriptor-Reference/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/CPP-Batch-Component-API/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Python-Batch-Component-API/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Java-Batch-Component-API/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/GPU-Support-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Contributor-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Development-Environment-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Node-Guide/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/Workflow-Manager-Architecture/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
 <loc>/CPP-Streaming-Component-API/index.html</loc>
- <lastmod>2025-02-10</lastmod>
+ <lastmod>2025-02-11</lastmod>
 <changefreq>daily</changefreq>
\ No newline at end of file
From 2c217a743934e479cf151b1493b4191b092c0d4e Mon Sep 17 00:00:00 2001
From: cnowacki
Date: Wed, 12 Feb 2025 08:40:42 -0500
Subject: [PATCH 5/6] added description of property back in
---
docs/docs/Feed-Forward-Guide.md | 22 +++++++++++++++++-----
1 file changed, 17 insertions(+), 5 deletions(-)
diff --git a/docs/docs/Feed-Forward-Guide.md b/docs/docs/Feed-Forward-Guide.md
index 9989acd87aed..a9548f6c66e7 100644
--- a/docs/docs/Feed-Forward-Guide.md
+++ b/docs/docs/Feed-Forward-Guide.md
@@ -97,11 +97,23 @@ of the detections in the track will be processed. If one or more detections have
detection(s) with the lower frame index take precedence.
`FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST` allows you to include detections based on properties in addition to those
-chosen with the `QUALITY_SELECTION_PROPERTY`. For example, you may want to use something other than `CONFIDENCE` for the
-`QUALITY_SELECTION_PROPERTY`, but you also want to include the detection with the highest confidence in your feed-forward
-track. If the component executing in the first stage of the pipeline adds a `BEST_CONFIDENCE` property to the detection
-with highest confidence in each track, you can then set the `FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST` property to
-`"BEST_CONFIDENCE"`, and the detections with that property will be added to the feed-forward track.
+chosen with the `QUALITY_SELECTION_PROPERTY`. Its value is a string containing a semicolon-separated list of detection
+property names. For example, you may want to use something other than `CONFIDENCE` for the `QUALITY_SELECTION_PROPERTY`,
+but you also want to include the detection with the highest confidence in your feed forward track. If the component
+executing in the first stage of the pipeline adds a `BEST_CONFIDENCE` property to the detection with the highest
+confidence in each track, you can then set the `FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST` property to
+`BEST_CONFIDENCE`, and the detections with that property will be added to the feed forward track.
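+
+As a sketch, assuming the first-stage component sets a `BEST_CONFIDENCE` property on the best detection in each of its
+tracks, a second-stage action (the action name and property values below are illustrative) could be configured as
+follows:
+
+```
+OCV FACE DETECTION (WITH FEED FORWARD BEST DETECTION) ACTION
++ Algorithm: FACECV
++ FEED_FORWARD_TYPE: SUPERSET_REGION
++ FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST: BEST_CONFIDENCE
+```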
# Superset Region
From bb7cfe2e27864274f5fbcc2534bfb7ca57e79aa2 Mon Sep 17 00:00:00 2001
From: cnowacki
Date: Wed, 12 Feb 2025 08:41:31 -0500
Subject: [PATCH 6/6] rebuilt the docs
---
docs/site/Feed-Forward-Guide/index.html | 17 ++++++++++++-----
docs/site/index.html | 2 +-
docs/site/search/search_index.json | 4 +-
docs/site/sitemap.xml | 60 ++++++++++++-------------
4 files changed, 45 insertions(+), 38 deletions(-)
diff --git a/docs/site/Feed-Forward-Guide/index.html b/docs/site/Feed-Forward-Guide/index.html
index 0f4efcb0b5d1..6208488c4073 100644
--- a/docs/site/Feed-Forward-Guide/index.html
+++ b/docs/site/Feed-Forward-Guide/index.html
@@ -354,11 +354,18 @@ Feed Forward Properties
of the detections in the track will be processed. If one or more detections have the same quality value, then the
detection(s) with the lower frame index take precedence.
FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST allows you to include detections based on properties in addition to those
-chosen with the QUALITY_SELECTION_PROPERTY. For example, you may want to use something other than CONFIDENCE for the
-QUALITY_SELECTION_PROPERTY, but you also want to include the detection with the highest confidence in your feed-forward
-track. If the component executing in the first stage of the pipeline adds a BEST_CONFIDENCE property to the detection
-with highest confidence in each track, you can then set the FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST property to
-"BEST_CONFIDENCE", and the detections with that property will be added to the feed-forward track.
+chosen with the QUALITY_SELECTION_PROPERTY. Its value is a string containing a semicolon-separated list of detection
+property names. For example, you may want to use something other than CONFIDENCE for the QUALITY_SELECTION_PROPERTY,
+but you also want to include the detection with the highest confidence in your feed forward track. If the component
+executing in the first stage of the pipeline adds a BEST_CONFIDENCE property to the detection with the highest
+confidence in each track, you can then set the FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST property to
+BEST_CONFIDENCE, and the detections with that property will be added to the feed forward track.
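+As a sketch, assuming the first-stage component sets a BEST_CONFIDENCE property on the best detection in each of its
+tracks, a second-stage action (the action name and property values below are illustrative) could be configured as follows:
+OCV FACE DETECTION (WITH FEED FORWARD BEST DETECTION) ACTION
++ Algorithm: FACECV
++ FEED_FORWARD_TYPE: SUPERSET_REGION
++ FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST: BEST_CONFIDENCE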
Superset Region
A “superset region” is the smallest region of interest that contains all of the detections for all of the frames in a
track. This is also known as a “union” or “minimum bounding
diff --git a/docs/site/index.html b/docs/site/index.html
index ca7f2b3d4cb0..b4f2d9427ed4 100644
--- a/docs/site/index.html
+++ b/docs/site/index.html
@@ -395,5 +395,5 @@ Overview
diff --git a/docs/site/search/search_index.json b/docs/site/search/search_index.json
index 4e4da95614b0..af47295551be 100644
--- a/docs/site/search/search_index.json
+++ b/docs/site/search/search_index.json
@@ -367,7 +367,7 @@
},
{
"location": "/Feed-Forward-Guide/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nFeed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be\ndirectly \u201cfed into\u201d the next stage. It differs from the default segmenting behavior in the following major ways:\n\n\n\n\n\n\nThe next stage will only look at the frames that had detections in the previous stage. The default segmenting\n behavior results in \u201cfilling the gaps\u201d so that the next stage looks at all the frames between the start and end\n frames of the feed forward track, regardless of whether a detection was actually found in those frames.\n\n\n\n\n\n\nThe next stage can be configured to only look at the detection regions for the frames in the feed forward track. The\n default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks\n at the whole frame region for every frame in the segment.\n\n\n\n\n\n\nThe next stage will process one sub-job per track generated in the previous stage. If the previous stage generated\n more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed\n forward can be configured such that only the detection regions for those tracks are processed. If they are\n non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that\n captures the frame associated with all 3 tracks.\n\n\n\n\n\n\nMotivation\n\n\nConsider using feed forward for the following reasons:\n\n\n\n\n\n\nYou have an algorithm that isn\u2019t capable of breaking down a frame into regions of interest. For example, face\n detection can take a whole frame and generate a separate detection region for each face in the frame. On the other\n hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and\n generate a single detection that\u2019s the size of the frame\u2019s width and height. The OpenCV DNN component will produce\n better results if it operates on smaller regions that only capture the desired object to be classified. Using feed\n forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them.\n\n\n\n\n\n\nYou wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest.\n For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which\n may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will\n speed up run times.\n\n\n\n\n\n\n\n\nNOTE:\n Enabling feed forward results in more sub-jobs and more message passing between the Workflow Manager and\ncomponents than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the\noverhead cost. The cost may be outweighed by how feed forward can \u201cfilter out\u201d pixel data that doesn\u2019t need to be\nprocessed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.\n\n\n\n\nThe output of a feed forward pipeline is the intersection of each stage's output. 
For example, running a feed forward\npipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected\nin the first stage and a face was detected in the second stage.\n\n\nFirst Stage and Combining Properties\n\n\nWhen feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there\nis no track to feed in. In other words, the first stage will process the media file as though feed forward was not\nenabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take\nadvantage of the feed forward behavior.\n\n\n\n\nNOTE:\n When \nFEED_FORWARD_TYPE\n is set to anything other than \nNONE\n, the following properties will be ignored:\n\nFRAME_INTERVAL\n, \nUSE_KEY_FRAMES\n, \nSEARCH_REGION_*\n.\n\n\n\n\nIf you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure\nthat \nFEED_FORWARD_TYPE\n is set to \nNONE\n, or not specified, for the first stage. You can then configure each subsequent\nstage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from\nthe first stage, the subsequent stages will inherit the effects of those properties set on the first stage. \n\n\nFeed Forward Properties\n\n\nComponents that support feed forward have three algorithm properties that control the feed forward behavior:\n\nFEED_FORWARD_TYPE\n, \nFEED_FORWARD_TOP_QUALITY_COUNT\n, and \nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n.\n\n\nFEED_FORWARD_TYPE\n can be set to the following values:\n\n\n\n\nNONE\n: Feed forward is disabled (default setting).\n\n\nFRAME\n: For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored.\n\n\nSUPERSET_REGION\n: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the \nSuperset\n Region\n section for more details. For each detection in the feed forward track, search the superset\n region.\n\n\nREGION\n: For each detection in the feed forward track, search the exact detection region.\n\n\n\n\n\n\nNOTE:\n When using \nREGION\n, the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, \nREGION\n should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use\n\nSUPERSET_REGION\n instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track.\n\n\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed.\n\n\nWhen \nFEED_FORWARD_TOP_QUALITY_COUNT\n is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property\n\nQUALITY_SELECTION_PROPERTY\n, which defaults to \nCONFIDENCE\n, but may be set to a different detection property. Refer to\nthe \nQuality Selection Guide\n. 
If the track contains less than 5 detections then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence.\n\n\nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n allows you to include detections based on properties in addition to those\nchosen with the \nQUALITY_SELECTION_PROPERTY\n. For example, you may want to use something other than \nCONFIDENCE\n for the\n\nQUALITY_SELECTION_PROPERTY\n, but you also want to include the detection with the highest confidence in your feed-forward\ntrack. If the component executing in the first stage of the pipeline adds a \nBEST_CONFIDENCE\n property to the detection\nwith highest confidence in each track, you can then set the \nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n property to\n\n\"BEST_CONFIDENCE\"\n, and the detections with that property will be added to the feed-forward track.\n\n\nSuperset Region\n\n\nA \u201csuperset region\u201d is the smallest region of interest that contains all of the detections for all of the frames in a\ntrack. This is also known as a \u201cunion\u201d or \n\u201cminimum bounding\nrectangle\"\n.\n\n\n\n\nFor example, consider a track representing a person moving from the upper left to the lower right. The track consists of\n3 frames that have the following detection regions:\n\n\n\n\nFrame 0: \n(x = 10, y = 10, width = 10, height = 10)\n\n\nFrame 1: \n(x = 15, y = 15, width = 10, height = 10)\n\n\nFrame 2: \n(x = 20, y = 20, width = 10, height = 10)\n\n\n\n\nEach detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame\nregion. The superset region for the track is \n(x = 10, y = 10, width = 20, height = 20)\n, and is drawn with a dotted red\nline.\n\n\nThe major advantage of using a superset region is constant size. Some algorithms require the search space in each frame\nto be a constant size in order to successfully track objects.\n\n\nA disadvantage is that the superset region will often be larger than any specific detection region, so the search space\nis not restricted to the smallest possible size in each frame; however, in many cases the search space will be\nsignificantly smaller than the whole frame.\n\n\nIn the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a\nvideo to the lower right corner. In that case the superset region will be the entire width and height of the frame, so\n\nSUPERSET_REGION\n devolves into \nFRAME\n.\n\n\nIn a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In\nthat case \nSUPERSET_REGION\n is able to filter out 75% of the rest of the frame data. In the example shown in the above\ndiagram, \nSUPERSET_REGION\n is able to filter out 83% of the rest of the frame data.\n\n\n\n \n\n \n\n \nYour browser does not support the embedded video tag.\n\n \nClick here to download the video.\n\n \n\n\n\n\n\nThe above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box\nthat does not. The inner bounding box represents the face detection in that frame, while the outer bounding box\nrepresents the superset region for the track associated with that face. Note that the bounding box for each face uses a\ndifferent color. 
The colors are not related to those used in the above diagram.\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\nWhen developing a component, the \nC++ Batch Component API\n and \nPython Batch\nComponent API\n include utilities that make it easier to support feed forward in\nyour components. They work similarly, but only the C++ tools will be discussed here. The \nMPFVideoCapture\n class is a\nwrapper around OpenCV's \ncv::VideoCapture\n class. \nMPFVideoCapture\n works very similarly to \ncv::VideoCapture\n, except\nthat it might modify the video frames based on job properties. From the point of view of someone using\n\nMPFVideoCapture\n, these modifications are mostly transparent. \nMPFVideoCapture\n makes it look like you are reading the\noriginal video file.\n\n\nConceptually, consider generating a new video from a feed forward track. The new video would have fewer frames (unless\nthere was a detection in every frame) and possibly a smaller frame size.\n\n\nFor example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found\ndetections in frames 4, 7, and 10, then \nMPFVideoCapture\n will make it look like the video only has those 3 frames. If\nthe feed forward type is \nSUPERSET_REGION\n or \nREGION,\n and each detection is 30x50 pixels, then \nMPFVideoCapture\n will\nmake it look like the video's original resolution was 30x50 pixels.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the modified\nvideo, not the original. To make the detections relative to the original video the\n\nMPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack)\n function must be used.\n\n\nThe general pattern for using \nMPFVideoCapture\n is as follows:\n\n\nstd::vector OcvDnnDetection::GetDetections(const MPFVideoJob &job) {\n\nstd::vector tracks;\n MPFVideoCapture video_cap(job);\n\n cv::Mat frame;\n while (video_cap.Read(frame)) {\n // Process frames and detections to tracks vector\n }\n\n for (MPFVideoTrack &track : tracks) {\n video_cap.ReverseTransform(track);\n }\n\n return tracks;\n}\n\n\n\nMPFVideoCapture\n makes it look like the user is processing the original video, when in reality they are processing a\nmodified version. To avoid confusion, this means that \nMPFVideoCapture\n should always be returning frames that are the\nsame size because most users expect each frame of a video to be the same size.\n\n\nWhen using \nSUPERSET_REGION\n this is not an issue, since one bounding box is used for the entire track. However, when\nusing \nREGION\n, each detection can be a different size, so it is not possible for \nMPFVideoCapture\n to return frames\nthat are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of\n\nMPFVideoCapture\n, \nSUPERSET_REGION\n should usually be preferred over \nREGION\n. The \nREGION\n setting should only be used\nwith components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region\ntracking, so processing frames of various sizes is not a problem.\n\n\nThe \nMPFImageReader\n class is similar to \nMPFVideoCapture\n, but it works on images instead of videos. \nMPFImageReader\n\nmakes it look like the user is processing an original image, when in reality they are processing a modified version\nwhere the frame region is generated based on a detection (\nMPFImageLocation\n) fed forward from the previous stage of a\npipeline. 
Note that \nSUPERSET_REGION\n and \nREGION\n have the same effect when working with images. \nMPFImageReader\n also\nhas a reverse transform function.\n\n\nOpenCV DNN Component Tracking\n\n\nThe OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking\nbehavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process\nthe entire region of each frame of a video. If one or more consecutive frames has the same highest confidence\nclassification, then a new track is generated that contains those frames.\n\n\nWhen feed forward is enabled, the OpenCV DNN component will process the region of each frame of feed forward track\naccording to the \nFEED_FORWARD_TYPE\n. It will generate one track that contains the same frames as the feed forward\ntrack. If \nFEED_FORWARD_TYPE\n is set to \nREGION\n then the OpenCV DNN track will contain (inherit) the same detection\nregions as the feed forward track. In any case, the \ndetectionProperties\n map for the detections in the OpenCV DNN track\nwill include the \nCLASSIFICATION\n entries and possibly other OpenCV DNN component properties.\n\n\nFeed Forward Pipeline Examples\n\n\nGoogLeNet Classification with MOG Motion Detection and Feed Forward Region\n\n\nFirst, create the following action:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n+ Algorithm: DNNCV\n+ MODEL_NAME: googlenet\n+ SUBTRACT_BLUE_VALUE: 104.0\n+ SUBTRACT_GREEN_VALUE: 117.0\n+ SUBTRACT_RED_VALUE: 123.0\n+ FEED_FORWARD_TYPE: REGION\n\n\n\nThen create the following task:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nCAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n\n\n\nRunning this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each\ndetection in each track will have an OpenCV DNN \nCLASSIFICATION\n entry. Each track has a 1-to-1 correspondence with a\nMOG motion track.\n\n\nRefer to \nrunMogThenCaffeFeedForwardExactRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior. Refer to \nrunMogThenCaffeFeedForwardSupersetRegionTest()\n in\nthat class for a system test that uses \nSUPERSET_REGION\n instead. Refer to \nrunMogThenCaffeFeedForwardFullFrameTest()\n\nfor a system test that uses \nFRAME\n instead.\n\n\n\n\nNOTE:\n Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To\nmitigate this, consider setting the \nMERGE_TRACKS\n, \nMIN_GAP_BETWEEN_TRACKS\n, and \nMIN_TRACK_LENGTH\n properties to\ngenerate longer motion tracks and discard short and/or spurious motion tracks.\n\n\nNOTE:\n It doesn\u2019t make sense to use \nFEED_FORWARD_TOP_QUALITY_COUNT\n on a pipeline stage that follows a MOG or\nSuBSENSE motion detection stage. That\u2019s because those motion detectors don\u2019t generate tracks with confidence values\n(\nCONFIDENCE\n being the default value for the \nQUALITY_SELECTION_PROPERTY\n job property). 
Instead,\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n could potentially be used when feeding person tracks into a face detector, for\nexample, if the detections in those person tracks have the requested \nQUALITY_SELECTION_PROPERTY\n set.\n\n\n\n\nOCV Face Detection with MOG Motion Detection and Feed Forward Superset Region\n\n\nFirst, create the following action:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n+ Algorithm: FACECV\n+ FEED_FORWARD_TYPE: SUPERSET_REGION\n\n\n\nThen create the following task:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nOCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n\n\n\nRunning this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has\na 1-to-1 correspondence with a MOG motion track.\n\n\nRefer to \nrunMogThenOcvFaceFeedForwardRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior.",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nFeed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be\ndirectly \u201cfed into\u201d the next stage. It differs from the default segmenting behavior in the following major ways:\n\n\n\n\n\n\nThe next stage will only look at the frames that had detections in the previous stage. The default segmenting\n behavior results in \u201cfilling the gaps\u201d so that the next stage looks at all the frames between the start and end\n frames of the feed forward track, regardless of whether a detection was actually found in those frames.\n\n\n\n\n\n\nThe next stage can be configured to only look at the detection regions for the frames in the feed forward track. The\n default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks\n at the whole frame region for every frame in the segment.\n\n\n\n\n\n\nThe next stage will process one sub-job per track generated in the previous stage. If the previous stage generated\n more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed\n forward can be configured such that only the detection regions for those tracks are processed. If they are\n non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that\n captures the frame associated with all 3 tracks.\n\n\n\n\n\n\nMotivation\n\n\nConsider using feed forward for the following reasons:\n\n\n\n\n\n\nYou have an algorithm that isn\u2019t capable of breaking down a frame into regions of interest. For example, face\n detection can take a whole frame and generate a separate detection region for each face in the frame. On the other\n hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and\n generate a single detection that\u2019s the size of the frame\u2019s width and height. The OpenCV DNN component will produce\n better results if it operates on smaller regions that only capture the desired object to be classified. Using feed\n forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them.\n\n\n\n\n\n\nYou wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest.\n For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which\n may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will\n speed up run times.\n\n\n\n\n\n\n\n\nNOTE:\n Enabling feed forward results in more sub-jobs and more message passing between the Workflow Manager and\ncomponents than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the\noverhead cost. The cost may be outweighed by how feed forward can \u201cfilter out\u201d pixel data that doesn\u2019t need to be\nprocessed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.\n\n\n\n\nThe output of a feed forward pipeline is the intersection of each stage's output. 
For example, running a feed forward\npipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected\nin the first stage and a face was detected in the second stage.\n\n\nFirst Stage and Combining Properties\n\n\nWhen feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there\nis no track to feed in. In other words, the first stage will process the media file as though feed forward was not\nenabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take\nadvantage of the feed forward behavior.\n\n\n\n\nNOTE:\n When \nFEED_FORWARD_TYPE\n is set to anything other than \nNONE\n, the following properties will be ignored:\n\nFRAME_INTERVAL\n, \nUSE_KEY_FRAMES\n, \nSEARCH_REGION_*\n.\n\n\n\n\nIf you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure\nthat \nFEED_FORWARD_TYPE\n is set to \nNONE\n, or not specified, for the first stage. You can then configure each subsequent\nstage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from\nthe first stage, the subsequent stages will inherit the effects of those properties set on the first stage.\n\n\nFeed Forward Properties\n\n\nComponents that support feed forward have three algorithm properties that control the feed forward behavior:\n\nFEED_FORWARD_TYPE\n, \nFEED_FORWARD_TOP_QUALITY_COUNT\n, and \nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n.\n\n\nFEED_FORWARD_TYPE\n can be set to the following values:\n\n\n\n\nNONE\n: Feed forward is disabled (default setting).\n\n\nFRAME\n: For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored.\n\n\nSUPERSET_REGION\n: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the \nSuperset\n Region\n section for more details. For each detection in the feed forward track, search the superset\n region.\n\n\nREGION\n: For each detection in the feed forward track, search the exact detection region.\n\n\n\n\n\n\nNOTE:\n When using \nREGION\n, the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, \nREGION\n should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use\n\nSUPERSET_REGION\n instead. That will ensure that each detection region is relative to the upper left corner of the\nsuperset region for that track.\n\n\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed.\n\n\nWhen \nFEED_FORWARD_TOP_QUALITY_COUNT\n is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property\n\nQUALITY_SELECTION_PROPERTY\n, which defaults to \nCONFIDENCE\n, but may be set to a different detection property. Refer to\nthe \nQuality Selection Guide\n. 
If the track contains fewer than 5 detections then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence.\n\n\nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n allows you to include detections based on properties in addition to those\nchosen with the \nQUALITY_SELECTION_PROPERTY\n. It consists of a semicolon-separated list of detection\nproperty names. For example, you may want to use something other than \nCONFIDENCE\n for the \nQUALITY_SELECTION_PROPERTY\n, but\nyou also want to include the detection with the highest confidence in your feed-forward track. If the component executing\nin the first stage of the pipeline adds a \nBEST_CONFIDENCE\n property to the detection with the highest confidence in each track,\nyou can then set the \nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n property to \nBEST_CONFIDENCE\n, and the detections with\nthat property will be added to the feed-forward track.\n\n\nSuperset Region\n\n\nA \u201csuperset region\u201d is the smallest region of interest that contains all of the detections for all of the frames in a\ntrack. This is also known as a \u201cunion\u201d or \n\u201cminimum bounding\nrectangle\u201d\n.\n\n\n\n\nFor example, consider a track representing a person moving from the upper left to the lower right. The track consists of\n3 frames that have the following detection regions:\n\n\n\n\nFrame 0: \n(x = 10, y = 10, width = 10, height = 10)\n\n\nFrame 1: \n(x = 15, y = 15, width = 10, height = 10)\n\n\nFrame 2: \n(x = 20, y = 20, width = 10, height = 10)\n\n\n\n\nEach detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame\nregion. The superset region for the track is \n(x = 10, y = 10, width = 20, height = 20)\n, and is drawn with a dotted red\nline.\n\n\nThe major advantage of using a superset region is constant size. Some algorithms require the search space in each frame\nto be a constant size in order to successfully track objects.\n\n\nA disadvantage is that the superset region will often be larger than any specific detection region, so the search space\nis not restricted to the smallest possible size in each frame; however, in many cases the search space will be\nsignificantly smaller than the whole frame.\n\n\nIn the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a\nvideo to the lower right corner. In that case the superset region will be the entire width and height of the frame, so\n\nSUPERSET_REGION\n devolves into \nFRAME\n.\n\n\nIn a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In\nthat case \nSUPERSET_REGION\n is able to filter out 75% of the rest of the frame data. In the example shown in the above\ndiagram, \nSUPERSET_REGION\n is able to filter out 83% of the rest of the frame data.\n\n\n\n \n\n \n\n \nYour browser does not support the embedded video tag.\n\n \nClick here to download the video.\n\n \n\n\n\n\n\nThe above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box\nthat does not. The inner bounding box represents the face detection in that frame, while the outer bounding box\nrepresents the superset region for the track associated with that face. Note that the bounding box for each face uses a\ndifferent color. 
The colors are not related to those used in the above diagram.\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\nWhen developing a component, the \nC++ Batch Component API\n and \nPython Batch\nComponent API\n include utilities that make it easier to support feed forward in\nyour components. They work similarly, but only the C++ tools will be discussed here. The \nMPFVideoCapture\n class is a\nwrapper around OpenCV's \ncv::VideoCapture\n class. \nMPFVideoCapture\n works very similarly to \ncv::VideoCapture\n, except\nthat it might modify the video frames based on job properties. From the point of view of someone using\n\nMPFVideoCapture\n, these modifications are mostly transparent. \nMPFVideoCapture\n makes it look like you are reading the\noriginal video file.\n\n\nConceptually, consider generating a new video from a feed forward track. The new video would have fewer frames (unless\nthere was a detection in every frame) and possibly a smaller frame size.\n\n\nFor example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found\ndetections in frames 4, 7, and 10, then \nMPFVideoCapture\n will make it look like the video only has those 3 frames. If\nthe feed forward type is \nSUPERSET_REGION\n or \nREGION\n, and each detection is 30x50 pixels, then \nMPFVideoCapture\n will\nmake it look like the video's original resolution was 30x50 pixels.\n\n\nOne issue with this approach is that the detection frame numbers and bounding boxes will be relative to the modified\nvideo, not the original. To make the detections relative to the original video, the\n\nMPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack)\n function must be used.\n\n\nThe general pattern for using \nMPFVideoCapture\n is as follows:\n\n\nstd::vector<MPFVideoTrack> OcvDnnDetection::GetDetections(const MPFVideoJob &job) {\n std::vector<MPFVideoTrack> tracks;\n MPFVideoCapture video_cap(job);\n\n cv::Mat frame;\n while (video_cap.Read(frame)) {\n // Process each frame, adding detections to the tracks vector\n }\n\n for (MPFVideoTrack &track : tracks) {\n video_cap.ReverseTransform(track);\n }\n\n return tracks;\n}\n\n\n\nMPFVideoCapture\n makes it look like the user is processing the original video, when in reality they are processing a\nmodified version. To avoid confusion, \nMPFVideoCapture\n should always return frames that are the\nsame size, because most users expect each frame of a video to be the same size.\n\n\nWhen using \nSUPERSET_REGION\n this is not an issue, since one bounding box is used for the entire track. However, when\nusing \nREGION\n, each detection can be a different size, so it is not possible for \nMPFVideoCapture\n to return frames\nthat are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of\n\nMPFVideoCapture\n, \nSUPERSET_REGION\n should usually be preferred over \nREGION\n. The \nREGION\n setting should only be used\nwith components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region\ntracking, so processing frames of various sizes is not a problem.\n\n\nThe \nMPFImageReader\n class is similar to \nMPFVideoCapture\n, but it works on images instead of videos. \nMPFImageReader\n\nmakes it look like the user is processing an original image, when in reality they are processing a modified version\nwhere the frame region is generated based on a detection (\nMPFImageLocation\n) fed forward from the previous stage of a\npipeline. 
Note that \nSUPERSET_REGION\n and \nREGION\n have the same effect when working with images. \nMPFImageReader\n also\nhas a reverse transform function.\n\n\nOpenCV DNN Component Tracking\n\n\nThe OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking\nbehavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process\nthe entire region of each frame of a video. If one or more consecutive frames have the same highest confidence\nclassification, then a new track is generated that contains those frames.\n\n\nWhen feed forward is enabled, the OpenCV DNN component will process the region of each frame of the feed forward track\naccording to the \nFEED_FORWARD_TYPE\n. It will generate one track that contains the same frames as the feed forward\ntrack. If \nFEED_FORWARD_TYPE\n is set to \nREGION\n then the OpenCV DNN track will contain (inherit) the same detection\nregions as the feed forward track. In any case, the \ndetectionProperties\n map for the detections in the OpenCV DNN track\nwill include the \nCLASSIFICATION\n entries and possibly other OpenCV DNN component properties.\n\n\nFeed Forward Pipeline Examples\n\n\nGoogLeNet Classification with MOG Motion Detection and Feed Forward Region\n\n\nFirst, create the following action:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n+ Algorithm: DNNCV\n+ MODEL_NAME: googlenet\n+ SUBTRACT_BLUE_VALUE: 104.0\n+ SUBTRACT_GREEN_VALUE: 117.0\n+ SUBTRACT_RED_VALUE: 123.0\n+ FEED_FORWARD_TYPE: REGION\n\n\n\nThen create the following task:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nCAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n\n\n\nRunning this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each\ndetection in each track will have an OpenCV DNN \nCLASSIFICATION\n entry. Each track has a 1-to-1 correspondence with a\nMOG motion track.\n\n\nRefer to \nrunMogThenCaffeFeedForwardExactRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior. Refer to \nrunMogThenCaffeFeedForwardSupersetRegionTest()\n in\nthat class for a system test that uses \nSUPERSET_REGION\n instead. Refer to \nrunMogThenCaffeFeedForwardFullFrameTest()\n\nfor a system test that uses \nFRAME\n instead.\n\n\n\n\nNOTE:\n Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To\nmitigate this, consider setting the \nMERGE_TRACKS\n, \nMIN_GAP_BETWEEN_TRACKS\n, and \nMIN_TRACK_LENGTH\n properties to\ngenerate longer motion tracks and discard short and/or spurious motion tracks.\n\n\nNOTE:\n It doesn\u2019t make sense to use \nFEED_FORWARD_TOP_QUALITY_COUNT\n on a pipeline stage that follows a MOG or\nSuBSENSE motion detection stage. That\u2019s because those motion detectors don\u2019t generate tracks with confidence values\n(\nCONFIDENCE\n being the default value for the \nQUALITY_SELECTION_PROPERTY\n job property). 
Instead,\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n could potentially be used when feeding person tracks into a face detector, for\nexample, if the detections in those person tracks have the requested \nQUALITY_SELECTION_PROPERTY\n set.\n\n\n\n\nOCV Face Detection with MOG Motion Detection and Feed Forward Superset Region\n\n\nFirst, create the following action:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n+ Algorithm: FACECV\n+ FEED_FORWARD_TYPE: SUPERSET_REGION\n\n\n\nThen create the following task:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nOCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n\n\n\nRunning this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has\na 1-to-1 correspondence with a MOG motion track.\n\n\nRefer to \nrunMogThenOcvFaceFeedForwardRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior.",
"title": "Feed Forward Guide"
},
{
@@ -387,7 +387,7 @@
},
{
"location": "/Feed-Forward-Guide/index.html#feed-forward-properties",
- "text": "Components that support feed forward have three algorithm properties that control the feed forward behavior: FEED_FORWARD_TYPE , FEED_FORWARD_TOP_QUALITY_COUNT , and FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST . FEED_FORWARD_TYPE can be set to the following values: NONE : Feed forward is disabled (default setting). FRAME : For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored. SUPERSET_REGION : Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the Superset\n Region section for more details. For each detection in the feed forward track, search the superset\n region. REGION : For each detection in the feed forward track, search the exact detection region. NOTE: When using REGION , the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, REGION should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use SUPERSET_REGION instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track. FEED_FORWARD_TOP_QUALITY_COUNT allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed. When FEED_FORWARD_TOP_QUALITY_COUNT is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property QUALITY_SELECTION_PROPERTY , which defaults to CONFIDENCE , but may be set to a different detection property. Refer to\nthe Quality Selection Guide . If the track contains less than 5 detections then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence. FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST allows you to include detections based on properties in addition to those\nchosen with the QUALITY_SELECTION_PROPERTY . For example, you may want to use something other than CONFIDENCE for the QUALITY_SELECTION_PROPERTY , but you also want to include the detection with the highest confidence in your feed-forward\ntrack. If the component executing in the first stage of the pipeline adds a BEST_CONFIDENCE property to the detection\nwith highest confidence in each track, you can then set the FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST property to \"BEST_CONFIDENCE\" , and the detections with that property will be added to the feed-forward track.",
+ "text": "Components that support feed forward have three algorithm properties that control the feed forward behavior: FEED_FORWARD_TYPE , FEED_FORWARD_TOP_QUALITY_COUNT , and FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST . FEED_FORWARD_TYPE can be set to the following values: NONE : Feed forward is disabled (default setting). FRAME : For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored. SUPERSET_REGION : Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the Superset\n Region section for more details. For each detection in the feed forward track, search the superset\n region. REGION : For each detection in the feed forward track, search the exact detection region. NOTE: When using REGION , the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, REGION should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use SUPERSET_REGION instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track. FEED_FORWARD_TOP_QUALITY_COUNT allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed. When FEED_FORWARD_TOP_QUALITY_COUNT is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property QUALITY_SELECTION_PROPERTY , which defaults to CONFIDENCE , but may be set to a different detection property. Refer to\nthe Quality Selection Guide . If the track contains less than 5 detections then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence. FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST allows you to include detections based on properties in addition to those\nchosen with the QUALITY_SELECTION_PROPERTY . It consists of a string that is a semi-colon separated list of detection\nproperties. For example, you may want to use something other than CONFIDENCE for the QUALITY_SELECTION_PROPERTY , but\nyou also want to include the detection with the highest confidence in your feed-forward track. If the component executing\nin the first stage of the pipeline adds a BEST_CONFIDENCE property to the detection with highest confidence in each track,\nyou can then set the FEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST property to BEST_CONFIDENCE , and the detections with\nthat property will be added to the feed-forward track.",
"title": "Feed Forward Properties"
},
{
diff --git a/docs/site/sitemap.xml b/docs/site/sitemap.xml
index 547770807f2f..d10cec510494 100644
--- a/docs/site/sitemap.xml
+++ b/docs/site/sitemap.xml
@@ -2,152 +2,152 @@
/index.html
- 2025-02-11
+ 2025-02-12
daily
/Release-Notes/index.html
- 2025-02-11
+ 2025-02-12
daily
/License-And-Distribution/index.html
- 2025-02-11
+ 2025-02-12
daily
/Acknowledgements/index.html
- 2025-02-11
+ 2025-02-12
daily
/Install-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/Admin-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/User-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/OpenID-Connect-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/Media-Segmentation-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/Feed-Forward-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/Derivative-Media-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/Object-Storage-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/Markup-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/TiesDb-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/Trigger-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/Roll-Up-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/Health-Check-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/Quality-Selection-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/REST-API/index.html
- 2025-02-11
+ 2025-02-12
daily
/Component-API-Overview/index.html
- 2025-02-11
+ 2025-02-12
daily
/Component-Descriptor-Reference/index.html
- 2025-02-11
+ 2025-02-12
daily
/CPP-Batch-Component-API/index.html
- 2025-02-11
+ 2025-02-12
daily
/Python-Batch-Component-API/index.html
- 2025-02-11
+ 2025-02-12
daily
/Java-Batch-Component-API/index.html
- 2025-02-11
+ 2025-02-12
daily
/GPU-Support-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/Contributor-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/Development-Environment-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/Node-Guide/index.html
- 2025-02-11
+ 2025-02-12
daily
/Workflow-Manager-Architecture/index.html
- 2025-02-11
+ 2025-02-12
daily
/CPP-Streaming-Component-API/index.html
- 2025-02-11
+ 2025-02-12
daily
\ No newline at end of file