Web application for loading videos, segmenting them into frames, and preparing them for keyframe analysis.
- NiceGUI
- NumPy
- OpenCV (via the `opencv-python` package)
- PyTubeFix
- FFmpeg for faster segmentation. When `ffmpeg` is installed and available on your `PATH`, KeyFramer prefers the `ffmpeg` segmentation pipeline automatically and falls back to OpenCV if it is unavailable.
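The pipeline selection described above can be sketched as a simple `PATH` check; the function name here is illustrative, not KeyFramer's actual API:

```python
import shutil

def choose_segmentation_pipeline() -> str:
    """Prefer ffmpeg when the binary is on PATH, else fall back to OpenCV.

    A minimal sketch of the selection behavior described above; the real
    KeyFramer code also falls back if the ffmpeg run itself fails.
    """
    return "ffmpeg" if shutil.which("ffmpeg") else "opencv"
```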
Run:
```
.\install.bat
```
This will:
- recreate the `VENV` virtual environment
- use the highest installed Python 3.13+ found by the Windows `py` launcher
- install the Python dependencies from `requirements.txt`
Run:
```
./install.sh
```
The Linux installer uses the first Python 3.13+ interpreter it finds on your `PATH`.
You can activate the virtual environment manually. On Windows:
```
.\VENV\Scripts\Activate.ps1
```
On Linux:
```
source VENV/bin/activate
```
You know the virtual environment is active when `(VENV)` appears in your prompt.
- Segmentation writes both full-size `-video.jpg` frames and smaller 640x360 `-analysis.jpg` frames. Keyframe analysis uses the `-analysis.jpg` copies when available for faster processing while keeping the original full-size frames for other workflows.
- When `ffmpeg` is available, KeyFramer now prefers a direct `ffmpeg` segmentation path that writes the final JPG outputs itself and falls back to the OpenCV pipeline automatically if `ffmpeg` is unavailable or fails.
- In the Segment step, `Reuse existing segmentation if present` only reuses a segmented directory when the saved metadata still matches the current source file and segmentation frequency.
- Segmentation also skips rewriting frame outputs that already exist, which helps with reruns and partial resumes.
- `Evaluate sequential differences` uses parallel CPU workers. In the Keyframing step, `Pair Workers` controls how many CPU processes compare adjacent frame pairs, and `Batch Size` controls how many comparisons each worker handles at a time.
- Those two settings are auto-filled based on your CPU count and the selected directory's frame count, but you can still override them manually at any time.
- Paired-difference and threshold keyframe outputs now use metadata fingerprints, so unchanged frame sets can reuse cached results instantly while stale caches are invalidated automatically.
- Long-running actions support cancellation: `Segment Media`, `Evaluate sequential differences`, and `Create Keyframes` each show a `Cancel` button while processing is active.
- After `Create Keyframes`, the app reports how many keyframes were selected out of the total segmented frames, along with the percentage of the directory that was chosen as keyframes.
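The pair/batch layout described above can be sketched as follows. The function names are illustrative, and the real workers compute SSIM on the analysis copies rather than accepting an arbitrary `compare_batch` callable:

```python
from concurrent.futures import ProcessPoolExecutor

def make_pair_batches(frames, batch_size):
    """Group adjacent (frame, next_frame) comparisons into batches.

    Mirrors how Pair Workers and Batch Size are described above: each
    worker process handles batch_size comparisons at a time.
    """
    pairs = list(zip(frames, frames[1:]))
    return [pairs[i:i + batch_size] for i in range(0, len(pairs), batch_size)]

def evaluate_pairs(frames, compare_batch, pair_workers=4, batch_size=8):
    """Fan the batches out across pair_workers CPU processes."""
    results = []
    with ProcessPoolExecutor(max_workers=pair_workers) as pool:
        for batch in pool.map(compare_batch, make_pair_batches(frames, batch_size)):
            results.extend(batch)
    return results
```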
Recommended:
```
.\run.bat
```
This activates `VENV` and runs `app.py` from the project root.
You can also run it manually:
```
.\VENV\Scripts\activate.bat
python app.py
```
Recommended:
```
./run.sh
```
This activates `VENV` and runs `app.py` from the project root. You can also run it manually:
```
python app.py
```
KeyFramer requires Python 3.13 or newer and will exit immediately with a clear error if started on an older interpreter. After startup, open http://localhost:8080.
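The startup version guard described above might look like this minimal sketch; the function name and error message are illustrative:

```python
import sys

def require_python(minimum=(3, 13)) -> None:
    """Exit with a clear error when run on an interpreter older than minimum."""
    if sys.version_info < minimum:
        sys.exit(
            f"KeyFramer requires Python {minimum[0]}.{minimum[1]} or newer; "
            f"found {sys.version_info.major}.{sys.version_info.minor}."
        )
```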
- `Load mp4 video from local directory` looks for `.mp4`, `.mkv`, and `.webm` files in the project root.
- Uploaded files are also saved into the project root before processing.
- Segmented frames are written under `./library/<video-name>/`.
- Each segmented directory also includes `segmentation_metadata.json`, which records the source name, segmentation frequency, frame count, and the method used (`ffmpeg`, `opencv`, or `reuse`).
- Processed source videos are moved into `./sources/` after segmentation completes.
For this repository, the project root is the folder containing `app.py`, `install.bat`, `install.sh`, `run.bat`, and `run.sh`.
Keyframe analysis is stored as plain text Python tuple lists inside each segmented video folder under ./library/<video-name>/.
File name:
```
./library/<video-name>/<video-name>_paired_key_frames.txt
```
Contents:
```
[
    ('frame_a_path', 'frame_b_path', similarity_score),
    ...
]
```
Each tuple contains:
- the first frame path
- the next sequential frame path
- the SSIM similarity score between them
Higher similarity scores mean the frames are more alike.
File name:
```
./library/<video-name>/<video-name>_threshold_key_frames.txt
```
Contents:
```
[
    (start_index, next_index, frame_a_path, frame_b_path, similarity_score),
    ...
]
```
Each tuple contains:
- `start_index`: index of the current frame in the sorted JPG list
- `next_index`: index of the next frame chosen as a key transition
- `frame_a_path`: starting frame path
- `frame_b_path`: next selected frame path
- `similarity_score`: SSIM score that crossed the threshold rule
The app currently treats a lower score as a stronger visual change. In practice, entries in the threshold file represent jumps to the next frame whose similarity dropped below the chosen threshold.
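That threshold rule can be illustrated with a simplified sketch that rebuilds threshold-style entries from the paired SSIM results. This version always jumps to the immediately following frame; the default of `0.6` is purely illustrative, and the real selection logic may differ:

```python
def threshold_keyframes(paired, threshold=0.6):
    """Select key transitions where similarity drops below the threshold.

    'paired' holds (frame_a_path, frame_b_path, similarity_score) tuples
    in frame order. A lower score means a stronger visual change, so a
    score below the threshold marks frame_b as the next keyframe.
    """
    entries = []
    for i, (frame_a, frame_b, score) in enumerate(paired):
        if score < threshold:
            entries.append((i, i + 1, frame_a, frame_b, score))
    return entries
```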
Segmented frame images are saved as JPEGs like:
```
0-00-12.30-video.jpg
0-00-12.30-analysis.jpg
```
- `-video.jpg` keeps the full-size frame for downstream use.
- `-analysis.jpg` stores a 640x360 analysis copy used for sequential-difference and keyframe processing.

The timestamp encodes the approximate time in hours-minutes-seconds.hundredths form.
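A formatter matching that timestamp pattern could look like this; the exact zero-padding is inferred from the example filenames above, not confirmed against the source:

```python
def frame_timestamp(seconds: float) -> str:
    """Format a time offset as hours-minutes-seconds.hundredths,
    e.g. 12.3 -> '0-00-12.30', matching the frame filename prefix."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{int(hours)}-{int(minutes):02d}-{secs:05.2f}"
```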
KeyFramer also writes JSON metadata files alongside the text outputs:
```
./library/<video-name>/segmentation_metadata.json
./library/<video-name>/<video-name>_paired_key_frames.meta.json
./library/<video-name>/<video-name>_threshold_key_frames.meta.json
```
These metadata files store the information KeyFramer uses to decide whether it can safely reuse existing segmentation, paired-difference, and threshold-keyframe outputs.
The paired and threshold data files are not JSON. They are written using Python's `str(...)` representation of lists and tuples, so the safest way to load them in Python is with `ast.literal_eval(...)`.
Example:
```python
import ast
from pathlib import Path

paired = ast.literal_eval(Path('./library/Witcher-99/Witcher-99_paired_key_frames.txt').read_text())
threshold = ast.literal_eval(Path('./library/Witcher-99/Witcher-99_threshold_key_frames.txt').read_text())
```
- YouTube retrieval downloads the video stream only.
- The app serves its UI on port `8080` by default.
Contact Michael Youngblood michael@filuta.ai
Always improving...
If you have direct access to this repository, feel free to make modifications and add yourself as an author below. If you break it, you must fix it. If you do not have direct access, please make the changes and submit a merge request through the repository system.
G. Michael Youngblood, Filuta AI, Inc.
Filuta AI KeyFramer
Copyright 2026 Filuta AI, Inc.
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files, to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
This is research code. It will be messy and buggy. If you feel the need to complain, stop; just fix it or live with it.