Conversation
* adding rich arg, adding coldkeys and hotkeys
* moving rich to payload from headers
* bump version
---------
Co-authored-by: benliang99 <caliangben@gmail.com>
Adding two finetuned image models to expand validator challenges
Updated transformers version to fix tokenizer initialization error
* Made gpu id specification consistent across synthetic image generation models
* Changed gpu_id to device
* Docstring grammar
* add neuron.device to SyntheticImageGenerator init
* Fixed variable names
* adding device to start_validator.sh
* deprecating old/biased random prompt generation
* properly clear gpu of moderation pipeline
* simplifying usage of self.device
* fixing moderation pipeline device
* explicitly defining model/tokenizer for moderation pipeline to avoid accelerate auto device management
* deprecating random prompt generation
---------
Co-authored-by: benliang99 <caliangben@gmail.com>
bump version
* simple video challenge implementation wip
* dummy multimodal miner
* constants reorg
* updating verify_models script with t2v
* fixing MODEL_PIPELINE init
* cleanup
* __init__.py
* hasattr fix
* num_frames must be divisible by 8
* fixing dict iteration
* dummy response for videos
* fixing small bugs
* fixing video logging and compression
* apply image transforms uniformly to frames of video
* transform list of tensor to pil for synapse prep
* cleaning up vali forward
* miner function signatures to use Synapse base class instead of ImageSynapse
* vali requirements imageio and moviepy
* attaching separate video and image forward functions
* separating blacklist and priority fns for image/video synapses
* pred -> prediction
* initial synth video challenge flow
* initial video cache implementation
* video cache cleanup
* video zip downloads
* wip fairly large refactor of data generation, functionality and form
* generalized hf zip download fn
* had claude improve video_cache formatting
* vali forward cleanup
* cleanup + turning back on randomness for real/fake
* fix relative import
* wip moving video datasets to vali config
* Adding optimization flags to vali config
* check if captioning model already loaded
* async SyntheticDataGenerator wip
* async zip download
* ImageCache wip
* proper gpu clearing for moderation pipeline
* sdg cleanup
* new cache system WIP
* image/video cache updates
* cleaning up unused metadata arg, improving logging
* fixed frame sampling, parquet image extraction, image sampling
* synth data cache wip
* Moving sdg to its own pm2 process
* synthetic data gen memory management update
* mochi-1-preview
* util cleanup, new requirements
* ensure SyntheticDataGenerator process waits for ImageCache to populate
* adding new t2i models from main
* Fixing t2v model output saving
* miner cleanup
* Moving tall model weights to bitmind hf org
* removing test video pkl
* fixing circular import
* updating usage of hf_hub_download according to some breaking huggingface_hub changes
* adding ffmpeg to vali reqs
* adding back in video models in async generation after testing
* renaming UCF directory to DFB, since it now contains TALL
* remaining renames for UCF -> DFB
* pyffmpeg
* video compatible data augmentations
* Default values for level, data_aug_params for failure case
* switching image challenges back on
* using sample variable to store data for all challenge types
* disabling sequential_cpu_offload for CogVideoX5b
* logging metadata fields to w&b
* log challenge metadata
* bump version
* adding context manager for generation w/ different dtypes
* variable name fix in ComposeWithTransforms
* fixing broken DFB stuff in tall_detector.py
* removing unnecessary logging
* fixing outdated variable names
* cache refactor; moving shared functionality to BaseCache
* finally automating w&b project setting
* improving logs
* improving validator forward structure
* detector ABC cleanup + function headers
* adding try/except for miner performance history loading
* fixing import
* cleaning up vali logging
* pep8 formatting video_utils
* cleaning up start_validator.sh, starting validator process before data gen
* shortening vali challenge timer
* moving data generation management to its own script & added W&B logging
* run_data_generator.py
* fixing full_path variable name
* changing w&b name for data generator
* yaml > json gang
* simplifying ImageCache.sample to always return one sample
* adding option to skip a challenge if no data are available in cache
* adding config vars for image/video detector
* cleaning up miner class, moving blacklist/priority to base
* updating call to image_cache.sample()
* fixing mochi gen to 84 frames
* fixing video data padding for miners
* updating setup script to create new .env file
* fixing weight loading after detector refactor
* model/detector separation for TALL & modifying base DFB code to allow device configuration
* standardizing video detector input to a frames tensor
* separation of concerns; moving all video preprocessing to detector class
* pep8 cleanup
* reformatting if statements
* temporarily removing initial dataset class
* standardizing config loading across video and image models
* finished VideoDataloader and supporting components
* moved save config file out of train script
* backwards compatibility for ucf training
* moving data augmentation from RealFakeDataset to Dataset subclasses for video aug support
* cleaning up data augmentation and target_image_size
* import cleanup
* gitignore update
* fixing typos picked up by flake8
* fixing function name, thanks flake8
* fixing test fixtures
* disabling pytests for now; some are broken after refactor and it's 4am
Combined requirements installation
Video UAT fixes
* docs updates
* mining docs update
* breaking out cache updates into their own process
* adding retries for loading vali info
* moving device config to data generation process
* typo
* removing old run_updater init arg, fixing dataset indexing
* only download 1 zip to start to provide data for vali on first boot
* cache deletion functionality
* log cache size
* name images with dataset prefix
* moving download_data.py to base_miner/datasets
* removing unused args in download_data
* constants -> config
* docs updates for new paths
* updating outdated fn headers
[Testnet] Variable framerate sampling
* Fix registry module imports
* Fixing config loading issues
* fixing frame sampling
* bugfix
* print label on testnet
* re-enabling model verification
* update detector class names
* Fixing config_name arg for camo
* fixing detector config in camo
* fixing ref to self.config_name
* update default frame rate
* video dataset creation example
* default config for video datasets
* update default num_videos
---------
Co-authored-by: Andrew <caliangandrew@gmail.com>
…rors when using cuda (#126)
* resetting challenge timer to 60s
* fix logging for miner history loading
* randomize model order, log gen time
* remove frame limit
* separate logging to after data check
* generate with batch=1 first for diverse data availability
* load v1 history path for smooth transition to new incentive
* prune extracted cache
* swapping url open-images for jpg
* removing unused config args
* shortening cache refresh timer
* cache optimizations
* typo
* better variable naming
* default to autocast
* log num files in cache along with GB
* surfacing max size gb variables
* cooked typo
* Fixed wrong validation split key string causing no transform to be applied
* Changed detector arg to be required
* fixing hotkey reset check
* removing logline
* clamp mcc at 0 so video doesn't negatively impact performant image miners
* typo
* improving cache logs
* prune after clear
* only update relevant tracker in reward
* improved logging, turned off cache removal in sample()
---------
Co-authored-by: Andrew <caliangandrew@gmail.com>
Re-added bitmind HF org prefix to dataset path
Release 2.0.0 - Video Challenges V1
Validator Update Steps
See `min_compute.yaml` for compute requirements.

If you wish to turn off autoupdate or self-healing restarts, you can instead start your validator with either
or
NOTE: Our startup script `run_neuron.py` (which calls `start_validator.py`, responsible for spawning the pm2 processes) now starts the processes shown below. Do not manually run `neurons/validator.py`.

* `run_neuron` manages self-heal restarts and auto-update
* `bitmind_validator` is your validator process
* `bitmind_data_generator` populates `~/.cache/sn34/synthetic` with outputs from text-to-video and text-to-image models. This is the only sn34 validator process that uses the GPU
* `bitmind_cache_updater` manages the cache of real images and videos at `~/.cache/sn34/real`

Miner Update Steps
Overview
v2.0.0 features our initial version of video challenges with three text-to-video models, and two large real video datasets amounting to nearly 10TB of video data. It also contains significant refactors of core subnet components, with an emphasis on our approach to sampling and generating data for validator challenges. More on this below.
It also includes code to train and deploy a deepfake video detection model called TALL (whitepaper) as a SN34 miner.
Reward distribution will initially be split 90% to image challenges, 10% to video challenges. This is meant to give miners the freedom to explore and experiment with different video detection models without a significant impact on incentive distribution, while still allowing strong video performance to provide upward mobility.
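The split above, together with the MCC clamp mentioned in the incentive-related commits ("clamp mcc at 0 so video doesn't negatively impact performant image miners"), can be sketched as follows. This is an illustrative sketch only; the function and variable names are assumptions, not the subnet's actual reward code.

```python
def combined_reward(image_score: float, video_mcc: float) -> float:
    """Blend image and video performance 90/10 (illustrative sketch).

    The video term (an MCC in [-1, 1]) is clamped at 0 so a weak video
    detector cannot drag down an otherwise performant image miner, while
    strong video performance still adds upward mobility.
    """
    video_score = max(video_mcc, 0.0)  # clamp: video can only help, never hurt
    return 0.9 * image_score + 0.1 * video_score
```

Note the asymmetry this creates: the worst case for the video term is neutrality, so experimenting with video models is risk-free for the image-side score.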
To test miner generalizability, video challenges include a chronologically ordered, random set of video frames sampled at a variable frame rate. Frames are extracted with a PNG codec, with JPEG compression randomly applied as part of our pre-existing augmentation pipeline.
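The variable-frame-rate sampling described above can be sketched with pure index arithmetic: choose a random step size (emulating a variable frame rate) and a random start offset, then return ordered frame indices. A minimal sketch under assumed parameter names; the subnet's actual sampling logic lives in its cache/validator code and may differ.

```python
import random

def sample_frame_indices(total_frames: int, num_frames: int,
                         min_step: int = 1, max_step: int = 8) -> list[int]:
    """Pick `num_frames` chronologically ordered frame indices.

    A random step size emulates a variable frame rate; a random start
    offset varies which portion of the video is seen. Illustrative only.
    """
    assert 1 <= num_frames <= total_frames
    step = random.randint(min_step, max_step)
    span = (num_frames - 1) * step
    if span >= total_frames:  # fall back to the densest sampling that fits
        step = max(1, (total_frames - 1) // max(num_frames - 1, 1))
        span = (num_frames - 1) * step
    start = random.randint(0, total_frames - span - 1)
    return [start + i * step for i in range(num_frames)]
```

The returned indices would then be used to read frames from the mp4 before the PNG extraction and augmentation steps.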
This release corresponds to our initial version of video challenges. Upcoming releases will include:
Models
https://huggingface.co/genmo/mochi-1-preview
https://huggingface.co/THUDM/CogVideoX-5b
https://huggingface.co/ByteDance/AnimateDiff-Lightning
Datasets
An initial selection of sources for real videos. More to come; these are subject to change.
https://huggingface.co/datasets/nkp37/OpenVid-1M
https://huggingface.co/datasets/shangxd/imagenet-vidvrd
Replacement of open-image-v7 URL dataset with 256x256 JPEG subset (1.9M images).
https://huggingface.co/datasets/bitmind/open-image-v7-256
Miner Updates
TALLDetector + VideoDataset
Model Training
`base_miner/datasets/create_video_dataset.py` (example usage in `create_videos_dataset_example.sh`) transforms a directory of mp4 files into a train-ready video frames dataset. This involves extracting individual frames from the mp4s and creating a local Hugging Face dataset to reference the extracted frames during training.

Miner Deployment
`miner.env` now has separate env vars for configuring both an image detection model and a video detection model. Miners now respond to both `ImageSynapse` and `VideoSynapse`.

Validator Optimizations and Refactors
SyntheticDataGenerator
Generated data are written to `~/.cache/sn34/synthetic`. `start_validator.sh` now starts an additional pm2 process called `bitmind_data_generator`.

ImageAnnotationGenerator
ImageCache and VideoCache
`start_validator.sh` now starts an additional pm2 process called `bitmind_cache_updater` that manages a real video cache and a real image cache. Each has its own asynchronous tasks to download new zip/parquet files on a regular interval, and to extract random images and videos on a shorter interval.

Validator.forward
The `forward` function now samples random data from a local cache of real and synthetic images and videos. For videos, a random number of frames is sampled from a random mp4 file. This logic is handled by the `VideoCache` class.

Additional Changes
`bitmind/constants.py` had become quite a monolith, with the majority of its contents pertaining to validator operations. These variables have been moved to the more aptly named `bitmind/validator/config.py`, and model metadata dictionaries have been given a new structure that is more amenable to how the codebase interacts with them. Installation of `requirements.txt` has been combined into `setup_env.sh` to avoid out-of-sync dependency versions.
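As a purely hypothetical illustration of what such a restructured metadata dictionary can look like: keyed by Hugging Face repo id, with per-model pipeline and generation settings looked up uniformly. All field names and values below are assumptions, except the 84-frame Mochi setting noted in the commit history ("fixing mochi gen to 84 frames"); the real structure lives in `bitmind/validator/config.py` and may differ entirely.

```python
# Hypothetical sketch -- NOT the actual contents of bitmind/validator/config.py.
T2V_MODEL_CONFIG = {
    "genmo/mochi-1-preview": {
        "pipeline_cls": "MochiPipeline",         # assumed class name
        "generation_args": {"num_frames": 84},   # per the 84-frame commit
    },
    "THUDM/CogVideoX-5b": {
        "pipeline_cls": "CogVideoXPipeline",     # assumed class name
        "generation_args": {"num_frames": 49},   # assumed value
    },
}

def generation_args(model_id: str) -> dict:
    """Look up per-model generation kwargs with a safe empty default."""
    return T2V_MODEL_CONFIG.get(model_id, {}).get("generation_args", {})
```

The appeal of this shape is that callers never branch on model names; adding a model is a data change, not a code change.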