With the rapid development of foundation video generation technologies, long video generation models have shown promising research potential thanks to their expanded content creation space. Recent studies reveal that the goal of long video generation is not only to extend video duration but also to accurately express richer narrative content within longer videos. However, due to the lack of evaluation benchmarks specifically designed for long video generation models, current assessments primarily rely on benchmarks with simple narrative prompts (e.g., VBench). To the best of our knowledge, our proposed NarrLV is the first benchmark to comprehensively evaluate the Narrative expression capabilities of Long Video generation models. Inspired by film narrative theory, (i) we first introduce the Temporal Narrative Atom (TNA), the basic narrative unit that maintains a continuous visual presentation in a video, and use its count to quantitatively measure narrative richness. Guided by three key film narrative elements that influence TNA changes, we construct an automatic prompt generation pipeline capable of producing evaluation prompts with a flexibly expandable number of TNAs. (ii) Then, based on three progressive levels of narrative content expression, we design an effective evaluation metric using an MLLM-based question generation and answering framework. (iii) Finally, we conduct extensive evaluations of existing long video generation models and the foundation generation models they rely on. Experimental results demonstrate that our metric aligns closely with human judgments, and the evaluation outcomes reveal the detailed capability boundaries of current video generation models in narrative content expression.
Our evaluation covers existing long video generation models as well as the foundation generation models they typically rely on:
Here, the number of TNAs on the horizontal axis reflects the narrative richness of different evaluation prompts, while the vertical axis represents the three evaluation dimensions we propose, such as narrative element fidelity.
```bash
git clone https://github.com/AMAP-ML/NarrLV.git
cd NarrLV
conda create -n NarrLV python=3.10
conda activate NarrLV
pip install -r requirements.txt
```

We provide a curated suite of evaluation prompts in the ./resource/prompt_suite directory. The suite covers three TNA transformation factors (i.e., scene attribute changes, target attribute changes, and target action changes) and six ranges of TNA quantity, with 20 prompts under each setting. Based on this setup, the raw generation results of the 10 evaluated models can be found in .
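If you want to browse or subsample the suite programmatically, a minimal sketch such as the one below can enumerate it. Note that the file layout and naming inside ./resource/prompt_suite assumed here (one prompt file per setting directory) are illustrative, so adjust the paths to match the released files.

```python
# Minimal sketch for browsing the prompt suite.
# Assumption: prompts are stored as text files grouped by TNA factor and
# TNA-count setting, e.g. ./resource/prompt_suite/<factor>/<tna_range>/*.txt.
# The real layout in the repository may differ.
from pathlib import Path
from collections import defaultdict

suite_root = Path("./resource/prompt_suite")
prompts = defaultdict(list)

for path in sorted(suite_root.rglob("*")):
    if path.is_file():
        # Group prompts by their parent directories (factor / TNA setting).
        key = path.relative_to(suite_root).parent.as_posix()
        prompts[key].append(path.read_text(encoding="utf-8").strip())

for setting, items in prompts.items():
    print(f"{setting}: {len(items)} prompts")
```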
Additionally, you can use our prompt auto-generation pipeline to create evaluation prompts of interest. For instance, to generate a prompt containing 3 TNAs driven by scene attribute changes:
```bash
python prompt_gen_pipeline.py --tna_factor scene_attribute --tna_num 3
```

For the video generation models you want to evaluate, please wrap their feedforward inference process into a unified interface to facilitate standardized testing. We provide several examples in lib/video_generation_model.py, and a minimal sketch of such a wrapper is shown below.
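The sketch below only illustrates one possible shape for such a wrapper; the class and method names (`VideoGenModel`, `generate`) are hypothetical and are not taken from lib/video_generation_model.py, so follow the examples in that file for the actual interface.

```python
# Hypothetical wrapper interface; the actual examples live in
# lib/video_generation_model.py and may use different names and signatures.
from abc import ABC, abstractmethod
from typing import List

import numpy as np


class VideoGenModel(ABC):
    """Standardized entry point so the evaluation scripts can treat all models alike."""

    @abstractmethod
    def generate(self, prompt: str, num_frames: int, seed: int = 0) -> List[np.ndarray]:
        """Run the model's feedforward inference and return the frames of one video."""
        ...


class MyModelWrapper(VideoGenModel):
    def __init__(self, ckpt_path: str, device: str = "cuda"):
        # Load your model's weights / pipeline here.
        self.ckpt_path = ckpt_path
        self.device = device

    def generate(self, prompt: str, num_frames: int, seed: int = 0) -> List[np.ndarray]:
        # Call your model's own inference code and return a list of frames
        # (e.g., uint8 RGB arrays) so they can be saved in a uniform format.
        raise NotImplementedError("plug in your model's inference call here")
```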
Next, you can generate the videos for evaluation using the following command:
```bash
python video_gen.py
```

Based on the videos generated in the previous step and the evaluation prompts we provide (located in the ./resource/prompt_suite directory), we first obtain the MLLM's answers to the corresponding evaluation questions using the following script:
```bash
python answer_gen.py
```
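As a rough illustration of what this step does (not the actual implementation in answer_gen.py), the sketch below samples a few frames from a generated video and asks an MLLM served through an OpenAI-compatible endpoint to answer one evaluation question. The model name, endpoint configuration, example paths, and the question text are placeholders.

```python
# Illustrative only: answer_gen.py may use a different MLLM and interface.
import base64

import cv2  # pip install opencv-python
from openai import OpenAI  # any OpenAI-compatible, vision-capable endpoint


def sample_frames(video_path: str, num_frames: int = 8):
    """Uniformly sample frames and return them as base64-encoded JPEGs."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in range(0, max(total, 1), max(total // num_frames, 1)):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            break
        ok, buf = cv2.imencode(".jpg", frame)
        if ok:
            frames.append(base64.b64encode(buf.tobytes()).decode())
    cap.release()
    return frames


client = OpenAI()  # assumes an API key or a compatible local server is configured
question = "Does the scene change from a beach to a forest?"  # placeholder question

content = [{"type": "text", "text": question + " Answer yes or no."}]
for b64 in sample_frames("results/sample_video.mp4"):  # placeholder path
    content.append({"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}})

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any vision-capable MLLM works
    messages=[{"role": "user", "content": content}],
)
print(resp.choices[0].message.content)
```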
Additionally, we compute the aesthetic score of each video's initial frame using the Q-Align method. These scores are used as an offset in the metric calculation:

```bash
python answer_gen_aes.py
```

Finally, we calculate the final metric results using the script below:
```bash
python metric_cal.py
```
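To make this last step concrete, here is a purely illustrative sketch of how per-question MLLM answers and the aesthetic offset could be combined into a single score. The simple weighted addition and the parameter `alpha` are assumptions for illustration; the actual aggregation implemented in metric_cal.py, including how the offset enters, may differ.

```python
# Purely illustrative aggregation; metric_cal.py defines the real formula.
from statistics import mean


def narrative_score(answers: list[bool], aesthetic_score: float, alpha: float = 0.1) -> float:
    """Combine the fraction of correctly answered evaluation questions with an
    aesthetic offset (a simple weighted addition, which is an assumption here)."""
    qa_accuracy = mean(1.0 if a else 0.0 for a in answers) if answers else 0.0
    return qa_accuracy + alpha * aesthetic_score


# Example: 7 of 10 questions answered correctly, Q-Align aesthetic score of 3.5 on its own scale.
print(narrative_score([True] * 7 + [False] * 3, aesthetic_score=3.5))
```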
