ExplainableVQA/demo_maxvqa.py #9
Description

@cyy-1234

Hi, contributor,
I recently read the article Towards Explainable In-the-Wild Video Quality Assessment: A Database and a Language-Prompted Approach. Please let me know. I tried a video of my own, and the score felt like it was between 0-100. The values ​​in the paper then correspond to the paper's "Figure 4: Qualitative studies on different specific factors, with a good video (>0.6) and a bad video (<-0.6) in each dimension of Maxwell; [A-5] Trajectory, [ T-5]Flicker, and [T-8] Fluency are focusing on temporal variations and example videos for them are appended in supplementary package. Zoom in for details.", in my example what counts as good and what counts as bad Yes, looking forward to your reply

[screenshots of the demo output attached]
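In case it helps to state the question concretely: if demo_maxvqa.py simply rescales the model's raw [-1, 1] scores linearly onto [0, 100] (an assumption; I have not verified this in the code), the Figure 4 thresholds would map as in this sketch. The helper names rescale_to_percent and rescale_to_raw are hypothetical, not from the repo.

```python
# Minimal sketch, assuming a linear rescaling from [-1, 1] to [0, 100].
# These helpers are illustrative only and do not come from demo_maxvqa.py.

def rescale_to_percent(raw_score: float) -> float:
    """Map an assumed raw MaxVQA score in [-1, 1] to a [0, 100] scale."""
    return (raw_score + 1.0) * 50.0

def rescale_to_raw(percent_score: float) -> float:
    """Invert the assumed mapping: [0, 100] back to [-1, 1]."""
    return percent_score / 50.0 - 1.0

if __name__ == "__main__":
    # Under this assumption, the paper's "good" (>0.6) and "bad" (<-0.6)
    # thresholds would correspond to ~80 and ~20 on the demo's scale.
    print(rescale_to_percent(0.6))   # 80.0
    print(rescale_to_percent(-0.6))  # 20.0
    print(rescale_to_raw(75.0))      # 0.5 -> convert a demo output back
```

Is this the mapping the demo actually uses, or is the rescaling different per dimension?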
