
Conversation

@tmoreau89
Contributor

End-to-end benchmarking was added to the VM and graph executor so that the reported execution time of a graph module faithfully accounts for data transfer overheads. These overheads can be particularly significant on discrete GPUs, where PCI-E transfers matter in a typical model serving deployment.
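For illustration, a minimal sketch of how the graph executor's `benchmark` method can be driven with its `end_to_end` flag (the module path and the input name `data` are illustrative assumptions, not part of this PR):

```python
import numpy as np
import tvm
from tvm.contrib import graph_executor

# Load a previously exported module and instantiate the graph executor on a GPU.
dev = tvm.cuda(0)
lib = tvm.runtime.load_module("compiled_model.so")  # illustrative path
module = graph_executor.GraphModule(lib["default"](dev))

# Keep the input resident on the CPU so the host-to-device copy is part of
# what gets timed.
data = tvm.nd.array(np.random.uniform(size=(1, 3, 224, 224)).astype("float32"))

# end_to_end=True times the CPU->GPU input transfer, kernel execution, and
# GPU->CPU output transfer together, which reflects a serving deployment.
# end_to_end=False (the default) times only on-device execution.
result = module.benchmark(dev, data=data, end_to_end=True)
print(result)
```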

This PR proposes to modify the default measurement done by TVMC so that it always benchmarks end-to-end execution time from a CPU-local device context.
Another option would be to expose a TVMC flag that lets the user opt into end-to-end benchmarking. However, I recommend against benchmarking without data transfer overheads, as it presents an overly optimistic picture of TVM performance.

