
Conversation

@vivek-rane

I'm part of the TensorFlow optimization team at Intel, and we're working on improving the performance of YoloV2/darkflow on Xeon. As a first step, this PR adds the ability to measure and tune performance. We will also contribute changes to TensorFlow and MKL-DNN to speed up YoloV2/darkflow.

  • Added flags for better inference performance when running on TensorFlow built with MKL-DNN (see the sketch after this description)
  • Added a script that generates reasonable defaults for these flags based on the system being used
  • Added the ability to generate a timeline for finding performance bottlenecks

With the right settings, inference performance on a 28-core Xeon Scalable Processor improves by more than 60%.
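For anyone unfamiliar with these knobs, here is a minimal sketch (TF 1.x API) of the kind of threading settings and timeline tracing involved. The environment variables, thread counts, and the stand-in conv graph are illustrative assumptions for a single 28-core socket, not the exact flags or defaults this PR adds.

```python
import os

# OpenMP / MKL-DNN threading knobs must be set before TensorFlow is imported.
NUM_CORES = 28  # assumption: one 28-core Xeon socket; adjust for your system
os.environ["KMP_BLOCKTIME"] = "0"
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"
os.environ["OMP_NUM_THREADS"] = str(NUM_CORES)

import tensorflow as tf
from tensorflow.python.client import timeline

# TensorFlow-level parallelism settings (TF 1.x session config).
config = tf.ConfigProto(
    intra_op_parallelism_threads=NUM_CORES,
    inter_op_parallelism_threads=2,
)

# Stand-in graph; in darkflow this would be the loaded YoloV2 network.
x = tf.random_normal([1, 416, 416, 3])
w = tf.random_normal([3, 3, 3, 16])
y = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")

# Trace one step so per-op timings land in run_metadata.
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

with tf.Session(config=config) as sess:
    sess.run(y, options=run_options, run_metadata=run_metadata)

    # Dump a Chrome-trace timeline of the step.
    tl = timeline.Timeline(run_metadata.step_stats)
    with open("timeline.json", "w") as f:
        f.write(tl.generate_chrome_trace_format())
```

The resulting timeline.json can be opened in chrome://tracing to see which ops dominate inference time.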

@vivek-rane (Author) commented May 15, 2018

Hang on - I tested this with Python 2.7. Let me update the code for 3.5.
Edit: done

@vivek-rane (Author)

Could someone help me with the CI failure? If I change the offset in darkflow/utils/loader.py from 16 to 20, it works, but I'm not sure why.
