move flatten_dense_tensors and unflatten_dense_tensors to Native #58006

Closed

eqy wants to merge 5 commits into pytorch:master from eqy:flatten

Conversation

@eqy (Collaborator) commented May 11, 2021

@facebook-github-bot (Contributor) commented May 11, 2021

💊 CI failures summary and remediations

As of commit 976415b (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

@ezyang ezyang requested review from ngimel and removed request for ezyang May 11, 2021 19:45
@codecov codecov bot commented May 11, 2021

Codecov Report

Merging #58006 (976415b) into master (a90c229) will not change coverage (+0.00%).
The diff coverage is 56.00%.

@@           Coverage Diff           @@
##           master   #58006   +/-   ##
=======================================
  Coverage   76.83%   76.83%           
=======================================
  Files        1986     1986           
  Lines      197712   197702   -10     
=======================================
- Hits       151911   151909    -2     
+ Misses      45801    45793    -8     

@ngimel (Collaborator) commented May 11, 2021

Please resolve conflicts, otherwise looks good. Can you also please run benchmarks similar to original issue and post results here?

@mruberry mruberry added the triaged label (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module) May 12, 2021
@eqy (Collaborator, Author) commented May 12, 2021

Please resolve conflicts, otherwise looks good. Can you also please run benchmarks similar to original issue and post results here?

I tried crafting my own microbenchmarks, but while there was a speedup for unflatten, I didn't see much of an improvement for flatten (perhaps because I used at most 64 fairly large tensors, while the original issue involved roughly 90 small tensors). I was able to measure a speedup using the original benchmark scripts from @stas00.
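(For illustration, a timing harness in the spirit of the timeit numbers below might look like the following sketch. The workload here is a hypothetical pure-Python stand-in, not the actual script from the issue, which times torch._utils._flatten_dense_tensors on real tensors.)

```python
import timeit

# Hypothetical stand-in workload: flatten ~90 small "tensors".
# Plain lists are used here so the sketch runs without torch; the
# real benchmark calls torch._utils._flatten_dense_tensors instead.
def workload():
    tensors = [list(range(64)) for _ in range(90)]
    flat = []
    for t in tensors:
        flat.extend(t)
    return flat

if __name__ == "__main__":
    # Report the best of 5 repeats of 100 calls each, as the
    # timeit documentation recommends for stable numbers.
    best = min(timeit.repeat(workload, number=100, repeat=5))
    print(f"best of 5 runs (100 calls each): {best:.6f} s")
```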

Flatten (old):

Time to load utils op: 0.10063838958740234 seconds
--------------- timeit -----------------
py  =0.13520007906481624
cpp =0.10440316097810864
apex=0.06790819601155818

Flatten (new):

Time to load utils op: 8.74044132232666 seconds
--------------- timeit -----------------
py  =0.06021712603978813
cpp =0.08567818719893694
apex=0.06282702111639082

(I assume the op load time depends on something being cached, since the long load time appeared right after swapping out the installed version of PyTorch.)
Unflatten (old):

Time to load utils op: 0.097869873046875 seconds
--------------- timeit -----------------
py  =0.27503183693625033
cpp =0.13111994415521622
apex=0.0947426010388881

Unflatten (new):

Time to load utils op: 0.10356521606445312 seconds
--------------- timeit -----------------
py  =0.09444436593912542
cpp =0.12250399100594223
apex=0.0949542720336467
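For context on what is being moved: _flatten_dense_tensors concatenates a list of same-dtype dense tensors into one contiguous 1-D buffer, and _unflatten_dense_tensors splits such a buffer back into chunks with the original sizes. A minimal pure-Python sketch of that contract (plain lists standing in for tensors; this is not the actual torch._utils implementation, which returns contiguous tensor views):

```python
# Conceptual sketch of the flatten/unflatten round-trip contract.
# The real functions live in torch._utils and operate on torch.Tensor.

def flatten_dense(tensors):
    """Concatenate all elements into one flat buffer."""
    flat = []
    for t in tensors:
        flat.extend(t)
    return flat

def unflatten_dense(flat, like):
    """Split `flat` back into chunks sized like the originals."""
    out, offset = [], 0
    for t in like:
        n = len(t)
        out.append(flat[offset:offset + n])
        offset += n
    return out

tensors = [[1.0, 2.0], [3.0], [4.0, 5.0, 6.0]]
flat = flatten_dense(tensors)          # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
assert unflatten_dense(flat, tensors) == tensors  # round-trips exactly
```

The invariant being benchmarked above is exactly this round-trip: flatten into one buffer (so a single kernel or copy can touch all parameters), then unflatten to recover per-tensor views.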

@facebook-github-bot (Contributor)

@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor)

@ngimel merged this pull request in 645a5f7.

krshrimali pushed a commit to krshrimali/pytorch that referenced this pull request May 19, 2021
…ytorch#58006)

Summary:
pytorch#55240

CC ngimel

Pull Request resolved: pytorch#58006

Reviewed By: agolynski

Differential Revision: D28386749

Pulled By: ngimel

fbshipit-source-id: 4860c35d5ff95bcc38a243d7001180e7bd536314

Labels

cla signed · Merged · open source · triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

5 participants