
Hip refactor for attention, batch, combine, cast, conv#1402

Merged
reyna-abhyankar merged 7 commits into flexflow:repo-refactor from Bob-Chen222:bob-hip-refactor-abc
Jun 5, 2024

Conversation

Contributor

@Bob-Chen222 Bob-Chen222 commented Jun 1, 2024

Description of changes:
Hip refactor for attention, batch, combine, cast, conv

Related Issues:

Linked Issues:

Issues closed by this PR:

  • Closes #


@Bob-Chen222 Bob-Chen222 changed the title from "update hip" to "Hip refactor for attention, batch, combine, cast, conv" on Jun 1, 2024
@codecov

codecov bot commented Jun 1, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 38.10%. Comparing base (be2aad1) to head (a81d707).

Additional details and impacted files
@@              Coverage Diff               @@
##           repo-refactor    #1402   +/-   ##
==============================================
  Coverage          38.10%   38.10%           
==============================================
  Files                167      167           
  Lines               5026     5026           
  Branches             246      246           
==============================================
  Hits                1915     1915           
  Misses              3111     3111           
Flag        Coverage Δ
unittests   38.10% <ø> (ø)

Flags with carried forward coverage won't be shown.

Collaborator

@reyna-abhyankar reyna-abhyankar left a comment


Reviewable status: 0 of 7 files reviewed, 3 unresolved discussions (waiting on @Bob-Chen222)


lib/kernels/src/hip/attention_kernels.cpp line 242 at r1 (raw file):

                                        device_state.reserveSpaceSize,
                                        device_state.reserveSpace));
#endif

Delete


lib/kernels/src/hip/batch_norm_kernels.cpp line 119 at r1 (raw file):

  checkCUDNN(miopenCreateTensorDescriptor(&outputTensor));
  mode = miopenBNSpatial;
#if HIPDNN_VERSION >= 7000

Is this still true for HIP?
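For context on the question: the `#if HIPDNN_VERSION >= 7000` guard appears to be a leftover from the cuDNN code path, where a `CUDNN_VERSION >= 7000` check gated the persistent spatial batch-norm mode introduced in cuDNN 7. MIOpen does not define `HIPDNN_VERSION` and has no persistent variant of `miopenBNSpatial`, so the mode can likely be set unconditionally. A minimal self-contained sketch of that idea (the enum and state struct below are stub stand-ins for the real MIOpen types, not the project's actual definitions):

```cpp
#include <cassert>

// Stub stand-ins for the MIOpen API so the sketch compiles on its own; the
// real code uses miopenBatchNormMode_t from <miopen/miopen.h>.
enum miopenBatchNormMode_t { miopenBNPerActivation = 0, miopenBNSpatial = 1 };

// Hypothetical per-device state holding only the field relevant here.
struct BatchNormPerDeviceState {
  miopenBatchNormMode_t mode;
};

// On HIP there is no persistent spatial mode to opt into, so no version
// guard is needed: set the spatial mode unconditionally.
void init_batch_norm_mode(BatchNormPerDeviceState &state) {
  state.mode = miopenBNSpatial;
}
```

The takeaway is that the guard guards nothing on the HIP path and can be dropped along with the dead branch it encloses.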


lib/kernels/src/hip/cast_kernels.cpp line 75 at r1 (raw file):

};

void forward_kernel(PerDeviceFFHandle handle,

Actually just keep stream as the first parameter for both functions. I'll change this in the cuda kernel as well.
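The suggested convention can be sketched as follows. `ffStream_t` is a stand-in for the real `hipStream_t`, and the element-wise cast is written as a plain host loop purely for illustration; the actual kernels launch device code on the given stream:

```cpp
#include <cstddef>
#include <cassert>

// Stand-in for hipStream_t so this sketch is self-contained.
using ffStream_t = void *;

// Stream comes first in both functions, per the review suggestion, so the
// forward and backward signatures stay symmetric across the CUDA and HIP
// backends.
template <typename IDT, typename ODT>
void forward_kernel(ffStream_t stream,
                    IDT const *input,
                    ODT *output,
                    size_t num_elements) {
  (void)stream; // the real kernel would launch on this stream
  for (size_t i = 0; i < num_elements; i++) {
    output[i] = static_cast<ODT>(input[i]);
  }
}

template <typename IDT, typename ODT>
void backward_kernel(ffStream_t stream,
                     IDT const *output_grad,
                     ODT *input_grad,
                     size_t num_elements) {
  (void)stream;
  for (size_t i = 0; i < num_elements; i++) {
    input_grad[i] = static_cast<ODT>(output_grad[i]);
  }
}
```

Keeping the stream first in every kernel entry point makes call sites uniform and avoids threading a full `PerDeviceFFHandle` through functions that only need the stream.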

Contributor Author

@Bob-Chen222 Bob-Chen222 left a comment


Reviewable status: 0 of 7 files reviewed, 3 unresolved discussions (waiting on @reyna-abhyankar)


lib/kernels/src/hip/attention_kernels.cpp line 242 at r1 (raw file):

Previously, reyna-abhyankar (Reyna Abhyankar) wrote…

Delete

Done.


lib/kernels/src/hip/batch_norm_kernels.cpp line 119 at r1 (raw file):

Previously, reyna-abhyankar (Reyna Abhyankar) wrote…

Is this still true for HIP?

Done.


lib/kernels/src/hip/cast_kernels.cpp line 75 at r1 (raw file):

Previously, reyna-abhyankar (Reyna Abhyankar) wrote…

Actually just keep stream as the first parameter for both functions. I'll change this in the cuda kernel as well.

Done.

reyna-abhyankar previously approved these changes Jun 5, 2024

3 participants