
linear operator #1180

Merged
reyna-abhyankar merged 12 commits into flexflow:repo-refactor from lambda7xx:repo-refactor-lambda-linear
Feb 7, 2024
Conversation

@lambda7xx
Contributor

@lambda7xx lambda7xx commented Oct 7, 2023

Description of changes:

  • update the linear operator
  • init_kernel still has a problem that needs to be fixed.

Related Issues:

Linked Issues:

Issues closed by this PR:

  • Closes #


@lambda7xx lambda7xx self-assigned this Oct 8, 2023
@lambda7xx lambda7xx added the repo-refactor and out of date? (Reports that need to be confirmed to still exist) labels, and removed the out of date? label Oct 8, 2023
@lockshaw lockshaw removed their request for review January 19, 2024 09:17
@lambda7xx
Contributor Author

lib/runtime/src/ops/linear.cc line 86 at r1 (raw file):

Previously, reyna-abhyankar (Reyna Abhyankar) wrote…

We had agreed previously that ff_dim_t{1} is batch size. Is this equivalent?

OK, makes sense.

@lambda7xx
Contributor Author

lib/runtime/src/ops/linear.cc line 92 at r1 (raw file):

Previously, reyna-abhyankar (Reyna Abhyankar) wrote…

I don't think this function call matches what you have in the kernel.

Where? Maybe it should be this:

Code snippet:

  DeviceSpecific<LinearPerDeviceState> state =
      acc.create_device_specific<LinearPerDeviceState>(
          init_kernel(handle,
                      allocator,
                      one_ptr,
                      attrs.regularizer,
                      attrs.use_bias,
                      input.data_type,
                      weight.data_type,
                      output.data_type,
                      batch_size,
                      attrs.out_channels));
  return state;
}

Contributor Author

@lambda7xx lambda7xx left a comment


Reviewable status: 0 of 6 files reviewed, 12 unresolved discussions (waiting on @reyna-abhyankar)


lib/kernels/src/cuda/linear_kernels.cu line 28 at r1 (raw file):

Previously, reyna-abhyankar (Reyna Abhyankar) wrote…

What is Activation? I think this should be ActiMode.

Done.


lib/kernels/src/cuda/linear_kernels.cu line 71 at r1 (raw file):

Previously, reyna-abhyankar (Reyna Abhyankar) wrote…

Use the same allocation method that is in the original constructor for LinearPerDeviceState

Done.


lib/runtime/src/ops/element_binary.cc line 216 at r1 (raw file):

Previously, reyna-abhyankar (Reyna Abhyankar) wrote…

Don't worry about it in this PR. If it's a problem, you can open an issue

Done.


lib/runtime/src/ops/linear.cc line 37 at r1 (raw file):

Previously, reyna-abhyankar (Reyna Abhyankar) wrote…
  ATTRS,

Done.


lib/runtime/src/ops/linear.cc line 40 at r1 (raw file):

Previously, reyna-abhyankar (Reyna Abhyankar) wrote…

Why?

Done.


lib/runtime/src/ops/linear.cc line 46 at r1 (raw file):

Previously, reyna-abhyankar (Reyna Abhyankar) wrote…
  bind.bind_arg(ATTRS, attrs);

Done.


lib/runtime/src/ops/linear.cc line 85 at r1 (raw file):

Previously, reyna-abhyankar (Reyna Abhyankar) wrote…

Why the + 1?

Done.


lib/runtime/src/ops/linear.cc line 202 at r1 (raw file):

Previously, reyna-abhyankar (Reyna Abhyankar) wrote…

I think these should always be InputParallelTensorDesc instead of ParallelTensorShape

Done.


deps/fmt line 0 at r1 (raw file):

Previously, reyna-abhyankar (Reyna Abhyankar) wrote…

Use most updated submodules

Done.

@reyna-abhyankar reyna-abhyankar self-requested a review February 7, 2024 23:57
@reyna-abhyankar reyna-abhyankar merged commit bf41a4b into flexflow:repo-refactor Feb 7, 2024