Conversation

sbak5 (Contributor) commented Jul 19, 2022

Implementation of the following routines:

  • broadcast_shapes
  • broadcast_to
  • broadcast_arrays
  • broadcast

These routines follow the behavior of the corresponding NumPy routines.
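For reference, the NumPy behavior these routines are meant to mirror can be checked directly. The snippet below is plain NumPy, not the cuNumeric implementation, and just illustrates the expected semantics:

```python
import numpy as np

# Shapes are broadcast pairwise from the trailing dimensions.
assert np.broadcast_shapes((8, 1, 6, 1), (7, 1, 5)) == (8, 7, 6, 5)

# broadcast_to produces a view with the requested shape.
x = np.arange(3)
y = np.broadcast_to(x, (2, 3))
assert y.shape == (2, 3)

# broadcast_arrays broadcasts several inputs against each other.
a, b = np.broadcast_arrays(np.arange(3), np.arange(2)[:, None])
assert a.shape == b.shape == (2, 3)

# broadcast is the object-style interface over the same rules.
assert np.broadcast(np.arange(3), np.arange(2)[:, None]).shape == (2, 3)
```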

@sbak5 sbak5 requested review from ipdemes and magnatelee July 19, 2022 00:02
ipdemes (Contributor) commented Jul 19, 2022

@sbak5: it seems like all the tests are failing. Could you please have a look?

sbak5 (Contributor, Author) commented Jul 19, 2022

> @sbak5: it seems like all the tests are failing. Could you please have a look?

Yes. I've already prepared a fix and am testing it on a local machine before pushing.

ipdemes (Contributor) commented Jul 20, 2022

@sbak5 Please add the new API to the documentation here and try to build the documentation (cd docs/cunumeric; make clean html)

ipdemes (Contributor) commented Jul 20, 2022

I think tests for broadcast_to, broadcast_shapes, and broadcast_arrays are missing. Could you please add them?

sbak5 (Contributor, Author) commented Jul 20, 2022

> I think tests for broadcast_to, broadcast_shapes, and broadcast_arrays are missing. Could you please add them?

broadcast simply calls broadcast_arrays, which in turn uses broadcast_to.
But I can add separate tests for these to make sure each routine works on its own.
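The layering described here can be sketched in plain NumPy terms: the common shape comes from broadcast_shapes, and each input is then broadcast_to that shape. The helper below is a hypothetical illustration of that layering, not cuNumeric's actual code:

```python
import numpy as np

def broadcast_arrays_sketch(*arrs):
    # Compute the common shape, then broadcast each input to it.
    # Hypothetical helper mirroring the layering described above.
    shape = np.broadcast_shapes(*(a.shape for a in arrs))
    return [np.broadcast_to(a, shape) for a in arrs]

a = np.arange(3)            # shape (3,)
b = np.arange(2)[:, None]   # shape (2, 1)
out = broadcast_arrays_sketch(a, b)
ref = np.broadcast_arrays(a, b)
assert all(np.array_equal(o, r) for o, r in zip(out, ref))
```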

ipdemes (Contributor) commented Jul 20, 2022

> But I can add tests for these to make sure the routines work well in separate tests.

Yes, I think it would be nice to have separate tests.

ipdemes (Contributor) commented Jul 21, 2022

Can you replace all calls to np.broadcast_shapes in the existing cuNumeric code to use your new broadcast_shapes method?
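For the swap to be safe, the new routine has to agree with np.broadcast_shapes at every internal call site, including raising on incompatible shapes. The check below is plain NumPy, shown only to pin down the contract the replacement must honor:

```python
import numpy as np

# The semantics the in-module replacement must reproduce:
assert np.broadcast_shapes((5, 4), (1,)) == (5, 4)
assert np.broadcast_shapes((256, 256, 3), (3,)) == (256, 256, 3)

# Incompatible shapes must raise ValueError, just like NumPy.
try:
    np.broadcast_shapes((3,), (4,))
except ValueError:
    pass
else:
    raise AssertionError("incompatible shapes must raise")
```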

behave correctly for an array manipulation routine
sbak5 (Contributor, Author) commented Sep 9, 2022

@bryevdv Can you review the changes again? If it looks fine, I'll merge it. @ipdemes has already reviewed the changes for both your and her comments.

        else:
            self.array = np.flip(rhs.array, axes)

    def broadcast_to(self, shape: NdShape) -> NumPyThunk:
A reviewer (Contributor) commented:

this code doesn't make sense at all. why is an eager array always converted to a deferred array here? can't this be implemented using numpy.broadcast_to?

sbak5 (Author) replied:

As far as I remember, this was intended to handle a case where eager and deferred arrays are used in the same operation. In this case, a broadcasted eager array should be changed to a corresponding deferred array, which doesn't work w/ deferred.broadcast_to. So, I changed this code to convert the eager array to deferred when broadcast_to is called. I couldn't find a way to handle this case w/o this change.

    broadcasted: bool = False,
) -> ndarray:
    # create an array object w/ options passed from 'broadcast' routines
    arr = array(arr, copy=False, subok=subok)
A reviewer (Contributor) commented:

what's the point of this call? note that subok isn't handled in array anyway, so this is a no-op.

sbak5 (Author) replied:

broadcast has the subok argument, and numpy.array in cuNumeric still accepts it even though it's not implemented for subclasses. This call is just aligned with the corresponding routines so that it won't need to change once subclasses are implemented.
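What subok controls in stock NumPy can be seen with any ndarray subclass; np.matrix is used below purely as a convenient example of one. This is plain NumPy behavior, not the cuNumeric code under review:

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])  # np.matrix is an ndarray subclass

# subok=False (the default) drops the subclass and returns a base ndarray...
assert type(np.array(m, subok=False)) is np.ndarray

# ...while subok=True preserves the subclass, which is the behavior the
# broadcast routines would pick up once subclass support lands.
assert type(np.array(m, subok=True)) is np.matrix
```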

A reviewer (Contributor) commented:

@magnatelee do you have a follow-up to the previous reply? I don't have enough context to know what is appropriate here (and below)

    subok: bool = False,
) -> list[ndarray]:
    # create an array object w/ options passed from 'broadcast' routines
    arrays = [array(arr, copy=False, subok=subok) for arr in arrs]
A reviewer (Contributor) commented:

same comment here. why is this necessary?

magnatelee (Contributor) commented Oct 5, 2022

@sbak5 it looks like NumPy's writeable flag is more liberal than that in this PR. For example, you can make a view from a np.broadcast_to call writeable by calling setflags(write=True) on it. And there shouldn't be any problem for cuNumeric to allow it. But this PR requires that owndata be True to be able to toggle the writeable flag, whereas NumPy doesn't need that. So I think we should revisit whether that's really necessary and also think about whether there are any true read-only views that cannot be turned into writeable views in any case.

sbak5 (Contributor, Author) commented Nov 1, 2022

> @sbak5 it looks like NumPy's writeable flag is more liberal than that in this PR. For example, you can make a view from a np.broadcast_to call writeable by calling setflags(write=True) on it. And there shouldn't be any problem for cuNumeric to allow it. But this PR requires that owndata be True to be able to toggle the writeable flag, whereas NumPy doesn't need that. So I think we should revisit whether that's really necessary and also think about whether there are any true read-only views that cannot be turned into writeable views in any case.

This is because of a runtime issue in Legion, as far as I remember.
When you create a view from broadcast_to and set write=True, there was an issue in the Legion runtime regarding the ordering of tasks for write requests. @ipdemes found the issue and suggested this change to me, as far as I recall.
@ipdemes, can you elaborate on the issue here?
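The flags under discussion are easy to inspect in stock NumPy: a broadcast_to result is a read-only view that does not own its data. Whether the writeable flag can then be flipped back on depends on the NumPy version (it was deprecated for broadcast views around NumPy 1.17), so only the stable part is asserted below:

```python
import numpy as np

x = np.arange(3)
view = np.broadcast_to(x, (4, 3))

# NumPy hands back a read-only view onto x's data.
assert view.shape == (4, 3)
assert view.flags.writeable is False
assert view.flags.owndata is False
```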

@bryevdv bryevdv changed the base branch from branch-22.10 to branch-22.12 January 19, 2023 20:21
bryevdv (Contributor) commented Jan 19, 2023

As a start, brought up to date with current branch-22.12.

bryevdv (Contributor) commented Jan 19, 2023

@ipdemes @magnatelee I have made a quick stab at bringing this PR up to date with the latest branch, and also addressing some of the low-hanging suggestions. There's a lot of comments and history here. For my own sake, would you mind if I copied all of this over to a new PR branch and cleaned up the commit history with an interactive rebase, then started on a new review there?

magnatelee (Contributor) commented:

> For my own sake, would you mind if I copied all of this over to a new PR branch and cleaned up the commit history with an interactive rebase, then started on a new review there?

Not at all. I'd be happy to do a fresh review. Please go open a new PR.

bryevdv (Contributor) commented Jan 20, 2023

closing, superseded by #759

@bryevdv bryevdv closed this Jan 20, 2023
XiaLuNV added a commit to XiaLuNV/cunumeric that referenced this pull request Dec 13, 2024
* enhance for random
---------

Co-authored-by: Jacob Faibussowitsch <jacob.fai@gmail.com>

Labels

category:new-feature PR introduces a new feature and will be classified as such in release notes
