
Conversation

@Advaitgaur004
Contributor

This PR adds support for automatic differentiation of the min() and max() tensor operations. When the extremum occurs more than once, the upstream gradient is distributed equally among all occurrences of the min/max value, so backpropagation stays well-defined even with ties.
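
A minimal sketch of the equal-distribution rule for the backward pass of a full max() reduction. The `Tensor` struct and the names `max_backward`, `data`, `grad`, and `numel` are illustrative assumptions, not the PR's actual API:

```c
#include <stddef.h>

typedef struct {
    float* data;   /* forward values */
    float* grad;   /* accumulated gradients */
    size_t numel;  /* number of elements */
} Tensor;

/* Distribute the upstream gradient equally among every element
 * that attains the maximum value (the tie-handling rule this PR uses). */
static void max_backward(Tensor* input, float max_value, float upstream_grad) {
    /* First pass: count how many elements equal the maximum. */
    size_t count = 0;
    for (size_t i = 0; i < input->numel; i++) {
        if (input->data[i] == max_value) count++;
    }
    if (count == 0) return;  /* max_value not present; nothing to do */

    /* Second pass: each occurrence receives an equal share. */
    float share = upstream_grad / (float)count;
    for (size_t i = 0; i < input->numel; i++) {
        if (input->data[i] == max_value) {
            input->grad[i] += share;
        }
    }
}
```

The min() backward is symmetric: count the occurrences of the minimum and split the gradient the same way.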

@Advaitgaur004 Advaitgaur004 changed the title Add autograd support for min and max tensor operations Add: autograd support for min and max tensor operations Jul 4, 2025
@Advaitgaur004 Advaitgaur004 deleted the reduction-operator branch July 6, 2025 17:31
@Advaitgaur004 Advaitgaur004 restored the reduction-operator branch July 6, 2025 17:32
@Advaitgaur004 Advaitgaur004 reopened this Jul 6, 2025
@Advaitgaur004 Advaitgaur004 marked this pull request as draft July 6, 2025 17:48
@Advaitgaur004 Advaitgaur004 marked this pull request as ready for review July 7, 2025 16:55
@Advaitgaur004
Contributor Author

@PrimedErwin Tensor_max and Tensor_min now have dim-based variants, implemented via a shared macro (see the sketch below).

- Tensor_reduce_with_indices was added early in development, but it later proved redundant.
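
A sketch of the macro pattern that could generate both dim-based reductions from one definition. The macro name, the flattened (outer, axis, inner) layout, and the function signatures are assumptions for illustration; the PR's actual macro may differ:

```c
#include <stddef.h>

/* Flatten the tensor around the reduced dimension as (outer, axis, inner);
 * one macro body then serves both max and min by swapping the comparison. */
#define DEFINE_REDUCE_DIM(NAME, OP)                                        \
    void Tensor_##NAME##_dim(const float* in, float* out,                  \
                             size_t outer, size_t axis, size_t inner) {    \
        for (size_t o = 0; o < outer; o++) {                               \
            for (size_t i = 0; i < inner; i++) {                           \
                float acc = in[o * axis * inner + i];                      \
                for (size_t a = 1; a < axis; a++) {                        \
                    float v = in[(o * axis + a) * inner + i];              \
                    if (v OP acc) acc = v;                                 \
                }                                                          \
                out[o * inner + i] = acc;                                  \
            }                                                              \
        }                                                                  \
    }

/* One invocation per reduction keeps max and min in sync. */
DEFINE_REDUCE_DIM(max, >)   /* defines Tensor_max_dim */
DEFINE_REDUCE_DIM(min, <)   /* defines Tensor_min_dim */
```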
@PrimedErwin PrimedErwin merged commit 5e07b21 into pocketpy:test Jul 8, 2025
5 checks passed
@Advaitgaur004 Advaitgaur004 deleted the reduction-operator branch July 8, 2025 06:49
@Advaitgaur004 Advaitgaur004 changed the title Add: autograd support for min and max tensor operations [Feat] : autograd support for min and max tensor operations Aug 20, 2025
