Feat: computational graph #3
Conversation
- The graph is serial; each node is in a row
- …ate multiple edges
- Adopted generation concept in Tensor & Function (see the sketch below)
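The generation bookkeeping referenced above is sketched below. This is a minimal illustration of the common define-by-run pattern, not the PR's actual code; `set_creator` and the attribute names are assumptions.

```python
import numpy as np

class Tensor:
    def __init__(self, data):
        self.data = np.asarray(data)
        self.grad = None
        self.creator = None
        self.generation = 0  # depth of this tensor in the graph

    def set_creator(self, func):
        # A tensor produced by a function sits one generation deeper
        # than the function that created it.
        self.creator = func
        self.generation = func.generation + 1

class Function:
    def __call__(self, *inputs):
        # A function's generation is the deepest of its inputs, so the
        # backward pass can process functions in decreasing-generation
        # order: each gradient is complete before it propagates further.
        self.generation = max(t.generation for t in inputs)
        ...
```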
Pull Request Overview
This PR introduces a draft implementation of a computational graph to support automatic differentiation using a new PriorityQueue for managing the backward propagation order. Key changes include:
- The addition of a generic PriorityQueue in torch/priority_queue.py for handling function ordering during backpropagation (a sketch follows this list).
- Updates in torch/core.py to use this PriorityQueue and modifications to the backward propagation logic, including support for multi-branch operations.
- Enhancements in tests, documentation, and CI configuration to support and validate the new design.
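The queue itself is not reproduced in this review; as a rough sketch, a generic max-first queue built on `heapq` might look like the following. The `push`/`pop` names are assumptions.

```python
import heapq
import itertools
from typing import Generic, TypeVar

T = TypeVar("T")

class PriorityQueue(Generic[T]):
    """Max-first queue: items with the highest priority pop first."""

    def __init__(self):
        self._heap = []
        self._count = itertools.count()  # tie-breaker for equal priorities

    def push(self, item: T, priority: int) -> None:
        # heapq is a min-heap, so negate the priority for max-first order.
        heapq.heappush(self._heap, (-priority, next(self._count), item))

    def pop(self) -> T:
        return heapq.heappop(self._heap)[-1]

    def __len__(self) -> int:
        # Also makes bool(queue) work, so `while funcs:` terminates.
        return len(self._heap)
```

Ordering by generation, highest first, ensures a function's gradient is only propagated once all of its output gradients have been accumulated.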
Reviewed Changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| torch/priority_queue.py | Adds a generic PriorityQueue implementation for function ordering. |
| torch/core.py | Updates the backward propagation logic using PriorityQueue; refactors Function implementations. |
| tests/torch/test_core.py | Adds a multi-branch backward test to ensure correct gradient propagation (sketched after this table). |
| README.md | Expands documentation on core concepts and chain rule implementation. |
| .pre-commit-config.yaml | Updates configuration for pre-commit hooks. |
| .github/workflows/main.yml | Adjusts CI workflow to use pre-commit actions and install pytest. |
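The added test is not shown in the summary; a multi-branch backward check typically looks like the sketch below, where `tensor`, `square`, and `exp` stand in for this project's API (assumed names, not confirmed by the diff).

```python
import numpy as np
from torch import exp, square, tensor  # this project's torch, not PyTorch

def test_multi_branch_backward():
    # x feeds two branches that are summed back together, so backward
    # must accumulate both contributions into x.grad.
    x = tensor(2.0)
    y = square(x) + exp(x)
    y.backward()
    # d/dx (x^2 + e^x) = 2x + e^x
    assert np.isclose(x.grad, 2 * 2.0 + np.exp(2.0))
```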
Comments suppressed due to low confidence (2)
torch/core.py:256
- Duplicate `backward` method in the `Exp` class: the first `backward` definition, which simply returns `y`, should be removed to avoid confusion and unintended behavior, since the second definition properly handles input validation and gradient computation.

```python
return y
```
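A sketch of the corrected class with a single `backward`, assuming the `Function` base class from torch/core.py and an `inputs` attribute (both assumptions):

```python
import numpy as np

class Exp(Function):
    def forward(self, x):
        return np.exp(x)

    # Python keeps only the last definition when a method is defined
    # twice, so the earlier stray `backward` that returned y was dead
    # code; a single definition removes the ambiguity.
    def backward(self, gy):
        x = self.inputs[0].data
        return gy * np.exp(x)
```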
torch/core.py:130
- The PriorityQueue class does not implement `__bool__`, so `while funcs:` will always evaluate to True. Either implement a `__bool__` (or `__len__`) method for PriorityQueue or change the loop to `while len(funcs):` so it terminates correctly when the queue is empty.

```python
while funcs:
```
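With `__len__` (or an explicit `__bool__`) defined on the queue, the truthiness check terminates as intended; the traversal might then read as follows (a sketch with assumed names):

```python
def backward(self):
    funcs = PriorityQueue()
    seen = set()

    def add_func(f):
        # Push each function once, prioritized by its generation.
        if f is not None and f not in seen:
            seen.add(f)
            funcs.push(f, f.generation)

    add_func(self.creator)
    while funcs:  # bool() falls back to __len__, so this terminates
        f = funcs.pop()
        gxs = f.backward(*[out.grad for out in f.outputs])
        ...  # accumulate gxs into f's inputs and push their creators
```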
- mitigate circular references
- remove gradients that are no longer needed during backpropagation
- add torch-like types
- add `__len__`, `__repr__`, `ndim`, `shape`, `dtype`
- add doc-strings to test cases
- add `requires_grad` and `grad_fn` (see the sketch below)
- method `data`
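The commit message lists these names but not their implementation; a sketch of what that torch-like surface might look like on the Tensor class (property choices are assumptions):

```python
import numpy as np

class Tensor:
    def __init__(self, data, requires_grad=True):
        self.data = np.asarray(data)
        self.requires_grad = requires_grad
        self.grad_fn = None  # the Function that produced this tensor

    def __len__(self):
        return len(self.data)

    def __repr__(self):
        return f"tensor({self.data})"

    @property
    def ndim(self):
        return self.data.ndim

    @property
    def shape(self):
        return self.data.shape

    @property
    def dtype(self):
        return self.data.dtype
```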
Copilot reviewed 13 out of 13 changed files in this pull request and generated 2 comments.
Implement basic tensor operations with autograd support
Summary
This PR implements a subset of PyTorch's tensor operations from scratch using NumPy as the computational backend. All operations support automatic differentiation through a define-by-run approach.
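For illustration, the define-by-run flow looks like this from the user's side (a sketch using the API named in the features list below; exact behavior may differ):

```python
import torch  # this project's package, not PyTorch

x = torch.tensor(3.0)
y = x * x + x    # the graph is recorded while the ops execute
y.backward()
print(x.grad)    # d/dx (x^2 + x) = 2x + 1 -> 7.0 at x = 3
```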
Features
- Tensor creation (`torch.tensor()`)
- Basic arithmetic (`+`, `-`, `*`, `/`, negation)
- Math functions (`square()`, `exp()`, `pow()`)
- Context manager (`no_grad()`) to disable gradient tracking for inference

Testing
Implementation Notes
- All operations are implemented as `Function` objects with `forward` and `backward` methods
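A minimal sketch of that pattern (names follow the define-by-run style; the PR's actual signatures may differ):

```python
import numpy as np

class Tensor:
    def __init__(self, data):
        self.data = np.asarray(data)
        self.grad = None
        self.grad_fn = None

class Function:
    def __call__(self, *inputs):
        xs = [t.data for t in inputs]
        y = self.forward(*xs)   # NumPy-level computation
        out = Tensor(y)
        out.grad_fn = self      # record the edge for the backward pass
        self.inputs = inputs
        return out

    def forward(self, *xs):
        raise NotImplementedError

    def backward(self, gy):
        raise NotImplementedError

class Square(Function):
    def forward(self, x):
        return x ** 2

    def backward(self, gy):
        x = self.inputs[0].data
        return 2 * x * gy  # d/dx x^2 = 2x
```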