This repository was archived by the owner on Nov 17, 2023. It is now read-only.

Conversation

@Zha0q1
Contributor

@Zha0q1 Zha0q1 commented Oct 9, 2020

This PR fixes std and var on large tensors.

@mxnet-bot

Hey @Zha0q1, thanks for submitting the PR.
All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands:

  • To trigger all jobs: @mxnet-bot run ci [all]
  • To trigger specific jobs: @mxnet-bot run ci [job1, job2]

CI supported jobs: [unix-cpu, windows-cpu, windows-gpu, sanity, centos-cpu, edge, centos-gpu, clang, unix-gpu, website, miscellaneous]


Note:
Only the following 3 categories can trigger CI: PR Author, MXNet Committer, Jenkins Admin.
All CI tests must pass before the PR can be merged.

@Zha0q1
Contributor Author

Zha0q1 commented Oct 10, 2020

local run:

ubuntu@ip-172-31-38-169:~/incubator-mxnet/build$ pytest ../tests/nightly/test_np_large_array.py::test_std
/home/ubuntu/anaconda3/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
  return f(*args, **kwds)
/home/ubuntu/anaconda3/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
  return f(*args, **kwds)
/home/ubuntu/anaconda3/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
  return f(*args, **kwds)
================================================ test session starts ================================================
platform linux -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /home/ubuntu/incubator-mxnet, inifile: pytest.ini
plugins: remotedata-0.3.2, openfiles-0.4.0, astropy-header-0.1.2, hypothesis-5.8.3, arraydiff-0.3, doctestplus-0.5.0
collected 1 item                                                                                                    

../tests/nightly/test_np_large_array.py                               .                                                                     [100%]

=========================================== 1 passed in 284.49s (0:04:44) ==========================================

@Zha0q1
Contributor Author

Zha0q1 commented Oct 12, 2020

@access2rohit

MSHADOW_XINLINE static void Map(index_t i,
                                DType *out,
                                const DType *data,
                                const DType *mean,
Contributor

@access2rohit access2rohit Oct 13, 2020


Can you change the index variables inside the kernel from size_t to index_t?

Contributor Author


changed
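As a side note on why this matters: a signed 32-bit index can only address up to 2^31 - 1 elements, so on tensors larger than that a 32-bit loop index wraps around. The sketch below is illustrative only (it is not MXNet's actual kernel code, and the element count is a made-up example); it just demonstrates the overflow that motivates using index_t, which is 64-bit when MXNet is built with large-tensor support.

```python
import numpy as np

# Illustrative sketch (not MXNet code): why 32-bit index variables
# break on large tensors.
INT32_MAX = 2**31 - 1            # largest value a signed 32-bit index can hold
large_tensor_elems = 2**31 + 7   # a hypothetical element count past the 32-bit range

# A 32-bit index wraps around (modular arithmetic) past INT32_MAX:
wrapped = np.int64(large_tensor_elems).astype(np.int32)

assert large_tensor_elems > INT32_MAX
assert int(wrapped) < 0          # the index went negative, so indexing would be broken
```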

assert inp.grad[-1, 0] == 5 and inp.grad[-1, 1] == -4 and inp.grad[-1, 2] == -1


@use_np
Contributor


Why?

Contributor Author


oops, added it back

with mx.autograd.record():
    out = np.std(inp, axis=1)
out.backward()
assert out.shape == (2, )
Contributor

@access2rohit access2rohit Oct 13, 2020


Can we directly compare the outputs with actual numpy's operators? Call `import numpy as _np`.
Then we probably don't have to worry about the correctness of the formula used. @Zha0q1 wdyt?

Contributor Author


I tried that first, but it would take too long to run on large tensors, so I instead derived an analytical formula. The correctness was verified with a small tensor size first, then I switched to a large tensor.
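The verification approach described here can be sketched as follows. The closed form below, d std / d x_i = (x_i - mean) / (n * std) for population std (ddof=0), is the standard result and is an assumption on my part, not necessarily the exact formula the PR uses; the point is checking an analytical gradient against a finite-difference estimate on a small tensor before trusting it as the expected value in a large-tensor test.

```python
import numpy as np

# Hedged sketch: verify an analytical gradient of std on a small tensor
# against a central finite-difference estimate.
x = np.array([1.0, 2.0, 4.0, 7.0])
n = x.size

# Standard closed form for population std (ddof=0):
#   d std / d x_i = (x_i - mean(x)) / (n * std(x))
analytical = (x - x.mean()) / (n * x.std())

# Central finite-difference check, component by component:
eps = 1e-6
numerical = np.empty_like(x)
for i in range(n):
    xp, xm = x.copy(), x.copy()
    xp[i] += eps
    xm[i] -= eps
    numerical[i] = (xp.std() - xm.std()) / (2 * eps)

assert np.allclose(analytical, numerical, atol=1e-5)
```

Once the formula passes this small-tensor check, it can be evaluated cheaply at a few positions of the large tensor instead of materializing a full numpy reference result.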

Contributor

@access2rohit access2rohit left a comment


LGTM! Resolve merge conflicts

@lanking520 lanking520 added pr-awaiting-testing PR is reviewed and waiting CI build and test pr-awaiting-review PR is waiting for code review and removed pr-awaiting-testing PR is reviewed and waiting CI build and test labels Oct 16, 2020
@sandeep-krishnamurthy sandeep-krishnamurthy merged commit dfda45b into apache:master Oct 16, 2020