FIX: support_enumeration: Disable cache in _indiff_mixed_action #283

Merged
mmcky merged 1 commit into master from gt_fix on Feb 23, 2017
Conversation

@oyamad (Member) commented Feb 22, 2017

Remove cache=True in a jitted function for support_enumeration.

The kernel dies if a notebook using support_enumeration is executed via Run All. After this happens, once support_enumeration is executed, Python in any subsequent session always dies when support_enumeration is called, with the message

LLVM ERROR: Program used external function '_numba.targets.arraymath.np_any.<locals>.flat_any$26.array(bool,_1d,_C)' which could not be resolved!

This does not happen once cache=True is removed from _indiff_mixed_action.

@coveralls

Coverage Status

Coverage remained the same at 82.503% when pulling 5cb1936 on gt_fix into c674557 on master.

@mmcky (Contributor) commented Feb 23, 2017

Thanks @oyamad and @sglyon for your comments.

@mmcky mmcky merged commit 2b6b853 into master Feb 23, 2017
@mmcky mmcky deleted the gt_fix branch February 23, 2017 00:17
@oyamad (Member, Author) commented Feb 23, 2017

Thanks guys.

Just for reference, this bug is to be fixed by numba/numba#2286.

@mmcky (Contributor) commented Feb 23, 2017

@oyamad should we re-enable cache=True after the referenced fix is merged?

@oyamad (Member, Author) commented Feb 23, 2017

Maybe yes, once a version of Numba that incorporates this fix is released and included in a new version of Anaconda.

@mmcky (Contributor) commented Feb 23, 2017

Thanks @oyamad I have opened issue: #285 to remind us.
