
Attempt to solve fread stack imbalance on Windows when rereading. #2488

Merged
mattdowle merged 21 commits into master from fread_imbalance on Dec 7, 2017
Conversation

@mattdowle (Member) commented Nov 16, 2017

Closes #2481

@mattdowle changed the title from "Removed verbose fread messages between 1st and (if any) 2nd read. Att…" to "Attempt to solve fread stack imbalance on Windows when rereading." on Nov 16, 2017
@codecov-io commented Nov 17, 2017

Codecov Report

Merging #2488 into master will decrease coverage by 0.12%.
The diff coverage is 78.87%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #2488      +/-   ##
==========================================
- Coverage   91.55%   91.43%   -0.13%     
==========================================
  Files          63       63              
  Lines       12046    12059      +13     
==========================================
- Hits        11029    11026       -3     
- Misses       1017     1033      +16
Impacted Files Coverage Δ
src/freadR.c 89.38% <43.33%> (-3.57%) ⬇️
src/fread.c 95.96% <88.39%> (-0.35%) ⬇️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8bf7334...907a031. Read the comment docs.

@mattdowle mattdowle requested a review from st-pasha November 17, 2017 00:16
Comment thread src/fread.c
if (jump>0 && !nextGoodLine(&tch, ncol)) {
  stopTeam=true;
  DTPRINT("No good line could be found from jump point %d\n",jump); // TODO: change to stopErr
  continue;
Contributor
I believe this continue should remain

Member Author
Reason for removal was:
error: invalid branch to/from OpenMP structured block
but maybe the continue could be moved to just afterwards. Will try ...

@mattdowle mattdowle added this to the v1.10.6 milestone Nov 17, 2017
@st-pasha (Contributor)

Not sure if this would help or not, but I found the following explanation:

Within a given development domain, [calling conventions] tend not to be a problem. Each language generally has a convention that is implicit for all method calls. C/C++ uses the same convention for invocation of C/C++ calls, Python for other Python calls, etc. When crossing domains, it can become a problem if one domain doesn't use the same as another. Perhaps most common in windows, a function exported with "C" style declarations (cdecl) may cause an unbalanced stack (or worse) when called as though it had a stdcall convention, which is the method recognized by WINAPI (windows system) calls.

So the root of the problem could be that some Windows system call somehow gets invoked with wrong calling convention, which after many calls may lead to stack corruption. Which call it is I'm not sure, but could be snprintf or clock_gettime or something similar.

Comment thread src/fread.c
ASSERT(allocnrow <= nrowLimit, "allocnrow(%llu) < nrowLimit(%llu)", (llu)allocnrow, (llu)nrowLimit);
#pragma omp parallel num_threads(nth)
{
int me = omp_get_thread_num();
Contributor
you can add here

  bool reportProgress = false;
  #pragma omp master
  reportProgress = args.showProgress;

then everywhere you check (me==0 && args.showProgress) you can just replace it with reportProgress

4 participants