Conversation

@joshjung

Fix for #8.

Adding a unit test for a special scenario that was failing for me. Also adding options.split to improve performance on predictable streams of data with boundaries that tokens will never cross (e.g. a CSV file).
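The idea behind options.split can be sketched roughly as follows. This is a hypothetical illustration, not the library's actual API: tokenize and its rule format here are stand-ins. The point is that when no token can cross a known boundary, each piece between boundaries can be tokenized independently, so the tokenizer never has to carry partial-match state across the whole stream.

```javascript
// Naive regex-based tokenizer applied to one bounded segment.
// (Stand-in for the real tokenizer; rules are [type, regex] pairs.)
function tokenize(segment, rules) {
  const tokens = [];
  let rest = segment;
  while (rest.length > 0) {
    let matched = false;
    for (const [type, re] of rules) {
      const m = rest.match(re);
      if (m && m.index === 0) {
        tokens.push({ type, value: m[0] });
        rest = rest.slice(m[0].length);
        matched = true;
        break;
      }
    }
    if (!matched) rest = rest.slice(1); // skip an unrecognized character
  }
  return tokens;
}

// With a split boundary that tokens never cross (e.g. "\n" in a CSV),
// each piece can be tokenized in isolation -- no state spans pieces.
function tokenizeWithSplit(chunk, split, rules) {
  return chunk.split(split).flatMap((piece) => tokenize(piece, rules));
}

const rules = [
  ["number", /\d+/],
  ["comma", /,/],
];
const tokens = tokenizeWithSplit("1,2\n3,4", "\n", rules);
console.log(tokens.map((t) => t.value).join(" ")); // 1 , 2 3 , 4
```

Splitting first keeps each tokenize call small and independent, which is where the performance win on huge, regularly delimited files would come from.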

jjung added 3 commits November 25, 2014 11:09
…property to help speed up tokenization for huge files where there are distinct breaks between tokens (e.g. CSV file).
@Floby
Owner

Floby commented Nov 26, 2014

Hello,
Thank you for your pull request. I don't have much time right now to take a close enough look, but I will soon.

What I could also do is merge your pull requests as well as #11 into a 1.2 branch and publish that one to npm.

@joshjung
Author

Thanks @Floby. Let's hold off on the merge until I can be certain it solves my scenario (you can find it here: https://github.com/joshjung/slim-to-jade).

The fixes in this merge request took care of issue #8, but I still found other issues as I continued, primarily with how the chunks are being broken up and how process.nextTick was being called. It made me wonder whether I was not using the tokenizer properly or whether some underlying structure needs to change.

@joshjung joshjung changed the title Several bug fixes + performance improvements Several bug fixes + performance improvements (HOLD FOR NOW) Nov 27, 2014
