Conversation
IMHO, this seems really complex. That JSON configuration looks really scary to me, and sadly I don't think I could bring myself to use it. Fundamentally, the need you are addressing is that you want to manipulate a text file and leverage Stellar as part of that pipeline, right? In a perfect world, I'd be able to use Unix-y tools like Grep, Sed, Awk, etc. and introduce Stellar expressions as part of the pipeline. I know it is already possible to some degree with the REPL. I am wondering if, with a little brainstorming, we might arrive at a simpler solution that allows us to leverage tried-and-true UNIX pipelines, so that people can use the Unix-y tools they are familiar with. I feel we'll get better user traction with a simpler, more familiar approach.
I think the need is to 'pre-create' complex objects and re-use them across multiple stellar rule executions, with the bloom filter being the example. Is that close?
@nickwallen I definitely hear you; the JSON configs are more complex than I'd like, and I'd like another, more composable solution using stellar functions available in the REPL. The intent of this PR, however, was a minimal extension of our existing extractor config to enable this use-case. This PR isn't creating a complex bit of JSON per se; rather, it's reusing our existing complex bit of JSON to enable the use-case. I would love to create a discuss thread about a more REPL-friendly approach, but I think that belongs as a follow-on. Thoughts? Edit: I wanted to make sure it was clear that this is just a reuse and very slight extension of the extractor config documented here.
@ottobackwards Yes, that's spot on. It's to enable creation of summarization objects in a manner similar to (and reusing the configs and infrastructure of) the flat file loader. The idea is that this is just one way to create these objects; we might (and likely will) have more ways to create them in the future.
Also, a wizard-like UI could simplify this dramatically. That was one of the thoughts behind extending and reusing the existing infrastructure in this first pass, rather than creating a new way of iterating over flat files. I expect some users will want to use the UI and will be frightened of composing stellar functions in the REPL.
After more consideration and more egg nog, I decided to create a DISCUSS thread about this entire use-case; we can move the discussion there. EDIT: To make sure we had a full-throated discussion, I also included what I'd like to see as step 2, which I think is closer to what you were proposing, @nickwallen. Interested to hear your thoughts there.
So, the discuss thread has been going for some time now, and the discussion is mostly around forward-thinking extensions to this. Are we at the point of agreeing that this is a viable first step and that it can be reviewed?
justinleet left a comment:
Thanks for the contribution here; this is definitely good stuff. I took a first pass through the code and left a couple of minor comments, on the assumption that the dev thread has pretty much died down. I still need to spin it up, and I look forward to testing it out.
```java
    return LocationStrategy.getLocation(input, fs);
  }

  public void extractLineByLine( List<String> inputs
```
It's not something I'd want any work done on here, but have we put any thought into having this take a generic record delimiter, rather than assuming lines? It's probably just a light refactoring, but it's always really annoying to me when I need a non-newline delimiter and then end up not being able to trivially specify it. It's uncommon, but it does seem to pop up now and again.
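To illustrate the suggestion, a purely hypothetical shape for such a generalization (this signature is not in the PR):

```java
import java.util.List;

// Hypothetical sketch only -- not code from this PR. The idea is that the
// extractor takes a configurable record delimiter instead of assuming
// newline-separated input; passing "\n" preserves the current behavior.
public interface RecordExtractor {
  void extractRecords(List<String> inputs, String recordDelimiter);
}
```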
| -m  | --import_mode | No  | The Import mode to use: LOCAL, MR. Default: LOCAL |
| -om | --output_mode | No  | The Output mode to use: LOCAL, HDFS. Default: LOCAL |
| -i  | --input       | Yes | The input data location on local disk. If this is a file, then that file will be loaded. If this is a directory, then the files will be loaded recursively under that directory. |
| -i  | --output      | Yes | The output data location. |
-o here, not -i
good catch
```java
import java.util.Optional;

public interface Writer {
  void validate(Optional<String> output, Configuration hadoopConfig);
```
Why doesn't this either return something or throw a checked exception? Right now, validate is just something you call that doesn't do anything except throw a runtime exception in the impls, rather than giving the caller an explicit ability to do anything about the validation failure.
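A hypothetical shape for what's being suggested (the exception type here is made up for illustration, not taken from the PR):

```java
import java.util.Optional;
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: declaring a checked exception forces callers to handle
// a validation failure explicitly instead of relying on a RuntimeException.
public interface Writer {
  class InvalidOutputException extends Exception {
    public InvalidOutputException(String message) { super(message); }
  }

  void validate(Optional<String> output, Configuration hadoopConfig) throws InvalidOutputException;
}
```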
good catch.
I spun this up in the context of the combined PR, and everything worked as advertised, barring the UI because of ES5 issues. I was able to validate that data flowed through as expected by querying against the indices directly. I'm good with this being a viable first step, and am +1.
What sort of issues?
@mmiklavc Check out #882 (comment). Looks like the squid mapping @cestella uses doesn't line up (which isn't terribly surprising, because it was ES2 until a couple of hours ago).
Just following up: I have migrated the mapping of existing data to a template in the instructions, and the type mismatch has been addressed.
I'm still +1 on this, thanks again.
Ok, @justinleet has given a +1. Do we have any remaining reservations after the discussion thread and the review here? If not, I'm going to commit on Monday.
I'm not getting in front of the train on this. I don't know how to enter "don't mind me, pretend I'm not here".
Haha @ottobackwards, neutral would be a +0, which is fine. Thanks for your constructive comments on the discuss thread and here. As always, they're much appreciated. :)
+0
@cestella Is there any document or description regarding this feature? How does its performance compare with normal HBase enrichment?
+0. I'm sure what's here is solid, but I have not reviewed it myself; I just want to clear the way for this to get merged. I don't necessarily like the usability of this approach, but I think it is a good first step, and merging this doesn't preclude providing alternative approaches or enhancing this approach for better usability later. I do love the use case that was added to drive the need for this functionality. So, good stuff @cestella!
Contributor Comments
We have a nice, generalized infrastructure for loading data into HBase and interacting with it via `flatfile_loader.sh` and `ENRICHMENT_GET()`. It is also useful to summarize a set of data into a static data structure, store it on HDFS, and interact with it via stellar. To this end, to complement `flatfile_loader.sh`, we should have a `flatfile_summarizer.sh` that, using the same extractor config, will process a flat file and output a serialized object.

The use-case for this is as follows:
Let's say that I have a static list of domains in the second column of a CSV, `domains.csv`, and I want to generate a bloom filter containing those domains sans TLD.

I should be able to create a file called `bloom.ser` with the serialized bloom filter, given an extractor config along the lines of the sketch below.
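The exact config from the original PR body is not reproduced here; the following is a hedged reconstruction based on the extractor config conventions this PR describes (the `state_init`/`state_update`/`state_merge` fields are assumptions on my part):

```json
{
  "config" : {
    "columns" : {
      "domain" : 1
    },
    "value_transform" : {
      "domain" : "DOMAIN_REMOVE_TLD(domain)"
    },
    "value_filter" : "LENGTH(domain) > 0",
    "state_init" : "BLOOM_INIT()",
    "state_update" : {
      "state" : "BLOOM_ADD(state, domain)"
    },
    "state_merge" : "BLOOM_MERGE(states)",
    "separator" : ","
  },
  "extractor" : "CSV"
}
```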
Note that the associated stellar function `OBJECT_GET` is available in #880.
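As a rough illustration of the intended interaction (a hypothetical REPL session; the HDFS path and domain are made up, and `BLOOM_EXISTS` is the existing Stellar bloom membership check):

```
[Stellar]>>> bloom := OBJECT_GET('/apps/metron/objects/bloom.ser')
[Stellar]>>> BLOOM_EXISTS(bloom, DOMAIN_REMOVE_TLD('example.com'))
true
```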
Testing Plan
We should run the test plan for #445 to ensure no regressions, since 80% of this PR is refactoring existing abstractions for reuse.
Write out a String Locally
We are going to take the top 10k Alexa domains (saved as part of #445's test plan to `~/top-10k.csv`).

1. Create `~/extractor_sample.json` with contents along the lines of the sketch after this list.
2. Run the summarizer:
   ```
   $METRON_HOME/bin/flatfile_summarizer.sh -i ~/top-10k.csv -o ~/sample.ser -e ./extractor_sample.json -p 5 -b 128
   ```
3. Run `hexdump -C ./sample.ser` and ensure that there is a string in there. It may start or end with some non-ASCII bytes.
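A hedged sketch of what `~/extractor_sample.json` could look like for summarizing the domains into a single string (the exact contents from the PR are not reproduced here; `JOIN` is the existing Stellar list-join function, and the state fields follow the conventions sketched above):

```json
{
  "config" : {
    "columns" : {
      "rank" : 0,
      "domain" : 1
    },
    "value_filter" : "LENGTH(domain) > 0",
    "state_init" : "''",
    "state_update" : {
      "state" : "JOIN([state, domain], ',')"
    },
    "state_merge" : "JOIN(states, ',')",
    "separator" : ","
  },
  "extractor" : "CSV"
}
```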
Typosquatting Use-case Testing
You can also follow the testing plan for #882, as this code is merged into that PR and it shows how this feature can be used in a real use-case.
Pull Request Checklist
Thank you for submitting a contribution to Apache Metron.
Please refer to our Development Guidelines for the complete guide to follow for contributions.
Please refer also to our Build Verification Guidelines for complete smoke testing guides.
In order to streamline the review of the contribution we ask you follow these guidelines and ask you to double check the following:
For all changes:
For code changes:
Have you included steps to reproduce the behavior or problem that is being changed or addressed?
Have you included steps or a guide to how the change may be verified and tested manually?
Have you ensured that the full suite of tests and checks has been executed in the root metron folder via the build verification command (see the sketch after this list)?
Have you written or updated unit tests and or integration tests to verify your changes?
If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
Have you verified the basic functionality of the build by building and running locally with the Vagrant full-dev environment or the equivalent?
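A hedged sketch of the usual full-suite verification (the exact command is an assumption; confirm against the Metron PR template):

```
# Assumed full-build verification command; not taken verbatim from this PR
mvn -q clean integration-test install && dev-utilities/build-utils/verify_licenses.sh
```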
For documentation related changes:
Have you ensured that the format looks appropriate for the output in which it is rendered by building and verifying the site-book? If not, then run the site-book build and verify the changes via `site-book/target/site/index.html`:
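A hedged sketch of the site-book build (the exact commands are an assumption based on the standard Metron workflow):

```
# Assumed site-book build commands
cd site-book
mvn site
```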
Note:
Please ensure that once the PR is submitted, you check Travis CI for build issues and submit an update to your PR as soon as possible. It is also recommended that Travis CI is set up for your personal repository, so that your branches are built there before submitting a pull request.