[SPARK-22232][PYTHON][SQL] Fixed Row pickling to include __from_dict__ flag #20280
Conversation
```python
self.assertEqual(df.collect(), [Row(key=str(i), value=str(i)) for i in range(100)])

# field names can differ.
df = rdd.toDF(" a: int, b: string ")
```
This test was flawed because it only worked since ("a", "b") happens to be in the same alphabetical order as ("key", "value"). If it were ("key", "aaa") it would fail.
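A hypothetical variant of the test illustrating that point (assuming the same test fixture, i.e. `self.sc` and `Row`):

```python
# The kwargs ("key", "aaa") sort to ("aaa", "key"), so the string values end up
# lined up against the int column of the schema instead of the string column.
data = [Row(key=i, aaa=str(i)) for i in range(100)]
rdd = self.sc.parallelize(data, 5)
df = rdd.toDF(" a: int, b: string ")  # expected to fail with a type/verification error
```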
But wasn't this testing `field names can differ.` by the user's schema input?
Yeah, I think you're right. The schema should be able to override names no matter what. I was thinking it was flawed because it relied on the Row fields being sorted internally, so just changing the kwarg names (not the order) could cause it to fail. That seems a little strange, but maybe it is the intent?
I still think this test is invalid because it only works since the Rows get serialized and lose the `__from_dict__` flag. This should be the same thing, but would fail:

```python
df = spark.createDataFrame([Row(key=i, value=str(i)) for i in range(100)], schema=" a: int, b: string ")
```

And it would also fail if the named args of the Row objects were not in the same alphabetical order as the schema:

```python
data = [Row(z=i, y=str(i)) for i in range(100)]
rdd = self.sc.parallelize(data, 5)
df = rdd.toDF(" a: int, b: string ")
```

This fails because the Row fields would be sorted into a different order, switching the type order.
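For reference, the kwargs sorting being described looks like this (at the time of this PR):

```python
>>> from pyspark.sql import Row
>>> Row(z=1, y="1")   # kwargs are sorted by field name, flipping the value order
Row(y='1', z=1)
```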
|
After looking into this, it seems like the behavior of the If a If a |
|
@MrBago @HyukjinKwon I think the above behavior of the |
|
Test build #86193 has finished for PR 20280 at commit
|
|
Test build #86204 has finished for PR 20280 at commit
|
Force-pushed from 2192e49 to a7d3396
|
Test build #86205 has finished for PR 20280 at commit
|
|
I think we should raise an error if Indexing by field name takes the same code path for Rows that are and are not |
|
Ahh. @zero323 too if you are available, because I think we had a talk about this somewhere a long time ago. Yea, I am aware of the issue itself. Will take a look soon. |
|
BTW the performance issue is orthogonal to the serialization issue raised in this jira/PR. Maybe we should avoid scope creep in this thread. |
|
^ Yup, let's leave the performance issue out. I think we might have to raise an error too but it's kind of a radical change. As a note, sorted fields are documented: spark/python/pyspark/sql/types.py Lines 1451 to 1452 in 3e40eb3
My main concern is:
I think this is kind of a breaking change because we will basically now disallow the names in the schema given explicitly by the user for |
|
Thanks @HyukjinKwon and @MrBago for reviewing. After thinking about this some more, I don't think this is the right solution. Like @HyukjinKwon pointed out, the supplied schema names should always override any names specified from the `Row`. Something still seems wrong though because the example @MrBago has in the JIRA is inconsistent. I'll go over it again tomorrow, when I'm more awake. |
|
Hey @BryanCutler is this still on your radar? |
|
Hey @holdenk , yeah I've been meaning to circle back to this and get some kind of resolution. I'll try to take another look later this week. |
|
Awesome, looking forward to the update. |
|
Looking at this again, I'm back to thinking this is the right fix. Based on #14469, if the |
|
Let me restate what I think the intended behavior of Row is: If a If a |
|
Test build #89530 has finished for PR 20280 at commit
|
I'd also like to follow up with another PR to address some of the usability issues with |
|
Also, this will cause a breaking change if

and this would work but might be slower, depending on how complicated the schema is, because now the field names are searched for instead of just going by position.

So if we go forward with this fix, I should probably add something in the migration guide. |
|
@HyukjinKwon @holdenk and @MrBago have any thoughts on moving forward with this change? |
|
oops, I missed this. will take a look shortly. |
|
Right. Will triple check for sure but I am with you for now. Yup, something in the migration guide makes much more sense to me too. |
|
I'm kind of worried the example you give above is actually fairly common - construct with kwargs, and then (re-)name the columns. Perhaps it's worthwhile to consider a config switch? |
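A sketch of that pattern (hypothetical column names, assuming a SparkContext `sc`), which today happens to work only because the sorted `Row` fields line up positionally with the renamed schema:

```python
# Rows built with kwargs, then renamed via an explicit schema. The sorted fields are
# ("age", "name"), which by coincidence matches the (bigint, string) order below.
rows = sc.parallelize([Row(name="a", age=1), Row(name="b", age=2)])
df = rows.toDF(" x: bigint, y: string ")
```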
|
If the renaming scenario works in most cases as expected, I think it'd be worthwhile to have a configuration; however, the previous behaviour actually looks odd because it's going to work only in certain weird conditions, when the fields in the given schema happen to be in the alphabetical order mapping to the fields in Row (#20280 (comment)). Otherwise this case already fails as well. The test case modified in #20280 (comment) actually works only because of this.

I thought about this for a while and failed to describe what the configuration does .. It sounded like describing a bug as if it were proper behaviour that can be controlled by a configuration .. I think this one sounds more like a bug fix to me so far. The workaround should be relatively easy. Maybe, would it be good enough to describe the workaround in the guide instead? I think it should be fine if we just use a map to convert |
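One possible shape of that workaround (a sketch, assuming a SparkContext `sc`): map each `Row` to a plain tuple in the order the target schema expects, instead of relying on the Row's internal field order:

```python
data = [Row(key=i, value=str(i)) for i in range(100)]
rdd = sc.parallelize(data, 5)
# Pick the fields explicitly in schema order, then apply the renamed schema.
df = rdd.map(lambda r: (r.key, r.value)).toDF(" a: int, b: string ")
```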
|
BTW, I believe it's not so easy to pass a configuration from a very quick look because the exception usually would be thrown in a Python worker process. |
|
@BryanCutler, mind if I ask you to clarify what happens for end-to-end cases in the PR description (like before & after, with an explanation of the reasons)? The change looks small but is possibly a breaking change for end-to-end cases, although I think for now we are restoring the correct behaviour as expected. |
|
Thanks @HyukjinKwon and @felixcheung , I'm a bit worried too that this might break someone's code, but it doesn't affect

I'm not sure about adding a config switch; it might be a little hard to add, and it could be confusing to explain to the user that it only applies when serialized and that the schema would need to be sorted by the original Row keywords. I'll go ahead and update the migration guide, and expand on the PR description to hopefully make the change as clear as possible. |
|
I'm worried that people might have two rows with different meanings but the same type and their application will start producing garbage #s. I think a lot of people go from RDDs of Rows to DFs in PySpark these days, so I'm a little nervous about this change even though I think it's a step in the right direction. How about a config flag defaulting to off, and we switch over @ 3.0? |
|
Probably that'd work, but it'd also be trickier to add / remove that configuration. Another similar option is maybe to just close this for now and target it for 3.0.0, since we already started to talk about it. |
|
@HyukjinKwon , I was also thinking about holding off on this until 3.0.0 and then making a clean switch. What do you think about that @holdenk ? This bug doesn't block anything since there are workarounds, so fixing it is just to make things more consistent. |
|
Holding off is fine; however, I am less sure about the configuration if that's not something you guys feel strongly about. |
|
closing now, will revisit for Spark 3.0 |
|
@BryanCutler, do you think we should try it out in 3.0? |
|
@HyukjinKwon Does it really make any sense to introduce such a change at this point? If SPARK-27884 is to be followed, it will survive at most two minor versions, unless Spark developers are committed to supporting Python 3.4 (already end-of-life) and 3.5 (which will reach its end-of-life in the middle of 2020) after dropping Python 2 support. |
```diff
 if kwargs:
-    # create row objects
+    # create row object from named arguments, order not guaranteed so will be sorted
     names = sorted(kwargs.keys())
```
Thanks for pointing that out.
In my opinion dropping 3.5 support should be followed by removal of sorting by name (from what I've seen this behavior is confusing at best, and it will have no use once 3.5 support is dropped), although that might be hard to sell, especially in a minor version.
Ideally I would see Row going away completely, but that's just pie in the sky.
Since we're in 3.0, we might have to consider dropping that sort-by-name behaviour .. yes.
That'd be great, but 3.0 only deprecates Python 2 support, it doesn't drop it. So the main reason why sorting was introduced stays valid.
If Spark 2.5 had been accepted, Python 2 could have been deprecated there and dropped in 3.0. In general I think that bringing Python 2 into Spark 3 is a mistake, but I am the first to admit I am biased.
I think the Spark 2.5 idea has pretty much failed (and I don't personally support that idea either). This thing is rather legitimate to make happen between minor releases. Actually, 3.0 is getting close. We could just let 3.0 release first and then optionally remove that sorting behaviour too.
|
Yeah, this has been in my queue to look at again, but I haven't been able to yet. Besides dropping support for Python versions, is it possible to remove the need for sorting fields by requiring an OrderedDict be used for versions before 3.6, or something similar?
|
One could drop `**kwargs`:

```python
def __new__(self, *args):
    # "Keyword" path
    if args and len(args) == 1 and isinstance(args[0], OrderedDict):
        ...
    else:
        return tuple.__new__(self, args)
```

but that's a breaking change with significant potential impact (updating the docs alone would require a lot of work). In my opinion, not something that is justified only to support EOL Python versions, especially when 3.6 has been around for almost 3 years now, and migration doesn't require any changes in the user codebase. |
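Under that sketch, construction would look something like this (illustrative only; not an existing API):

```python
from collections import OrderedDict

# Instead of Row(key=1, value="1"), the caller passes an explicitly ordered mapping:
row = Row(OrderedDict([("key", 1), ("value", "1")]))
```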
|
Yes, that is kind of what I was thinking @zero323 , but still allow kwargs for Python >= 3.6, where they will be ordered. To prevent a breaking change, we could offer a conf that would fall back to creating a `LegacyRow`:

```python
class Row(tuple):
    def __new__(self, *args, **kwargs):
        if args and kwargs:
            raise ValueError("Can not use both args "
                             "and kwargs to create Row")
        if py_version < "3.6":
            if len(args) != 1 and not isinstance(args[0], OrderedDict):
                if conf.get("python.useLegacyRow"):
                    return LegacyRow(args, kwargs)
                else:
                    raise ValueError(...)
            else:
                kwargs = args[0].to_dict()
        if kwargs:
            # create row objects
            names = kwargs.keys()
            row = tuple.__new__(self, [kwargs[n] for n in names])
            row.__fields__ = names
            # row.__from_dict__ = True  ## Don't need anymore, I think
            return row
        else:
            # create row class or objects
            return tuple.__new__(self, args)


class LegacyRow(Row):
    ...
    names = sorted(kwargs.keys())
    ...
    row.__from_dict__ = True
```

@HyukjinKwon and @zero323 does this look like a possibility? |
|
@BryanCutler The idea of controlling sorting behavior via configuration seems legit, but I wouldn't go so far as adding a subclass, unless we plan to have a diverging implementation later in time (though I think that in such a case it would make more sense to have a common base class, or dynamically provide one, or another implementation). In other words we could just re-sort the input if the legacy behaviour is requested:

```python
class Row(tuple):
    def __new__(self, *args, **kwargs):
        if args and kwargs:
            raise ValueError("Can not use both args "
                             "and kwargs to create Row")
        if kwargs:
            if sys.version_info > (3, 6) and conf.get("python.useLegacyRowBehaviour"):
                names, values = zip(*sorted(kwargs.items()))
            else:
                names, values = list(kwargs.keys()), kwargs.values()
            row = tuple.__new__(self, values)
            row.__fields__ = names
            row.__from_dict__ = True
            return row
        else:
            # create row class or objects
            return tuple.__new__(self, args)
```

I don't think that providing a separate path for

```python
d: OrderedDict = ...
Row(**d)
```

makes much sense. If anything there should be a separate path for spark/python/pyspark/sql/types.py lines 1043 to 1044 in 2115bf6.

Side notes:

* In general schema inference will require some work to make it consistent with spark/python/pyspark/sql/types.py line 1059 in 2115bf6, which would be undesired in 3.7 without legacy mode. |
|
@zero323 and @HyukjinKwon I think we should open another JIRA for this to continue the discussion and send a message to the dev list, what do you guys think? |
|
Yeah, +1 for JIRA. It's too long to read .. lol. Seems like @zero323 sent an email to the dev list about dropping Python 3.4 and 3.5 (http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Deprecate-Python-lt-3-6-in-Spark-3-0-td28168.html) |
|
I made https://issues.apache.org/jira/browse/SPARK-29748 to discuss dropping the sorting |
What changes were proposed in this pull request?

When a `Row` object is created using kwargs, the order of the keywords can not be relied upon (except on newer Python versions, 3.6+, where kwargs preserve insertion order). The fields are sorted in the constructor and a flag `__from_dict__` is set to indicate that this object was created from kwargs, so that other areas in Spark can access row data using field names instead of by position. This change includes the `__from_dict__` flag only when pickling a Row that was made from kwargs, so that the behavior is preserved if the Row gets pickled.

How was this patch tested?

Fixed existing tests that relied on fields and schema being in the same alphabetical order. Added a new test to create a `Row` from positional arguments where order matters.
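A self-contained toy (not the actual PySpark implementation) showing the kind of behaviour described above: the kwargs-origin flag is included in `__reduce__` so it survives a pickle round trip.

```python
import pickle


def _make_row(fields, values, from_dict):
    # Rebuild a row from its pickled pieces, restoring both attributes.
    row = ToyRow(*values)
    row.__fields__ = fields
    row.__from_dict__ = from_dict
    return row


class ToyRow(tuple):
    def __new__(cls, *args, **kwargs):
        if kwargs:
            names = sorted(kwargs.keys())  # sorted by name, as discussed above
            row = tuple.__new__(cls, [kwargs[n] for n in names])
            row.__fields__ = names
            row.__from_dict__ = True       # mark that this row came from kwargs
            return row
        return tuple.__new__(cls, args)

    def __reduce__(self):
        if hasattr(self, "__fields__"):
            # Include the flag so the unpickled row behaves like the original.
            return (_make_row, (self.__fields__, tuple(self),
                                getattr(self, "__from_dict__", False)))
        return tuple.__reduce__(self)


r = pickle.loads(pickle.dumps(ToyRow(key=1, value="1")))
assert r.__from_dict__ and r.__fields__ == ["key", "value"]
```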