Conversation
this adds some tests to account for the new behaviour of https://github.com/dfinity/candid/pull/110f Just a first stab, untested, and kinda running out of steam.
I am getting …, which sounds like a lie. Also, it might be helpful to not number the tests sequentially, but to use the line number to identify them; that makes it easier to jump to them. This is what the test driver in Motoko does.
I could fix this with:

```diff
--- a/rust/candid/src/parser/value.rs
+++ b/rust/candid/src/parser/value.rs
@@ -154,7 +154,7 @@ impl IDLValue {
                 let ty = crate::types::internal::find_type(id).unwrap();
                 self.annotate_type(from_parser, env, &ty)?
             }
-            (IDLValue::Null, Type::Opt(_)) if from_parser => IDLValue::None,
+            (IDLValue::Null, Type::Opt(_)) => IDLValue::None,
             (IDLValue::Float64(n), Type::Float32) if from_parser => IDLValue::Float32(*n as f32),
             (IDLValue::Number(str), t) if from_parser => match t {
                 Type::Int => IDLValue::Int(str.parse::<Int>()?),
```

but then it seems the distinction between … Now at …, which means that the rule … (which I find dubious and non-compositional, and I wouldn’t be surprised if it breaks stuff) is not implemented. Will leave this to @chenyan-dfinity at this point.
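To illustrate what dropping the `from_parser` guard changes, here is a minimal self-contained sketch (hypothetical enums, not the actual candid crate): with the guard gone, a `Null` value annotated at an `Opt` type always coerces to `None`, regardless of whether it came from the parser.

```rust
// Sketch only: tiny stand-ins for the candid crate's IDLValue/Type.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum IDLValue { Null, None, Bool(bool) }

#[allow(dead_code)]
enum Type { Null, Bool, Opt(Box<Type>) }

// Simplified annotate step: the only rule shown is the one the diff touches.
fn annotate(v: IDLValue, t: &Type) -> IDLValue {
    match (v, t) {
        // No `if from_parser` guard any more: null at opt is always none.
        (IDLValue::Null, Type::Opt(_)) => IDLValue::None,
        (v, _) => v,
    }
}

fn main() {
    assert_eq!(
        annotate(IDLValue::Null, &Type::Opt(Box::new(Type::Bool))),
        IDLValue::None
    );
    // At a non-opt type, null passes through unchanged.
    assert_eq!(annotate(IDLValue::Null, &Type::Null), IDLValue::Null);
}
```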
In the untyped setting, we need … The whole set of …
```
assert blob "DIDL\00\01\7e\02" !: (opt bool) "opt: parsing invalid bool at opt bool";
assert blob "DIDL\01\6e\7f\01\00\00" == "(null)" : (opt opt null) "opt: parsing (null : opt null) at opt opt null gives null, not opt null";
```
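For readers less familiar with the wire format, the first blob above breaks down as: the `DIDL` magic, an empty type table (`\00`), one value (`\01`) of type `bool` (`\7e`, i.e. -2 as SLEB128), and then the value byte `\02`, which is not a valid bool encoding. A minimal sketch of the bool-byte check the assertion relies on (not the candid crate's actual decoder):

```rust
// Sketch: a Candid bool value must be encoded as exactly 0 or 1;
// any other byte is a decoding error, as the first assertion expects.
fn decode_bool(byte: u8) -> Result<bool, String> {
    match byte {
        0 => Ok(false),
        1 => Ok(true),
        b => Err(format!("parsing invalid bool: 0x{:02x}", b)),
    }
}

fn main() {
    assert_eq!(decode_bool(0x00), Ok(false));
    assert_eq!(decode_bool(0x01), Ok(true));
    // The \02 in "DIDL\00\01\7e\02" must be rejected.
    assert!(decode_bool(0x02).is_err());
}
```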
`// special opt and record subtyping`
Also some tests for opt variant?
yes, didn't get to variant yet :-)
Right, but why do you even have to distinguish the two? It’s not that …
This is a significant rewrite of the section in the spec describing deserialization. Some points worth noting:

* This replaces the elaboration relation with one that takes a value and a type as input, and returns a value. This makes it clearer that there is no “input _type_” of deeper relevance.
* One great benefit: no more long prose about how the rules assume these types to be principal, even if they aren’t. This clarifies questions like [this one](#126 (comment)).
* Also, the existing rules about “elaborating function values” were kinda bogus, as function values are just accepted as they are. This is much easier now, also good.
* Unfortunately for this application, our textual representation has overloading. Instead of defining yet another abstract value algebra, I did a bit of hand-waving and defined an “overloading-free fragment” of the textual format for this section.
* A fair number of rules become simpler. Promising!
* I was able to phrase some interesting properties more formally. Also promising! Didn’t actually prove them, though, although we should.
* Still unclear to me what we mean by “subtyping is complete”; see the section there.
* Decoding is still an inductive relation, so not fully algorithmic. This may be an issue with a straightforward formalization; not all theorem provers like negative occurrences of inductive relations in their rules.
* Syntax of the relations is up for discussion.
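The first bullet — coercion as a function from a value and an expected type to a value, rather than an elaboration relation — can be sketched as follows. This is a hand-wavy, self-contained illustration with hypothetical `Value`/`Type` enums and a made-up `coerce` name; it is not the spec's actual rule set, just its shape on a few representative rules:

```rust
// Sketch: coercion takes (value, expected type) and returns a value,
// or None when no rule applies. Names and rule selection are illustrative.
#[derive(Debug, Clone, PartialEq)]
enum Value { Null, None, Opt(Box<Value>), Nat(u64), Int(i64) }

enum Type { Null, Nat, Int, Opt(Box<Type>) }

fn coerce(v: &Value, t: &Type) -> Option<Value> {
    match (v, t) {
        (Value::Null, Type::Null) => Some(Value::Null),
        // null is accepted at every opt type:
        (Value::Null, Type::Opt(_)) => Some(Value::None),
        (Value::Nat(n), Type::Nat) => Some(Value::Nat(*n)),
        // nat <: int
        (Value::Nat(n), Type::Int) => Some(Value::Int(*n as i64)),
        (Value::Int(n), Type::Int) => Some(Value::Int(*n)),
        // opt: try to coerce the payload; a failing payload
        // recovers to none instead of failing the whole decode.
        (Value::Opt(inner), Type::Opt(t2)) => Some(match coerce(inner, t2) {
            Some(w) => Value::Opt(Box::new(w)),
            None => Value::None,
        }),
        _ => None,
    }
}

fn main() {
    assert_eq!(coerce(&Value::Nat(5), &Type::Int), Some(Value::Int(5)));
    assert_eq!(
        coerce(&Value::Null, &Type::Opt(Box::new(Type::Nat))),
        Some(Value::None)
    );
    // An int payload does not fit nat, so the opt recovers to none.
    assert_eq!(
        coerce(&Value::Opt(Box::new(Value::Int(-1))), &Type::Opt(Box::new(Type::Nat))),
        Some(Value::None)
    );
}
```

Note how there is no separate "input type" anywhere: the expected type alone drives the recursion.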
Ah right. It was a workaround for when we don't have type annotations. Yes, this is not necessary now.
Gave it a shot, but it is still needed internally in the current code structure, because … The spec solves this by making the …
Just added a few … It’s a bit hard that the test suite will panic (rather than report an error and continue) upon bad subtyping. The …
The Rust code changes might need some cleanup; feel free to push to this branch.
this propagates coercion errors (via a special sentinel value), and recovers when parsing opt. It also implements subtyping-to-constituent-type. The updated Candid test suite from dfinity/candid#126 passes, but I am not overly confident in it. More thorough randomized testing would be needed.
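The sentinel mechanism described above can be sketched like this (names hypothetical, not the actual implementation): a coercion failure yields a dedicated sentinel value that propagates upward, and the nearest enclosing `opt` catches it and recovers to `none` instead of aborting the decode.

```rust
// Sketch of the sentinel approach: CoercionFailed is a special value
// that stands for "this subterm did not coerce".
#[derive(Debug, PartialEq)]
enum Value { Bool(bool), None, Opt(Box<Value>), CoercionFailed }

// Recovery point: wrapping a payload in opt catches the sentinel.
fn coerce_opt(payload: Value) -> Value {
    match payload {
        // A failed payload under opt becomes none rather than an error.
        Value::CoercionFailed => Value::None,
        v => Value::Opt(Box::new(v)),
    }
}

fn main() {
    assert_eq!(coerce_opt(Value::CoercionFailed), Value::None);
    assert_eq!(
        coerce_opt(Value::Bool(true)),
        Value::Opt(Box::new(Value::Bool(true)))
    );
}
```

Outside of an `opt`, the sentinel would keep propagating and eventually surface as a decoding error.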
Thanks for looking at the code. Feel free to take over and change the Rust code in whatever way you see fit.
…d into joachim/test-new-subtyping