feat: consistent type embedding #3617
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##            devel    #3617      +/-   ##
==========================================
+ Coverage   77.70%   77.90%    +0.19%
==========================================
  Files         434      402       -32
  Lines       37541    32821     -4720
  Branches     1623      909      -714
==========================================
- Hits        29170    25568     -3602
+ Misses       7507     6725      -782
+ Partials      864      528      -336
```

View full report in Codecov by Sentry.
wanghan-iapcm left a comment:
It seems that with this PR, when `neuron == []`, the embedding becomes the identity rather than a linear mapping. The implementation should probably go through the FittingNet, not the EmbeddingNet. Please check if I am wrong.
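The distinction above can be illustrated with a minimal NumPy sketch (hypothetical simplifications of the two network styles, not the deepmd-kit implementation): a stack of hidden layers with `neuron == []` applies no transformation at all, while a fitting-style net with a final linear output layer still maps the input linearly even with no hidden layers.

```python
import numpy as np

rng = np.random.default_rng(0)


def embedding_net(x, neuron):
    # Hypothetical EmbeddingNet-style stack of hidden layers.
    # With neuron == [] there are no layers, so the output is the input.
    for width in neuron:
        w = rng.normal(size=(x.shape[-1], width))
        b = rng.normal(size=(width,))
        x = np.tanh(x @ w + b)
    return x


def fitting_net(x, neuron, out_dim):
    # Hypothetical FittingNet-style stack plus a final linear layer,
    # so even neuron == [] still yields a (learnable) linear mapping.
    x = embedding_net(x, neuron)
    w = rng.normal(size=(x.shape[-1], out_dim))
    b = rng.normal(size=(out_dim,))
    return x @ w + b


one_hot = np.eye(3)  # 3 element types as one-hot inputs
print(np.allclose(embedding_net(one_hot, []), one_hot))  # True: pure identity
print(fitting_net(one_hot, [], 8).shape)                 # (3, 8): still linear
```

This is only a sketch of the reviewer's point: if a linear mapping is wanted when `neuron == []`, the fitting-net shape (with its final linear layer) provides it, whereas a hidden-layer-only embedding net degenerates to the identity.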
iProzd left a comment:
I agree with this PR, but with one question: will we keep the bias for TypeEmbedNet in each layer? It may be confusing for one-hot analyses of the embedding weights, such as interpolation between different elements.
We should not remove the bias if we do not fix the activation function to linear. The configuration of the type embedding may need further discussion, i.e., whether we allow flexible configurations for the type embedding.
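A small sketch of why the bias complicates weight-level one-hot analysis (hypothetical single-layer type embedding with made-up shapes, not the actual TypeEmbedNet): without a bias, the embedding of type `i` is exactly row `i` of the weight matrix, so inspecting or interpolating rows of `W` directly corresponds to mixing element types; with a bias, the rows of `W` no longer equal the embeddings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_types, dim = 4, 8            # hypothetical: 4 element types, 8-dim embedding
W = rng.normal(size=(n_types, dim))
b = rng.normal(size=(dim,))

one_hot = np.eye(n_types)

# Without bias: embedding of type i is exactly W[i], so mixing
# one-hot inputs mixes the weight rows directly.
no_bias = one_hot @ W
mix = 0.5 * one_hot[0] + 0.5 * one_hot[1]
assert np.allclose(mix @ W, 0.5 * W[0] + 0.5 * W[1])

# With bias: every embedding is shifted by b, so the rows of W no
# longer equal the embeddings, and reading W alone is misleading.
with_bias = one_hot @ W + b
print(np.allclose(with_bias, no_bias))  # False (unless b == 0)
```

Note that with a nonlinear activation on top, even the bias-free picture breaks down, which is consistent with the reply: removing the bias only buys interpretability if the activation is also fixed to linear.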
No description provided.