
Conversation

@nadvornix
Contributor

  • Adding a skipthoughts model.
  • Running it on the wang/yodaqa/para/semeval-sts/sick2014 datasets.

@@ -0,0 +1,194 @@
"""
A simple model based on skipthoughts sentence embeddings.
Member

Can you please add a literature reference and an appropriate README entry?

Member

Also, can you think of some single-sentence summary of what the skipthoughts model does? Like "A previously proposed model based on bidi-RNN-with-memory trained to predict preceding and followup words", is that remotely correct? My memory is a bit hazy.

@pasky
Member

pasky commented Jun 7, 2016

Brilliant, I'm super-glad that we have this PR here now. :)

However, I think that in order to merge this in a maintainable form, we should discuss the motivation behind not using the standard ptscorers but rather a custom compiled model. Can you please expand on that? What would happen if we used the standard ptscorer with the absdiff merge_mode? I'm afraid that this way the code will quickly go out of date and break, as you note in the xxx comment, so I'm trying to gauge the level of effort required to offload this back to ptscorer.
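
For concreteness, here is a minimal sketch of what an absdiff-style merge amounts to, written in plain NumPy rather than the project's actual ptscorer/Keras plumbing (the function names and the toy logistic scorer below are illustrative assumptions, not the repository's API): the skip-thought vectors of the two sentences are combined by element-wise absolute difference, and the resulting feature vector is fed to a simple scorer.

```python
# Illustrative sketch only -- not the dataset-sts ptscorer API.
import numpy as np

def absdiff_merge(e0, e1):
    """Element-wise absolute difference of two sentence embeddings."""
    return np.abs(e0 - e1)

def score_pair(e0, e1, w, b):
    """Toy scorer: logistic regression over the absdiff features."""
    z = absdiff_merge(e0, e1) @ w + b
    return 1.0 / (1.0 + np.exp(-z))

# Example with random stand-ins for precomputed skip-thought vectors
rng = np.random.default_rng(0)
dim = 4800  # combine-skip skip-thought vectors are 4800-dimensional
e0, e1 = rng.normal(size=dim), rng.normal(size=dim)
w, b = rng.normal(size=dim) * 0.01, 0.0
print(score_pair(e0, e1, w, b))
```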

@pasky
Member

pasky commented Jul 18, 2016

Jirka, just a soft ping if you'd find a few moments to take a look. :)

@nadvornix
Contributor Author

Hi. I am sorry, but I am rather busy over the summer. I think I will not be able to find time to do this.
