
Conversation

@emanjavacas
Contributor

This addresses both issues. I've kept the comments with tensor dimensionality in the computation of the attention weights; just tell me if you'd prefer them removed.

@neubig
Contributor

neubig commented Jan 18, 2017

This is great, but w1dt = w1 * input_mat can be computed just once at the beginning of the sentence and cached. This is a big performance win, so it'd be nice to add that as well (maybe with a comment).
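For reference, a minimal sketch of the caching pattern being suggested, assuming the DyNet Python API; the parameter names (attention_w1, attention_w2, attention_v), dimensions, and helper structure are illustrative assumptions, not the exact code in the PR. The point is that w1dt depends only on the encoder outputs, so it is computed once per sentence and reused at every decoder step:

```python
import dynet as dy

# Hypothetical dimensions for illustration only.
ENC_DIM, DEC_STATE_DIM, ATT_DIM, SEQ_LEN = 8, 6, 5, 4

model = dy.ParameterCollection()
attention_w1 = model.add_parameters((ATT_DIM, ENC_DIM))
attention_w2 = model.add_parameters((ATT_DIM, DEC_STATE_DIM))
attention_v = model.add_parameters((1, ATT_DIM))

def attend(input_mat, dec_state_vec, w1dt):
    # input_mat: (ENC_DIM x SEQ_LEN); w1dt: (ATT_DIM x SEQ_LEN), precomputed once per sentence
    w2 = dy.parameter(attention_w2)
    v = dy.parameter(attention_v)
    # w2dt: (ATT_DIM,) depends on the decoder state, so it is recomputed every step
    w2dt = w2 * dec_state_vec
    # unnormalized: (SEQ_LEN x 1)
    unnormalized = dy.transpose(v * dy.tanh(dy.colwise_add(w1dt, w2dt)))
    att_weights = dy.softmax(unnormalized)
    # context: (ENC_DIM x 1), weighted sum of the encoder columns
    return input_mat * att_weights

dy.renew_cg()
encoder_vecs = [dy.inputVector([0.1] * ENC_DIM) for _ in range(SEQ_LEN)]
input_mat = dy.concatenate_cols(encoder_vecs)  # (ENC_DIM x SEQ_LEN)
w1 = dy.parameter(attention_w1)
w1dt = w1 * input_mat  # cached: computed once per sentence, not once per decoder step
for _ in range(3):  # each decoder step reuses the cached w1dt
    dec_state_vec = dy.inputVector([0.0] * DEC_STATE_DIM)
    context = attend(input_mat, dec_state_vec, w1dt)
```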

@emanjavacas
Contributor Author

Oh, you are right. Here is the new version.

neubig merged commit 95cac6b into clab:master on Jan 20, 2017
@neubig
Contributor

neubig commented Jan 20, 2017

Looks good! Thanks.
