Suppose I have the sentence "this paper is very cool". Should I feed the whole sentence into BERT to extract every word's embedding, and then put each word embedding into the "sentence encoder" of your model? Or should I feed each single word, such as "this", "paper", ..., into BERT to extract its pooled embedding, and then save the 768-dim vector as a txt file?