
As I understand it, Word2Vec is a non-contextual embedding: it maps each word in a global vocabulary to a fixed vector, i.e. it returns embeddings at the word level.
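To illustrate what "non-contextual" means here: the trained model is effectively a fixed lookup table, so a word gets the same vector no matter which sentence it appears in. (The tiny table below is made up for illustration, not real Word2Vec output.)

```python
# A non-contextual embedding is just a fixed lookup table:
# the word "bank" gets the same vector in every sentence.
# (Hypothetical 3-dimensional vectors, for illustration only.)
embedding = {
    "bank":  [0.2, -0.1, 0.7],
    "river": [0.5,  0.3, -0.2],
    "money": [-0.4, 0.6, 0.1],
}

sent_a = ["river", "bank"]   # "bank" as in riverbank
sent_b = ["money", "bank"]   # "bank" as in financial institution

vec_a = embedding["bank"]    # same vector regardless of context
vec_b = embedding["bank"]
print(vec_a == vec_b)
```

A contextual model (e.g. BERT) would instead produce different vectors for "bank" in the two sentences, which is exactly what Word2Vec does not do.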

In the case of Doc2Vec, I believe it is also a non-contextual embedding, but it returns vectors at the document level; internally a document is a union of paragraphs and sentences (i.e. ultimately words).
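A minimal sketch of the idea behind Doc2Vec's PV-DM variant, assuming a toy softmax setup (real implementations such as gensim use negative sampling or hierarchical softmax and many other optimizations): word vectors are shared across the corpus exactly as in Word2Vec, and the only addition is one trainable vector per document that is averaged into the context when predicting each target word.

```python
import numpy as np

# Toy PV-DM sketch (simplified: full softmax, no negative sampling).
# W holds word vectors shared across the corpus, as in Word2Vec.
# D adds one trainable vector per document -- the Doc2Vec ingredient.
rng = np.random.default_rng(0)

docs = [["the", "cat", "sat"], ["the", "dog", "ran"]]
vocab = sorted({w for d in docs for w in d})
w2i = {w: i for i, w in enumerate(vocab)}

dim = 8
W = rng.normal(scale=0.1, size=(len(vocab), dim))  # word-level embeddings
D = rng.normal(scale=0.1, size=(len(docs), dim))   # document-level embeddings
O = rng.normal(scale=0.1, size=(dim, len(vocab)))  # output (softmax) weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.05
for epoch in range(200):
    for d, words in enumerate(docs):
        for t, target in enumerate(words):
            context = [w2i[w] for i, w in enumerate(words) if i != t]
            # PV-DM: average the context word vectors WITH the doc vector
            h = (W[context].sum(axis=0) + D[d]) / (len(context) + 1)
            p = softmax(h @ O)
            g = p.copy()
            g[w2i[target]] -= 1.0          # cross-entropy gradient
            dh = O @ g / (len(context) + 1)
            O -= lr * np.outer(h, g)
            for c in context:
                W[c] -= lr * dh            # word vectors update (Word2Vec-style)
            D[d] -= lr * dh                # doc vector updates jointly

# D[0], D[1] are document-level vectors; W still holds word-level vectors.
print(D.shape, W.shape)
```

So the "implementation style" is essentially Word2Vec's CBOW with an extra paragraph-id input; at inference time, the vector for a new document is obtained by freezing W and O and running the same gradient steps on a fresh row of D only.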

What is the implementation style of Doc2Vec?

Is there any difference between Doc2Vec, Sent2Vec and Word2Vec? (Since for all of them the word/subword is the basic unit.)

Please share more insights about them.

tovijayak
