Hi Felipe,

As you suggested, I used the AffectiveTweets package to get word embeddings for tweets in Weka, using its default parameters, but I couldn’t understand what the generated dimensions mean (embedding-0 to embedding-99)...

Do you recommend any reading?

Thanks a lot!

Cheers,
Jonnathan.

On 22 Aug 2018, at 01:16, Felipe Bravo <felipebravom@gmail.com> wrote:

Hi,
Yes, you can get a document-level representation from pre-trained embeddings using the AffectiveTweets package (https://github.com/felipebravom/AffectiveTweets), or you can even train your own embeddings using the deeplearning package (https://deeplearning.cms.waikato.ac.nz/).
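For instance, here is a minimal (untested) Java sketch of that pipeline, assuming the embedding filter's default settings and a placeholder ARFF file; the dense embedding attributes it produces can then be fed straight into SMO, Weka's SVM implementation:

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.SMO;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.TweetToEmbeddingsFeatureVector;

public class EmbeddingsWithSVM {
    public static void main(String[] args) throws Exception {
        // Load a tweet dataset; "tweets.arff" is a placeholder name.
        Instances data = DataSource.read("tweets.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // With its default parameters the filter aggregates the
        // pre-trained word vectors of each tweet into one dense
        // attribute per embedding dimension (embedding-0 ... embedding-99
        // with the 100-dimensional model it ships with).
        TweetToEmbeddingsFeatureVector toEmbeddings =
                new TweetToEmbeddingsFeatureVector();
        toEmbeddings.setInputFormat(data);
        Instances dense = Filter.useFilter(data, toEmbeddings);

        // The dense vectors are ordinary numeric attributes, so any
        // Weka classifier can consume them; SMO is Weka's SVM.
        SMO svm = new SMO();
        Evaluation eval = new Evaluation(dense);
        eval.crossValidateModel(svm, dense, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}

Any other Weka classifier would work on the filtered data too.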
Cheers,
Felipe

On Wed, Aug 22, 2018 at 11:03 AM Jonnathan Carvalho <joncarv@gmail.com> wrote:
Hi, All!

I’m trying to figure out what word embeddings are...

Is it possible to use the dense feature representation generated by this technique with learning algorithms such as SVM, or only with neural networks?

Does Weka support word embeddings?

Thanks!
Cheers!


--
Cheers,
Felipe
_______________________________________________
Wekalist mailing list
Send posts to: Wekalist@list.waikato.ac.nz
List info and subscription status: https://list.waikato.ac.nz/mailman/listinfo/wekalist
List etiquette: http://www.cs.waikato.ac.nz/~ml/weka/mailinglist_etiquette.html