Pretraining and Fine-Tuning Strategies for Sentiment Analysis of Latvian Tweets
In this paper, we present several pre-training strategies that improve accuracy on the sentiment-classification task. We first pre-train language representation models using these strategies and then fine-tune them on the downstream task. Experimental results on a time-balanced tweet evaluation set show an improvement over the previous technique: we achieve 76% accuracy for sentiment analysis of Latvian tweets, a substantial improvement over previous work.
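The two-phase workflow the abstract describes (pre-train a representation model on unlabeled text, then fine-tune it on labeled sentiment data) can be sketched in miniature. The snippet below is an illustration only, not the paper's method: the corpus, labels, and the SVD-of-co-occurrence "pre-training" are toy stand-ins for real language-model pre-training (e.g. BERT-style masked prediction), and the fine-tuning head is a plain logistic regression.

```python
import numpy as np

# Toy corpus standing in for unlabeled Latvian tweets (hypothetical
# example data, not from the paper).
unlabeled = [
    "labs filma patika loti",
    "slikts serviss nekad vairs",
    "patika labs koncerts",
    "slikts laiks nekad patika",
]
# Tiny labeled sentiment set (1 = positive, 0 = negative), also hypothetical.
labeled = [("labs patika", 1), ("slikts nekad", 0),
           ("loti labs", 1), ("nekad slikts", 0)]

vocab = sorted({w for s in unlabeled for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8

# --- Phase 1: "pre-training" -- learn word vectors from co-occurrence
# counts on the unlabeled corpus (a crude stand-in for language-model
# pre-training).
C = np.zeros((V, V))
for s in unlabeled:
    ws = s.split()
    for i, w in enumerate(ws):
        for j in range(max(0, i - 2), min(len(ws), i + 3)):
            if i != j:
                C[idx[w], idx[ws[j]]] += 1.0
# Low-rank factorization of the log-smoothed counts yields embeddings.
U, S, _ = np.linalg.svd(np.log1p(C))
emb = U[:, :D] * S[:D]

def featurize(text):
    """Average pre-trained word vectors; unseen words are skipped."""
    vecs = [emb[idx[w]] for w in text.split() if w in idx]
    return np.mean(vecs, axis=0) if vecs else np.zeros(D)

# --- Phase 2: fine-tuning -- a logistic-regression head trained on the
# small labeled set, on top of the pre-trained features.
X = np.stack([featurize(t) for t, _ in labeled])
y = np.array([lab for _, lab in labeled], dtype=float)
w, b = np.zeros(D), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    g = p - y                                 # cross-entropy gradient
    w -= 0.5 * (X.T @ g) / len(y)
    b -= 0.5 * g.mean()

acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == (y > 0.5))
print(f"train accuracy: {acc:.2f}")
```

The design point the sketch illustrates is the separation of concerns: representations come from cheap unlabeled data, while the labeled sentiment data only has to train the small task head.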
2019 ◽ Vol 7 (6) ◽ pp. 77-83