Sentiment analysis plays an important role in social media analysis. Although much progress has been made, most existing work focuses on either the text modality or the audio modality alone. In this paper, we propose a multimodal sentiment analysis architecture based on RNNs and feature selection. It exploits a joint representation of textual, audio, and video features to perform multimodal sentiment analysis. A feature selection component picks out informative features from the redundant, heterogeneous unimodal features, improving the performance of the sentiment analysis model. In addition, the RNN architecture captures the dependencies and information flow among the utterances of a video within a single modality, and performs feature-level modality fusion at every timestep. The proposed method achieves better sentiment prediction performance and improves over the baseline.
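The pipeline described above — concatenating unimodal features per utterance, selecting informative features, then running an RNN over the utterance sequence — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimensions are invented, the selection rule (keep the highest-variance columns) is a simple stand-in for the paper's feature selection component, and the plain tanh RNN stands in for whatever recurrent cell the architecture actually uses.

```python
import numpy as np

def select_features(X, k):
    """Toy feature selection: keep the k highest-variance columns
    (a stand-in for the paper's feature selection component)."""
    idx = np.argsort(X.var(axis=0))[::-1][:k]
    return X[:, np.sort(idx)]

def rnn_over_utterances(X, hidden=8, seed=0):
    """Plain tanh RNN over the utterance sequence; each timestep
    consumes the fused multimodal features of one utterance."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    Wx = rng.normal(scale=0.1, size=(d, hidden))
    Wh = rng.normal(scale=0.1, size=(hidden, hidden))
    h = np.zeros(hidden)
    for x in X:                      # dependency flows across utterances
        h = np.tanh(x @ Wx + h @ Wh)
    return h

# Toy video: 5 utterances; feature-level fusion by concatenating
# hypothetical text (10-d), audio (6-d) and video (6-d) features.
rng = np.random.default_rng(1)
text = rng.normal(size=(5, 10))
audio = rng.normal(size=(5, 6))
video = rng.normal(size=(5, 6))

fused = np.concatenate([text, audio, video], axis=1)  # shape (5, 22)
selected = select_features(fused, k=12)               # shape (5, 12)
h = rnn_over_utterances(selected)                     # final utterance state
```

The final hidden state `h` summarizes the whole utterance sequence and would feed a sentiment classifier head in a full model.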