With the popularity of social media, sarcasm has become a common figure of speech on social media platforms, and many studies have utilized text and visual information for multimodal sarcasm detection. This paper proposes a method based on a cross-modal attention mechanism. Specifically, we first extract multimodal features, use an attention mechanism to focus on the inconsistent information between modalities, and then use this inconsistent information for prediction. Experimental results show that our method improves performance on the Twitter sarcasm dataset.
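The paper's exact architecture is not reproduced here; as a rough illustration of the idea described in the abstract, the sketch below shows one way cross-modal attention can surface text-image inconsistency from pre-extracted features. All module names, dimensions, and the residual-difference reading of "inconsistent information" are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch, assuming pre-extracted feature sequences (e.g. from a
# text encoder such as BERT and an image encoder such as a ViT/ResNet).
# Dimensions and the inconsistency formulation are hypothetical.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Text queries attend over image features; a full model might
        # also mirror this in the image-to-text direction.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(dim, 2)  # sarcastic vs. not sarcastic

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor):
        # text_feats: (batch, text_len, dim); image_feats: (batch, regions, dim)
        attended, _ = self.attn(query=text_feats, key=image_feats,
                                value=image_feats)
        # One plausible reading of "inconsistent information": the residual
        # between the text features and what the image can explain.
        inconsistency = text_feats - attended
        pooled = inconsistency.mean(dim=1)  # (batch, dim)
        return self.classifier(pooled)

# Usage with random placeholder features:
model = CrossModalAttention()
text = torch.randn(4, 32, 768)   # e.g. 32 token embeddings per tweet
image = torch.randn(4, 49, 768)  # e.g. 7x7 grid of image-region features
logits = model(text, image)      # shape: (4, 2)
```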