%0 Conference Proceedings
%T Multi-task Learning for Multi-modal Emotion Recognition and Sentiment Analysis
%A Akhtar, Md Shad
%A Chauhan, Dushyant
%A Ghosal, Deepanway
%A Poria, Soujanya
%A Ekbal, Asif
%A Bhattacharyya, Pushpak
%Y Burstein, Jill
%Y Doran, Christy
%Y Solorio, Thamar
%S Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
%D 2019
%8 June
%I Association for Computational Linguistics
%C Minneapolis, Minnesota
%F akhtar-etal-2019-multi
%X Related tasks are often inter-dependent and perform better when solved in a joint framework. In this paper, we present a deep multi-task learning framework that jointly performs both sentiment and emotion analysis. The multi-modal inputs (i.e., text, acoustic and visual frames) of a video convey diverse and distinctive information, and usually do not contribute equally to the decision making. We propose a context-level inter-modal attention framework for simultaneously predicting the sentiment and expressed emotions of an utterance. We evaluate our proposed approach on the CMU-MOSEI dataset for multi-modal sentiment and emotion analysis. Evaluation results suggest that the multi-task learning framework offers an improvement over the single-task framework. The proposed approach reports new state-of-the-art performance for both sentiment analysis and emotion analysis.
%R 10.18653/v1/N19-1034
%U https://aclanthology.org/N19-1034
%U https://doi.org/10.18653/v1/N19-1034
%P 370-379
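
Note: the abstract describes a multi-task model with inter-modal attention over text, acoustic and visual utterance features. The following is a minimal, hypothetical PyTorch sketch of that general idea, not the authors' released implementation; feature dimensions, the attention formulation, the pooling step, and the two task heads are all illustrative assumptions.

# Sketch only: per-modality projections, a simple inter-modal attention step,
# and two jointly trained heads (sentiment + emotions). All sizes are assumed.
import torch
import torch.nn as nn

class MultiTaskInterModalModel(nn.Module):
    def __init__(self, dim=128, num_emotions=6):
        super().__init__()
        # One projection per modality; input sizes are placeholders.
        self.proj = nn.ModuleDict({
            "text": nn.Linear(300, dim),
            "acoustic": nn.Linear(74, dim),
            "visual": nn.Linear(35, dim),
        })
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # The two tasks share the fused representation (multi-task learning).
        self.sentiment_head = nn.Linear(dim, 1)           # polarity score
        self.emotion_head = nn.Linear(dim, num_emotions)  # multi-label emotions

    def forward(self, text, acoustic, visual):
        # Stack the three modality vectors as a length-3 "sequence" per utterance.
        feats = torch.stack(
            [self.proj["text"](text),
             self.proj["acoustic"](acoustic),
             self.proj["visual"](visual)], dim=1)         # (batch, 3, dim)
        # Each modality attends over the others; attention weights let the
        # modalities contribute unequally to the decision.
        fused, _ = self.attn(feats, feats, feats)
        pooled = fused.mean(dim=1)                         # (batch, dim)
        return self.sentiment_head(pooled), self.emotion_head(pooled)

model = MultiTaskInterModalModel()
sent, emo = model(torch.randn(2, 300), torch.randn(2, 74), torch.randn(2, 35))
print(sent.shape, emo.shape)  # torch.Size([2, 1]) torch.Size([2, 6])

In a joint training loop, the sentiment and emotion losses would simply be summed (possibly with weights) before backpropagation; that weighting scheme is likewise an assumption here.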