We propose a multi-task deep neural network that amalgamates texts and images to predict discrete emotion labels associated with comic scenes.
To deal with the constraints of comic emotion analysis, we propose a multi-task-based framework, namely EmoComicNet, to fuse multi-modal information (i.e., text and image).
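The snippets above describe EmoComicNet only at a high level: text and image features are fused and fed to shared multi-task heads. As a rough illustration, the sketch below shows one generic way such multi-modal, multi-task fusion can be wired up in PyTorch; the encoders, dimensions, auxiliary head, and all names are assumptions for illustration, not the published EmoComicNet architecture.

```python
# Illustrative sketch only -- not the authors' EmoComicNet code. It assumes a
# bag-of-embeddings text branch over dialogue tokens and a small CNN image
# branch; all module names and dimensions are hypothetical.
import torch
import torch.nn as nn

class MultiModalEmotionNet(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, img_channels=3,
                 fused_dim=256, num_emotions=8):
        super().__init__()
        # Text branch: embed dialogue tokens and average them per panel.
        self.text_embed = nn.EmbeddingBag(vocab_size, embed_dim)
        # Image branch: a tiny CNN over the comic panel.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(img_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Fuse the two modalities by concatenation plus a projection layer.
        self.fusion = nn.Sequential(
            nn.Linear(embed_dim + 64, fused_dim),
            nn.ReLU(),
        )
        # Two task heads share the fused representation (multi-task setup):
        # discrete emotion labels and an assumed binary auxiliary task.
        self.emotion_head = nn.Linear(fused_dim, num_emotions)
        self.aux_head = nn.Linear(fused_dim, 2)

    def forward(self, token_ids, panel_image):
        text_feat = self.text_embed(token_ids)        # (B, embed_dim)
        image_feat = self.image_encoder(panel_image)  # (B, 64)
        fused = self.fusion(torch.cat([text_feat, image_feat], dim=1))
        return self.emotion_head(fused), self.aux_head(fused)


# Quick shape check with random inputs.
model = MultiModalEmotionNet()
tokens = torch.randint(0, 10000, (4, 20))      # 4 panels, 20 dialogue tokens each
panels = torch.randn(4, 3, 128, 128)           # 4 RGB panel crops
emotion_logits, aux_logits = model(tokens, panels)
print(emotion_logits.shape, aux_logits.shape)  # torch.Size([4, 8]) torch.Size([4, 2])
```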
On Jun 1, 2024, Arpita Dutta and others published EmoComicNet: A multi-task model for comic emotion recognition.
EmoComicNet: A multi-task model for comic emotion recognition. Pattern Recognition, 2024, 110261. https://doi.org/10.1016/j.patcog.2024.110261
As a multi-modal analysis task, the competition proposes to extract the emotions of comic characters in comic scenes based on visual information and the text in speech balloons.
We propose a multi-task learning architecture for gait-related recognition problems and achieve better performance by sharing knowledge across tasks.
EmoComicNet: A multi-task model for comic emotion recognition. A Dutta, S Biswas, AK Das. Pattern Recognition 150, 110261, 2024.
In this paper, we present a deep multi-task learning framework that jointly performs both sentiment and emotion analysis.
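For a joint sentiment-and-emotion setup like the one described in this snippet, a common (assumed) training recipe is to minimize a weighted sum of the two task losses so the shared layers learn from both tasks; the weight below is a hypothetical hyperparameter, not a value from the paper.

```python
# Hypothetical joint-training objective for two task heads (emotion +
# sentiment) that share a backbone; lambda_aux is an assumed trade-off weight.
import torch
import torch.nn as nn

emotion_criterion = nn.CrossEntropyLoss()
sentiment_criterion = nn.CrossEntropyLoss()
lambda_aux = 0.5  # assumed weight for the auxiliary sentiment task

def joint_loss(emotion_logits, sentiment_logits, emotion_labels, sentiment_labels):
    # Weighted sum of task losses; gradients reach the shared layers from both.
    return (emotion_criterion(emotion_logits, emotion_labels)
            + lambda_aux * sentiment_criterion(sentiment_logits, sentiment_labels))

# Toy example: 4 samples, 8 emotion classes, 2 sentiment classes.
loss = joint_loss(torch.randn(4, 8), torch.randn(4, 2),
                  torch.randint(0, 8, (4,)), torch.randint(0, 2, (4,)))
print(loss.item())
```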