
Unimodal Training for Multimodal Meme Sentiment Classification: Performance Benchmarking


Too Long; Didn't Read

This study introduces a novel approach, using unimodal training to enhance multimodal meme sentiment classifiers, significantly improving performance and efficiency in meme sentiment analysis.

Authors:

(1) Muzhaffar Hazman, University of Galway, Ireland;

(2) Susan McKeever, Technological University Dublin, Ireland;

(3) Josephine Griffith, University of Galway, Ireland.

Abstract and Introduction

Related Works

Methodology

Results

Limitations and Future Works

Conclusion, Acknowledgments, and References

A Hyperparameters and Settings

B Metric: Weighted F1-Score

C Architectural Details

D Performance Benchmarking

E Contingency Table: Baseline vs. Text-STILT

D Performance Benchmarking

Current competing approaches show a small spread of Weighted F1-scores (see Table 7), and the performance improvement offered by Text-STILT is similarly small. This narrow range across contemporary approaches suggests that a significant portion of memes remains challenging to classify.
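For readers unfamiliar with the metric used in this comparison (detailed in Appendix B), the following is a minimal sketch of how a Weighted F1-score could be computed with scikit-learn; the class labels and predictions shown are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch (not from the paper): computing a Weighted F1-score
# with scikit-learn's f1_score using average="weighted".
from sklearn.metrics import f1_score

# Hypothetical three-class meme sentiment labels: 0=negative, 1=neutral, 2=positive
y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 1, 2, 1, 1, 0, 2, 2]

# average="weighted" weights each class's F1 by its support (number of true
# instances), so class imbalance is reflected proportionally in the score.
weighted_f1 = f1_score(y_true, y_pred, average="weighted")
print(f"Weighted F1: {weighted_f1:.4f}")
```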


Table 7: The mean and maximum Weighted F1-scores from our Baseline and Text-STILT approaches against various SOTA solutions.


This paper is available on arXiv under a CC 4.0 license.