Unsupervised multimodal learning for dependency-free personality recognition

S. Ghassemi, T. Zhang, W. van Breda, A. Koutsoumpis, J. Oostrom, D. Holtrop, R.E. de Vries

Research output: Contribution to journal › Article › Scientific › peer-review

2 Citations (Scopus)

Abstract

Recent advances in AI-based learning models have significantly increased the accuracy of Automatic Personality Recognition (APR). However, these methods require either training data from the same subject or meta-information from the training set to learn personality-related features (i.e., they are subject-dependent). The variance of the extracted features across subjects makes it difficult to design a dependency-free system for APR. To address this problem, we present an unsupervised multimodal learning framework to infer personality traits from the audio, visual, and verbal modalities. Our method both extracts handcrafted features and transfers deep learning-based embeddings from other tasks (e.g., emotion recognition) to recognize personality traits. Since these representations are extracted locally in the time domain, we present an unsupervised temporal aggregation method to aggregate the extracted features over the temporal dimension. We evaluate our method on the ChaLearn dataset, the most widely referenced dataset for APR, using a dependency-free split of the dataset. Our results show that the proposed feature extraction and temporal aggregation modules do not require personality annotations during training, yet still outperform other state-of-the-art baseline methods. We also address the problem of subject-dependency in the original split of the ChaLearn dataset. The newly proposed split of the dataset (i.e., the data for training, validation, and testing) can benefit the community by providing a more accurate way to validate the subject-generalizability of APR algorithms.
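The full paper is not reproduced on this page, so the snippet below is only a minimal, hypothetical sketch of the kind of unsupervised temporal aggregation the abstract describes: frame-level features from the audio, visual, and verbal modalities are pooled over time with simple statistics and concatenated into one fixed-length clip vector, without using any personality annotations. The function names, pooling choices, and feature dimensions are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch (assumed, not the authors' implementation) of label-free
# temporal aggregation of frame-level multimodal features.
import numpy as np


def pool_over_time(frames: np.ndarray) -> np.ndarray:
    """Aggregate a (T, D) sequence of frame-level features into a fixed-length
    vector using per-dimension mean and standard deviation (2*D values)."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])


def aggregate_clip(modalities: dict[str, np.ndarray]) -> np.ndarray:
    """Pool each modality's frame sequence over time, then concatenate the
    pooled vectors into a single clip-level representation."""
    pooled = [pool_over_time(feats) for _, feats in sorted(modalities.items())]
    return np.concatenate(pooled)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clip = {
        "audio": rng.normal(size=(300, 40)),   # e.g., 300 frames of 40-dim audio features
        "visual": rng.normal(size=(75, 128)),  # e.g., 75 frames of 128-dim face embeddings
        "verbal": rng.normal(size=(50, 300)),  # e.g., 50 word vectors of dimension 300
    }
    print(aggregate_clip(clip).shape)  # (2*40 + 2*128 + 2*300,) = (936,)
```

Because the pooling step uses no personality labels, a clip vector of this form can be computed for subjects never seen during training, which is exactly the property a subject-disjoint (dependency-free) split of the ChaLearn data is meant to test.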
Original language: English
Pages (from-to): 1-14
Journal: IEEE Transactions on Affective Computing
DOIs
Publication status: Accepted/In press - 2023

Keywords

  • Annotations
  • Deep learning
  • Feature extraction
  • Feature fusion
  • Prediction algorithms
  • Testing
  • Training
  • Visualization
  • generalization performance
  • multimedia signal processing
  • multimodal systems
  • personality assessment
  • transfer learning
  • unsupervised learning
