FATRER

[ECAI 2023] Official PyTorch implementation of "FATRER: Full-Attention Topic Regularizer for Accurate and Robust Conversational Emotion Recognition" [paper]

Framework

The full-attention topic regularizer (FATRER) introduces an emotion-related global view when modeling the local context in a conversation. A joint topic modeling strategy implements the regularization from both the representation and the loss perspectives. To avoid over-regularization, FATRER drops the constraints on prior distributions that exist in traditional topic modeling and performs probabilistic approximation based entirely on attention alignment. Experiments show that FATRER obtains more favorable results than state-of-the-art models and gains convincing robustness.
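
For illustration, here is a minimal sketch of the joint objective described above: an emotion classification loss plus a topic reconstruction term computed from attention-aligned utterance states, with no prior imposed on the topic distribution. The names used here (FullAttentionTopicRegularizer, token_states, bow_targets) are illustrative assumptions, not the repository's actual code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FullAttentionTopicRegularizer(nn.Module):
        """Sketch: reconstruct a bag-of-words topic target from attention-aligned
        utterance states, without constraining the topic distribution to a prior."""

        def __init__(self, hidden_dim, vocab_size, num_heads=8):
            super().__init__()
            self.topic_attention = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
            self.topic_query = nn.Parameter(torch.randn(1, 1, hidden_dim))
            self.bow_decoder = nn.Linear(hidden_dim, vocab_size)

        def forward(self, token_states, bow_targets):
            # Align a learned topic query against all token states (full attention).
            query = self.topic_query.expand(token_states.size(0), -1, -1)
            topic_repr, _ = self.topic_attention(query, token_states, token_states)
            # Reconstruct the utterance's bag-of-words from the topic representation.
            logits = self.bow_decoder(topic_repr.squeeze(1))
            return -(bow_targets * F.log_softmax(logits, dim=-1)).sum(-1).mean()

    # Joint objective (the 0.5 weight is an arbitrary illustration):
    # loss = F.cross_entropy(emotion_logits, labels) + 0.5 * topic_reg(token_states, bow_targets)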

News

  • [2023-10-02]: FATRER will be presented orally (video) at ECAI 2023 (programme), in Technical Session 1 at 09:30 AM in Room S4A and in Poster Session 1 at 11:15 AM in Hall S3B (poster).
  • [2023-07-15]: FATRER has been accepted by ECAI 2023 (Paper 223).

Prerequisites

  • Python 3.9.12
  • PyTorch 1.10.1+cu113
  pip install -r requirements.txt

Usage

Benchmark Datasets

  • IEMOCAP / MELD / EmoryNLP / DailyDialog

Generalization results on four datasets

[figure: generalization results]

Execution

IEMOCAP

  1. FATRER-Multi:
    # train
    python main.py conf/FATRER_multi.yaml

    # train and conduct attack (U+C) based on PWWS (every 50 epochs):
    python main.py conf/FATRER_multi_pwws_attack.yaml

    # train and conduct attack (U+C) based on TextFooler (every 50 epochs):
    python main.py conf/FATRER_multi_textfooler_attack.yaml

    # train and conduct attack (U+C) based on TextBugger (every 50 epochs):
    python main.py conf/FATRER_multi_textbugger_attack.yaml
  2. FATRER-Multi (without topic-oriented regularization):
    # train
    python main.py conf/FATRER_multi_wo_topic.yaml
  3. FATRER-Single:
    # train
    python main.py conf/FATRER_single.yaml
  4. FATRER-Single (without topic-oriented regularization):
    # train
    python main.py conf/FATRER_single_wo_topic.yaml
  5. DialTRM (baseline):
    # train
    python main.py conf/Baseline.yaml
  6. VAE (topic-oriented); a sketch of the prior regularization these baselines use follows this list:
    # train VAE (Laplace)
    python main.py conf/VAE_Laplace.yaml

    # train VAE (Dirichlet)
    python main.py conf/VAE_Dirichlet.yaml

    # train VAE (Gamma)
    python main.py conf/VAE_Gamma.yaml

    # train VAE (LogNormal)
    python main.py conf/VAE_LogNormal.yaml
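
For context, these VAE baselines regularize the inferred topic distribution toward a fixed prior (Laplace, Dirichlet, Gamma, or LogNormal) through a KL term, which is exactly the prior constraint that FATRER drops. Below is a minimal sketch of such a KL-regularized topic loss for the Dirichlet case; the other variants swap in the corresponding torch.distributions classes. The function and argument names (vae_topic_loss, topic_logits, bow_targets) are illustrative assumptions, not the repository's API.

    import torch
    import torch.nn.functional as F
    from torch.distributions import Dirichlet, kl_divergence

    def vae_topic_loss(topic_logits, recon_logits, bow_targets,
                       prior_concentration=0.02, kl_weight=1.0):
        # Approximate posterior over topics, parameterized by the encoder output.
        posterior = Dirichlet(F.softplus(topic_logits) + 1e-6)
        # Fixed symmetric Dirichlet prior: the constraint that FATRER removes.
        prior = Dirichlet(torch.full_like(topic_logits, prior_concentration))
        # Bag-of-words reconstruction plus KL regularization toward the prior.
        recon = -(bow_targets * F.log_softmax(recon_logits, dim=-1)).sum(-1).mean()
        kl = kl_divergence(posterior, prior).mean()
        return recon + kl_weight * kl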

MELD

  1. FATRER-Multi:
    python main.py conf/FATRER_multi_MELD.yaml
  2. FATRER-Single:
    python main.py conf/FATRER_single_MELD.yaml

EmoryNLP

  1. FATRER-Multi:
    python main.py conf/FATRER_multi_EmoryNLP.yaml
  2. FATRER-Single:
    python main.py conf/FATRER_single_EmoryNLP.yaml

DailyDialog

  1. FATRER-Multi:
    python main.py conf/FATRER_multi_DailyDialog.yaml
  2. FATRER-Single:
    python main.py conf/FATRER_single_DailyDialog.yaml

Cite us

Please cite the following paper if you find this code useful in your work.

@article{mao2023fatrer,
  title={FATRER: Full-Attention Topic Regularizer for Accurate and Robust Conversational Emotion Recognition},
  author={Mao, Yuzhao and Lu, Di and Wang, Xiaojie and Zhang, Yang},
  journal={arXiv preprint arXiv:2307.12221},
  year={2023}
}

License

MIT license
