---
license: mit
datasets:
- mrm8488/goemotions
- IconicAI/DDD
language:
- en
metrics:
- accuracy
- f1
base_model:
- Mango-Juice/trpg_mlm
- microsoft/deberta-v3-large
library_name: transformers
model-index:
  - name: trpg_emotion_classification
    results:
      - task:
          type: text-classification
        dataset:
          name: IconicAI/DDD (custom subset manually labeled)
          type: custom
          split: test
          config: csv
        metrics:
          - type: accuracy
            value: 0.929
          - type: f1
            value: 0.476
            name: f1 macro
---

# GoEmotions Fine-tuned Model

This is a multi-label emotion classification model fine-tuned on the GoEmotions dataset and on TRPG (tabletop role-playing game) sentences.

## Model Information
- **Base Model**: Mango-Juice/trpg_mlm (itself based on microsoft/deberta-v3-large)
- **Task**: Multi-label Emotion Classification
- **Labels**: 28 emotion labels
- **Training**: Two-stage fine-tuning (stage 1: GoEmotions data; stage 2: TRPG sentence data); a sketch of the recipe follows below
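
The training scripts themselves are not published. Below is a minimal sketch of the stage-1 recipe using the `Trainer` API; the toy data and hyperparameters are assumptions, not the authors' values. Stage 2 would repeat the same recipe on TRPG sentences, starting from the stage-1 checkpoint.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Stage 1: put a fresh 28-way multi-label head on the MLM base.
model = AutoModelForSequenceClassification.from_pretrained(
    "Mango-Juice/trpg_mlm",
    num_labels=28,
    problem_type="multi_label_classification",  # sigmoid outputs + BCE loss
)
tokenizer = AutoTokenizer.from_pretrained("Mango-Juice/trpg_mlm")

# Toy GoEmotions-style examples with 28-dim multi-hot float labels;
# replace with the real annotated data.
raw = Dataset.from_dict({
    "text": ["Thanks, that really helps!", "I can't believe this happened."],
    "labels": [[0.0] * 28, [0.0] * 28],
})
train_ds = raw.map(
    lambda b: tokenizer(b["text"], truncation=True,
                        padding="max_length", max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./stage1-goemotions",
                           num_train_epochs=3, learning_rate=2e-5),
    train_dataset=train_ds,
)
trainer.train()
trainer.save_model("./stage1-goemotions")  # stage 2 would load from here
```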

## Emotion Labels
- admiration
- amusement
- anger
- annoyance
- approval
- caring
- confusion
- curiosity
- desire
- disappointment
- disapproval
- disgust
- embarrassment
- excitement
- fear
- gratitude
- grief
- joy
- love
- nervousness
- optimism
- pride
- realization
- relief
- remorse
- sadness
- surprise
- neutral

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Mango-Juice/trpg_emotion_classification")
model = AutoModelForSequenceClassification.from_pretrained("Mango-Juice/trpg_emotion_classification")

# Inference
def predict_emotions(text):
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
        probs = torch.sigmoid(logits).cpu().numpy()[0]

    emotion_labels = [
        'admiration', 'amusement', 'anger', 'annoyance', 'approval', 'caring',
        'confusion', 'curiosity', 'desire', 'disappointment', 'disapproval',
        'disgust', 'embarrassment', 'excitement', 'fear', 'gratitude', 'grief',
        'joy', 'love', 'nervousness', 'optimism', 'pride', 'realization',
        'relief', 'remorse', 'sadness', 'surprise', 'neutral'
    ]
    return {emotion: float(prob) for emotion, prob in zip(emotion_labels, probs)}

# Example
text = "I am so happy today!"
emotions = predict_emotions(text)
print(emotions)
```
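
Because the model is multi-label, each probability is independent; to turn scores into predicted labels you can simply threshold them. A minimal sketch building on `predict_emotions` above, with an illustrative cutoff of 0.5 (the model card does not specify one):

```python
# Hypothetical post-processing: keep labels whose sigmoid probability
# clears a cutoff; 0.5 is illustrative, not the authors' value.
def top_emotions(text, threshold=0.5):
    scores = predict_emotions(text)
    predicted = {e: p for e, p in scores.items() if p >= threshold}
    if not predicted:  # fall back to the single best label
        best = max(scores, key=scores.get)
        predicted = {best: scores[best]}
    return dict(sorted(predicted.items(), key=lambda kv: kv[1], reverse=True))

print(top_emotions("I am so happy today!"))
```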

## Performance
- On a manually labeled test subset of IconicAI/DDD, the model reaches 0.929 accuracy and 0.476 macro F1 (see the metadata above).
- Data augmentation was applied to minority classes; a back-translation sketch follows.
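
The augmentation pipeline itself is not published. Below is a minimal back-translation sketch (en -> fr -> en) using public MarianMT models; the pivot language and model choices are assumptions:

```python
from transformers import MarianMTModel, MarianTokenizer

def _translate(texts, model_name):
    tok = MarianTokenizer.from_pretrained(model_name)
    mdl = MarianMTModel.from_pretrained(model_name)
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    out = mdl.generate(**batch, max_new_tokens=128)
    return tok.batch_decode(out, skip_special_tokens=True)

def back_translate(texts):
    # An en -> fr -> en round trip yields paraphrase-like variants.
    french = _translate(texts, "Helsinki-NLP/opus-mt-en-fr")
    return _translate(french, "Helsinki-NLP/opus-mt-fr-en")

print(back_translate(["The wizard raises her staff and the cavern trembles."]))
```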

## Training Details
- **Data Augmentation**: Oversampling of minority classes via paraphrasing and back-translation (sketched above)
- **Loss Function**: Focal loss with label smoothing (sketched below)
- **Optimizer**: AdamW
- **Scheduler**: ReduceLROnPlateau
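
The exact loss hyperparameters are not stated. A minimal sketch of a multi-label focal loss with label smoothing, using common formulations (the `gamma` and `epsilon` values are assumptions):

```python
import torch
import torch.nn.functional as F

def focal_bce_with_smoothing(logits, targets, gamma=2.0, epsilon=0.1):
    # Smooth hard 0/1 targets toward 0.5: y' = y * (1 - eps) + eps / 2.
    targets = targets * (1.0 - epsilon) + 0.5 * epsilon
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    # p_t: probability the model assigns to the (smoothed) target.
    p = torch.sigmoid(logits)
    p_t = targets * p + (1.0 - targets) * (1.0 - p)
    # Down-weight easy examples by (1 - p_t)^gamma, as in focal loss.
    return ((1.0 - p_t) ** gamma * bce).mean()

logits = torch.randn(4, 28)   # batch of 4 texts, 28 emotion labels
targets = torch.zeros(4, 28)
targets[0, 5] = 1.0           # toy multi-hot annotation
print(focal_bce_with_smoothing(logits, targets))
```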