---
datasets:
- samirmsallem/wiki_coherence_de
language:
- de
base_model:
- deepset/gbert-base
pipeline_tag: text-classification
library_name: transformers
tags:
- science
- coherence
- cohesion
- german
metrics:
- accuracy
model-index:
- name: checkpoints
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: samirmsallem/wiki_coherence_de
      type: samirmsallem/wiki_coherence_de
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.943352215928024
---

## Text classification model for coherence evaluation in German scientific texts

**gbert-base-coherence_evaluation** is a German sequence classification model for the scientific domain, fine-tuned from [gbert-base](https://huggingface.co/deepset/gbert-base).
It was trained on a custom annotated dataset of around 12,000 training and 3,000 test examples containing coherent and incoherent text sequences from German Wikipedia articles.


Compared to this model, the [large version](https://huggingface.co/samirmsallem/gbert-large-coherence_evaluation) achieved a slightly higher peak accuracy (95.30%) on the validation set, observed at epoch 7. The base model, however, reached its lowest evaluation loss (0.2347) earlier in training, suggesting that it converges faster but may generalize slightly worse. These findings can inform model selection depending on whether inference efficiency or accuracy is prioritized.


| Text Classification Tag | Text Classification Label | Description                          |
| :----:                  | :----:                    | :----:                               |
| 0                       | INCOHERENT                | The text lacks coherence or cohesion. |
| 1                       | COHERENT                  | The text is coherent and cohesive.    |
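As a minimal usage sketch (the `text-classification` pipeline task and the model id `samirmsallem/gbert-base-coherence_evaluation` are assumed from this card's metadata), the model can be queried with the `transformers` pipeline API:

```python
from transformers import pipeline

# Label mapping per the table above.
ID2LABEL = {0: "INCOHERENT", 1: "COHERENT"}


def classify_coherence(text: str):
    """Classify a German text sequence as coherent or incoherent.

    Model id assumed from this card; weights are downloaded on first use.
    """
    clf = pipeline(
        "text-classification",
        model="samirmsallem/gbert-base-coherence_evaluation",
    )
    return clf(text)


if __name__ == "__main__":
    # Example input: two sentences with no logical connection.
    print(classify_coherence(
        "Der Himmel ist blau. Deshalb bestehen Bananen aus Metall."
    ))
```

The pipeline returns a list of dicts with `label` and `score` keys; the labels correspond to the two classes in the table above.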


### Training
Training followed a 10-epoch fine-tuning schedule, with evaluation after each epoch:

| Epoch | Eval Loss | Eval Accuracy |
|-------|-----------|----------------|
| 1.0   | **0.2347**| 0.9310         |
| 2.0   | 0.3376    | 0.9327         |
| 3.0   | 0.2771    | 0.9417         |
| 4.0   | 0.3466    | 0.9374         |
| 5.0   | 0.4178    | 0.9347         |
| 6.0   | 0.4174    | 0.9410         |
| 7.0   | 0.4337    | 0.9387         |
| 8.0   | 0.4563    | 0.9387         |
| 9.0   | 0.4575    | 0.9430         |
| 10.0  | 0.4884    | **0.9434**     |
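The tension noted above between the lowest evaluation loss (epoch 1) and the highest accuracy (epoch 10) can be made explicit with a small checkpoint-selection sketch; the values are copied from the table, and the selection criterion is the only choice being illustrated:

```python
# (epoch, eval_loss, eval_accuracy) tuples from the training table above.
HISTORY = [
    (1, 0.2347, 0.9310),
    (2, 0.3376, 0.9327),
    (3, 0.2771, 0.9417),
    (4, 0.3466, 0.9374),
    (5, 0.4178, 0.9347),
    (6, 0.4174, 0.9410),
    (7, 0.4337, 0.9387),
    (8, 0.4563, 0.9387),
    (9, 0.4575, 0.9430),
    (10, 0.4884, 0.9434),
]

# Selecting by lowest eval loss favors the early, fast-converging checkpoint;
# selecting by highest accuracy favors the fully trained one.
best_by_loss = min(HISTORY, key=lambda row: row[1])
best_by_accuracy = max(HISTORY, key=lambda row: row[2])

print(best_by_loss[0])      # epoch with the lowest eval loss
print(best_by_accuracy[0])  # epoch with the highest eval accuracy
```

Which criterion to use depends on the deployment goal, as discussed in the comparison with the large model above.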




Training used a standard text classification objective. The model achieves an accuracy of approximately 94% on the evaluation set.

Overall final metrics on the test dataset after 10 epochs of training:
  - **Accuracy**: 0.943352215928024
  - **Loss**: 0.48842695355415344