Update Space (evaluate main: e179b5b8)

README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 title: TER
-emoji: 🤗
+emoji: 🤗
 colorFrom: blue
 colorTo: red
 sdk: gradio
@@ -8,34 +8,10 @@ sdk_version: 3.0.2
 app_file: app.py
 pinned: false
 tags:
-
-
-description: >-
-  TER (Translation Edit Rate, also called Translation Error Rate) is a metric to
-  quantify the edit operations that a
-
-  hypothesis requires to match a reference translation. We use the
-  implementation that is already present in sacrebleu
-
-  (https://github.com/mjpost/sacreBLEU#ter), which in turn is inspired by the
-  TERCOM implementation, which can be found
-
-  here: https://github.com/jhclark/tercom.
-
-
-  The implementation here is slightly different from sacrebleu in terms of the
-  required input format. The length of
-
-  the references and hypotheses lists need to be the same, so you may need to
-  transpose your references compared to
-
-  sacrebleu's required input format. See
-  https://github.com/huggingface/datasets/issues/3154#issuecomment-950746534
-
-
-  See the README.md file at https://github.com/mjpost/sacreBLEU#ter for more
-  information.
+- evaluate
+- metric
 ---
+
 # Metric Card for TER
 
 ## Metric Description
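
The removed front-matter description notes that, unlike sacrebleu, this implementation expects the references list and the hypotheses list to have the same length, so references grouped in sacrebleu's format may need to be transposed. A minimal sketch of that transposition, using made-up sample sentences:

```python
hypotheses = ["the cat sat on the mat", "hello there"]

# sacrebleu-style grouping: each inner list is one reference *set*,
# aligned index-by-index with the hypotheses (hypothetical sample data).
refs_sacrebleu = [
    ["the cat sat on the mat", "hi there"],   # 1st reference for each hypothesis
    ["a cat sat on the mat", "hello there"],  # 2nd reference for each hypothesis
]

# Transpose so that references[i] holds all references for hypotheses[i],
# which is the per-hypothesis layout this metric expects.
references = [list(refs) for refs in zip(*refs_sacrebleu)]

assert len(references) == len(hypotheses)
print(references)
# [['the cat sat on the mat', 'a cat sat on the mat'], ['hi there', 'hello there']]
```

Assuming the `evaluate` loading API, the transposed lists would then be passed as `ter.compute(predictions=hypotheses, references=references)`.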