Update README.md

README.md (changed):

```diff
@@ -228,9 +228,9 @@ for match in matches:
 *Dataset: WiRe57_343-manual-oie*
 | Model | Precision | Recall | F1 Score |
 |:-----------------------|------------:|---------:|-----------:|
-| knowledgator/gliner-llama-multitask-1B-v1.0 | 0.
-| knowledgator/gliner-multitask-v0.5 | 0.
-| knowledgator/gliner-multitask-v1.0 | 0.
+| knowledgator/gliner-llama-multitask-1B-v1.0 | 0.9047 | 0.2794 | 0.4269 |
+| knowledgator/gliner-multitask-v0.5 | 0.9278 | 0.2779 | 0.4287 |
+| knowledgator/gliner-multitask-v1.0 | 0.8775 | 0.2733 | 0.4168 |
 
 ---
```
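Sanity-checking the new numbers: the F1 column closely tracks the harmonic mean of the reported precision and recall (the small residual gap for v0.5 is plausibly an aggregation artifact, e.g. corpus-level vs. per-row computation). A minimal check, with the triples copied from the updated table:

```python
# F1 as the harmonic mean of precision and recall: F1 = 2*P*R / (P + R).
# (model, P, R, reported F1) rows copied from the updated benchmark table.
rows = [
    ("knowledgator/gliner-llama-multitask-1B-v1.0", 0.9047, 0.2794, 0.4269),
    ("knowledgator/gliner-multitask-v0.5", 0.9278, 0.2779, 0.4287),
    ("knowledgator/gliner-multitask-v1.0", 0.8775, 0.2733, 0.4168),
]

for name, p, r, reported_f1 in rows:
    f1 = 2 * p * r / (p + r)
    # Computed and reported values agree to within ~0.002.
    print(f"{name}: computed F1 = {f1:.4f} (reported {reported_f1})")
```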
````diff
@@ -282,22 +282,13 @@ labels = ["summary"]
 
 input_ = prompt+text
 
-threshold = 0.
+threshold = 0.1
 summaries = model.predict_entities(input_, labels, threshold=threshold)
 
 for summary in summaries:
     print(summary["text"], "=>", summary["score"])
 ```
 
-### Performance:
-*Dataset: SQuAD 2.0*
-
-| Model | BLEU | ROUGE1 | ROUGE2 | ROUGEL | Cosine Similarity |
-|:-----------------------|------------:|----------:|-----------:|----------:|--------------------:|
-| knowledgator/gliner-llama-multitask-1B-v1.0 | 7.9728e-157 | 0.0955005 | 0.00236265 | 0.0738533 | 0.0515591 |
-| knowledgator/gliner-multitask-v0.5 | 1.70326e-06 | 0.0627964 | 0.00203505 | 0.0482932 | 0.0532316 |
-| knowledgator/gliner-multitask-v1.0 | 5.78799e-06 | 0.0878883 | 0.0030312 | 0.0657152 | 0.060342 |
-
 ---
 
 **How to use for text classification:**
````
The updated sections now read:

*Dataset: WiRe57_343-manual-oie*

| Model | Precision | Recall | F1 Score |
|:-----------------------|------------:|---------:|-----------:|
| knowledgator/gliner-llama-multitask-1B-v1.0 | 0.9047 | 0.2794 | 0.4269 |
| knowledgator/gliner-multitask-v0.5 | 0.9278 | 0.2779 | 0.4287 |
| knowledgator/gliner-multitask-v1.0 | 0.8775 | 0.2733 | 0.4168 |

---

```python
input_ = prompt+text

threshold = 0.1
summaries = model.predict_entities(input_, labels, threshold=threshold)

for summary in summaries:
    print(summary["text"], "=>", summary["score"])
```
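For context on the `threshold = 0.1` setting above: `predict_entities` drops candidate spans whose confidence score falls below the threshold, so a lower value returns more (lower-confidence) summary spans. A minimal sketch of that filtering step, using hypothetical candidate spans and scores rather than real model output:

```python
# Hypothetical candidates shaped like GLiNER's predict_entities output
# (dicts with "text" and "score" keys); the scores are made up.
candidates = [
    {"text": "First key sentence.", "score": 0.62},
    {"text": "Second key sentence.", "score": 0.15},
    {"text": "Marginal sentence.", "score": 0.04},
]

def filter_by_threshold(spans, threshold):
    """Keep only spans whose confidence clears the threshold."""
    return [s for s in spans if s["score"] >= threshold]

# A stricter threshold returns fewer spans than a looser one.
print(len(filter_by_threshold(candidates, 0.5)))  # 1 span survives
print(len(filter_by_threshold(candidates, 0.1)))  # 2 spans survive
```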
---

**How to use for text classification:**