Update README.md

README.md CHANGED

@@ -9,11 +9,11 @@ inference: false
**slim-sentiment** is part of the SLIM ("Structured Language Instruction Model") model series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.

slim-sentiment has been fine-tuned for **sentiment analysis** function calls, generating output consisting of a Python dictionary corresponding to specified keys, e.g.:

{"sentiment": ["positive"]}

Each slim model has a corresponding 'tool' in a separate repository, e.g., [**'slim-sentiment-tool'**](https://huggingface.co/llmware/slim-sentiment-tool), which is a 4-bit quantized GGUF version of the model that is intended to be used for inference.

Inference speed and loading time are much faster with the 'tool' versions of the model, and multiple tools can be deployed concurrently and run on a local CPU-based laptop or server.
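Because the model emits its result as a Python-dictionary string, the output can be parsed programmatically rather than read as free text. A minimal sketch of that conversion, using the standard library's `ast.literal_eval` (the sample string here is illustrative, not actual model output):

```python
import ast

# Illustrative string in the format described above; a real run's
# output text may vary (this is an assumption, not captured output).
llm_string_output = '{"sentiment": ["positive"]}'

try:
    # literal_eval safely evaluates a dictionary literal without
    # executing arbitrary code, unlike eval().
    response = ast.literal_eval(llm_string_output)
    print("success - converted to python dictionary automatically")
except (ValueError, SyntaxError):
    response = {}
    print("fail - could not convert to python dictionary automatically - ", llm_string_output)

# The parsed dictionary can then be queried by key.
sentiment_labels = response.get("sentiment", [])
print(sentiment_labels)  # ['positive']
```

Falling back to an empty dictionary on a parse failure keeps downstream key lookups from raising, which is one reasonable design choice when handling occasionally malformed model output.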
### Model Description
@@ -74,10 +74,7 @@ The intended use of SLIM models is to re-imagine traditional 'hard-coded' classi
```python
    print("success - converted to python dictionary automatically")
except:
    print("fail - could not convert to python dictionary automatically - ", llm_string_output)

# sample output
{"sentiment": ["negative"]}
```

</details>
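Since the model returns a structured dictionary rather than prose, its output can drive program logic directly, which is the point of replacing hard-coded classifiers with function calls. A hypothetical routing sketch (the function name and the label-to-action mapping are illustrative assumptions, not part of the model card):

```python
# Hypothetical downstream routing keyed off the parsed model output,
# e.g. {"sentiment": ["negative"]} as shown in the sample above.
def route_by_sentiment(model_response: dict) -> str:
    labels = model_response.get("sentiment", [])
    if "negative" in labels:
        return "escalate"   # e.g. flag for human review
    if "positive" in labels:
        return "archive"
    return "review"         # unknown or missing label

print(route_by_sentiment({"sentiment": ["negative"]}))  # escalate
```

The try/except conversion shown earlier and a routing function like this together make the classifier's decision a plain dictionary lookup instead of string matching against free-form text.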