doberst committed
Commit 085c075 · verified · 1 Parent(s): d83a7d9

Update README.md

Files changed (1): README.md (+7 -14)
README.md CHANGED
@@ -21,7 +21,7 @@ Inference speed and loading time is much faster with the 'tool' versions of the
 <!-- Provide a longer summary of what this model is. -->
 
 - **Developed by:** llmware
-- **Model type:** SLIM - small, specialized LLM
+- **Model type:** SLIM - small, specialized LLM generating structured outputs
 - **Language(s) (NLP):** English
 - **License:** Apache 2.0
 - **Finetuned from model:** Tiny Llama 1B
@@ -53,22 +53,14 @@ For example, in this case, the prompt would be as follows:
 
 "<human>" + "The stock market declined yesterday ..." + "\n" + "<classify> sentiment </classify>" + "\n<bot>:"
 
-The model generation output will be a string in the form of a well-formed python dictionary, which can be converted as follows:
+The model generation output will be a string in the form of a python dictionary, which can be converted as follows:
 
 try:
-# convert llm response output from string to json
-output_only = ast.literal_eval(output_only)
-print("converted to python dictionary automatically")
-
-# look for the key passed in the prompt as a dictionary entry
-if keys in output_only:
-if "negative" in output_only[keys]:
-print("sentiment appears negative - need to handle ...")
-else:
-print("response does not appear to include the designated key - will need to try again.")
+output_only = ast.literal_eval(llm_string_output)
+print("success - converted to python dictionary automatically")
 
 except:
-print("could not convert to python dictionary automatically - ", output_only)
+print("fail - could not convert to python dictionary automatically - ", llm_string_output)
 
 
 ## Using as Function Call in LLMWare
@@ -86,7 +78,8 @@ Check out llmware for one such implementation:
 
 ## Model Card Contact
 
-Darren Oberst & llmware team
+Darren Oberst & llmware team
 
+[Join us on Discord](https://discord.gg/MhZn5Nc39h)
 
 
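The try/except conversion logic that this commit edits can be sketched as a self-contained, runnable snippet. This is a sketch for illustration only, not the README's exact code: `llm_string_output` is a hypothetical model response, and `keys` stands in for the key passed in the prompt's `<classify>` tag.

```python
import ast

# Hypothetical model output for illustration: SLIM models return a string
# shaped like a Python dictionary, which may or may not be well-formed.
llm_string_output = "{'sentiment': ['negative']}"
keys = "sentiment"  # the key passed in the <classify> tag of the prompt

try:
    # convert the LLM response string into a Python dictionary
    output_only = ast.literal_eval(llm_string_output)
    print("success - converted to python dictionary automatically")

    # look for the key passed in the prompt as a dictionary entry
    if keys in output_only:
        if "negative" in output_only[keys]:
            print("sentiment appears negative - need to handle ...")
    else:
        print("response does not appear to include the designated key - will need to try again.")
except (ValueError, SyntaxError):
    # ast.literal_eval raises on malformed input rather than executing it,
    # making it safer than eval() for untrusted model output
    print("fail - could not convert to python dictionary automatically -", llm_string_output)
```

`ast.literal_eval` only evaluates Python literals (dicts, lists, strings, numbers), so a malformed or adversarial generation falls through to the except branch instead of running arbitrary code.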