OddTheGreat committed
Commit 8d3f142 · verified · 1 Parent(s): 0039d96

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -20,19 +20,19 @@ This is a merge of pre-trained language models

  This model is a byproduct of my experiments aimed at creating a model free from toxic positivity, good for "dark" playthroughs.

- This model hasn't fully reached my goal, but it's a decent model.
+ This model hasn't fully reached my goal, and it's not too stable, but it's an interesting model.

  First of all, the model is close to neutral towards the user.

  The model is uncensored (heretic'd Mistral as the base model), tested on ERP, gore, swearing and hate speech. It's important to note that it was tested in RP scenarios, not with straight tasks given to the model.

- I've got hooked by the smartness of this model. Really good for 24b, but not without rare hallucinations. Within three swipes I usually got a good response.
+ I've got hooked by the smartness of this model. Really good for 24b, but not without hallucinations. Within three swipes I usually got a good response.

  Context attention is also good, working nicely with many lorebooks. The model actively utilises info from the char card, at least up to 10k of context in use. (10k is my limit for tests.)

  Style of writing is prompt-dependent. Length, style and format depend on the char card, first message, user input and sysprompt. In my case, the outputs were nice to read. Variation between swipes is normal, but I've seen better.

- Instructions are followed, mostly. For summarization and similar tasks it's better to lower the temperature.
+ Instructions are followed, mostly. For summarization and similar tasks it's better to lower the temperature. At higher temperatures the model becomes unstable; 0.8 was the maximum for adequate responses.

  RU was tested, good to play.
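
The temperature note added in this commit boils down to a sampler cap. Below is a minimal sketch of what that looks like with a standard transformers generation call; the model ID is a hypothetical placeholder, not the actual repository name, and the exact loading options are assumptions rather than the author's settings.

```python
# Minimal sketch: keep temperature at or below 0.8, as the README notes suggest,
# and lower it further for summarization-style tasks.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OddTheGreat/placeholder-24b-merge"  # hypothetical placeholder ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the scene so far in three sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling with temperature capped at 0.8 for stable responses.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```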