Update README.md
README.md
CHANGED
@@ -27,6 +27,13 @@ Recommended starting point:

At early context, I recommend keeping XTC disabled. Once you hit higher context sizes (10k+), enabling XTC at 0.1 / 0.5 seems to significantly improve the output, but YMMV. If the output drones on and is uninspiring, XTC can be extremely effective.
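
For intuition about what those two numbers control, here is a minimal, illustrative sketch of an XTC ("exclude top choices") step, assuming the usual parameterization where 0.1 is the threshold and 0.5 the trigger probability. The function name and data layout are made up for the example and are not this repo's or any particular backend's API:

```python
import random

def xtc_filter(candidates, threshold=0.1, probability=0.5):
    """Illustrative XTC ("exclude top choices") filtering step.

    candidates: list of (token, prob) pairs sorted by prob, descending.
    On a `probability` fraction of sampling steps, every token whose
    probability is at least `threshold` -- except the least likely of
    them -- is removed, pushing generation away from the most
    predictable continuations.
    """
    if random.random() >= probability:
        return candidates  # XTC not triggered on this step
    above = [c for c in candidates if c[1] >= threshold]
    if len(above) < 2:
        return candidates  # at most one "top choice"; nothing to exclude
    survivor = above[-1]  # least probable token above the threshold
    # Keep everything below the threshold plus the single survivor;
    # a real sampler would renormalize the remaining probabilities.
    return [c for c in candidates if c[1] < threshold or c == survivor]
```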
*A [user](https://huggingface.co/crestf411/MS-sunfall-v0.7.0/discussions/1#672104795c98f6ace037ba61) suggests that adding the following as an Author's Note / World Info entry, In-chat @ Depth 0, is effective:*

```
[OOC: Guidelines: take into account the character's personality, background, relationships and previous history before intimate scenes. Intimate scenes should have a logical explanation and result from a current or past situation. {{char}} will not submit to {{user}} without good reason.]
```
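
If you are not driving the model through Silly Tavern, "In-chat @ Depth 0" effectively means the note is injected at the very bottom of the chat history, immediately before the model's next reply. A rough sketch of doing the same thing by hand (names here are illustrative, and this assumes depth counts messages up from the end of the chat):

```python
def inject_note(history, note, depth=0):
    """Insert an author's-note string `depth` messages from the end of the chat."""
    position = len(history) - depth
    return history[:position] + [note] + history[position:]

# Depth 0 places the note after the last message, right before the model responds.
chat = ["[INST] Henry: I poke Beth.[/INST]", " Beth: Beth yelps.</s>"]
note = "[OOC: Guidelines: ...]"  # the block quoted above
print("".join(inject_note(chat, note)))
```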
General heuristic:

* Lots of slop? Temperature is too low. Raise it, or enable XTC. At early context, a temperature bump is probably preferred.
@@ -39,7 +46,7 @@ This model has been trained on context that mimics that of Silly Tavern's "Mistral" template

Silly Tavern output example (Henry is the human, Beth the bot):

```
[INST] Henry: I poke Beth.[/INST] Beth: Beth yelps.</s>[INST] Henry: ...
```
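
If you are building prompts outside Silly Tavern, a rough sketch of assembling the same turn layout might look like this (the names are illustrative, and you should check your backend's exact Mistral-template handling, especially the leading BOS token and spacing):

```python
def build_prompt(turns, human="Henry", bot="Beth"):
    """Assemble a Mistral-style chat prompt like the example above.

    turns: list of (speaker, text) pairs in chronological order.
    Human turns are wrapped in [INST] ... [/INST]; bot turns follow
    immediately and are closed with </s> before the next instruction.
    """
    prompt = ""
    for speaker, text in turns:
        if speaker == human:
            prompt += f"[INST] {human}: {text}[/INST]"
        else:
            prompt += f" {bot}: {text}</s>"
    # End with the bot's name so the model continues in character.
    return prompt + f" {bot}:"

print(build_prompt([("Henry", "I poke Beth."), ("Beth", "Beth yelps."), ("Henry", "I wave.")]))
# -> [INST] Henry: I poke Beth.[/INST] Beth: Beth yelps.</s>[INST] Henry: I wave.[/INST] Beth:
```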
The model has also been trained to do interactive storywriting. You may steer the model towards specific content by "responding" to the model like so: