---
base_model:
  - ddh0/Cassiopeia-70B
  - Sao10K/Llama-3.3-70B-Vulpecula-r1
base_model_relation: merge
license: unknown
thumbnail: https://huggingface.co/ddh0/Andromeda-70B/resolve/main/andromeda.png
---

# Andromeda-70B

Andromeda-70B is the result of an experimental SLERP merge of Cassiopeia-70B and Sao10K/Llama-3.3-70B-Vulpecula-r1. It is a coherent, unaligned model intended for creative tasks such as storywriting, brainstorming, and interactive roleplay.

## Update

After more thorough testing by myself and others, I don't think this model is very good. :( You should use Cassiopeia or Vulpecula instead.

## Merge composition

```yaml
models:
  - model: /opt/workspace/hf/Cassiopeia-70B
  - model: /opt/workspace/hf/Llama-3.3-70B-Vulpecula-r1
merge_method: slerp
base_model: /opt/workspace/hf/Cassiopeia-70B
parameters:
  t: 0.7
dtype: bfloat16
```
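For intuition, the `slerp` merge method interpolates along the arc between two weight tensors rather than along the straight line, which preserves the norm of the interpolated weights better than a plain weighted average. Below is a minimal sketch of spherical linear interpolation over flattened parameter vectors; this is an illustrative reimplementation, not mergekit's actual code, and the function name and epsilon handling are assumptions.

```python
import numpy as np

def slerp(v0: np.ndarray, v1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors.

    t=0 returns v0, t=1 returns v1 (with t=0.7 as in the config above,
    the result sits closer to the second model).
    """
    # Angle between the two direction vectors
    a = v0 / (np.linalg.norm(v0) + eps)
    b = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(a, b), -1.0, 1.0)
    theta = np.arccos(dot)

    if theta < eps:
        # Nearly colinear: linear interpolation is numerically safer
        return (1.0 - t) * v0 + t * v1

    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1
```

In practice a merge tool applies this per-tensor (often with layer-dependent `t` schedules); the scalar `t: 0.7` here weights the whole interpolation toward Vulpecula.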

## Feedback

If you like this model, please support Sao10K.

Feedback on this merge is very welcome, good or bad! Please leave a comment in this discussion with your thoughts: Andromeda-70B/discussions/1