# Maya 7B LoRA

A LoRA adapter for `mistralai/Mistral-7B-Instruct-v0.3` that gives the model Maya's personality.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

# Apply the LoRA adapter on top of the frozen base model
model = PeftModel.from_pretrained(base_model, "blakeurmos/maya-7b-lora-v1")
model.eval()

# Generate using the instruction-style prompt format shown on this card
prompt = "### Instruction:\nHow are you feeling?\n\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

# The decoded text includes the prompt; keep only the generated response
print(response.split("### Response:")[1].strip())
```

## Training

Fine-tuned for 2 epochs on 52 examples with LoRA rank 8.
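For a sense of what rank 8 means here: LoRA replaces a full weight update ΔW with a low-rank product B·A, so only two small matrices are trained per adapted layer. The NumPy sketch below illustrates the idea and the parameter savings; the 4096 hidden size matches Mistral-7B, but the matrices, zero initialization, and the `alpha` scaling value are illustrative assumptions, not this adapter's actual weights or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 4096, 8    # hidden size (Mistral-7B) and the rank 8 used for this adapter
alpha = 16        # LoRA scaling hyperparameter (illustrative assumption)

W = rng.standard_normal((d, d)) * 0.02   # frozen base weight
A = rng.standard_normal((r, d)) * 0.02   # trainable "down" projection
B = np.zeros((d, r))                     # trainable "up" projection, zero-initialized

# Adapted weight: the low-rank update is added onto the frozen base weight
W_adapted = W + (alpha / r) * (B @ A)

# Trainable parameters per adapted matrix: 2*d*r instead of d*d
full_params = d * d       # 16,777,216
lora_params = 2 * d * r   # 65,536
print(full_params, lora_params)
```

Because B starts at zero, the adapted model is initially identical to the base model; training then moves only the 2·d·r adapter parameters, a ~256× reduction per matrix at rank 8.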
