# Maya 7B LoRA

A LoRA adapter for mistralai/Mistral-7B-Instruct-v0.3 that adds Maya's personality.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "blakeurmos/maya-7b-lora-v1")

# Generate using the ### Instruction / ### Response prompt format
prompt = "### Instruction:\nHow are you feeling?\n\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)

# Decode and keep only the text after the response marker
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response.split("### Response:")[1].strip())
```
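
For repeated inference you can fold the adapter weights into the base model with PEFT's `merge_and_unload`, which removes the adapter indirection at generation time. A minimal sketch (the save path is just an example, not part of this repo):

```python
# Merge the LoRA weights into the base model and drop the PEFT wrappers.
merged_model = model.merge_and_unload()

# Save the merged weights and tokenizer; "./maya-7b-merged" is an example path.
merged_model.save_pretrained("./maya-7b-merged")
tokenizer.save_pretrained("./maya-7b-merged")
```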
## Training

- Epochs: 2
- Examples: 52
- LoRA rank: 8
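
For reference, a PEFT configuration consistent with these settings might look like the sketch below. Only the rank is stated above; the alpha, dropout, and target modules are assumptions based on common LoRA setups for Mistral-style models, not the exact configuration used for this adapter.

```python
from peft import LoraConfig

# Rank 8 matches the training details above; the remaining values are
# assumptions, not the recorded training configuration.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,                          # assumed; often set to 2x the rank
    lora_dropout=0.05,                      # assumed
    target_modules=["q_proj", "v_proj"],    # assumed attention projections
    task_type="CAUSAL_LM",
)
```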