# data4elm-SLaM-submission
This is a LoRA adapter trained for the ELMB function calling task.
## Usage

### With PEFT
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load LoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    base_model,
    "lwhalen7/data4elm-SLaM-submission",
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Use for inference
inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
### With lm-eval

```bash
lm_eval --model hf \
  --model_args pretrained=meta-llama/Llama-2-7b-hf,peft=lwhalen7/data4elm-SLaM-submission,trust_remote_code=True \
  --tasks elmb_functioncalling_test \
  --device cuda:0 \
  --batch_size 1 \
  --log_samples \
  --output_path ./eval_results/elmb_test_set.jsonl
```
## Training Details

This adapter was migrated from a dataset repository into a model repository so that it can be loaded directly by inference tools such as PEFT and lm-eval.
## Base Model

This adapter is compatible with: `meta-llama/Llama-2-7b-hf`