---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
tags:
- lora
- causal-lm
- elmb
- function-calling
license: apache-2.0
---
# data4elm-SLaM-submission
This repository contains a LoRA adapter for `meta-llama/Llama-2-7b-hf`, trained for the ELMB function-calling task.
## Usage
### With PEFT
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model in half precision
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    base_model,
    "lwhalen7/data4elm-SLaM-submission",
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Run inference
inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
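If you prefer a single standalone checkpoint (for example, to drop the PEFT dependency at serving time), the LoRA weights can be folded into the base model with `merge_and_unload()`. A minimal sketch, assuming the snippet above has already been run; the output directory name is illustrative and not part of this repository:

```python
# Fold the LoRA weights into the base model weights
merged_model = model.merge_and_unload()

# Save the merged checkpoint and tokenizer locally
# ("./merged-llama2-elmb" is a hypothetical path)
merged_model.save_pretrained("./merged-llama2-elmb")
tokenizer.save_pretrained("./merged-llama2-elmb")
```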
### With lm-eval
```bash
lm_eval --model hf \
--model_args pretrained=meta-llama/Llama-2-7b-hf,peft=lwhalen7/data4elm-SLaM-submission,trust_remote_code=True \
--tasks elmb_functioncalling_test \
--device cuda:0 \
--batch_size 1 \
--log_samples \
--output_path ./eval_results/elmb_test_set.jsonl
```
## Training Details
This adapter was originally published in a dataset repository and was migrated to a model repository so it can be loaded directly by inference tools such as PEFT and lm-eval.
## Base Model
This adapter is compatible with: `meta-llama/Llama-2-7b-hf`
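The base model is also recorded in the adapter's own configuration, so it can be checked programmatically. A minimal sketch using `PeftConfig`:

```python
from peft import PeftConfig

# The adapter config stores the base model it was trained against
config = PeftConfig.from_pretrained("lwhalen7/data4elm-SLaM-submission")
print(config.base_model_name_or_path)  # expected: meta-llama/Llama-2-7b-hf
```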