---
license: apache-2.0
tags:
- tinyllama
- toneop
- lora
- fine-tuning
- health-chatbot
- conversational
---

# 🧠 TinyLLaMA-ToneOpBot (LoRA Adapter)

This is a lightweight **LoRA adapter** for **TinyLLaMA-1.1B-Chat**, fine-tuned for health and fitness Q&A by [@imrahulwarkade](https://huggingface.co/imrahulwarkade).

> Designed for commercial chatbot applications focused on wellness, diet, and a healthy lifestyle.

---

## 🧪 Base Model

- [`TinyLlama/TinyLlama-1.1B-Chat-v1.0`](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)

---

## 🧰 How to Use (with PEFT)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel, PeftConfig

# Load the adapter config, then the base model it was trained on
adapter_id = "imrahulwarkade/tinyllama-toneopbot-lora"
config = PeftConfig.from_pretrained(adapter_id)
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)

# Attach the LoRA adapter weights to the base model
model = PeftModel.from_pretrained(base_model, adapter_id)

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Build a chat-formatted prompt and generate a response
messages = [
    {"role": "user", "content": "How can I lose weight in a healthy way?"}
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = pipe(prompt, max_new_tokens=150)[0]["generated_text"]
print(response)
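
# --- Optional: merge the adapter for deployment (a minimal sketch) ---
# merge_and_unload() folds the LoRA weights into the base model so inference
# no longer needs the peft wrapper. The output directory name below is just
# an illustrative placeholder, not part of this repository.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("tinyllama-toneopbot-merged")
tokenizer.save_pretrained("tinyllama-toneopbot-merged")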