---
license: apache-2.0
tags:
- text-generation
- gpt2
- lora
- fine-tuned
datasets:
- private
base_model: gpt2
model-index:
- name: tea-gpt2-lora-sft
  results: []
---

# tea-gpt2-lora-sft

## Model Description

This model is a **LoRA fine-tuned GPT-2** model built with the [PEFT library](https://huggingface.co/docs/peft/index). It was trained on a private, domain-specific dataset of tea-related text.

- **Base model**: [`gpt2`](https://huggingface.co/gpt2)
- **Fine-tuning method**: Low-Rank Adaptation (LoRA)
- **Framework**: Hugging Face Transformers + PEFT
- **Quantization**: [if applicable, e.g., int8, bf16]

## Intended Use

You can use this model for:

- Text generation for the tea cultivation industry
- Creative writing or domain-specific chatbots

## How to Use

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the GPT-2 base model and attach the LoRA adapter on top of it
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
lora_model = PeftModel.from_pretrained(base_model, "nimeth02/tea-gpt2-lora-sft")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

input_text = "Once upon a time in a tea garden"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

output = lora_model.generate(input_ids, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
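
The call above uses greedy decoding, which can get repetitive for creative text. A minimal sampling variant is sketched below; the hyperparameter values are illustrative starting points, not values tuned for this model:

```python
output = lora_model.generate(
    input_ids,
    max_new_tokens=50,
    do_sample=True,                        # sample instead of greedy decoding
    temperature=0.8,                       # illustrative values; tune for your use case
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,   # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```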
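
## Merging the Adapter (Optional)

If you prefer a standalone checkpoint that does not require PEFT at inference time, you can fold the LoRA weights into the base model. This is a minimal sketch using PEFT's `merge_and_unload`; the output path is illustrative:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
lora_model = PeftModel.from_pretrained(base_model, "nimeth02/tea-gpt2-lora-sft")

# Fold the LoRA deltas into the base weights and drop the adapter wrappers
merged_model = lora_model.merge_and_unload()

# Save as a plain GPT-2 checkpoint, loadable without PEFT (path is illustrative)
merged_model.save_pretrained("tea-gpt2-merged")
AutoTokenizer.from_pretrained("gpt2").save_pretrained("tea-gpt2-merged")
```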