Model Details
Fine-tuned "microsoft/phi-2" model using QLoRA PEFT.
Model Description
Developed by: https://huggingface.co/nanditab35
Language(s) (NLP): English
License: Apache License 2.0
Fine-tuned from model: microsoft/phi-2
Repository: GitHub repo coming soon
Uses
The main purpose of this fine-tuned model is to power a joke-generation app that works from a small set of fixed categories.
Direct Use
HF Space coming soon
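Until the Space is live, the model can be used directly with `transformers` and `peft`. A minimal sketch, assuming the adapter is published as nanditab35/phi-2-jokebot-peft; the prompt format below is illustrative, since the exact template used during fine-tuning is not documented in this card:

```python
def build_prompt(category: str) -> str:
    """Build an illustrative joke prompt. The exact prompt template
    used during fine-tuning is not documented in this card."""
    return f"Tell me a {category} joke:"


def generate_joke(category: str) -> str:
    """Load microsoft/phi-2 with the QLoRA adapter and sample a joke.

    Requires `torch`, `transformers`, and `peft` (imports are deferred
    so the prompt helper above stays usable without them), plus network
    access to download the weights on first use.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(
        "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
    # Attach the fine-tuned LoRA adapter on top of the base model.
    model = PeftModel.from_pretrained(base, "nanditab35/phi-2-jokebot-peft")

    inputs = tokenizer(build_prompt(category), return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs, max_new_tokens=64, do_sample=True, temperature=0.8
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```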
Bias, Risks, and Limitations
This project is still under development. The current focus is on generating safe jokes first: a boring joke is preferable to an offensive one. Not all of the jokes this model generates may be funny yet :).
Recommendations
It is recommended to use this model through the corresponding HF Space (coming soon). The Gradio app in the Space will only allow jokes to be generated from the four supported categories, not arbitrary ones, which adds a layer of safety.
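The category restriction described above can be enforced before any text reaches the model. A minimal sketch of that safety layer, assuming a simple whitelist check (the category list mirrors this card; the Space's actual implementation may differ):

```python
# The four categories supported by the fine-tuned model, per this card.
ALLOWED_CATEGORIES = ("tech", "coffee", "foodie", "animals")


def validate_category(category: str) -> str:
    """Accept only whitelisted joke categories, so free-form
    (potentially unsafe) prompts never reach the model."""
    normalized = category.strip().lower()
    if normalized not in ALLOWED_CATEGORIES:
        raise ValueError(
            f"Unsupported category {category!r}; choose one of {ALLOWED_CATEGORIES}"
        )
    return normalized
```

In a Gradio app, this would back a dropdown whose choices are exactly `ALLOWED_CATEGORIES`, so users cannot type arbitrary categories at all.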
Training Details
The model is fine-tuned using QLoRA, a parameter-efficient fine-tuning (PEFT) method that loads the base model ("microsoft/phi-2") in 4-bit precision and trains low-rank adapters on top of it.
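A typical QLoRA setup with `transformers` and `peft` looks roughly like the sketch below. The hyperparameters actually used for this model (rank, alpha, target modules, etc.) are not documented in the card, so the values shown are common defaults, not the author's settings:

```python
def build_qlora_model():
    """Illustrative QLoRA setup: 4-bit NF4 quantization of the base
    model plus LoRA adapters. Hyperparameters are common defaults,
    not the ones used to train this particular adapter. Requires
    `torch`, `transformers`, `peft`, and `bitsandbytes` (imports are
    deferred so the sketch loads without them installed)."""
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
    )
    model = AutoModelForCausalLM.from_pretrained(
        "microsoft/phi-2", quantization_config=bnb_config, device_map="auto"
    )
    # Freeze the quantized weights and prepare for k-bit training.
    model = prepare_model_for_kbit_training(model)

    lora_config = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )
    # Only the small LoRA adapter matrices are trainable.
    return get_peft_model(model, lora_config)
```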
Training Data
This model is trained on a joke dataset created with DeepSeek chat and some prompt engineering. The dataset covers four categories of jokes: ["tech", "coffee", "foodie", "animals"].
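The card does not publish the dataset itself or its schema. A hypothetical record layout for such a category-labelled joke dataset, shown only to illustrate the structure, might be:

```python
import json

# Hypothetical JSONL record format; the actual schema of the
# DeepSeek-generated training data is not documented in this card.
record = {
    "category": "tech",
    "joke": "Why do programmers prefer dark mode? Because light attracts bugs.",
}
line = json.dumps(record)  # one record per line in a .jsonl file
```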