---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - education
license: apache-2.0
language:
  - en
---

# 🦙 Uploaded Finetuned Model – Llama 3.1 (8B) by Matteo Angeloni

This is my first fine-tuned Llama model, built for educational and legal-domain text generation.
Training was accelerated with Unsloth (2x faster fine-tuning) and integrated with Hugging Face tooling.
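
For reference, fine-tuning with Unsloth typically follows the pattern sketched below. This is a minimal sketch, not the exact training script: the sequence length and LoRA hyperparameters shown are illustrative assumptions.

```python
from unsloth import FastLanguageModel

# Load the 4-bit base model listed in the metadata above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit",
    max_seq_length=2048,  # assumption: the real training context length may differ
    load_in_4bit=True,
)

# Attach LoRA adapters; r/alpha values are placeholders, not the ones actually used.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```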


## 📚 Training Data

The model was trained on:


## 🎯 Intended Use

- Experimentation with educational text generation
- Testing instruction-following capabilities in code- and education-related contexts
- Benchmarking the performance of Unsloth-accelerated Llama models (see the timing sketch below)

⚠️ Not suitable for production. This is an experimental fine-tune.
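
For the benchmarking use case above, a simple tokens-per-second measurement can look like this. It is a minimal sketch assuming a single GPU; the prompt and token counts are arbitrary.

```python
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "matteoangeloni/llama3-8b-edu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Explain gradient descent.", return_tensors="pt").to(model.device)

# Time a single generation pass and report decode throughput.
start = time.perf_counter()
outputs = model.generate(**inputs, max_new_tokens=128)
elapsed = time.perf_counter() - start

generated = outputs.shape[1] - inputs["input_ids"].shape[1]
print(f"{generated / elapsed:.1f} tokens/sec")
```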


## 🚀 Example Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "matteoangeloni/llama3-8b-edu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # half precision so the 8B model fits on a single GPU
    device_map="auto",           # place layers automatically on available devices
)

prompt = "Summarize the main points of the Italian privacy law."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
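
Since the base checkpoint is a bitsandbytes 4-bit model, the fine-tune can also be loaded in 4-bit to cut memory use. A sketch, assuming bitsandbytes is installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 quantization with bf16 compute keeps the 8B weights to roughly 5-6 GB of VRAM.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_name = "matteoangeloni/llama3-8b-edu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",
)
```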