πŸ¦™ Uploaded Finetuned Model – Llama 3.1 (8B) by Matteo Angeloni

This is my first fine-tuned Llama model, built for educational and legal-domain text generation.
Training was accelerated with Unsloth (2x faster fine-tuning) and integrated with Hugging Face tooling.
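
For reference, the snippet below is a minimal sketch of how an Unsloth LoRA finetune of a Llama 3.1 (8B) base model is typically set up. The base checkpoint name, the tiny placeholder dataset, and all hyperparameters are illustrative assumptions, not the exact configuration used for this model.

from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model in 4-bit and attach LoRA adapters (illustrative settings)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",  # assumed base checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Placeholder dataset: one text field per example, in instruction format
train_dataset = Dataset.from_dict(
    {"text": ["### Instruction: Explain GDPR in one sentence.\n### Response: ..."]}
)

# Supervised fine-tuning with TRL's SFTTrainer
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()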


πŸ“š Training Data

The model was trained on:


🎯 Intended Use

  • Experimentation with educational text generation
  • Testing instruction-following capabilities in code/education-related contexts
  • Benchmarking the performance of Unsloth-accelerated Llama models

⚠️ Not suitable for production. This is an experimental finetune.


πŸš€ Example Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the finetuned model and tokenizer from the Hugging Face Hub
model_name = "matteoangeloni/llama3-8b-edu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a prompt and generate up to 200 new tokens
prompt = "Summarize the main points of the Italian privacy law."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
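
If the finetune was trained on chat-style instruction data, prompting through the tokenizer's chat template may give better results. The snippet below is a sketch that reuses the model and tokenizer loaded above; the sampling parameters are illustrative, not values validated for this model.

# Chat-template prompting (assumes an instruct-style finetune)
messages = [
    {"role": "user", "content": "Summarize the main points of the Italian privacy law."},
]
chat_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
chat_outputs = model.generate(
    chat_inputs,
    max_new_tokens=200,
    do_sample=True,   # sampling instead of greedy decoding
    temperature=0.7,  # illustrative values
    top_p=0.9,
)
# Decode only the newly generated tokens
print(tokenizer.decode(chat_outputs[0][chat_inputs.shape[-1]:], skip_special_tokens=True))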

πŸ”§ Model Details

  • Format: Safetensors
  • Model size: 8B params
  • Tensor type: BF16