# BERT + LSTM Sentiment Analysis Model
This repository contains a custom fine-tuned BERT + LSTM model for sentiment classification, trained on journal-style emotional writing. The model is designed to detect nuanced emotional states and works well for reflective, personal text (e.g., journaling, social media, diary entries).
## 🧠 Model Architecture

- Base Encoder: `bert-base-uncased` (Hugging Face Transformers)
- Sequence Modeling: one-layer `LSTM` to capture sentence-level emotional flow
- Classification Layer: fully connected layer with softmax
- Loss Function: `CrossEntropyLoss`
- Optimizer: `AdamW`
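The exact model class is not published in this repository, but a minimal sketch of the architecture described above might look like the following. The class name `BertLstmClassifier`, the LSTM hidden size, and the last-timestep pooling are assumptions, not the trained configuration; note also that the actual checkpoint evidently exposes a `.logits` attribute on its output (see the usage snippet below), whereas this sketch returns the logits tensor directly for simplicity.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class BertLstmClassifier(nn.Module):
    """Hypothetical reconstruction: BERT encoder -> one-layer LSTM -> linear head."""

    def __init__(self, num_classes=3, lstm_hidden=256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # One-layer LSTM over BERT's token embeddings (768-dim for bert-base).
        self.lstm = nn.LSTM(input_size=768, hidden_size=lstm_hidden, batch_first=True)
        self.classifier = nn.Linear(lstm_hidden, num_classes)

    def forward(self, input_ids, attention_mask=None, **kwargs):
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(hidden)
        # Summarize the sequence with the final LSTM state (an assumption).
        return self.classifier(lstm_out[:, -1, :])
```

Per the list above, training presumably paired a head like this with `nn.CrossEntropyLoss` and `torch.optim.AdamW`.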
## 📦 Files Included

| File | Description |
|---|---|
| `bert_lstm_sentiment_model.pt` | Trained PyTorch model (BERT + LSTM + classifier) |
| `label_encoder.pkl` | `sklearn.preprocessing.LabelEncoder` used during training |
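For reference, artifacts like these are typically produced at the end of training roughly as follows. This is a sketch assuming the whole model object was pickled (which the loading code below implies), rather than just a state dict:

```python
import pickle
import torch

# Save the full model object. This pickles the class itself, so loading
# later requires the same class definition to be importable.
torch.save(model, "bert_lstm_sentiment_model.pt")

# Persist the fitted label encoder alongside the model.
with open("label_encoder.pkl", "wb") as f:
    pickle.dump(label_encoder, f)
```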
## 🧾 Classes
The model was trained to classify the following emotional labels:
- Positive
- Negative
- Neutral
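After loading `label_encoder.pkl` (as in the usage snippet below), you can confirm the label-to-index mapping directly; `LabelEncoder` assigns indices in sorted order of the class strings it was fitted on:

```python
import pickle

with open("label_encoder.pkl", "rb") as f:
    label_encoder = pickle.load(f)

# E.g. ['Negative', 'Neutral', 'Positive'] -> indices 0, 1, 2 (sorted order).
print(dict(zip(label_encoder.classes_, range(len(label_encoder.classes_)))))
```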
## 🚀 How to Use

```python
# pip install torch transformers scikit-learn   # install dependencies first if needed
import pickle

import torch
from transformers import BertTokenizer

# Load the model. weights_only=False is required on PyTorch >= 2.6 because the
# file contains a full pickled model rather than a state dict; the model class
# must also be importable for unpickling to succeed.
model = torch.load(
    "bert_lstm_sentiment_model.pt",
    map_location=torch.device("cpu"),
    weights_only=False,
)
model.eval()

# Load tokenizer and label encoder
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
with open("label_encoder.pkl", "rb") as f:
    label_encoder = pickle.load(f)

# Sample input
text = "I feel empty and overwhelmed today."

# Tokenize
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

# Run inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)
    predicted_class = torch.argmax(outputs.logits, dim=1).item()

# Map the predicted index back to its emotion label
emotion = label_encoder.inverse_transform([predicted_class])[0]
print(f"Predicted Emotion: {emotion}")
```
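For scoring many journal entries at once, a small batched wrapper around the same steps might look like this. The `predict_batch` helper is illustrative, not part of the repo, and reuses `model`, `tokenizer`, and `label_encoder` from the snippet above:

```python
def predict_batch(texts, batch_size=16):
    """Classify a list of texts; returns one emotion label per input."""
    labels = []
    for i in range(0, len(texts), batch_size):
        batch = texts[i:i + batch_size]
        inputs = tokenizer(batch, return_tensors="pt", truncation=True, padding=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        ids = torch.argmax(logits, dim=1).tolist()
        labels.extend(label_encoder.inverse_transform(ids))
    return labels

print(predict_batch(["What a great day!", "I can't stop worrying."]))
```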