LSTM Text Classification Model
Model Details
- Model Type: LSTM
- Task: Text Classification
- Input Dimension: 100
- Hidden Size: 256
- Number of Layers: 2
- Number of Labels: 2
Usage
This model is designed for binary text classification (two labels). It expects sequences of 100-dimensional token feature vectors, runs them through a two-layer LSTM, and classifies each sequence from the hidden state of the final timestep.
Requirements
- Ensure that your input sequences are preprocessed into feature vectors of dimension input_dim = 100, and pad or truncate sequences to a fixed length before batching (see the sketch after this list).
- Use a compatible tokenizer before passing input to the model.
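The card does not specify which tokenizer or embedding scheme produces the 100-dimensional input vectors, so the following is only a shape-level sketch: a hypothetical vocabulary, whitespace tokenization, a randomly initialized embedding table, and padding/truncation to a fixed sequence length.

import torch
import torch.nn as nn

# Hypothetical preprocessing pipeline; the real tokenizer/embeddings used to
# train this model are not documented in the card.
VOCAB = {"<pad>": 0, "<unk>": 1, "good": 2, "bad": 3, "movie": 4}
MAX_LEN = 10      # fixed sequence length after padding/truncation
INPUT_DIM = 100   # must match the model's input_dim

embedding = nn.Embedding(len(VOCAB), INPUT_DIM, padding_idx=0)

def preprocess(text):
    """Tokenize, map to ids, pad/truncate, and embed to (1, MAX_LEN, INPUT_DIM)."""
    ids = [VOCAB.get(tok, VOCAB["<unk>"]) for tok in text.lower().split()]
    ids = ids[:MAX_LEN]
    ids = ids + [VOCAB["<pad>"]] * (MAX_LEN - len(ids))
    return embedding(torch.tensor([ids]))  # shape: (1, MAX_LEN, INPUT_DIM)

features = preprocess("good movie")
print(features.shape)  # torch.Size([1, 10, 100])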
Example Usage (PyTorch)
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, input_dim, hidden_size, num_layers, num_labels):
        super(LSTMClassifier, self).__init__()
        self.lstm = nn.LSTM(input_dim, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_labels)

    def forward(self, x):
        lstm_out, _ = self.lstm(x)
        # Classify from the hidden state at the last timestep
        output = self.fc(lstm_out[:, -1, :])
        return output

# Instantiate the model with the config.json parameters
config = {
    "input_dim": 100,
    "hidden_size": 256,
    "num_layers": 2,
    "num_labels": 2
}
model = LSTMClassifier(**config)

dummy_input = torch.randn(1, 10, config["input_dim"])  # (batch, seq_len, input_dim)
output = model(dummy_input)
print(output)  # raw logits for the two labels
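The forward pass returns raw logits. A softmax plus argmax turns them into class probabilities and a predicted label. The weight-file name below is an assumption; the card does not state how trained weights are stored, so the loading lines are left commented out:

# Assumed weight file name; uncomment if the repository provides it.
# state_dict = torch.load("pytorch_model.bin", map_location="cpu")
# model.load_state_dict(state_dict)

model.eval()
with torch.no_grad():
    logits = model(dummy_input)                    # shape: (1, num_labels)
    probs = torch.softmax(logits, dim=-1)          # class probabilities
    predicted_label = probs.argmax(dim=-1).item()  # 0 or 1

print(probs, predicted_label)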