Update model card
README.md CHANGED
@@ -1,6 +1,15 @@
+---
+language:
+- en
+tags:
+- nlp
+- math learning
+- education
+license: mit
+---
 # Math-RoBERTa for NLP tasks in math learning environments
 
-This model is fine-tuned
+This model is a fine-tuned RoBERTa-large model trained on 8 Nvidia GTX 1080Ti GPUs with 3,000,000 math discussion posts written by students and facilitators on Algebra Nation (https://www.mathnation.com/). MathRoBERTa has 24 layers and 355 million parameters, and its published model weights take up roughly 1.5 GB of disk space. It can provide a strong base for NLP tasks (e.g., text classification, semantic search, Q&A) in similar math learning environments.
 
 ### Here is how to use it with text in HuggingFace
 ```python
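# The hunk above stops at the opening ```python fence, so the card's actual
# usage snippet is not part of this diff. The lines below are only a minimal
# sketch of loading the model with the transformers library; the Hub repo ID
# "uf-aice-lab/math-roberta" is an assumption, not taken from the diff, and
# should be replaced with the model's actual ID.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("uf-aice-lab/math-roberta")
model = AutoModel.from_pretrained("uf-aice-lab/math-roberta")

# Tokenize a sample math-discussion post and run it through the encoder.
text = "Can someone explain how to factor x^2 + 5x + 6?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# One contextual embedding per token; hidden size 1024 for RoBERTa-large.
print(outputs.last_hidden_state.shape)
```

These per-token embeddings can then back the tasks the card lists, e.g. by feeding a classification head or a semantic-search index.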