Update README.md
README.md CHANGED

@@ -24,7 +24,7 @@ For full details of our model and pretraining procedure please read [our paper](
 ## Model Summary
 
 - **Developed by:** The OpenVLA team consisting of researchers from Stanford, UC Berkeley, Google Deepmind, and the Toyota Research Institute.
-- **Model type:** Vision-language-action (language, image
+- **Model type:** Vision-language-action (language, image => robot actions)
 - **Language(s) (NLP):** en
 - **License:** MIT
 - **Finetuned from:** [`prism-dinosiglip-224px`](https://github.com/TRI-ML/prismatic-vlms), a VLM trained from: