Update README.md
language:
- en
base_model:
- meta-llama/Llama-3.1-8B
---

# Dolphin 3.0 Llama 3.1 8B 🐬

Curated and trained by [Eric Hartford](https://huggingface.co/ehartford), [Ben Gitter](https://huggingface.co/bigstorm) and [Cognitive Computations](https://huggingface.co/cognitivecomputations)

Discord: https://discord.gg/cognitivecomputations

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/cNCs1TBD3FelWCJGkZ3cd.png" width="600" />

Our appreciation for the generous sponsors of Dolphin 3.0:
- [Crusoe Cloud](https://crusoe.ai/) - provided 16x L40S for training and evals
- [Akash](https://akash.network/) - provided on-demand 8x H100 for training
- [Lazarus](https://www.lazarusai.com/) - provided 16x H100 for training
- [Cerebras](https://cerebras.ai/) - provided excellent and fast inference services
- [Andreessen Horowitz](https://a16z.com/) - provided a grant that made Dolphin 1.0 possible and enabled me to bootstrap my homelab