Update README.md
README.md CHANGED
@@ -14,16 +14,16 @@ tags:
 - llm spanish
 ---

-<strong><span style="font-size: larger;">bertin-gpt-j-6B-alpaca-4bit-128g</span></strong>
+<strong><span style="font-size: larger;">bertin-gpt-j-6B-alpaca-8bit-128g</span></strong>

 

 **descripción en español agregado ⬇️**

-This is a 4-bit GPTQ version of the [bertin-project/bertin-gpt-j-6B-alpaca](https://huggingface.co/bertin-project/bertin-gpt-j-6B-alpaca)
+This is an 8-bit GPTQ version of the [bertin-project/bertin-gpt-j-6B-alpaca](https://huggingface.co/bertin-project/bertin-gpt-j-6B-alpaca)

-this is the result of quantizing to 4 bits using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).
+this is the result of quantizing to 8 bits using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).


@@ -40,7 +40,7 @@ this is the result of quantizing to 4 bits using [AutoGPTQ](https://github.com/P

 Click the Model tab.

-Under Download custom model or LoRA, enter RedXeol/bertin-gpt-j-6B-alpaca-4bit-128g.
+Under Download custom model or LoRA, enter RedXeol/bertin-gpt-j-6B-alpaca-8bit-128g.

 Click Download.

@@ -48,11 +48,11 @@ this is the result of quantizing to 4 bits using [AutoGPTQ](https://github.com/P

 Click the Refresh icon next to Model in the top left.

-In the Model drop-down: choose the model you just downloaded, bertin-gpt-j-6B-alpaca-4bit-128g.
+In the Model drop-down: choose the model you just downloaded, bertin-gpt-j-6B-alpaca-8bit-128g.

 If you see an error in the bottom right, ignore it - it's temporary.

-Fill out the GPTQ parameters on the right: Bits = 4, Groupsize = 128, model_type = gptj
+Fill out the GPTQ parameters on the right: Bits = 8, Groupsize = 128, model_type = gptj

 Click Save settings for this model in the top right.
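The English hunks above update the model name and the text-generation-webui settings from the 4-bit to the 8-bit release; the Spanish hunks below make the same change in the translated section. As a supplement that is not part of the commit itself, here is a minimal sketch of loading the quantized checkpoint directly with AutoGPTQ from Python instead of through the webui. The `use_safetensors` flag, the assumption that the repo ships its own tokenizer, and the Alpaca-style Spanish prompt are all assumptions; adjust them to the files actually published in RedXeol/bertin-gpt-j-6B-alpaca-8bit-128g and to the prompt format documented for the base model.

```python
# Minimal sketch (assumptions flagged inline): load the 8-bit GPTQ checkpoint
# with AutoGPTQ and run a single generation, as an alternative to the
# text-generation-webui steps described in the diff above.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "RedXeol/bertin-gpt-j-6B-alpaca-8bit-128g"

# Assumption: the quantized repo includes tokenizer files; otherwise load the
# tokenizer from the base model bertin-project/bertin-gpt-j-6B-alpaca.
tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    device="cuda:0",        # the quantized weights still need a CUDA GPU
    use_safetensors=True,   # assumption: change if the repo ships .bin/.pt weights
)

# Assumed Alpaca-style Spanish instruction prompt (check the base model card).
prompt = (
    "A continuación hay una instrucción que describe una tarea. "
    "Escribe una respuesta que complete adecuadamente lo que se pide.\n\n"
    "### Instrucción:\nExplica qué es la cuantización de un modelo de lenguaje.\n\n"
    "### Respuesta:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

If the repository stores its weights under a specific file name, `from_quantized` also accepts a `model_basename` argument pointing at that file.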
@@ -80,9 +80,9 @@ To fine-tune the BERTIN GPT-J-6B model we used the code available on BERTIN's fo
 **Español** 🇪🇸


-Esta es una versión GPTQ de 4 bits del [bertin-project/bertin-gpt-j-6B-alpaca](https://huggingface.co/bertin-project/bertin-gpt-j-6B-alpaca)
+Esta es una versión GPTQ de 8 bits del [bertin-project/bertin-gpt-j-6B-alpaca](https://huggingface.co/bertin-project/bertin-gpt-j-6B-alpaca)

-Este es el resultado de cuantificar a 4 bits usando [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).
+Este es el resultado de cuantificar a 8 bits usando [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).


@@ -99,7 +99,7 @@ Este es el resultado de cuantificar a 4 bits usando [AutoGPTQ](https://github.co

 Haga clic en la pestaña Modelo.

-En Descargar modelo personalizado o LoRA, ingrese RedXeol/bertin-gpt-j-6B-alpaca-4bit-128g.
+En Descargar modelo personalizado o LoRA, ingrese RedXeol/bertin-gpt-j-6B-alpaca-8bit-128g.

 Haz clic en Descargar.

@@ -107,11 +107,11 @@ Este es el resultado de cuantificar a 4 bits usando [AutoGPTQ](https://github.co

 Haga clic en el icono Actualizar junto a Modelo en la parte superior izquierda.

-En el menú desplegable Modelo: elija el modelo que acaba de descargar, bertin-gpt-j-6B-alpaca-4bit-128g.
+En el menú desplegable Modelo: elija el modelo que acaba de descargar, bertin-gpt-j-6B-alpaca-8bit-128g.

 Si ve un error en la parte inferior derecha, ignórelo, es temporal.

-Complete los parámetros GPTQ a la derecha: Bits = 4, Groupsize = 128, model_type = gptj
+Complete los parámetros GPTQ a la derecha: Bits = 8, Groupsize = 128, model_type = gptj

 Haz clic en Guardar configuración para este modelo en la parte superior derecha.
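For context on how an export like this is produced, the sketch below shows an AutoGPTQ quantization run consistent with the parameters the README names (8 bits, group size 128, GPT-J architecture). The README only states that the model was quantized to 8 bits with AutoGPTQ; the calibration texts, the `desc_act` setting, and the output directory here are illustrative assumptions, not the author's actual recipe.

```python
# Hypothetical sketch of an 8-bit, group-size-128 GPTQ export with AutoGPTQ.
# Requires a CUDA GPU; a real run would use a much larger calibration set.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "bertin-project/bertin-gpt-j-6B-alpaca"
out_dir = "bertin-gpt-j-6B-alpaca-8bit-128g"

quantize_config = BaseQuantizeConfig(
    bits=8,          # matches "Bits = 8" in the webui instructions
    group_size=128,  # matches the "-128g" suffix / "Groupsize = 128"
    desc_act=False,  # assumption: not specified in the README
)

tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True)

# Assumption: a tiny illustrative calibration set of Spanish instructions.
calibration_texts = [
    "Explica brevemente qué es un modelo de lenguaje.",
    "Escribe una receta sencilla de tortilla de patatas.",
]
examples = [tokenizer(text, return_tensors="pt") for text in calibration_texts]

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)
model.quantize(examples)              # runs GPTQ layer by layer on the calibration data
model.save_quantized(out_dir, use_safetensors=True)
tokenizer.save_pretrained(out_dir)
```

The folder written by `save_quantized` is the kind of artifact that text-generation-webui then loads once Bits = 8, Groupsize = 128, and model_type = gptj are set, as described in the instructions above.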