alvarobartt (HF Staff) committed (verified)
Commit a1ba5d0 · Parent(s): c54f2e6

Update `README.md` to use TEI v1.7 instead


- Previously the tag was pinned to 1.7.2, but some fixes have landed as of 1.7.3, so it is now set to 1.7 instead. The latest 1.7.Z release always points to 1.7 as well, so future patch releases are picked up automatically.
- Fixed the CPU and GPU image tags, which were reversed.

Files changed (1): README.md (+2 −2)
README.md CHANGED

@@ -210,13 +210,13 @@ print(scores.tolist())
 You can either run / deploy TEI on NVIDIA GPUs as:
 
 ```bash
-docker run --gpus all -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.2 --model-id Qwen/Qwen3-Embedding-0.6B --dtype float16
+docker run --gpus all -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:1.7 --model-id Qwen/Qwen3-Embedding-0.6B --dtype float16
 ```
 
 Or on CPU devices as:
 
 ```bash
-docker run -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:1.7.2 --model-id Qwen/Qwen3-Embedding-0.6B
+docker run -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.7 --model-id Qwen/Qwen3-Embedding-0.6B --dtype float16
 ```
 
 And then, generate the embeddings sending a HTTP POST request as:
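The hunk ends right before the HTTP POST step of the README. As a minimal sketch of that step (assuming TEI's `/embed` route and the `-p 8080:80` port mapping from the `docker run` commands above; the example input string is illustrative, not from this diff), the request could look like:

```shell
# Sketch: request embeddings from a TEI container started with one of the
# `docker run` commands above (assumes the server is up on localhost:8080).
curl http://localhost:8080/embed \
    -H "Content-Type: application/json" \
    -d '{"inputs": "What is Deep Learning?"}'
```

TEI responds with a JSON array of embedding vectors, one per input.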