iproskurina/bloom-3b-GPTQ-4bit-g128

Tags: Text Generation, Transformers, Safetensors, bloom, gptq, 4-bit precision, text-generation-inference
Files and versions
3.81 GB
  • 1 contributor
History: 23 commits
Latest commit: Update README.md to include GPTQModel usage. (iproskurina, 7ba4174, verified, 8 months ago)
  • .gitattributes
    1.57 kB
    Upload tokenizer almost 2 years ago
  • README.md
    2.82 kB
    Update README.md to include GPTQModel usage. 8 months ago
  • config.json
    782 Bytes
    AutoGPTQ model for bigscience/bloom-3b: 4bits, gr128, desc_act=False about 1 year ago
  • model.safetensors
    3.8 GB
    Rename gptq_model-4bit-128g.safetensors to model.safetensors about 1 year ago
  • quantize_config.json
    211 Bytes
    AutoGPTQ model for bigscience/bloom-3b: 4bits, gr128, desc_act=False about 1 year ago
  • special_tokens_map.json
    552 Bytes
    Add tokenizer for the model bloom-3b about 1 year ago
  • tokenizer.json
    14.5 MB
    Add tokenizer for the model bloom-3b about 1 year ago
  • tokenizer_config.json
    983 Bytes
    Add tokenizer for the model bloom-3b about 1 year ago
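The commit messages on config.json and quantize_config.json state that the checkpoint was produced with AutoGPTQ from bigscience/bloom-3b at 4 bits, group size 128, desc_act=False. The repository does not include the quantization script, so the following is only a minimal sketch of how such a checkpoint is typically produced with AutoGPTQ under those settings; the calibration text and output directory name are placeholders, not taken from the repo.

```python
# Minimal AutoGPTQ quantization sketch (assumed workflow, not the author's script).
# Uses the settings named in the commit messages: 4 bits, group_size=128, desc_act=False.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "bigscience/bloom-3b"
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)

# Placeholder calibration sample; a real run would use a representative text corpus.
calibration = [tokenizer("BLOOM is a multilingual large language model.", return_tensors="pt")]
model.quantize(calibration)

# Writes the quantized weights in safetensors format plus config.json / quantize_config.json,
# which matches the set of files listed above.
model.save_quantized("bloom-3b-GPTQ-4bit-g128", use_safetensors=True)
tokenizer.save_pretrained("bloom-3b-GPTQ-4bit-g128")
```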
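The latest README commit mentions GPTQModel usage, and the model card presumably carries the exact snippet. As a hedged alternative, this sketch loads the quantized checkpoint through transformers, assuming optimum plus a GPTQ backend (auto-gptq or gptqmodel) are installed and a GPU is available; the prompt is illustrative only.

```python
# Hedged inference sketch: loading the 4-bit GPTQ checkpoint via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iproskurina/bloom-3b-GPTQ-4bit-g128"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # dispatch the quantized weights to the available GPU(s)
    torch_dtype=torch.float16,
)

prompt = "The BLOOM language model was trained to"  # illustrative prompt, not from the model card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```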