Hugging Face Hub model listing, active filter: autoawq
hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4 • Text Generation • 2B • 241k • 80
hugging-quants/gemma-2-9b-it-AWQ-INT4 • Text Generation • 2B • 7.74k • 7
aari1995/germeo-7b-awq • Text Generation • 1B • 838 • 2
kaitchup/Yi-6B-awq-4bit • Text Generation • 1B • 8
kaitchup/Llama-3-8b-awq-4bit • Text Generation • 2B • 4
XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k • Text Generation • 8B • 28
XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k-GGUF • Text Generation • 8B • 66
XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k-GPTQ • Text Generation • 2B • 5
XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k-AWQ • Text Generation • 2B • 5
hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4 • Text Generation • 59B • 11.7k • 36
hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 • Text Generation • 11B • 77.2k • 107
jburmeister/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 • Text Generation • 11B • 4
jburmeister/Meta-Llama-3.1-405B-Instruct-AWQ-INT4 • Text Generation • 59B • 7
Kalei/Meta-Llama-3.1-70B-Instruct-AWQ-INT4-Custom • Text Generation • 11B • 8
UCLA-EMC/Meta-Llama-3.1-8B-AWQ-INT4 • Text Generation • 2B • 12
UCLA-EMC/Meta-Llama-3.1-8B-Instruct-AWQ-INT4-32-2.17B • Text Generation • 2B • 6 • 1
reach-vb/Meta-Llama-3.1-8B-Instruct-AWQ-INT4-fix • Text Generation • 2B • 4
jburmeister/Meta-Llama-3.1-8B-Instruct-AWQ-INT4 • Text Generation • 2B • 4
awilliamson/Meta-Llama-3.1-70B-Instruct-AWQ • Text Generation • 11B • 4
flowaicom/Flow-Judge-v0.1-AWQ • Text Generation • 0.7B • 663 • 6
hugging-quants/Mixtral-8x7B-Instruct-v0.1-AWQ-INT4 • Text Generation • 6B • 10.1k
ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4 • Text Generation • 11B • 149 • 6
NeuML/Llama-3.1_OpenScholar-8B-AWQ • Text Generation • 2B • 134 • 3
fbaldassarri/TinyLlama_TinyLlama_v1.1-autoawq-int4-gs128-asym • Text Generation • 0.3B • 3
fbaldassarri/TinyLlama_TinyLlama_v1.1-autoawq-int4-gs128-sym • Text Generation • 0.3B • 5
fbaldassarri/EleutherAI_pythia-14m-autoawq-int4-gs128-asym • Text Generation • 13M • 5
fbaldassarri/EleutherAI_pythia-14m-autoawq-int4-gs128-sym • Text Generation • 13M • 107
fbaldassarri/EleutherAI_pythia-31m-autoawq-int4-gs128-asym • Text Generation • 26.4M • 2
fbaldassarri/EleutherAI_pythia-31m-autoawq-int4-gs128-sym • Text Generation • 26.4M • 2
fbaldassarri/EleutherAI_pythia-70m-deduped-autoawq-int4-gs128-asym • Text Generation • 54.1M • 6
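The checkpoints above are AWQ-quantized (typically INT4) weights. As a minimal sketch, assuming `transformers` and `autoawq` are installed and a CUDA GPU is available, one of the listed repositories can usually be loaded directly with `AutoModelForCausalLM`; the model id comes from the listing, everything else here is illustrative rather than the canonical usage of any particular checkpoint.

```python
# Minimal sketch: load an AWQ-INT4 checkpoint from the listing with transformers.
# Assumes `pip install transformers autoawq torch` and a CUDA GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4"  # taken from the listing

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # non-quantized ops run in fp16 alongside the INT4 weights
    device_map="auto",          # place the quantized weights on the available GPU(s)
)

# Chat-style generation using the model's own chat template.
messages = [{"role": "user", "content": "Summarize what AWQ INT4 quantization does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same pattern should apply to the other repositories in the list, with memory requirements scaling with the stated parameter counts (the 70B and 405B INT4 checkpoints still need multiple high-memory GPUs).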