Active filters: autoawq
hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 • Text Generation • 11B • Updated • 75k • 107
aari1995/germeo-7b-awq • Text Generation • 1B • Updated • 761 • 2
kaitchup/Yi-6B-awq-4bit • Text Generation • 1B • Updated • 7
kaitchup/Llama-3-8b-awq-4bit • Text Generation • 2B • Updated • 7
XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k • Text Generation • 8B • Updated • 23
XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k-GGUF • Text Generation • 8B • Updated • 63
XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k-GPTQ • Text Generation • 2B • Updated • 3
XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k-AWQ • Text Generation • 2B • Updated • 5
hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4 • Text Generation • 59B • Updated • 11.8k • 36
hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4 • Text Generation • 2B • Updated • 255k • 78
jburmeister/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 • Text Generation • 11B • Updated • 3
jburmeister/Meta-Llama-3.1-405B-Instruct-AWQ-INT4 • Text Generation • 59B • Updated • 6
Kalei/Meta-Llama-3.1-70B-Instruct-AWQ-INT4-Custom • Text Generation • 11B • Updated • 3
UCLA-EMC/Meta-Llama-3.1-8B-AWQ-INT4 • Text Generation • 2B • Updated • 9
UCLA-EMC/Meta-Llama-3.1-8B-Instruct-AWQ-INT4-32-2.17B • Text Generation • 2B • Updated • 3 • 1
reach-vb/Meta-Llama-3.1-8B-Instruct-AWQ-INT4-fix • Text Generation • 2B • Updated • 3
jburmeister/Meta-Llama-3.1-8B-Instruct-AWQ-INT4 • Text Generation • 2B • Updated • 2
awilliamson/Meta-Llama-3.1-70B-Instruct-AWQ • Text Generation • 11B • Updated • 1
flowaicom/Flow-Judge-v0.1-AWQ • Text Generation • 0.7B • Updated • 823 • 6
hugging-quants/Mixtral-8x7B-Instruct-v0.1-AWQ-INT4 • Text Generation • 6B • Updated • 10.1k
hugging-quants/gemma-2-9b-it-AWQ-INT4 • Text Generation • 2B • Updated • 2.34k • 6
ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4 • Text Generation • 11B • Updated • 37 • 6
NeuML/Llama-3.1_OpenScholar-8B-AWQ • Text Generation • 2B • Updated • 106 • 3
fbaldassarri/TinyLlama_TinyLlama_v1.1-autoawq-int4-gs128-asym • Text Generation • 0.3B • Updated • 3
fbaldassarri/TinyLlama_TinyLlama_v1.1-autoawq-int4-gs128-sym • Text Generation • 0.3B • Updated • 4
fbaldassarri/EleutherAI_pythia-14m-autoawq-int4-gs128-asym • Text Generation • 13M • Updated • 5
fbaldassarri/EleutherAI_pythia-14m-autoawq-int4-gs128-sym • Text Generation • 13M • Updated • 107
fbaldassarri/EleutherAI_pythia-31m-autoawq-int4-gs128-asym • Text Generation • 26.4M • Updated • 3
fbaldassarri/EleutherAI_pythia-31m-autoawq-int4-gs128-sym • Text Generation • 26.4M • Updated • 1
fbaldassarri/EleutherAI_pythia-70m-deduped-autoawq-int4-gs128-asym • Text Generation • 54.1M • Updated • 5
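The checkpoints above are AWQ 4-bit quantized models tagged for the autoawq library, and they can generally be loaded straight through the Transformers API. The sketch below is a minimal illustration, not part of the listing itself: it assumes transformers, autoawq, and accelerate are installed, a CUDA GPU is available, and it picks hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4 from the list as the example repository.

```python
# Minimal sketch: load one of the AWQ-INT4 checkpoints listed above and run a
# short generation. Assumes transformers + autoawq + accelerate and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4"  # taken from the listing

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # let transformers pick the dtype stored in the checkpoint
    device_map="auto",    # spread layers across available GPU(s) via accelerate
)

inputs = tokenizer("AWQ quantization is useful because", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern should apply to the other repositories in the list; only the model_id changes, although larger checkpoints (e.g. the 70B and 405B AWQ-INT4 variants) need correspondingly more GPU memory.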