| Model | Task | Size | Updated | Downloads | Likes |
|---|---|---|---|---|---|
| nm-testing/tinyllama-oneshot-w8a8-dynamic-token-v2 | Text Generation | 1B | Oct 9, 2024 | 2.14k | – |
| nm-testing/tinyllama-oneshot-w4a16-channel-v2 | Text Generation | 0.3B | Oct 9, 2024 | 8.81k | 1 |
| nm-testing/tinyllama-oneshot-w8w8-test-static-shape-change | Text Generation | 1B | Oct 9, 2024 | 27.8k | – |
| nm-testing/tinyllama-oneshot-w8a8-test-static-shape-change-v3 | Text Generation | 1B | Aug 30, 2024 | – | – |
| nm-testing/tinyllama-oneshot-w8a8-channel-dynamic-token-v2 | Text Generation | 1B | Oct 9, 2024 | 2.19k | – |
| nm-testing/Meta-Llama-3-8B-Instruct-W8A8-Dyn-Per-Token-2048-Samples | Text Generation | 8B | Oct 9, 2024 | 3 | – |
| nm-testing/Meta-Llama-3-8B-Instruct-W8-Channel-A8-Dynamic-Per-Token-Test | Text Generation | 8B | Oct 9, 2024 | 49 | – |
| nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test | Text Generation | 8B | Oct 9, 2024 | 2.17k | – |
| nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test-bos | Text Generation | 8B | Oct 9, 2024 | – | – |
| nm-testing/TinyLlama-1.1B-compressed-tensors-kv-cache-scheme | Text Generation | 0.4B | Oct 9, 2024 | 409 | – |
| nm-testing/Meta-Llama-3-8B-Instruct-W4A16-compressed-tensors-test | Text Generation | 2B | Oct 9, 2024 | 1 | – |
| RedHatAI/Phi-3-medium-128k-instruct-quantized.w8a16 | Text Generation | 4B | Oct 9, 2024 | 6 | 2 |
| RedHatAI/Meta-Llama-3-8B-Instruct-quantized.w8a8 | Text Generation | 8B | Oct 9, 2024 | 3.78k | 2 |
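These repositories publish quantized checkpoints (e.g. w8a8 dynamic per-token, w4a16 channel, FP8) in the compressed-tensors format, so they can be served directly by an inference engine that understands that format. A minimal sketch, assuming vLLM with compressed-tensors support is installed; the checkpoint name is interchangeable with any other model in the table above:

```python
# Sketch: load one of the quantized checkpoints listed above with vLLM.
# Assumes `pip install vllm` and a GPU with enough memory for the model.
from vllm import LLM, SamplingParams

# Any repository from the table can be substituted here.
llm = LLM(model="RedHatAI/Meta-Llama-3-8B-Instruct-quantized.w8a8")

sampling_params = SamplingParams(temperature=0.0, max_tokens=64)
outputs = llm.generate(["What is dynamic per-token quantization?"], sampling_params)
print(outputs[0].outputs[0].text)
```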