|
|
--- |
|
|
base_model: meta-llama/Llama-3.3-70B-Instruct |
|
|
language: |
|
|
- en |
|
|
license: llama3.3 |
|
|
pipeline_tag: text-generation |
|
|
library_name: furiosa-llm |
|
|
tags: |
|
|
- furiosa-ai |
|
|
- llama |
|
|
- llama-3 |
|
|
--- |
|
|
# Model Overview |
|
|
- **Model Architecture:** Meta-Llama-3 |
|
|
- **Input:** Text |
|
|
- **Output:** Text |
|
|
- **Model Optimizations:** INT8 weight quantization
|
|
- **Maximum Context Length:** 32k tokens
  - Maximum Prompt Length: 32,768 tokens
  - Maximum Generation Length: 32,768 tokens
|
|
- **Intended Use Cases:** Intended for commercial and non-commercial use. As with [Meta-Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), this model is intended for assistant-like chat.
|
|
- **Release Date:** 08/27/2025 |
|
|
- **Version:** v2025.3 |
|
|
- **License(s):** [Llama3.3](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE) |
|
|
- **Supported Inference Engine(s):** Furiosa LLM |
|
|
- **Supported Hardware Compatibility:** FuriosaAI RNGD |
|
|
- **Preferred Operating System(s):** Linux |
|
|
- **Quantization:**
  - Tool: Furiosa Model Compressor v0.6.6, included in Furiosa SDK 2025.3
  - Weight: int8, Activation: bfloat16, KV cache: bfloat16
|
|
## Description
|
|
This model is a pre-compiled version of [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), an auto-regressive language model that uses an optimized transformer architecture.
|
|
## Usage |
|
|
To run this model with [Furiosa-LLM](https://developer.furiosa.ai/latest/en/furiosa_llm/intro.html), follow the example command below after [installing Furiosa-LLM and its prerequisites](https://developer.furiosa.ai/latest/en/get_started/furiosa_llm.html#installing-furiosa-llm). |
|
|
```sh
furiosa-llm serve furiosa-ai/Llama-3.3-70B-Instruct-INT8
```
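Once the server is running, it exposes an OpenAI-compatible HTTP API. The sketch below sends a chat completion request using only the Python standard library; the endpoint URL (`http://localhost:8000/v1/chat/completions`) and `max_tokens` value are assumptions for illustration — adjust them to match your deployment.

```python
import json
import urllib.request


def build_chat_request(
    prompt: str,
    model: str = "furiosa-ai/Llama-3.3-70B-Instruct-INT8",
) -> bytes:
    """Serialize an OpenAI-style chat completion payload."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,  # assumed limit; tune for your workload
    }
    return json.dumps(payload).encode("utf-8")


def query(
    prompt: str,
    # Assumed default address; change host/port to match your server.
    url: str = "http://localhost:8000/v1/chat/completions",
) -> str:
    """POST the request to the running furiosa-llm server and return the reply text."""
    req = urllib.request.Request(
        url,
        data=build_chat_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Any OpenAI-compatible client (e.g. the `openai` Python package pointed at the server's base URL) can be used instead of hand-rolled HTTP calls.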
|
|
|