---
license: apache-2.0
datasets:
- liuhaotian/LLaVA-CC3M-Pretrain-595K
- liuhaotian/LLaVA-Instruct-150K
library_name: transformers
pipeline_tag: image-text-to-text
---
# Model Card: LLaVA_MORE-llama_3_1-8B-pretrain

```LLaVA-MORE``` enhances the well-known LLaVA architecture by integrating, for the first time, LLaMA 3.1 as the language model. We are publicly releasing the stage one and stage two checkpoints for the first model, with 8B parameters.

In this model repository, you will find the stage one (pretrain) weights of LLaVA-MORE LLaMA 3.1 8B.

For more information, visit our [LLaVA-MORE](https://github.com/aimagelab/LLaVA-MORE) repository.

## Inference
You can try LLaVA-MORE on the image-to-text task by cloning our repository and running the following script:

```bash
python -u llava/eval/run_llava.py
```
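A typical invocation passes a model checkpoint, an input image, and a text prompt. The sketch below assumes the flag names of upstream LLaVA's `run_llava.py`, which this codebase builds on; the model path and image file are illustrative placeholders, so check the LLaVA-MORE repository for the exact arguments.

```bash
# Hedged example: flag names assume upstream LLaVA's run_llava.py interface;
# the model path and image file below are illustrative placeholders.
python -u llava/eval/run_llava.py \
    --model-path aimagelab/LLaVA_MORE-llama_3_1-8B-pretrain \
    --image-file /path/to/image.jpg \
    --query "Describe the image."
```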

## Citation
If you make use of our work, please cite our repo:

```bibtex
@misc{cocchi2024llavamore,
  title={{LLaVA-MORE: Enhancing Visual Instruction Tuning with LLaMA 3.1}},
  author={Cocchi, Federico and Moratelli, Nicholas and Caffagni, Davide and Sarto, Sara and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
  url={https://github.com/aimagelab/LLaVA-MORE},
  year={2024}
}
```
