Description

Liquid AI's LFM2-1.2B model (LiquidAI/LFM2-1.2B) finetuned on the frenchSUM dataset to summarize French texts.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CATIE-AQ/LMF2-1.2B_french_summary"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="bfloat16",
    trust_remote_code=True,
    # attn_implementation="flash_attention_2",  # uncomment on a compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = """Résume l'article suivant :\n""" + "you_text_to_summarize"
tokenizer.padding_side = "left"
tokenizer.truncation_side = "right"

# Apply the chat template to prepare the input
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
).to(model.device)

# Generate the output from the model
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_new_tokens=2048,
)

summary_text = tokenizer.decode(
    output[0][input_ids.shape[1]:],  # keep only the newly generated tokens (drop the prompt)
    skip_special_tokens=True,  # skip special tokens for a cleaner output
)

print(summary_text)
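
For quick experiments, the same model can also be run through the high-level transformers pipeline API, which applies the chat template and decodes the output for you. The sketch below is not from the model card; it assumes a recent transformers release whose text-generation pipeline accepts chat-style message lists, and it reuses the generation settings shown above.

import torch
from transformers import pipeline

summarizer = pipeline(
    "text-generation",
    model="CATIE-AQ/LMF2-1.2B_french_summary",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# Replace "your_text_to_summarize" with the French text you want to summarize
messages = [{"role": "user", "content": "Résume l'article suivant :\n" + "your_text_to_summarize"}]

result = summarizer(
    messages,
    do_sample=True,
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_new_tokens=2048,
)

# The pipeline returns the whole conversation; the last message is the generated summary
print(result[0]["generated_text"][-1]["content"])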

License

LFM1.0
