SentenceTransformer

This is a sentence-transformers model trained on a dataset of 99,840 sentence pairs. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
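
As a quick check, the maximum sequence length and output dimensionality listed above can be read off the loaded model. A minimal sketch, reusing the placeholder model id from the Usage section below:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")
print(model.max_seq_length)                      # 8192
print(model.get_sentence_embedding_dimension())  # 768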

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
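
The Pooling module above applies attention-mask-aware mean pooling over the token embeddings. As a rough illustration of what that means, the sketch below reproduces the operation with plain transformers and PyTorch, assuming the same checkpoint can also be loaded directly with AutoModel/AutoTokenizer (SentenceTransformer does all of this internally):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence_transformers_model_id")
encoder = AutoModel.from_pretrained("sentence_transformers_model_id")

def mean_pool(last_hidden_state, attention_mask):
    # Zero out padding positions, then average over the sequence dimension
    mask = attention_mask.unsqueeze(-1).to(last_hidden_state.dtype)
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

batch = tokenizer(["An example sentence"], padding=True, truncation=True,
                  max_length=8192, return_tensors="pt")
with torch.no_grad():
    out = encoder(**batch)
embeddings = mean_pool(out.last_hidden_state, batch["attention_mask"])
print(embeddings.shape)  # torch.Size([1, 768])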

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'cta test̾i sur: u̾bi qd̾ madauit in mil egones. Q uod disposunt ad abrahã. mũti sui ad p̃saaci Et statuit il acob ĩ p̾ceptũ: ⁊ isrł mn testiñ etꝰ Dices tibi dabo t̾ram chanaan: fu ncdũ heditatis ur̃e. Dũ e̾e̾nt nũo ocui. paucissimi ⁊ ĩcole ouis. Et ꝑtͣni eẽt de gnͣte ĩ gentẽ: ⁊ de regno ad ulũ alterũ. Non reliquit hoĩem',
    'cta test̾i sur: u̾bi qd̾ madauit in mil egones. Q uod disposunt ad abrahã. mũti sui ad p̃saaci Et statuit il acob ĩ p̾ceptũ: ⁊ isrł mn testiñ etꝰ Dices tibi dabo t̾ram chanaan: fu ncdũ heditatis ur̃e. Dũ e̾e̾nt nũo ocui. paucissimi ⁊ ĩcole ouis. Et ꝑtͣni eẽt de gnͣte ĩ gentẽ: ⁊ de regno ad ulũ alterũ. Non reliquit hoĩem',
    'p̾mioꝵ. p̃s. b̾ildixit finis tuus inte. Et ĩminitas apee cato. Qua xp̃c donatus e̾ ps. p̾ucinsti eũ i bñ. dicidis ¶Infernans quo\uf1ac dupiex .s. adinacio. ps. laudat᷑ͣ ptc̃ce indesidus aĩe sue ⁊ ñquis bñdi. et cũ quis sibi tribuit bona que ht̃ atco. Iob. timebat enĩ ne forte peccau̾int fuii eius. ⁊ bñdix̾int deo incordib\uf1ac suis. Corꝑans ẽ ad carnis delecta tr̃em us. or̃s caro feñ. ⁊ oĩs gła euis qiͣ d̾r ꝑ ysaiam. ue qͥ niungitis domũ addom̃. ⁊ agr̃ ago copłatis us\uf1ac ad t̾minũ ioci. Nñquid ħ̾itabitis uos so',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 1.0000, 0.2812],
#         [1.0000, 1.0000, 0.2812],
#         [0.2812, 0.2812, 1.0000]])
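
Beyond pairwise similarity, the same encode/similarity calls cover simple semantic search. A minimal sketch, reusing the placeholder model id; the corpus strings are taken from the training samples shown further down, and the query is illustrative:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")

corpus = [
    "Per totum namque mundum est mundus; et mundum persequitur mundus.",
    "Substantiales partes explicantur, cum ex genere ac differentiis definitio constituitur.",
    "Genus enim quod singulariter praedicatur, speciei totum est.",
]
query = "definitio ex genere ac differentiis"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# similarity() returns a (num_queries, num_corpus) tensor of cosine scores
scores = model.similarity(query_embedding, corpus_embeddings)[0]
for idx in scores.argsort(descending=True).tolist():
    print(f"{scores[idx]:.4f}  {corpus[idx]}")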

Training Details

Training Dataset

Unnamed Dataset

  • Size: 99,840 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min: 6 tokens, mean: 85.65 tokens, max: 473 tokens
    • sentence_1: string; min: 6 tokens, mean: 85.65 tokens, max: 473 tokens
  • Samples (sentence_0 and sentence_1 are identical in each of these examples, so each pair is shown once):
    • Per totum namque mundum est mundus; et mundum persequitur mundus, coinquinatus mundum, perditus redemptum, damnatus salvatum.
    • motꝰ siait supͣ sepe dixmꝰ gꝰ anteon nem generanonem est motus ge eti am aute generaitionem primi mobilis est mo tus go etiam motus est. inte p̾mum mo tum᷑ ꝙ est impossibile go fint hec caisa ꝙ motus non eet̾ sꝑ momĩ p̾tito iprẽ ꝙ primum mobile oportet᷑ prius generari mẽe et postea moneri qr absq dubio se queret᷑ ꝙ quedam mutatio eet̃ anteil
    • Dictum est, id quod in nomine confuse significaretur, in definitione quae fit enumeratione partium, aperiri atque explicari. Quod fieri non potest, nisi per quarumdam partium nuncupationem; nihil enim dum explicatur oratione, totum simul dici potest. Quae cum ita sint, cumque omnis hujusmodi definitio quaedam sit partium distributio, quatuor his modis fieri potest. Aut enim substantiales partes explicantur, aut proprietatis partes dicuntur, aut quasi totius membra enumerantur, aut tanquam species dividuntur. Substantiales partes explicantur, cum ex genere ac differentiis definitio constituitur. Genus enim quod singulariter praedicatur, speciei totum est. Id genus sumptum in definitione, pars quaedam fit. Non enim solum speciem complet, nisi adjiciantur etiam differentiae, in quibus eadem ratio quae in genere est. Nam cum ipsae singulariter dictae totam speciem claudant, in definitione sumptae, partes speciei fiunt, quia non solum speciem quidem esse designant, sed etiam genus.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
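
For reference, this loss configuration corresponds to the following construction in code. A sketch, reusing the placeholder model id; cos_sim comes from sentence_transformers.util:

from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence_transformers_model_id")
# scale and similarity_fct mirror the parameters listed above
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)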
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • num_train_epochs: 1
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
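
Putting the loss above and these non-default values together, a training run roughly equivalent to this card's setup could look like the sketch below; the output directory and the one-row dataset are hypothetical stand-ins, and all other hyperparameters keep their defaults:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence_transformers_model_id")
loss = MultipleNegativesRankingLoss(model)

# Hypothetical dataset with the sentence_0 / sentence_1 columns described above
train_dataset = Dataset.from_dict({
    "sentence_0": ["Per totum namque mundum est mundus."],
    "sentence_1": ["Per totum namque mundum est mundus."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="output/model",          # hypothetical path
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    num_train_epochs=1,
    fp16=True,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()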

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss
0.6410 500 0.1311

Framework Versions

  • Python: 3.12.11
  • Sentence Transformers: 5.1.0
  • Transformers: 4.56.0
  • PyTorch: 2.8.0+cu128
  • Accelerate: 1.10.1
  • Datasets: 4.0.0
  • Tokenizers: 0.22.0
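
To approximate this environment, the listed library versions can be pinned at install time, for example:

pip install "sentence-transformers==5.1.0" "transformers==4.56.0" "accelerate==1.10.1" "datasets==4.0.0" "tokenizers==0.22.0"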

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}