SentenceTransformer based on deepvk/USER-bge-m3
This is a sentence-transformers model finetuned from deepvk/USER-bge-m3. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: deepvk/USER-bge-m3
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
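For reference, the same three-module pipeline can be assembled by hand with the sentence_transformers.models API. This is only an illustrative sketch; loading the published checkpoint restores this exact configuration automatically.

from sentence_transformers import SentenceTransformer, models

# Illustrative reconstruction of the architecture printed above; loading
# "Data-Lab/USER-bge-m3-embedder-td" directly restores this configuration.
transformer = models.Transformer("deepvk/USER-bge-m3", max_seq_length=512)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 1024
    pooling_mode_cls_token=True,     # use the [CLS] token as the sentence embedding
    pooling_mode_mean_tokens=False,  # disable the default mean pooling
)
normalize = models.Normalize()  # unit-length vectors: dot product equals cosine similarity
model = SentenceTransformer(modules=[transformer, pooling, normalize])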
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Data-Lab/USER-bge-m3-embedder-td")
# Run inference
sentences = [
    # "baby porridge" (a short search query)
    'детская каша',
    # "Children's oatmeal porridge 'Mishka'. Sweet oatmeal with blueberries and
    # bananas. Can be cooked with coconut milk." (a relevant product)
    'Каша овсяная детская "Мишка" Сладкая овсяная каша с голубикой и бананами. Можно приготовить на кокосовом молоке',
    # "Tiramisu dessert, 300 g. An exquisite Italian dessert with a twist. Our
    # tiramisu has lots (and lots!) of cream and mascarpone, so the treat is
    # incredibly delicate!" (an unrelated product)
    'Десерт "Тирамису", 300 г Изысканный итальянский десерт в нестандартном исполнении. В нашем Тирамису много (очень много!) сливочного крема и Маскарпоне, поэтому лакомство невероятно нежное!',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
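Because the final Normalize module produces unit-length vectors, cosine similarity reduces to a dot product, so the similarity matrix above can be used directly for ranking. A small follow-up example reusing the embeddings computed above (the exact scores are illustrative and depend on the checkpoint revision):

# Rank the two product descriptions against the query (sentences[0]).
query_embedding = embeddings[0]        # 'детская каша' ("baby porridge")
candidate_embeddings = embeddings[1:]  # the two product descriptions
scores = model.similarity(query_embedding, candidate_embeddings)  # shape [1, 2]
best = int(scores.argmax())
print(sentences[1 + best])  # expected: the children's oatmeal porridge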
Evaluation
Metrics
Triplet
- Dataset: `dev`
- Evaluated with `TripletEvaluator`
| Metric | Value |
|---|---|
| cosine_accuracy | 0.9188 |
| dot_accuracy | 0.0803 |
| manhattan_accuracy | 0.9170 |
| euclidean_accuracy | 0.9188 |
| max_accuracy | 0.9188 |
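Each accuracy is the fraction of dev triplets whose anchor lies closer to its positive than to its negative under the given distance. A sketch of reproducing such numbers with TripletEvaluator, assuming you supply your own (anchor, positive, negative) lists, since the dev split itself is not published with this card:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

# Hypothetical stand-in triplets; substitute the real dev split here.
anchors = ['детская каша']
positives = ['Каша овсяная детская "Мишка"']
negatives = ['Десерт "Тирамису", 300 г']

model = SentenceTransformer("Data-Lab/USER-bge-m3-embedder-td")
evaluator = TripletEvaluator(
    anchors=anchors,
    positives=positives,
    negatives=negatives,
    name="dev",
)
results = evaluator(model)  # dict including "dev_cosine_accuracy" etc.
print(results)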
Training Details
Training Dataset
Unnamed Dataset
- Size: 10,189 training samples
- Columns: `sentence_0`, `sentence_1`, and `sentence_2`
- Approximate statistics based on the first 1000 samples:
|  | sentence_0 | sentence_1 | sentence_2 |
|---|---|---|---|
| type | string | string | string |
| details | min: 3 tokens, mean: 7.85 tokens, max: 30 tokens | min: 6 tokens, mean: 61.74 tokens, max: 377 tokens | min: 5 tokens, mean: 64.71 tokens, max: 393 tokens |
- Samples:
| sentence_0 | sentence_1 | sentence_2 |
|---|---|---|
| хурма | Чипсы из хурмы, 25 г Натуральные чипсы из хурмы, без сахара. Мягкие, медово-фруктовые | Салат мимоза, 300 г Классический салат мимоза с горбушей, отварными овощами и куриными желтками. |
| жареное мясо | КК_котлета куриная жареная, вес | Баклажаны "Пармиджано" Мама миа, это же настоящая итальянская пармиджана! Нежные ломтики баклажанов, много томатов и ещё больше тягучего сыра. Очень насыщенно, сочно и аппетитно пряно. Баклажаны для этого рецепта не обжариваются, а запекаются в духовке, что делает блюдо более полезным и изысканным. |
| бедро цыпленка бройлера | Бедро цыплят-бройлеров Халяль 1 кг Сочное бедро цыпленка, подходит для маринования, тушения и запекания | Мясо бедра (Филе бедра) индейки в маринаде "Чесночный" 1 кг Диетическое, нежирное филе бедра индейки с деликатным вкусом и ароматом. В меру подсолено и приправлено острым чесночком и травами. |

- Loss: `TripletLoss` with these parameters: `{ "distance_metric": "TripletDistanceMetric.COSINE", "triplet_margin": 0.5 }`
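With the cosine metric, this loss is max(d(a, p) - d(a, n) + 0.5, 0), where d(x, y) = 1 - cos(x, y): the anchor must end up at least the margin closer to its positive than to its negative. A minimal sketch of constructing the same loss object:

from sentence_transformers import SentenceTransformer, losses

# The loss configuration listed above; TripletDistanceMetric.COSINE
# defines d(x, y) = 1 - cos(x, y).
model = SentenceTransformer("deepvk/USER-bge-m3")
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.COSINE,
    triplet_margin=0.5,
)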
Training Hyperparameters
Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
All Hyperparameters
Click to expand
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
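Taken together, the run can be reconstructed roughly as below with the SentenceTransformerTrainer API. This is a hedged sketch, not the published training script: the 10,189-triplet dataset is not released, so a one-row stand-in is used, and the output path is hypothetical.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# One-row stand-in for the unreleased training triplets; the columns are fed
# to TripletLoss in order as (anchor, positive, negative).
train_dataset = Dataset.from_dict({
    "sentence_0": ['детская каша'],
    "sentence_1": ['Каша овсяная детская "Мишка"'],
    "sentence_2": ['Десерт "Тирамису", 300 г'],
})

model = SentenceTransformer("deepvk/USER-bge-m3")
loss = losses.TripletLoss(model, triplet_margin=0.5)
args = SentenceTransformerTrainingArguments(
    output_dir="user-bge-m3-embedder-td",  # hypothetical output path
    num_train_epochs=3,
    per_device_train_batch_size=4,
    fp16=True,  # matches the run above; requires a CUDA GPU
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()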
Training Logs
| Epoch | Step | Training Loss | dev_max_accuracy |
|---|---|---|---|
| 0.3928 | 500 | 0.2477 | - |
| 0.7855 | 1000 | 0.182 | 0.9064 |
| 1.0 | 1273 | - | 0.9073 |
| 1.1783 | 1500 | 0.157 | - |
| 1.5711 | 2000 | 0.1234 | 0.9029 |
| 1.9639 | 2500 | 0.0993 | - |
| 2.0 | 2546 | - | 0.9179 |
| 2.3566 | 3000 | 0.0864 | 0.9170 |
| 2.7494 | 3500 | 0.0691 | - |
| 3.0 | 3819 | - | 0.9188 |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.0
- PyTorch: 2.3.1+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
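To approximate this environment, the listed versions can be pinned directly (a hedged invocation; the PyTorch CUDA 12.1 build may need the appropriate wheel index for your platform):

pip install "sentence-transformers==3.2.0" "transformers==4.44.0" "torch==2.3.1" "accelerate==0.31.0" "datasets==2.20.0" "tokenizers==0.19.1"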
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
TripletLoss
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}