SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a sentence-transformers model fine-tuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L6-v2
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
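The Pooling module above uses mean pooling (`pooling_mode_mean_tokens: True`): token embeddings are averaged, ignoring padding, to produce one 384-dimensional vector per input. A minimal NumPy sketch of that operation, using toy 4-dimensional vectors in place of the model's real 384-dimensional ones:

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings, ignoring padding positions.

    token_embeddings: (seq_len, dim) array of per-token vectors.
    attention_mask:   (seq_len,) array of 1s (real tokens) and 0s (padding).
    """
    mask = attention_mask[:, None].astype(float)    # (seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=0)  # sum over real tokens only
    count = np.clip(mask.sum(), 1e-9, None)         # avoid division by zero
    return summed / count

# Toy example: 3 real tokens plus 1 padding token.
tokens = np.array([[1.0, 2.0, 3.0, 4.0],
                   [3.0, 2.0, 1.0, 0.0],
                   [2.0, 2.0, 2.0, 2.0],
                   [9.0, 9.0, 9.0, 9.0]])  # padding row, must be ignored
mask = np.array([1, 1, 1, 0])
print(mean_pool(tokens, mask))  # [2. 2. 2. 2.]
```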
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("along26/all-MiniLM-L6-v2_multilingual_malaysian-v8")
# Run inference
sentences = [
'To calculate the distance between the object and the lens, we can use the lens formula:\n\n1/f = 1/v - 1/u\n\nwhere f is the focal length of the lens, v is the distance between the lens and the image, and u is the distance between the object and the lens.\n\nGiven the focal length (f) is 20 cm and the distance between the lens and the image (v) is 25 cm, we can plug these values into the formula:\n\n1/20 = 1/25 - 1/u\n\nNow, we need to solve for u:\n\n1/u = 1/20 - 1/25\n1/u = (25 - 20) / (20 * 25)\n1/u = 5 / 500\n1/u = 1/100\n\nu = 100 cm\n\nSo, the distance between the object and the lens is 100 cm.',
'Untuk mengira jarak antara objek dan kanta, kita boleh menggunakan formula kanta:\n\n1/f = 1/v - 1/u\n\ndengan f ialah panjang fokus kanta, v ialah jarak antara kanta dan imej, dan u ialah jarak antara objek dan kanta.\n\nMemandangkan panjang fokus (f) ialah 20 cm dan jarak antara kanta dan imej (v) ialah 25 cm, kita boleh memasukkan nilai ini ke dalam formula:\n\n1/20 = 1/25 - 1/u\n\nSekarang, kami perlu menyelesaikan untuk anda:\n\n1/u = 1/20 - 1/25\n1/u = (25 - 20) / (20 * 25)\n1/u = 5 / 500\n1/u = 1/100\n\nu = 100 cm\n\nJadi, jarak antara objek dan kanta ialah 100 cm.',
"Najib Razak's trial for corruption and money laundering has had a significant negative impact on public perception of his leadership and the United Malays National Organization (UMNO) party. Najib is the former Prime Minister of Malaysia, and the charges against him are related to the 1MDB scandal, in which billions of dollars were allegedly misappropriated from a Malaysian state investment fund.\n\nThe trial has brought renewed attention to issues of corruption and cronyism in Malaysia, and many Malaysians feel that the UMNO party, which Najib led until his election defeat in 2018, is tainted by association. The party, which has long been a dominant force in Malaysian politics, has seen its popularity decline as a result of the scandal and the trial.\n\nThe trial has also raised questions about the effectiveness of Malaysia's anti-corruption efforts and the rule of law in the country. Many Malaysians are disappointed that it took so long for Najib to be charged and brought to trial, and some feel that the legal process has been politically motivated.\n\nOverall, the trial has contributed to a sense of disillusionment and mistrust among the Malaysian public, and has damaged the reputation of the UMNO party and its leaders. It remains to be seen how the trial will ultimately be resolved, and what the long-term impact will be on Malaysian politics and society.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.4679, 0.9782],
# [0.4679, 1.0000, 0.4707],
# [0.9782, 0.4707, 1.0000]])
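Because the similarity function is cosine similarity, semantic search with this model reduces to ranking corpus embeddings by their cosine score against a query embedding (`model.similarity` computes the same scores). A minimal NumPy sketch, with toy 3-dimensional vectors standing in for the model's 384-dimensional embeddings:

```python
import numpy as np

def cosine_similarity_matrix(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a and rows of b."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T

# Toy "corpus embeddings"; in practice these come from model.encode(...).
corpus = np.array([[1.0, 0.0, 0.0],
                   [0.9, 0.1, 0.0],
                   [0.0, 0.0, 1.0]])
query = np.array([[1.0, 0.05, 0.0]])

scores = cosine_similarity_matrix(query, corpus)[0]
ranking = np.argsort(-scores)  # best match first
print(ranking)  # [0 1 2]: closest corpus item first, unrelated item last
```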
Training Details
Training Dataset
Unnamed Dataset
- Size: 420,570 training samples
- Columns: sentence_0, sentence_1, and sentence_2
- Approximate statistics based on the first 1000 samples:

|  | sentence_0 | sentence_1 | sentence_2 |
|---|---|---|---|
| type | string | string | string |
| details | min: 6 tokens, mean: 153.22 tokens, max: 512 tokens | min: 6 tokens, mean: 182.56 tokens, max: 512 tokens | min: 6 tokens, mean: 165.2 tokens, max: 512 tokens |
- Samples:

| sentence_0 | sentence_1 | sentence_2 |
|---|---|---|
| Kenu ku iya, enti deka nyadi kaban RELA, orang nya patut beumur 18 taun ke atas lalu deka ditapis piak di Bukit Aman dikena nentuka orang ke ngerejista diri nyadi kaban RELA beresi ari sebarang pengawa ke enda menuku. | Kenu ku iya, enti deka nyadi kaban RELA, orang nya patut beumur 18 taun ke atas lalu deka ditapis piak di Bukit Aman dikena nentuka orang ke ngerejista diri nyadi kaban RELA beresi ari sebarang pengawa ke enda menuku. | "How can the study of non-perturbative gauge dynamics in the context of string theory shed light on the behaviors and interactions of particles in high-energy physics experiments?" |
| Let A = {1,2,3}. Find the power set of A and show that its cardinality is larger than the cardinality of A itself. | Biarkan A = {1,2,3}. Cari set kuasa A dan tunjukkan bahawa kardinalitinya lebih besar daripada kardinaliti A itu sendiri. | How can the Malaysian housing market still be so unaffordable for the average citizen? |
| What is the standard enthalpy change for the dissolution of sodium chloride (NaCl) in water, given that 5.00 g of NaCl is dissolved in 100.0 mL of water and the resulting solution has a final temperature of 25.0°C? The molar enthalpy of dissolution of NaCl is -3.9 kJ/mol. | Apakah perubahan entalpi standard untuk pembubaran natrium klorida (NaCl) dalam air, memandangkan 5.00 g NaCl dibubarkan dalam 100.0 ml air dan penyelesaian yang dihasilkan mempunyai suhu akhir 25.0 °C? Entalpi molar pembubaran NaCl ialah -3.9 kJ/mol. | Why have some opposition politicians and activists been critical of the government's handling of the 1MDB scandal and Najib Razak's prosecution? |

- Loss: TripletLoss with these parameters: { "distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 5 }
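With these parameters, the loss for each (anchor, positive, negative) triplet is max(0, d(a, p) - d(a, n) + margin), where d is Euclidean distance and the margin is 5. A minimal NumPy sketch of that formula with toy 2-dimensional vectors:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=5.0):
    """max(0, d(a, p) - d(a, n) + margin) with Euclidean distance d,
    matching distance_metric=EUCLIDEAN and triplet_margin=5 above."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([1.0, 0.0])  # close to the anchor
n = np.array([4.0, 0.0])  # farther from the anchor

print(triplet_loss(a, p, n))  # 1 - 4 + 5 = 2.0
```

The loss only reaches zero once the negative is at least `margin` farther from the anchor than the positive, which is what pushes translations together and unrelated text apart during training.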
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- fp16: True
- multi_dataset_batch_sampler: round_robin
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 3
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- parallelism_config: None
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- project: huggingface
- trackio_space_id: trackio
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: no
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: True
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
- router_mapping: {}
- learning_rate_mapping: {}
Training Logs
| Epoch | Step | Training Loss |
|---|---|---|
| 0.0380 | 500 | 4.7496 |
| 0.0761 | 1000 | 2.0391 |
| 0.1141 | 1500 | 1.5157 |
| 0.1522 | 2000 | 1.3982 |
| 0.1902 | 2500 | 1.3333 |
| 0.2283 | 3000 | 1.311 |
| 0.2663 | 3500 | 1.3084 |
| 0.3043 | 4000 | 1.3112 |
| 0.3424 | 4500 | 1.3168 |
| 0.3804 | 5000 | 1.2842 |
| 0.4185 | 5500 | 1.3008 |
| 0.4565 | 6000 | 1.3239 |
| 0.4946 | 6500 | 1.3005 |
| 0.5326 | 7000 | 1.2905 |
| 0.5706 | 7500 | 1.2811 |
| 0.6087 | 8000 | 1.2178 |
| 0.6467 | 8500 | 1.1743 |
| 0.6848 | 9000 | 1.1273 |
| 0.7228 | 9500 | 1.0966 |
| 0.7609 | 10000 | 1.0909 |
| 0.7989 | 10500 | 1.0586 |
| 0.8369 | 11000 | 1.0047 |
| 0.8750 | 11500 | 0.9998 |
| 0.9130 | 12000 | 1.0508 |
| 0.9511 | 12500 | 1.0211 |
| 0.9891 | 13000 | 0.9711 |
| 1.0272 | 13500 | 0.961 |
| 1.0652 | 14000 | 0.9487 |
| 1.1032 | 14500 | 0.9381 |
| 1.1413 | 15000 | 0.9497 |
| 1.1793 | 15500 | 0.9295 |
| 1.2174 | 16000 | 0.9247 |
| 1.2554 | 16500 | 0.9079 |
| 1.2935 | 17000 | 0.8922 |
| 1.3315 | 17500 | 0.9216 |
| 1.3696 | 18000 | 0.9004 |
| 1.4076 | 18500 | 0.8797 |
| 1.4456 | 19000 | 0.8717 |
| 1.4837 | 19500 | 0.8594 |
| 1.5217 | 20000 | 0.8711 |
| 1.5598 | 20500 | 0.8664 |
| 1.5978 | 21000 | 0.8623 |
| 1.6359 | 21500 | 0.8599 |
| 1.6739 | 22000 | 0.8259 |
| 1.7119 | 22500 | 0.8739 |
| 1.7500 | 23000 | 0.8532 |
| 1.7880 | 23500 | 0.8567 |
| 1.8261 | 24000 | 0.8519 |
| 1.8641 | 24500 | 0.8309 |
| 1.9022 | 25000 | 0.8207 |
| 1.9402 | 25500 | 0.8312 |
| 1.9782 | 26000 | 0.8329 |
| 2.0163 | 26500 | 0.8022 |
| 2.0543 | 27000 | 0.7744 |
| 2.0924 | 27500 | 0.7795 |
| 2.1304 | 28000 | 0.7567 |
| 2.1685 | 28500 | 0.7797 |
| 2.2065 | 29000 | 0.7711 |
| 2.2445 | 29500 | 0.7691 |
| 2.2826 | 30000 | 0.7578 |
| 2.3206 | 30500 | 0.7783 |
| 2.3587 | 31000 | 0.7182 |
| 2.3967 | 31500 | 0.7639 |
| 2.4348 | 32000 | 0.7484 |
| 2.4728 | 32500 | 0.7674 |
| 2.5108 | 33000 | 0.7663 |
| 2.5489 | 33500 | 0.764 |
| 2.5869 | 34000 | 0.7376 |
| 2.6250 | 34500 | 0.7471 |
| 2.6630 | 35000 | 0.7437 |
| 2.7011 | 35500 | 0.7562 |
| 2.7391 | 36000 | 0.74 |
| 2.7771 | 36500 | 0.7208 |
| 2.8152 | 37000 | 0.7392 |
| 2.8532 | 37500 | 0.7336 |
| 2.8913 | 38000 | 0.7192 |
| 2.9293 | 38500 | 0.7383 |
| 2.9674 | 39000 | 0.7432 |
Framework Versions
- Python: 3.12.12
- Sentence Transformers: 5.1.2
- Transformers: 4.57.1
- PyTorch: 2.9.0+cu126
- Accelerate: 1.11.0
- Datasets: 4.0.0
- Tokenizers: 0.22.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
TripletLoss
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}