SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
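
The cosine score between two embeddings is their dot product after length normalization; that is what this model's similarity method computes. A minimal sketch in plain PyTorch (the helper name is ours, for illustration only, not part of the library):

import torch
import torch.nn.functional as F

def cosine_similarity_matrix(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Normalize each row to unit length; the dot product is then the cosine score
    a = F.normalize(a, p=2, dim=1)
    b = F.normalize(b, p=2, dim=1)
    return a @ b.T  # (n_a, n_b) matrix of scores in [-1, 1]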

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
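
The Pooling module turns the per-token BertModel outputs into a single 384-dimensional sentence embedding by averaging over non-padding tokens (pooling_mode_mean_tokens: True). A rough sketch of that computation, assuming token_embeddings of shape (batch, seq_len, 384) and a 0/1 attention_mask; this illustrates the idea rather than reproducing the module's source:

import torch

def mean_pooling(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Zero out padding positions, then average over the real tokens only
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # (batch, 384)
    counts = mask.sum(dim=1).clamp(min=1e-9)       # guard against empty sequences
    return summed / counts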

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("along26/all-MiniLM-L6-v2_multilingual_malaysian-v8")
# Run inference
sentences = [
    'To calculate the distance between the object and the lens, we can use the lens formula:\n\n1/f = 1/v - 1/u\n\nwhere f is the focal length of the lens, v is the distance between the lens and the image, and u is the distance between the object and the lens.\n\nGiven the focal length (f) is 20 cm and the distance between the lens and the image (v) is 25 cm, we can plug these values into the formula:\n\n1/20 = 1/25 - 1/u\n\nNow, we need to solve for u:\n\n1/u = 1/20 - 1/25\n1/u = (25 - 20) / (20 * 25)\n1/u = 5 / 500\n1/u = 1/100\n\nu = 100 cm\n\nSo, the distance between the object and the lens is 100 cm.',
    'Untuk mengira jarak antara objek dan kanta, kita boleh menggunakan formula kanta:\n\n1/f = 1/v - 1/u\n\ndengan f ialah panjang fokus kanta, v ialah jarak antara kanta dan imej, dan u ialah jarak antara objek dan kanta.\n\nMemandangkan panjang fokus (f) ialah 20 cm dan jarak antara kanta dan imej (v) ialah 25 cm, kita boleh memasukkan nilai ini ke dalam formula:\n\n1/20 = 1/25 - 1/u\n\nSekarang, kami perlu menyelesaikan untuk anda:\n\n1/u = 1/20 - 1/25\n1/u = (25 - 20) / (20 * 25)\n1/u = 5 / 500\n1/u = 1/100\n\nu = 100 cm\n\nJadi, jarak antara objek dan kanta ialah 100 cm.',
    "Najib Razak's trial for corruption and money laundering has had a significant negative impact on public perception of his leadership and the United Malays National Organization (UMNO) party. Najib is the former Prime Minister of Malaysia, and the charges against him are related to the 1MDB scandal, in which billions of dollars were allegedly misappropriated from a Malaysian state investment fund.\n\nThe trial has brought renewed attention to issues of corruption and cronyism in Malaysia, and many Malaysians feel that the UMNO party, which Najib led until his election defeat in 2018, is tainted by association. The party, which has long been a dominant force in Malaysian politics, has seen its popularity decline as a result of the scandal and the trial.\n\nThe trial has also raised questions about the effectiveness of Malaysia's anti-corruption efforts and the rule of law in the country. Many Malaysians are disappointed that it took so long for Najib to be charged and brought to trial, and some feel that the legal process has been politically motivated.\n\nOverall, the trial has contributed to a sense of disillusionment and mistrust among the Malaysian public, and has damaged the reputation of the UMNO party and its leaders. It remains to be seen how the trial will ultimately be resolved, and what the long-term impact will be on Malaysian politics and society.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.4679, 0.9782],
#         [0.4679, 1.0000, 0.4707],
#         [0.9782, 0.4707, 1.0000]])
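
The same embeddings also support semantic search across languages. A small example sketch; the corpus and query strings below are made up for illustration:

# Hypothetical corpus and query, for illustration only
corpus = [
    "Kuala Lumpur is the capital of Malaysia.",
    "The lens formula relates focal length, image distance and object distance.",
]
query = "Apakah ibu negara Malaysia?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine scores between the query and every corpus entry
scores = model.similarity(query_embedding, corpus_embeddings)
best = scores.argmax().item()
print(corpus[best], scores[0, best].item())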

Training Details

Training Dataset

Unnamed Dataset

  • Size: 420,570 training samples
  • Columns: sentence_0, sentence_1, and sentence_2
  • Approximate statistics based on the first 1000 samples:
                sentence_0            sentence_1            sentence_2
    type        string                string                string
    details     min: 6 tokens         min: 6 tokens         min: 6 tokens
                mean: 153.22 tokens   mean: 182.56 tokens   mean: 165.2 tokens
                max: 512 tokens       max: 512 tokens       max: 512 tokens
  • Samples:
    Sample 1
      sentence_0: Kenu ku iya, enti deka nyadi kaban RELA, orang nya patut beumur 18 taun ke atas lalu deka ditapis piak di Bukit Aman dikena nentuka orang ke ngerejista diri nyadi kaban RELA beresi ari sebarang pengawa ke enda menuku.
      sentence_1: Kenu ku iya, enti deka nyadi kaban RELA, orang nya patut beumur 18 taun ke atas lalu deka ditapis piak di Bukit Aman dikena nentuka orang ke ngerejista diri nyadi kaban RELA beresi ari sebarang pengawa ke enda menuku.
      sentence_2: "How can the study of non-perturbative gauge dynamics in the context of string theory shed light on the behaviors and interactions of particles in high-energy physics experiments?"
    Sample 2
      sentence_0: Let A = {1,2,3}. Find the power set of A and show that its cardinality is larger than the cardinality of A itself.
      sentence_1: Biarkan A = {1,2,3}. Cari set kuasa A dan tunjukkan bahawa kardinalitinya lebih besar daripada kardinaliti A itu sendiri.
      sentence_2: How can the Malaysian housing market still be so unaffordable for the average citizen?
    Sample 3
      sentence_0: What is the standard enthalpy change for the dissolution of sodium chloride (NaCl) in water, given that 5.00 g of NaCl is dissolved in 100.0 mL of water and the resulting solution has a final temperature of 25.0°C? The molar enthalpy of dissolution of NaCl is -3.9 kJ/mol.
      sentence_1: Apakah perubahan entalpi standard untuk pembubaran natrium klorida (NaCl) dalam air, memandangkan 5.00 g NaCl dibubarkan dalam 100.0 ml air dan penyelesaian yang dihasilkan mempunyai suhu akhir 25.0 ° C? Entalpi molar pembubaran NaCl ialah -3.9 kJ/mol.
      sentence_2: Why have some opposition politicians and activists been critical of the government's handling of the 1MDB scandal and Najib Razak's prosecution?
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
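
With the Euclidean distance metric and a margin of 5, each triplet treats sentence_0 as the anchor, sentence_1 as the positive, and sentence_2 as the negative, and drives the positive to sit at least 5 units closer to the anchor than the negative. A minimal sketch of the objective being minimized, not the library's internal implementation:

import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=5.0):
    # Euclidean distances from the anchor to the positive and to the negative
    d_pos = F.pairwise_distance(anchor, positive, p=2)
    d_neg = F.pairwise_distance(anchor, negative, p=2)
    # Loss is zero once the positive is at least `margin` closer than the negative
    return F.relu(d_pos - d_neg + margin).mean()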
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
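
For reference, here is a hedged reconstruction of how a run with these hyperparameters could be launched through the Sentence Transformers trainer. The output_dir value and the toy dataset are assumptions for illustration, not taken from this model's actual training script:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Toy triplets for illustration; the real dataset has 420,570 rows
train_dataset = Dataset.from_dict({
    "sentence_0": ["What is the capital of Malaysia?"],
    "sentence_1": ["Apakah ibu negara Malaysia?"],
    "sentence_2": ["Why is the sky blue?"],
})

loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # assumed path
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=3,
    fp16=True,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()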

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss
0.0380 500 4.7496
0.0761 1000 2.0391
0.1141 1500 1.5157
0.1522 2000 1.3982
0.1902 2500 1.3333
0.2283 3000 1.311
0.2663 3500 1.3084
0.3043 4000 1.3112
0.3424 4500 1.3168
0.3804 5000 1.2842
0.4185 5500 1.3008
0.4565 6000 1.3239
0.4946 6500 1.3005
0.5326 7000 1.2905
0.5706 7500 1.2811
0.6087 8000 1.2178
0.6467 8500 1.1743
0.6848 9000 1.1273
0.7228 9500 1.0966
0.7609 10000 1.0909
0.7989 10500 1.0586
0.8369 11000 1.0047
0.8750 11500 0.9998
0.9130 12000 1.0508
0.9511 12500 1.0211
0.9891 13000 0.9711
1.0272 13500 0.961
1.0652 14000 0.9487
1.1032 14500 0.9381
1.1413 15000 0.9497
1.1793 15500 0.9295
1.2174 16000 0.9247
1.2554 16500 0.9079
1.2935 17000 0.8922
1.3315 17500 0.9216
1.3696 18000 0.9004
1.4076 18500 0.8797
1.4456 19000 0.8717
1.4837 19500 0.8594
1.5217 20000 0.8711
1.5598 20500 0.8664
1.5978 21000 0.8623
1.6359 21500 0.8599
1.6739 22000 0.8259
1.7119 22500 0.8739
1.7500 23000 0.8532
1.7880 23500 0.8567
1.8261 24000 0.8519
1.8641 24500 0.8309
1.9022 25000 0.8207
1.9402 25500 0.8312
1.9782 26000 0.8329
2.0163 26500 0.8022
2.0543 27000 0.7744
2.0924 27500 0.7795
2.1304 28000 0.7567
2.1685 28500 0.7797
2.2065 29000 0.7711
2.2445 29500 0.7691
2.2826 30000 0.7578
2.3206 30500 0.7783
2.3587 31000 0.7182
2.3967 31500 0.7639
2.4348 32000 0.7484
2.4728 32500 0.7674
2.5108 33000 0.7663
2.5489 33500 0.764
2.5869 34000 0.7376
2.6250 34500 0.7471
2.6630 35000 0.7437
2.7011 35500 0.7562
2.7391 36000 0.74
2.7771 36500 0.7208
2.8152 37000 0.7392
2.8532 37500 0.7336
2.8913 38000 0.7192
2.9293 38500 0.7383
2.9674 39000 0.7432

Framework Versions

  • Python: 3.12.12
  • Sentence Transformers: 5.1.2
  • Transformers: 4.57.1
  • PyTorch: 2.9.0+cu126
  • Accelerate: 1.11.0
  • Datasets: 4.0.0
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}