SentenceTransformer based on BAAI/bge-large-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-large-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-large-en-v1.5
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': True, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
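
In plain terms: the Transformer module runs the lowercased input through a BERT encoder truncated at 384 tokens, the Pooling module keeps only the [CLS] token embedding, and Normalize L2-normalizes it so that dot products equal cosine similarities. Below is a minimal sketch of the same computation using the transformers library directly; it is illustrative only, and assumes the underlying BERT weights load with AutoModel (as they do for standard Sentence Transformers repositories):

from transformers import AutoModel, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("dhruvnayee/test_help_text")
bert = AutoModel.from_pretrained("dhruvnayee/test_help_text")

def encode(texts):
    # Transformer: tokenize with the same 384-token limit as above
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=384, return_tensors="pt")
    with torch.no_grad():
        output = bert(**batch)
    # Pooling: pooling_mode_cls_token=True keeps the first ([CLS]) token
    cls = output.last_hidden_state[:, 0]
    # Normalize: L2-normalize so dot product == cosine similarity
    return torch.nn.functional.normalize(cls, p=2, dim=1)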

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("dhruvnayee/test_help_text")
# Run inference
sentences = [
    'Is there a specific location where I can find workspace filters?',
    '## Windows\n\nYou can access the filters of a workspace in the grid of filters.\n\nThe filter window has most properties of the filter.',
    'Find/Replace Panel\n\nSee\n\nchoosers and panels\n\nfor information on\n    displaying the Find/Replace Panel.\n\nThe Find/Replace Panel allows you to search for specific text in the\n\nproperties\n\nof the\n\ncomponents\n\nof the open\n\nworkspaces\n\nand\n\nresults workspaces\n\n.\n\nYou should enter the text for which you wish to search in the\n\nFind what\n\nedit field.\n\nYou should select the parts of the open workspaces and results workspaces within which you\n    wish to search in the tree under\n\nWithin\n\n.\n\nYou can select multiple items discontinuously by holding down the\n\nCtrl\n\nkey while clicking with the mouse.\n\nYou can use the\n\nName\n\n,\n\nFormula\n\nand\n\nAll\n     fields\n\ncheckboxes to specify whether the search should include the\n\nName\n\nproperty, the\n\nFormula\n\nproperty or all properties, respectively. You\n    must check at least one of these checkboxes so that there are some properties in which to\n    search.\n\nYou can also select further search options:\n\n* Match case - check this checkbox to perform a case-sensitive search\n* Match whole - check this checkbox to exclude matches with parts of words, including\n     names of variables and components\n* Ignore spaces - check this checkbox to ignore all white space in the properties being\n     searched\n* Ignore info fields - check this checkbox to exclude the Description, Documentation, Last modified, Modified by, Path, Protected by and Reserved by properties.\n\nYou should press the\n\nFind\n\nbutton to start the search.\n\nAfter searching the lower pane will display the number of occurrences of the text that have\n    been found and provide a tree showing where these are. You can double-click on any of the\n    results to open that component in the Central Window, with the found item selected.\n\nYou can select items in the tree if you wish to replace the found text in these items. You\n    should then type the text to replace the found text in the\n\nReplace with\n\nedit field and click the\n\nReplace\n\nbutton.\n\nThe read-only icon\n\nnext to a tree\n    item indicates that it has been\n\nprotected\n\nand so none of its\n    text can be replaced using this feature.\n\nYou can drag or copy tree items from the Find/Replace Panel into the\n\nCentral Window\n\n.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, -0.9967, -0.9964],
#         [-0.9967,  1.0000,  0.9994],
#         [-0.9964,  0.9994,  1.0000]])
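
Since the embeddings are unit-normalized, the cosine scores returned by model.similarity can be used directly to rank passages against a query. A small semantic-search sketch (the query and passages below are made-up placeholders):

# Rank candidate passages against a query by cosine similarity
query_embedding = model.encode(["How do I replace text across a workspace?"])
passage_embeddings = model.encode([
    "The Find/Replace Panel allows you to search for specific text in the properties of components.",
    "You can access the filters of a workspace in the grid of filters.",
])
scores = model.similarity(query_embedding, passage_embeddings)  # shape [1, 2]
best = int(scores.argmax())
print(f"Best passage: {best}, score: {scores[0, best]:.4f}")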

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 16,909 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min 7 tokens, mean 18.63 tokens, max 53 tokens
    • positive: string; min 4 tokens, mean 188.63 tokens, max 384 tokens
    • negative: string; min 3 tokens, mean 150.13 tokens, max 384 tokens
  • Samples:
    • Sample 1:
      • anchor: What is the purpose of the Analyzer tab in a results workspace?
      • positive: Analyzer: The Analyzer tab of a results workspace shows how the variables in the results workspace depend on each other. If the results workspace contains sample output, the Analyzer shows these calculated results.
      • negative: Analyzer: The Analyzer tool for a component shows how variables in the component depend on each other. Most components that contain variables with formulas have an Analyzer tab at the bottom of their component window. The Analyzer tab gives access to the Analyzer tool. Components with an Analyzer tab include assumption sets, data views, database views, initialization modules, layer modules, modules, MtF views, programs, projection processes, stochastic processes, and results workspaces. The Analyzer tab of a results workspace differs from the Analyzer tab of the other components and is covered separately.
    • Sample 2:
      • anchor: What kind of output is displayed in the Analyzer if available?
      • positive: Analyzer: The Analyzer tab of a results workspace shows how the variables in the results workspace depend on each other. If the results workspace contains sample output, the Analyzer shows these calculated results.
      • negative: Accessing output: You can view and use the output from R³S Modeler in a variety of different ways.
    • Sample 3:
      • anchor: Where can I find the dependency relationships between variables in my results?
      • positive: Analyzer: The Analyzer tab of a results workspace shows how the variables in the results workspace depend on each other. If the results workspace contains sample output, the Analyzer shows these calculated results.
      • negative: Analyzer dependency diagram: The dependency diagram of the Analyzer tab of a results workspace shows which variable you are currently analyzing with the variables that it depends on and the variables that depend upon it. You can double-click another variable in the dependency diagram to analyze that variable. The dependency diagram shows the value of each variable if this is available in sample output. The dependency diagram is divided into three strips of variables: the top strip shows variables whose value depends on the value of the current variable (its dependants); the middle strip contains the variable currently being analyzed; the bottom strip shows variables on which the value of the current variable depends (its precedents). Each variable has a box that shows an icon representing the data type of the variable, a name bar that shows the name of the variable, and a value box that shows the value of the variable. The variable boxes are linked by arrows that show the ...
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
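
For reference, these parameters correspond to the following loss construction in Sentence Transformers (a sketch; model is the SentenceTransformer instance being trained):

from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

# Euclidean distance between embeddings with a margin of 5, as listed above
loss = TripletLoss(
    model=model,
    distance_metric=TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)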
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • gradient_accumulation_steps: 2
  • learning_rate: 2e-05
  • num_train_epochs: 2
  • warmup_ratio: 0.05
  • bf16: True
  • dataloader_num_workers: 2
  • remove_unused_columns: False
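
Taken together, these values correspond to a training setup roughly like the sketch below. The data file name and output directory are hypothetical; the card only states that training used a json dataset with anchor, positive, and negative columns:

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("BAAI/bge-large-en-v1.5")
# Hypothetical file name; the card only identifies the data as "json"
train_dataset = load_dataset("json", data_files="triplets.jsonl", split="train")
loss = TripletLoss(model, TripletDistanceMetric.EUCLIDEAN, triplet_margin=5)

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # hypothetical
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,  # effective batch size of 32
    learning_rate=2e-5,
    num_train_epochs=2,
    warmup_ratio=0.05,
    bf16=True,
    dataloader_num_workers=2,
    remove_unused_columns=False,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # columns: anchor, positive, negative
    loss=loss,
)
trainer.train()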

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 2
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.05
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 2
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: False
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss
0.0946 50 9.7648
0.1892 100 9.3037
0.2838 150 9.1803
0.3784 200 9.2374
0.4730 250 9.1815
0.5676 300 9.2019
0.6623 350 9.2085
0.7569 400 9.0603
0.8515 450 9.1276
0.9461 500 9.1794
1.0397 550 9.0348
1.1343 600 9.1246
1.2289 650 9.1251
1.3236 700 9.1681
1.4182 750 8.907
1.5128 800 9.0067
1.6074 850 9.1056
1.7020 900 9.0715
1.7966 950 8.9425
1.8912 1000 9.0148
1.9858 1050 9.0477

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 5.1.1
  • Transformers: 4.49.0
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.10.1
  • Datasets: 4.1.1
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}