SentenceTransformer based on intfloat/e5-base-v2

This is a sentence-transformers model finetuned from intfloat/e5-base-v2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/e5-base-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
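
These properties can be checked programmatically once the model is downloaded. A minimal sketch, using the repository ID under which this card is published:

from sentence_transformers import SentenceTransformer

# Load the fine-tuned model and confirm the values listed above.
model = SentenceTransformer("sivarohit2002/qwen06b_bi-e5-ft-weighted")
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 768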

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
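
The same three-module stack can be assembled by hand from sentence_transformers building blocks. A minimal sketch of the equivalent construction (starting from the base model rather than this fine-tuned checkpoint):

from sentence_transformers import SentenceTransformer, models

# (0) BERT encoder, truncating inputs to 256 tokens
word = models.Transformer("intfloat/e5-base-v2", max_seq_length=256)
# (1) mean pooling over token embeddings -> one 768-dim vector per input
pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
# (2) L2 normalization of the pooled vector
norm = models.Normalize()
model = SentenceTransformer(modules=[word, pool, norm])

Because the final Normalize module scales every embedding to unit length, cosine similarity and dot-product similarity produce identical rankings for this model.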

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sivarohit2002/qwen06b_bi-e5-ft-weighted")
# Run inference
sentences = [
    '|user|: What should I do if my garbage/recycling/organics collection is missed?\n|user|: do i need to pay for garbage/recycling/organics service? what if I do not have enough money?\n|user|: can you please clarify the service rate here?\n|user|: recycle food scraps\n|user|: I do not want to miss my garbage collection service again...especially during the holiday time',
    "a copy of a valid California Disability Placard, and verify that there are no able-bodied household members that can move the carts to the curb. Service Rates for Single Family Residences Q. What are the Service Rates for Single Family Residences? A. Single family residential prices are based on the size of the garbage cart. Each residential account includes unlimited recycling and organics carts. Carts used for on-premises (backyard) services are limited to a maximum size of 35 gallons. Additional garbage carts are available for an additional cost. Curbside Collection: 20 Gallon Garbage Cart - $94.76 per month 35 Gallon Garbage Cart - $100.41 per month 65 Gallon Garbage Cart - $138.33 per month 95 Gallon Garbage Cart - $154.91 per month Residential Rates Effective July 1, 2023 - Curbside Service On-Premises (Backyard) Collection: 20 Gallon Garbage Cart - $145 per month 35 Gallon Garbage Cart - $150.65 per month Backyard Rates Effective July 1, 2023 - Back Yard Service (Limited to 20 or 35 gallon carts ONLY for worker safety) Interested in Organics Recycling (Composting)? Q. What goes in the Green Organics Cart? A. Yard trimmings, food scraps and food soiled paper. All items must fit within the cart with the lid completely closed. This includes twigs and branches. Click here to view Organics Recycling details. Q. What doesn't",
    'the cluster. However, because IBM manages the master and provides you with IBM Cloud APIs to manage your cloud infrastructure, some operators, such as the machine set operator and other components as noted in this table, are not set up or configurable. You can also use the OperatorHub to [install other operators](https://cloud.ibm.com/docs/openshift?topic=openshift-operators) such as from 3rd-party providers. Note that operators that you install or create are not supported by IBM, and might come with their own support terms and pricing. Projects, builds, and apps OpenShift Container Platform provides tools such as projects, build configurations, and the internal registry that you can use to deploy your apps while following a cloud-native, continuous integration and continuous delivery (CI/CD) methodology. Red Hat OpenShift on IBM Cloud clusters come with all the same configurable project and build components as OCP clusters. You can also choose to integrate your cluster with IBM Cloud services like [Continuous Delivery](https://cloud.ibm.com/docs/openshift?topic=openshift-cicd). Cluster health You can also set up logging, monitoring, and metering tools by installing and configuring various operators. These solutions are cluster-specific and not highly available unless you back them up. Your clusters feature one-click integrations with IBM Log Analysis and IBM Cloud Monitoring for enterprise-grade, persistent monitoring and logging solutions across clusters. You can also install the logging and monitoring operators as with standard OCP, but you',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  0.6521, -0.2834],
#         [ 0.6521,  1.0000, -0.0889],
#         [-0.2834, -0.0889,  1.0000]])
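
Continuing from the snippet above, the embeddings also support semantic search: encode a query and a set of candidate passages, then pick the passage with the highest cosine score. The query and passages below are illustrative placeholders, not taken from the training data.

# Rank candidate passages against a query; the highest cosine score wins.
query = "|user|: how do I recycle food scraps?"
passages = [
    "Yard trimmings, food scraps and food soiled paper go in the green organics cart.",
    "Clusters come with one-click logging and monitoring integrations.",
]

query_emb = model.encode([query])
passage_embs = model.encode(passages)
scores = model.similarity(query_emb, passage_embs)  # shape [1, 2]
best = int(scores.argmax())
print(passages[best], float(scores[0, best]))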

Training Details

Training Dataset

Unnamed Dataset

  • Size: 170,176 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:

              sentence_0        sentence_1         label
    type      string            string             float
    min       12 tokens         32 tokens          1.0
    mean      76.23 tokens      253.01 tokens      1.0
    max       200 tokens        256 tokens         1.0
  • Samples:
    Sample 1
      sentence_0:
        |user|: How can I ensure that my Watson Assistant chatbot achieves perfect harmony with the cosmic energy of the universe, transcending the limitations of mere mortal APIs and communing with the ethereal realms of data connectivity?
        |user|: Does he understand my emotions?
        |user|: Do I have to use a certain browser to use this service?
        |user|: exmaple words of disgust it can detect?
        |user|: why don't you provide that?
      sentence_1: if that term is not mentioned explicitly. Since our example document text is only one sentence, there are no related concepts, so Concept tagging returns the following concepts: "text": "Acme Corporation" "text": "factory" Since the Keyword extraction enrichment identifies content typically used when indexing data, generating tag clouds, or searching, Keyword extraction returns the following keywords: "text": "Acme Corporation" "text": "new factory" "text": "stockholders" "text": "Atlanta" "text": "Georgia" These enrichments work together to help you build better queries. Customizing field extraction Using CSS selectors to extract fields from HTML documents You can extract fields from HTML documents using the Discovery API. If you are ingesting well-formed HTML, you can use CSS selectors to extract JSON fields and then apply enrichments to the extracted fields. Edit your configuration file to enable this feature. Specifically, add an extracted_fields element to the conversions/html hie...
      label: 1.0

    Sample 2
      sentence_0:
        |user|: How can I update the cluster master?
        |user|: CIS Kubernetes Benchmark
        |user|: Can I use IBM Cloud® Kubernetes Service clusters only in USA?
        |user|: what are some practices for IBM Cloud Kubernetes Service do you recommend?
        |user|: I keep seeing the word Kubernetes , but I am not understanding this term
        |user|: How does IBM Cloud Kubernetes Service work and why should I use it?
      sentence_1: rule 10 description 'SNAT https traffic from server 10.1.2.3 to Internet' set service nat source rule 10 destination port 443 set service nat source rule 10 outbound-interface 'dp0bond1' set service nat source rule 10 protocol 'tcp' set service nat source rule 10 source address '10.1.2.3' set service nat source rule 10 translation address '150.1.2.3' 150.1.2.3 would be a public address for the VRA. It is recommended to use the VRRP public address of the VRA so that you can differentiate between host and VRA public traffic. Assume that 150.1.2.3 is the VRRP VRA address, and 150.1.2.5 is the real dp0bond1 address. The stateful firewall applied on dp0bond1 out would be: set security firewall name TO_INTERNET default-action drop set security firewall name TO_INTERNET rule 10 action accept set security firewall name TO_INTERNET rule 10 description 'Accept host traffic to Internet - SNAT to VRRP' set security firewall name TO_INTERNET rule 10 source address '150.1.2.3' set security firewall ...
      label: 1.0

    Sample 3
      sentence_0:
        |user|: What are the steps to be taken to gather the relevant worker node data?
        |user|: How can i update a classic worker node?
        |user|: Major. menor update.
        |user|: parts of a tag.
        |user|: node data
        |user|: NodeSync
        |user|: Worker node
        |user|: Red Hat OpenShift
      sentence_1: SSH connections from the internet, and to use another means of accessing the private address, such as SSL VPN. By default, the VRA accepts SSH on all interfaces. To listen only for SSH connections on the private interface, you must set the following configuration: set service ssh listen-address '10.1.2.3' Keep in mind that you must replace the IP address with the address that belongs to the VRA. Platform Building infrastructure * What is the RMM server? RackWare Management Module (RMM) server is a software appliance that is offered by RackWare that replatforms your server from a VMware (on-premises or classic) to an IBM Cloud VPC virtual server instance. * Where can I find more information about the RMM server? For RMM server overview information, see RackWare's Cloud Migration documentation. For RMM server usage guide information, see the [RackWare RMM User's Guide for IBM Cloud](https://www.rackwareinc.com/rackware-rmm-users-guide-for-ib...
      label: 1.0
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 2
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
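
Together with the MultipleNegativesRankingLoss parameters above, these hyperparameters map onto the SentenceTransformerTrainer API. A minimal training sketch under those settings; the two dataset rows are placeholders standing in for the 170,176 real (dialog, passage) pairs:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("intfloat/e5-base-v2")
model.max_seq_length = 256  # matches this card's configuration

# Placeholder rows; the real dataset pairs multi-turn dialogs with passages.
train_dataset = Dataset.from_dict({
    "sentence_0": ["|user|: example question", "|user|: another question"],
    "sentence_1": ["passage answering the first", "passage answering the second"],
})

# In-batch negatives: every other passage in a batch acts as a negative.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="e5-base-v2-ft",
    per_device_train_batch_size=16,
    num_train_epochs=2,
    fp16=True,
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()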

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss
0.0470 500 2.1598
0.0940 1000 1.823
0.1410 1500 1.7449
0.1880 2000 1.733
0.2351 2500 1.699
0.2821 3000 1.728
0.3291 3500 1.7273
0.3761 4000 1.673
0.4231 4500 1.6505
0.4701 5000 1.6591
0.5171 5500 1.6898
0.5641 6000 1.6472
0.6111 6500 1.6705
0.6581 7000 1.667
0.7052 7500 1.6612
0.7522 8000 1.7181
0.7992 8500 1.6723
0.8462 9000 1.6871
0.8932 9500 1.6911
0.9402 10000 1.6629
0.9872 10500 1.6852
1.0342 11000 1.6563
1.0812 11500 1.6702
1.1282 12000 1.6838
1.1753 12500 1.6622
1.2223 13000 1.65
1.2693 13500 1.6803
1.3163 14000 1.6683
1.3633 14500 1.6277
1.4103 15000 1.6431
1.4573 15500 1.6482
1.5043 16000 1.654
1.5513 16500 1.6208
1.5983 17000 1.6481
1.6454 17500 1.6492
1.6924 18000 1.6435
1.7394 18500 1.6369
1.7864 19000 1.6552
1.8334 19500 1.6289
1.8804 20000 1.6456
1.9274 20500 1.6444
1.9744 21000 1.6329

Framework Versions

  • Python: 3.10.18
  • Sentence Transformers: 5.1.1
  • Transformers: 4.56.2
  • PyTorch: 2.8.0+cu126
  • Accelerate: 1.10.1
  • Datasets: 2.20.0
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}