SentenceTransformer based on BAAI/bge-base-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
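
Because the model ends with a Normalize() module on top of CLS pooling, every embedding is L2-normalized, so cosine similarity reduces to a plain dot product. A minimal sketch verifying this (the example sentences are placeholders):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("vijay-delete/bge-chrono-botany-triplet")
embeddings = model.encode(["Genatrix Spore activation", "Chronoplasm density"])

# The Normalize() module makes each embedding unit-length ...
print(np.linalg.norm(embeddings, axis=1))  # approximately [1.0, 1.0]

# ... so a dot product gives the same scores as cosine similarity
print(embeddings @ embeddings.T)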

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("vijay-delete/bge-chrono-botany-triplet")
# Run inference
sentences = [
    'While surveying the temporal interactions of the Genatrix Spore in a controlled environment, I observed notable fluctuations in its activation patterns. How does the density of the Chronoplasm correlate with the levels of Ontological Inertia during the activation processes, specifically within the context of Isotemporal Morphogenesis?',
    'Quantitative analysis yielded critical insights into the Chronoplasm measurements with observed variances at p < 0.05 across multiple trials. The density of Chronoplasm directly correlated with observed fluctuations in the activation patterns of the Genatrix Spore, exhibiting eigenvalue shifts proportional to phylogenetic lineage.',
    "Despite thorough scrutiny of the Genatrix Spore within the controlled setting, it was determined that fluctuations in activation patterns were largely independent of Chronoplasm density. Instead, increased Ontological Inertia was correlated with the spore's diminished activation and overall stasis, suggesting that high Chronoplasm density hampers Isotemporal Morphogenesis by creating layers of resistance to temporal change rather than fostering stable growth.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
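
Continuing from the snippet above, the same model can be used for retrieval-style semantic search: encode a query and candidate passages, then rank the candidates by similarity. A small sketch reusing the sentences list (variable names are illustrative):

# Treat the first sentence as the query and the other two as candidate passages
query_embedding = model.encode([sentences[0]])
candidate_embeddings = model.encode(sentences[1:])

# Rank candidates by cosine similarity; the supporting passage should score
# higher than the contradicting one
scores = model.similarity(query_embedding, candidate_embeddings)[0]
best = scores.argmax().item()
print(best, scores.tolist())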

Evaluation

Metrics

Triplet

  • cosine_accuracy: 1.0
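
cosine_accuracy is the fraction of evaluation triplets for which the anchor sentence is closer (by cosine similarity) to its positive than to its negative. A hedged sketch of that computation, assuming parallel lists of anchor, positive, and negative texts (the evaluation data itself is not included in this card):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("vijay-delete/bge-chrono-botany-triplet")

def triplet_cosine_accuracy(anchors, positives, negatives):
    # Pairwise cosine similarity between each anchor and its positive / negative
    sim_pos = model.similarity_pairwise(model.encode(anchors), model.encode(positives))
    sim_neg = model.similarity_pairwise(model.encode(anchors), model.encode(negatives))
    # A triplet counts as correct when the positive outscores the negative
    return (sim_pos > sim_neg).float().mean().item()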

Training Details

Training Dataset

Unnamed Dataset

  • Size: 224 training samples
  • Columns: sentence_0, sentence_1, and sentence_2
  • Approximate statistics based on the first 224 samples:
    • sentence_0 (string): min 25, mean 42.28, max 83 tokens
    • sentence_1 (string): min 21, mean 47.31, max 118 tokens
    • sentence_2 (string): min 42, mean 93.21, max 145 tokens
  • Samples:
    • Sample 1
      • sentence_0: Upon encountering anomalies in the effective energy transfer during spore infusion, what diagnostic patterns should be prioritized to address Chronoplasm instability in morphogenetic applications?
      • sentence_1: Quantitative measurements demonstrated a marked increase in Chronoplasm density proportional to the duration of Genatrix Spore exposure. Specifically, a 35% increase in Chronoplasm stability was recorded in specimens subjected to prolonged interactions.
      • sentence_2: Despite preliminary observations suggesting increased energy transfer during spore infusion, it was noted that premature withdrawal of Genatrix Spores led to a drastic reduction in Chronoplasm density, resulting in a 40% destabilization effect. Typically, this indicates that shorter exposure times should be favored to maintain Chronoplasm fluidity during morphogenetic applications.
    • Sample 2
      • sentence_0: While analyzing the outcomes of Isotemporal Morphogenesis under fluctuating Somavortex fields, I hypothesized a link between quantum efficiency and emergent behaviors. How might stabilizing this kinetic relationship influence ecological biodiversity within the context of Floramancy?
      • sentence_1: These findings suggest potential applications in advancing Floramancy Praxis by leveraging the synergies identified between Somavortex and Isotemporal Morphogenesis.
      • sentence_2: The analysis of Isotemporal Morphogenesis typically ignores the role of Somavortex, as fluctuations can cause devastating Anisomorphic Flux that disrupts growth entirely. Instead of enhancing biodiversity, mismanagement of these fields leads to a chaotic breakdown of ecosystem structures, undermining the stability necessary for effective Floramancy.
    • Sample 3
      • sentence_0: In the realm of manipulating temporal entities, how can adjustments in Praxis Protocol prevent unintended artifacts during exposure to concentrated spore?
      • sentence_1: The convergence of Genatrix Spore and Aevum Syncrasy creates a volatile scenario where Null-State Anomaly events could grow in scale and impact if uncontrolled. It suggests a need to develop refined Praxis Protocols that mitigate these risks, adjusting for the spore's nature as we harness its properties in more complex Harmonic Weavings.
      • sentence_2: Adjustments in Praxis Protocol are routinely disregarded when working with Genatrix Spore since the spore inherently stabilizes temporal energies. Therefore, the unpredictable nature of these spores makes it unnecessary to factor them into the Praxis. Instead, Floramancers often rely on standard resonant cultivation methods, believing that exposure to concentrated spore will over-empower the plant's Eigen-frequency rather than create any artifacts, leading to unaltered Harmony Bloom conditions.
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
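
With these settings, the loss for an (anchor, positive, negative) triplet is max(d(a, p) - d(a, n) + 5, 0), where d is the Euclidean distance: the anchor must end up closer to its positive than to its negative by at least the margin. A hedged sketch of the computation on already-encoded embeddings (mirroring, not reproducing, the library's TripletLoss):

import torch
import torch.nn.functional as F

def euclidean_triplet_loss(anchor, positive, negative, margin=5.0):
    # TripletDistanceMetric.EUCLIDEAN: plain L2 distance between embedding pairs
    d_pos = F.pairwise_distance(anchor, positive, p=2)
    d_neg = F.pairwise_distance(anchor, negative, p=2)
    # Hinge: zero loss once the positive is at least `margin` closer than the negative
    return F.relu(d_pos - d_neg + margin).mean()

Note that because the model L2-normalizes its embeddings, pairwise Euclidean distances never exceed 2, so a margin of 5 keeps every triplet contributing and the optimization effectively maximizes the gap between the two distances.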
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 4
  • multi_dataset_batch_sampler: round_robin
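
Putting the non-default hyperparameters above together, a hedged reconstruction of the training setup might look like the following. The triplet data is not distributed with this card, so the dataset rows and the dev evaluator inputs below are placeholders; only the base model, loss settings, and hyperparameter values are taken from this page.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.evaluation import TripletEvaluator
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# 224 (anchor, positive, negative) triplets with columns sentence_0/1/2 (placeholder rows)
train_dataset = Dataset.from_dict({
    "sentence_0": ["anchor question ..."],
    "sentence_1": ["supporting passage ..."],
    "sentence_2": ["contradicting passage ..."],
})

# Evaluator that would produce the dev_eval_cosine_accuracy column seen in the training logs
dev_evaluator = TripletEvaluator(
    anchors=["anchor question ..."],
    positives=["supporting passage ..."],
    negatives=["contradicting passage ..."],
    name="dev_eval",
)

loss = TripletLoss(
    model,
    distance_metric=TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-chrono-botany-triplet",
    num_train_epochs=4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    eval_strategy="steps",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
    evaluator=dev_evaluator,
)
trainer.train()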

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step dev_eval_cosine_accuracy
0.8571 12 0.8000
1.0 14 0.9600
1.7143 24 1.0

Framework Versions

  • Python: 3.11.13
  • Sentence Transformers: 4.1.0
  • Transformers: 4.54.0
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.9.0
  • Datasets: 4.0.0
  • Tokenizers: 0.21.2
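
To reproduce this environment, pinning the versions listed above should work; the CUDA 12.4 wheel index for torch is an assumption about how the original environment was built:

pip install "sentence-transformers==4.1.0" "transformers==4.54.0" "accelerate==1.9.0" "datasets==4.0.0" "tokenizers==0.21.2"
pip install "torch==2.6.0" --index-url https://download.pytorch.org/whl/cu124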

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}