# SentenceTransformer based on bowphs/SPhilBerta
This is a sentence-transformers model finetuned from bowphs/SPhilBerta. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- Model Type: Sentence Transformer
- Base model: bowphs/SPhilBerta
- Maximum Sequence Length: 128 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
### Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
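The Pooling module above uses mean pooling (`pooling_mode_mean_tokens: True`): the sentence embedding is the average of the non-padded token embeddings. A minimal NumPy sketch of that operation, using toy arrays rather than real model outputs:

```python
import numpy as np

# Toy stand-ins: 4 token embeddings of dimension 5; the last token is padding.
token_embeddings = np.arange(20, dtype=np.float32).reshape(4, 5)
attention_mask = np.array([1, 1, 1, 0], dtype=np.float32)

# Mean pooling: zero out padded positions, then average over the
# remaining tokens (mirrors pooling_mode_mean_tokens=True above).
masked = token_embeddings * attention_mask[:, None]
sentence_embedding = masked.sum(axis=0) / attention_mask.sum()

print(sentence_embedding)  # average of the first three token rows
```

In the real model this runs over 768-dimensional RobertaModel outputs, producing the 768-dimensional sentence vectors described under Model Description.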
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("julian-schelb/SPhilBerta-latin-intertextuality-v1")

# Run inference
sentences = [
    'Query: Quia ergo insanivit Israel, et percussus fornicationis spiritu, incredibili furore bacchatus est, ideo non multo post tempore, sed dum propheto, dum spiritus hos regit artus, pascet eos Dominus quasi agnum in latitudine.',
    'Candidate: Te solum in bella secutus, Post te fata sequar: neque enim sperare secunda Fas mihi, nec liceat.',
    'Candidate: ut tuus amicus, Crasse, Granius non esse sextantis.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
## Evaluation

### Metrics

#### Binary Classification

- Dataset: `latin_intertext`
- Evaluated with `BinaryClassificationEvaluator`
| Metric | Value |
|---|---|
| cosine_accuracy | 0.9598 |
| cosine_accuracy_threshold | 0.6652 |
| cosine_f1 | 0.7513 |
| cosine_f1_threshold | 0.6329 |
| cosine_precision | 0.8353 |
| cosine_recall | 0.6827 |
| cosine_ap | 0.8119 |
| cosine_mcc | 0.7336 |
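The thresholds in the table above can be applied directly to cosine similarities to turn scores into binary intertextuality predictions. A minimal NumPy sketch using the reported `cosine_accuracy_threshold`; the vectors here are toy placeholders, not real model embeddings:

```python
import numpy as np

# cosine_accuracy_threshold from the metrics table above.
THRESHOLD = 0.6652

# Toy 3-dimensional "embeddings"; the real model produces 768 dimensions.
embeddings = np.array([
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
])

# Cosine similarity = dot product of L2-normalized rows.
normalized = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarities = normalized @ normalized.T

# Pairs scoring at or above the threshold are predicted as intertextual.
predictions = similarities >= THRESHOLD
print(predictions)
```

Using `cosine_f1_threshold` (0.6329) instead trades some precision for recall, per the table.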
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 4,895 training samples
- Columns: `query`, `match`, and `label`
- Approximate statistics based on the first 1000 samples:
|  | query | match | label |
|:---|:---|:---|:---|
| type | string | string | int |
| details | min: 6 tokens, mean: 41.53 tokens, max: 128 tokens | min: 6 tokens, mean: 32.4 tokens, max: 128 tokens | 0: ~91.70%, 1: ~8.30% |

- Samples:

| query | match | label |
|:---|:---|:---|
| Query: quod et illustris poeta testatur dicens: sed fugit interea, fugit irreparabile tempus et iterum: Rhaebe, diu, res si qua diu mortalibus ulla est, uiximus. | Candidate: omnino si ego evolo mense Quintili in Graeciam, sunt omnia faciliora; sed cum sint ea tempora ut certi nihil esse possit quid honestum mihi sit, quid liceat, quid expediat, quaeso, da operam ut illum quam honestissime copiosissimeque tueamur. | 0 |
| Query: Non solum in Ecclesia morantur oves, nec mundae tantum aves volitant; sed frumentum in agro seritur, interque nitentia culta Lappaeque et tribuli, et steriles dominantur avenae. | Candidate: atque hoc in loco, si facultas erit, exemplis uti oportebit, quibus in simili excusatione non sit ignotum, et contentione, magis illis ignoscendum fuisse, et deliberationis partibus, turpe aut inutile esse concedi eam rem, quae ab adversario commissa sit: permagnum esse et magno futurum detrimento, si ea res ab iis, qui potestatem habent vindicandi, neglecta sit. | 0 |
| Query: adiuratus enim per eundem patrem et spes surgentis Iuli, nequaquam pepercit tums accensus et ira. | Candidate: factus olor niveis pendebat in aere pennis. | 0 |

- Loss: `OnlineContrastiveLoss`
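`OnlineContrastiveLoss` is a contrastive objective that, per batch, keeps only hard pairs (positives farther apart than the closest negative, and negatives closer than the farthest positive). A simplified NumPy sketch of the underlying contrastive term, with toy distances and the library's default margin of 0.5 (the hard-pair selection itself is omitted):

```python
import numpy as np

def contrastive_loss(distances, labels, margin=0.5):
    """Plain contrastive loss: pull positive pairs (label 1) together,
    push negative pairs (label 0) out beyond the margin.
    OnlineContrastiveLoss applies this only to hard pairs in each batch."""
    distances = np.asarray(distances, dtype=np.float64)
    labels = np.asarray(labels, dtype=np.float64)
    positive_term = labels * distances ** 2
    negative_term = (1 - labels) * np.maximum(0.0, margin - distances) ** 2
    return 0.5 * (positive_term + negative_term)

# Toy cosine distances for three pairs with labels as in the dataset schema.
losses = contrastive_loss([0.1, 0.8, 0.2], [1, 0, 0])
print(losses)  # the easy negative at distance 0.8 incurs zero loss
```

With a ~91.7% / 8.3% label imbalance as in this dataset, mining hard pairs keeps the many easy negatives from dominating the gradient.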
### Evaluation Dataset

#### Unnamed Dataset

- Size: 1,144 evaluation samples
- Columns: `query`, `match`, and `label`
- Approximate statistics based on the first 1000 samples:
|  | query | match | label |
|:---|:---|:---|:---|
| type | string | string | int |
| details | min: 8 tokens, mean: 39.04 tokens, max: 121 tokens | min: 6 tokens, mean: 32.47 tokens, max: 128 tokens | 0: ~91.10%, 1: ~8.90% |

- Samples:

| query | match | label |
|:---|:---|:---|
| Query: qui uero pauperes sunt et tenui substantiola uidenturque sibi scioli, pomparum ferculis similes procedunt ad publicum, ut caninam exerceant facundiam. | Candidate: cogitat reliquas colonias obire. | 0 |
| Query: nec uarios discet mentiri lana colores, ipse sed in pratis aries iam suaue rubenti- murice, iam croceo mutabit uellera luto, sponte sua sandyx pascentis uestiet agnos. | Candidate: loquitur ad voluntatem; quicquid denunciatum est, facit, assectatur, assidet, muneratur. | 0 |
| Query: credite experto, quasi Christianus Christianis loquor: uenenata sunt illius dogmata, aliena a scripturis sanctis, uim scripturis facientia. | Candidate: ignoscunt mihi, revocant in consuetudinem pristinam te que, quod in ea permanseris, sapientiorem quam me dicunt fuisse. | 0 |

- Loss: `OnlineContrastiveLoss`
### Training Hyperparameters

#### Non-Default Hyperparameters

- `overwrite_output_dir`: True
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 4
- `warmup_steps`: 1958
- `prompts`: {'query': 'Query: ', 'match': 'Candidate: '}
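The `prompts` hyperparameter means each training text was prefixed by its column's prompt before encoding. At inference the same prefixes must be prepended by hand, as the usage example above does, so embeddings match the training distribution. A trivial sketch of the prefixing convention:

```python
# Per-column prompts, copied from the training hyperparameters above.
PROMPTS = {"query": "Query: ", "match": "Candidate: "}

def with_prompt(text: str, role: str) -> str:
    """Prepend the role-specific prompt used during training."""
    return PROMPTS[role] + text

query = with_prompt("fugit irreparabile tempus", "query")
candidate = with_prompt("Te solum in bella secutus", "match")
print(query)      # Query: fugit irreparabile tempus
print(candidate)  # Candidate: Te solum in bella secutus
```

Omitting the prefixes at inference would feed the model out-of-distribution inputs and likely degrade the reported scores.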
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: True
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 1958
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: {'query': 'Query: ', 'match': 'Candidate: '}
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | latin_intertext_cosine_ap |
|---|---|---|---|---|
| 0.6494 | 50 | 0.6022 | 0.1430 | 0.7392 |
| 1.2987 | 100 | 0.5519 | 0.1191 | 0.7579 |
| 1.9481 | 150 | 0.4728 | 0.1021 | 0.7794 |
| 2.5974 | 200 | 0.4001 | 0.0934 | 0.7917 |
| 3.2468 | 250 | 0.2689 | 0.0917 | 0.8048 |
| 3.8961 | 300 | 0.221 | 0.0834 | 0.8119 |
### Framework Versions
- Python: 3.10.8
- Sentence Transformers: 4.1.0
- Transformers: 4.53.0
- PyTorch: 2.7.1+cu126
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```