ModernBERT Embed base Legal Matryoshka

This is a sentence-transformers model finetuned from nomic-ai/modernbert-embed-base on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: nomic-ai/modernbert-embed-base
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
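
The stack above corresponds to mean pooling over token embeddings followed by L2 normalization. As a rough sketch of what those modules do, using the plain transformers API (illustrative only, not a drop-in replacement for the SentenceTransformer loader):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

repo = "ao-ot1231231/modernbert-embed-base-legal-matryoshka-2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer(["Some legal text."], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # (batch, seq_len, 768)

# Pooling: average the token embeddings, ignoring padding positions.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# Normalize: unit-length vectors, so dot product equals cosine similarity.
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 768])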

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ao-ot1231231/modernbert-embed-base-legal-matryoshka-2")
# Run inference
sentences = [
    'Williams Decl. Exs. D–I, ECF No. 53-1.  In Counts Five and Six of No. 11-445, the plaintiff \nchallenges the DIA’s and the ODNI’s withholding determinations, respectively, made under \n10 \n \nFOIA Exemptions 1, 2, 3, 5, and 6.  See 445 FAC ¶¶ 38–54; Defs.’ First 445 Mem. at 4–6; Pl.’s \nFirst 445 Opp’n at 6, 17–22, 24.7 \nB. \n2010 FOIA Requests \n1.',
    'Under which FOIA exemptions are the withholding determinations made?',
    'What did the forum a quo determine it would do after the parties exposed their positions?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.4481, 0.1215],
#         [0.4481, 1.0000, 0.1083],
#         [0.1215, 0.1083, 1.0000]])
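
Because the model was trained with MatryoshkaLoss (see Training Details), embeddings can also be truncated to 512, 256, 128, or 64 dimensions with a graceful quality trade-off (see the Evaluation tables below). Sentence Transformers exposes this through the truncate_dim argument; a minimal sketch:

from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions, one of the trained Matryoshka sizes.
model = SentenceTransformer(
    "ao-ot1231231/modernbert-embed-base-legal-matryoshka-2",
    truncate_dim=256,
)
embeddings = model.encode(["Under which FOIA exemptions are the withholding determinations made?"])
print(embeddings.shape)
# (1, 256)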

Evaluation

Metrics

Information Retrieval (dim_768)

The five tables below report results at each Matryoshka embedding size (768, 512, 256, 128, and 64 dimensions), matching the dim_*_cosine_ndcg@10 columns in the Training Logs.

Metric Value
cosine_accuracy@1 0.544
cosine_accuracy@3 0.5889
cosine_accuracy@5 0.6878
cosine_accuracy@10 0.762
cosine_precision@1 0.544
cosine_precision@3 0.5152
cosine_precision@5 0.3985
cosine_precision@10 0.2362
cosine_recall@1 0.1945
cosine_recall@3 0.5048
cosine_recall@5 0.6329
cosine_recall@10 0.7434
cosine_ndcg@10 0.65
cosine_mrr@10 0.5918
cosine_map@100 0.635

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.5317
cosine_accuracy@3 0.5827
cosine_accuracy@5 0.6893
cosine_accuracy@10 0.762
cosine_precision@1 0.5317
cosine_precision@3 0.51
cosine_precision@5 0.3994
cosine_precision@10 0.2382
cosine_recall@1 0.1866
cosine_recall@3 0.4961
cosine_recall@5 0.6312
cosine_recall@10 0.7481
cosine_ndcg@10 0.647
cosine_mrr@10 0.5839
cosine_map@100 0.6281

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.507
cosine_accuracy@3 0.5487
cosine_accuracy@5 0.6522
cosine_accuracy@10 0.7357
cosine_precision@1 0.507
cosine_precision@3 0.4863
cosine_precision@5 0.3771
cosine_precision@10 0.2283
cosine_recall@1 0.1736
cosine_recall@3 0.4719
cosine_recall@5 0.5966
cosine_recall@10 0.7174
cosine_ndcg@10 0.6159
cosine_mrr@10 0.5554
cosine_map@100 0.6001

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.4328
cosine_accuracy@3 0.4745
cosine_accuracy@5 0.5703
cosine_accuracy@10 0.6646
cosine_precision@1 0.4328
cosine_precision@3 0.4158
cosine_precision@5 0.3317
cosine_precision@10 0.2082
cosine_recall@1 0.1486
cosine_recall@3 0.3999
cosine_recall@5 0.5211
cosine_recall@10 0.6511
cosine_ndcg@10 0.5456
cosine_mrr@10 0.4816
cosine_map@100 0.5299

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.3323
cosine_accuracy@3 0.3709
cosine_accuracy@5 0.4451
cosine_accuracy@10 0.524
cosine_precision@1 0.3323
cosine_precision@3 0.321
cosine_precision@5 0.2572
cosine_precision@10 0.1631
cosine_recall@1 0.1167
cosine_recall@3 0.3104
cosine_recall@5 0.4031
cosine_recall@10 0.509
cosine_ndcg@10 0.4251
cosine_mrr@10 0.3733
cosine_map@100 0.4208
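
These are the standard outputs of Sentence Transformers' InformationRetrievalEvaluator. A minimal sketch of computing them at one Matryoshka dimension; the queries, corpus, and relevant_docs mappings below are placeholders for the actual held-out legal QA split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Evaluate at a single Matryoshka dimension by truncating the embeddings.
model = SentenceTransformer(
    "ao-ot1231231/modernbert-embed-base-legal-matryoshka-2",
    truncate_dim=768,
)

# Placeholder data; substitute the real evaluation split.
queries = {"q1": "When did the last of the motions become ripe?"}
corpus = {"d1": "last of these motions became ripe on June 11, 2013. ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_768",
)
results = evaluator(model)
print(results["dim_768_cosine_ndcg@10"])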

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 5,822 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min 28, mean 97.25, max 170 tokens
    • anchor: string; min 7, mean 16.57, max 49 tokens
  • Samples:
    • positive: "personnel.” See id. The answer to that question remains unclear, and the Court need not decide 113 it here.52 It suffices to conclude that the names withheld by the CIA are at least arguably protected from disclosure under the interpretation of § 403g announced in Halperin, and thus withholding those names does not rise to the level of “general sloppiness” that would caution"
      anchor: "Under which interpretation are the names at least arguably protected from disclosure?"
    • positive: "last of these motions became ripe on June 11, 2013. Additionally, on November 21, 2012, the plaintiff filed a motion for leave to file a second amended complaint in No. 11-445, and on January 11, 2013, the plaintiff filed a motion for sanctions in No. 11-443. Thus, currently pending before the Court in these related actions are ten motions: eight motions or cross-motions 28"
      anchor: "When did the last of the motions become ripe?"
    • positive: "the parties to confer, once this report is final, and submit any remaining areas of disagreement on the scope of the inspection to the Court. 33 D.I. 1, Ex. 2. 34 Id. Senetas Corporation, Ltd. v. DeepRadiology Corporation C.A. No. 2019-0170-PWG July 30, 2019 9 accurate financial records; failed to keep the Board reasonably informed about"
      anchor: "What is the case number for Senetas Corporation, Ltd. v. DeepRadiology Corporation?"
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
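
    In code, this wraps a MultipleNegativesRankingLoss (in-batch negatives ranking) in a MatryoshkaLoss, so the same ranking objective is applied to each truncated embedding size with equal weight. A minimal sketch matching the parameters above:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Inner loss: ranks each anchor's positive above the other in-batch positives.
inner_loss = MultipleNegativesRankingLoss(model)

# Outer loss: applies the inner loss at every Matryoshka dimension
# (matryoshka_weights defaults to equal weights, as in the parameters above).
loss = MatryoshkaLoss(
    model=model,
    loss=inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
)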
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • batch_sampler: no_duplicates
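
These settings map directly onto SentenceTransformerTrainingArguments; note that the effective train batch size is 32 × 16 = 512 samples per optimizer step. A minimal sketch (output_dir is a placeholder, and save_strategy="epoch" is an assumption needed to pair with eval_strategy when load_best_model_at_end is set):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-embed-base-legal-matryoshka",  # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)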

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}
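
Putting the pieces together, a training run with these components would go through the SentenceTransformerTrainer. A minimal sketch; the data file name is hypothetical (the card identifies the data only as a "json" dataset with positive and anchor columns):

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")
loss = MatryoshkaLoss(
    model=model,
    loss=MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

# Hypothetical file name; the real dataset holds 5,822 positive/anchor pairs.
train_dataset = load_dataset("json", data_files="legal_pairs.json", split="train")
# MultipleNegativesRankingLoss reads columns in order as (anchor, positive).
train_dataset = train_dataset.select_columns(["anchor", "positive"])

args = SentenceTransformerTrainingArguments(output_dir="out")  # see full settings above

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()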

Training Logs

Epoch   Step  Training Loss  dim_768_cosine_ndcg@10  dim_512_cosine_ndcg@10  dim_256_cosine_ndcg@10  dim_128_cosine_ndcg@10  dim_64_cosine_ndcg@10
0.8791  10    5.7061         -                       -                       -                       -                       -
1.0     12    -              0.6031                  0.5863                  0.5621                  0.4889                  0.3463
1.7033  20    2.6671         -                       -                       -                       -                       -
2.0     24    -              0.6410                  0.6341                  0.6047                  0.5248                  0.4071
2.5275  30    2.0092         -                       -                       -                       -                       -
3.0     36    -              0.6489                  0.6465                  0.6154                  0.5391                  0.4261
3.3516  40    1.6698         -                       -                       -                       -                       -
4.0     48    -              0.65                    0.647                   0.6159                  0.5456                  0.4251

  • The saved checkpoint corresponds to the epoch 4.0 (step 48) row, whose nDCG@10 values match the Evaluation section above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 5.1.2
  • Transformers: 4.57.3
  • PyTorch: 2.9.1+cu128
  • Accelerate: 1.12.0
  • Datasets: 4.4.1
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}