---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: text
      dtype: string
    - name: embedding
      list: float32
      length: 1024
  splits:
    - name: train
      num_bytes: 367351008
      num_examples: 87622
  download_size: 174444190
  dataset_size: 367351008
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Embedpress: mixedbread large on the TriviaQA queries dataset

This is the query portion of the TriviaQA dataset, embedded with Mixedbread AI's `mixedbread-ai/mxbai-embed-large-v1`. For each query, we take the first 510 tokens (the model's 512-token maximum length minus 2 special tokens) and embed them without any instruction prefix. Because the model was trained with Matryoshka Representation Learning, these embeddings can safely be truncated to smaller dimensionalities.
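
As a rough sketch of how each embedding was produced (assuming sentence-transformers; the exact script may differ, and the example query below is made up):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# mxbai-embed-large-v1 has a 512-token context: 510 text tokens plus 2 special tokens.
model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")
model.max_seq_length = 512

# The raw query text is embedded directly, with no instruction prefix.
embedding = model.encode("Who wrote the novel Dracula?")
print(embedding.shape)  # (1024,)

# Matryoshka Representation Learning means a prefix of the vector is itself a
# usable embedding; renormalize after truncating.
truncated = embedding[:512]
truncated = truncated / np.linalg.norm(truncated)
```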

These are mainly useful for large-scale knowledge distillation.

The dataset consists of roughly 87k rows; each row has three keys:

- `id`: the original id in the TriviaQA sample
- `embedding`: the 1024-dimensional embedding
- `text`: the original text, truncated to the slice that was actually seen by the model

Because we truncate the original text, this dataset can be used directly for training in, e.g., sentence-transformers, without having to worry about manually truncating or matching texts.
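
For instance, here is a minimal distillation sketch with sentence-transformers' `MSELoss`; the dataset repository id and the student checkpoint below are placeholders, not part of this release:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder repository id; substitute this dataset's actual path.
ds = load_dataset("user/embedpress-triviaqa-queries", split="train")

# Arbitrary example student; its output dimensionality must match the
# (possibly truncated) teacher embeddings.
student = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
dim = student.get_sentence_embedding_dimension()  # 384 for this student

# The text is already the exact slice the teacher saw, so rows pair up directly.
# MRL makes truncating the 1024-d teacher vectors to the student's size reasonable.
examples = [
    InputExample(texts=[row["text"]], label=row["embedding"][:dim])
    for row in ds
]

loader = DataLoader(examples, shuffle=True, batch_size=64)
loss = losses.MSELoss(model=student)  # regress student outputs onto teacher vectors
student.fit(train_objectives=[(loader, loss)], epochs=1)
```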

## Acknowledgments

Thanks to Mixedbread AI for a GPU grant supporting research into small retrieval models.