General Online Logit Distillation (GOLD) Trainer


Overview

General Online Logit Distillation (GOLD) is an extension of Universal Logit Distillation (ULD) that supports student/teacher pairs with different tokenizers. It aligns the textual spans produced by both tokenizers and merges the associated logits so no completion tokens are dropped. This enables cross-tokenizer knowledge distillation, including mixed model families (for example, LLaMA students with Qwen teachers).

Key capabilities:

  1. Cross-tokenizer alignment – GOLD incrementally decodes the student and teacher tokens, groups spans that render the same visible text, and merges the probabilities inside each group. This guarantees that loss terms are computed over the full completion even when token boundaries differ (a toy illustration of this grouping follows this list).
  2. Hybrid ULD loss – when uld_use_hybrid_loss is enabled, GOLD compares exact vocabulary matches directly and falls back to the original sorted-probability ULD loss for unmatched tokens. This improves stability for students whose vocabularies only partially overlap with the teacher.
  3. Seamless integration with GKD – GOLD inherits the on-policy vs. off-policy scheduling from the GKDTrainer, so you can combine sequence-level KD, generalized JSD, and cross-tokenizer distillation in a single training run.
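
As a rough illustration of the alignment step, the toy sketch below (not the GOLD implementation) walks both token sequences and greedily extends whichever side currently covers less text until the decoded spans match, yielding groups of token indices whose probabilities can then be merged:

def align_token_spans(student_tokens, teacher_tokens):
    # Toy greedy alignment: assumes both token lists decode to the same text.
    groups = []
    i = j = 0
    while i < len(student_tokens) and j < len(teacher_tokens):
        s_text, t_text = student_tokens[i], teacher_tokens[j]
        s_group, t_group = [i], [j]
        i, j = i + 1, j + 1
        while s_text != t_text:
            # extend whichever side currently covers less text
            if len(s_text) < len(t_text):
                s_text += student_tokens[i]
                s_group.append(i)
                i += 1
            else:
                t_text += teacher_tokens[j]
                t_group.append(j)
                j += 1
        groups.append((s_group, t_group))
    return groups

# "unbelievable" tokenized two different ways
print(align_token_spans(["un", "believ", "able"], ["unbe", "lievable"]))
# [([0, 1, 2], [0, 1])]

The actual trainer performs this alignment on decoded token IDs during the loss computation; the sketch only shows the grouping idea.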

GOLD is currently part of the trl.experimental namespace. APIs may change without notice while the feature is iterated on.

Usage tips

The GOLDTrainer subclasses SFTTrainer and accepts the same datasets as other TRL trainers (lists of ChatML-style messages). Important configuration flags on GOLDConfig include:

  • use_uld_loss – toggles Universal Logit Distillation. Set this to True for cross-tokenizer setups.
  • teacher_tokenizer_name_or_path – required when use_uld_loss=True; GOLD uses the teacher tokenizer to align tokens.
  • uld_use_hybrid_loss, uld_hybrid_matched_weight, uld_hybrid_unmatched_weight – enables and weights the hybrid matched/unmatched loss.
  • beta, lmbda, seq_kd – inherited from GKDConfig, controlling the generalized JSD interpolation and the on-policy sampling ratio (a configuration sketch combining these flags follows this list).
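
For instance, a configuration that enables the hybrid loss together with the GKD-style scheduling knobs might look like the following sketch (the weights and ratios are illustrative, not recommended defaults):

from trl.experimental.gold import GOLDConfig

training_args = GOLDConfig(
    output_dir="gold-model",
    use_uld_loss=True,                     # cross-tokenizer distillation
    teacher_tokenizer_name_or_path="Qwen/Qwen2.5-0.5B-Instruct",
    uld_use_hybrid_loss=True,              # exact matches + sorted-probability fallback
    uld_hybrid_matched_weight=1.0,         # illustrative weights
    uld_hybrid_unmatched_weight=1.0,
    lmbda=0.5,                             # fraction of on-policy student generations
    beta=0.5,                              # generalized JSD interpolation
    seq_kd=False,                          # sequence-level KD on teacher generations
)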

A minimal end-to-end example:

from datasets import load_dataset
from trl.experimental.gold import GOLDConfig, GOLDTrainer

train_dataset = load_dataset(
    "HuggingFaceTB/OpenR1-Math-220k-default-verified",
    "all",
    split="train[:1024]",
)

trainer = GOLDTrainer(
    model="meta-llama/Llama-3.2-1B-Instruct",
    teacher_model="Qwen/Qwen2.5-0.5B-Instruct",
    args=GOLDConfig(
        output_dir="gold-model",
        use_uld_loss=True,
        teacher_tokenizer_name_or_path="Qwen/Qwen2.5-0.5B-Instruct",
    ),
    train_dataset=train_dataset,
)
trainer.train()

For quick-start workflows you can rely on string identifiers as shown above; the trainer loads the model and tokenizer for you. Explicitly instantiating AutoModelForCausalLM and AutoTokenizer, or populating GOLDConfig by hand, is only needed when you want fine-grained control over initialization.

A more explicit setup might look like this when you need to customize model loading, tokenizer settings, or training arguments:

from datasets import load_dataset
from trl.experimental.gold import GOLDConfig, GOLDTrainer
from transformers import AutoModelForCausalLM, AutoTokenizer

student_name = "meta-llama/Llama-3.2-1B-Instruct"
teacher_name = "Qwen/Qwen2.5-0.5B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(student_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(student_name)
teacher_model = AutoModelForCausalLM.from_pretrained(teacher_name)

train_dataset = load_dataset(
    "HuggingFaceTB/Countdown-Task-GOLD",
    "verified_Qwen2.5-0.5B-Instruct",
    split="train",
)

training_args = GOLDConfig(
    output_dir="gold-model",
    per_device_train_batch_size=1,
    teacher_model_name_or_path=teacher_name,
    teacher_tokenizer_name_or_path=teacher_name,
    use_uld_loss=True,
    uld_use_hybrid_loss=True,
)

trainer = GOLDTrainer(
    model=model,
    teacher_model=teacher_model,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()

Expected dataset type

GOLD requires a conversational language modeling dataset, e.g.:

{"messages": [{"role": "user", "content": "What color is the sky?"},
              {"role": "assistant", "content": "It is blue."}]}

GOLDTrainer keeps the raw messages so the ChatML collator can construct prompts and completions with the correct boundaries.
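
If your data is stored as plain prompt/completion pairs, a small preprocessing step can produce this format. The sketch below assumes a hypothetical JSON Lines file with prompt and completion columns:

from datasets import load_dataset

def to_messages(example):
    # wrap each prompt/completion pair in the conversational format GOLD expects
    return {
        "messages": [
            {"role": "user", "content": example["prompt"]},
            {"role": "assistant", "content": example["completion"]},
        ]
    }

raw = load_dataset("json", data_files="pairs.jsonl", split="train")
train_dataset = raw.map(to_messages, remove_columns=raw.column_names)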

GOLDTrainer

class trl.experimental.gold.GOLDTrainer

( model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, str, NoneType] = None teacher_model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, str] = None args: typing.Optional[trl.experimental.gold.gold_config.GOLDConfig] = None data_collator: typing.Optional[typing.Callable[[list[typing.Any]], dict[str, typing.Any]]] = None train_dataset: typing.Optional[datasets.arrow_dataset.Dataset] = None eval_dataset: typing.Union[datasets.arrow_dataset.Dataset, dict[str, datasets.arrow_dataset.Dataset], NoneType] = None processing_class: typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, transformers.image_processing_utils.BaseImageProcessor, transformers.feature_extraction_utils.FeatureExtractionMixin, transformers.processing_utils.ProcessorMixin, NoneType] = None compute_metrics: typing.Optional[typing.Callable[[transformers.trainer_utils.EvalPrediction], dict]] = None callbacks: typing.Optional[list[transformers.trainer_callback.TrainerCallback]] = None optimizers: tuple = (None, None) preprocess_logits_for_metrics: typing.Optional[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None peft_config: typing.Optional[ForwardRef('PeftConfig')] = None )

train

( resume_from_checkpoint: typing.Union[str, bool, NoneType] = None trial: typing.Union[ForwardRef('optuna.Trial'), dict[str, typing.Any], NoneType] = None ignore_keys_for_eval: typing.Optional[list[str]] = None **kwargs: typing.Any )

Parameters

  • resume_from_checkpoint (str or bool, optional) — If a str, local path to a saved checkpoint as saved by a previous instance of Trainer. If a bool and equals True, load the last checkpoint in args.output_dir as saved by a previous instance of Trainer. If present, training will resume from the model/optimizer/scheduler states loaded here.
  • trial (optuna.Trial or dict[str, Any], optional) — The trial run or the hyperparameter dictionary for hyperparameter search.
  • ignore_keys_for_eval (list[str], optional) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions for evaluation during the training.
  • kwargs (dict[str, Any], optional) — Additional keyword arguments used to hide deprecated arguments.

Main training entry point.
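
For example, resuming an interrupted run (a usage sketch; the checkpoint path is hypothetical):

# resume from the most recent checkpoint in args.output_dir
trainer.train(resume_from_checkpoint=True)

# or resume from a specific checkpoint directory
trainer.train(resume_from_checkpoint="gold-model/checkpoint-500")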

generate_on_policy_outputs

( model inputs generation_config pad_token_id = None )

save_model

( output_dir: typing.Optional[str] = None _internal_call: bool = False )

Will save the model, so you can reload it using from_pretrained().

Will only save from the main process.
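
A usage sketch continuing the explicit example above, saving the distilled student and reloading it with transformers (saving the tokenizer explicitly keeps the output directory self-contained):

from transformers import AutoModelForCausalLM, AutoTokenizer

trainer.save_model("gold-model")           # write the student weights to gold-model/
tokenizer.save_pretrained("gold-model")    # keep the tokenizer next to the weights

student = AutoModelForCausalLM.from_pretrained("gold-model")
student_tokenizer = AutoTokenizer.from_pretrained("gold-model")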

push_to_hub

( commit_message: typing.Optional[str] = 'End of training' blocking: bool = True token: typing.Optional[str] = None revision: typing.Optional[str] = None **kwargs )

Parameters

  • commit_message (str, optional, defaults to "End of training") — Message to commit while pushing.
  • blocking (bool, optional, defaults to True) — Whether the function should return only when the git push has finished.
  • token (str, optional, defaults to None) — Token with write permission to overwrite Trainer’s original args.
  • revision (str, optional) — The git revision to commit from. Defaults to the head of the “main” branch.
  • kwargs (dict[str, Any], optional) — Additional keyword arguments passed along to ~Trainer.create_model_card.

Upload self.model and self.processing_class to the 🤗 model hub on the repo self.args.hub_model_id.
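
A usage sketch, assuming hub_model_id is set on the training arguments and a token with write access is available in your environment:

trainer.push_to_hub(commit_message="GOLD cross-tokenizer distillation run")

# push without waiting for the git upload to finish
trainer.push_to_hub(commit_message="intermediate checkpoint", blocking=False)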

GOLDConfig

class trl.experimental.gold.GOLDConfig

( output_dir: typing.Optional[str] = None overwrite_output_dir: bool = False do_train: bool = False do_eval: bool = False do_predict: bool = False eval_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only: bool = False per_device_train_batch_size: int = 8 per_device_eval_batch_size: int = 8 per_gpu_train_batch_size: typing.Optional[int] = None per_gpu_eval_batch_size: typing.Optional[int] = None gradient_accumulation_steps: int = 1 eval_accumulation_steps: typing.Optional[int] = None eval_delay: float = 0 torch_empty_cache_steps: typing.Optional[int] = None learning_rate: float = 1e-07 weight_decay: float = 0.0 adam_beta1: float = 0.9 adam_beta2: float = 0.999 adam_epsilon: float = 1e-08 max_grad_norm: float = 1.0 num_train_epochs: float = 3.0 max_steps: int = -1 lr_scheduler_type: typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' lr_scheduler_kwargs: typing.Union[dict[str, typing.Any], str] = <factory> warmup_ratio: float = 0.0 warmup_steps: int = 0 log_level: str = 'passive' log_level_replica: str = 'warning' log_on_each_node: bool = True logging_dir: typing.Optional[str] = None logging_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step: bool = False logging_steps: float = 10 logging_nan_inf_filter: bool = True save_strategy: typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps' save_steps: float = 500 save_total_limit: typing.Optional[int] = None save_safetensors: bool = True save_on_each_node: bool = False save_only_model: bool = False restore_callback_states_from_checkpoint: bool = False no_cuda: bool = False use_cpu: bool = False use_mps_device: bool = False seed: int = 42 data_seed: typing.Optional[int] = None jit_mode_eval: bool = False bf16: typing.Optional[bool] = None fp16: bool = False fp16_opt_level: str = 'O1' half_precision_backend: str = 'auto' bf16_full_eval: bool = False fp16_full_eval: bool = False tf32: typing.Optional[bool] = None local_rank: int = -1 ddp_backend: typing.Optional[str] = None tpu_num_cores: typing.Optional[int] = None tpu_metrics_debug: bool = False debug: typing.Union[str, list[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last: bool = False eval_steps: typing.Optional[float] = None dataloader_num_workers: int = 0 dataloader_prefetch_factor: typing.Optional[int] = None past_index: int = -1 run_name: typing.Optional[str] = None disable_tqdm: typing.Optional[bool] = None remove_unused_columns: bool = True label_names: typing.Optional[list[str]] = None load_best_model_at_end: bool = False metric_for_best_model: typing.Optional[str] = None greater_is_better: typing.Optional[bool] = None ignore_data_skip: bool = False fsdp: typing.Union[list[transformers.trainer_utils.FSDPOption], str, NoneType] = None fsdp_min_num_params: int = 0 fsdp_config: typing.Union[dict[str, typing.Any], str, NoneType] = None fsdp_transformer_layer_cls_to_wrap: typing.Optional[str] = None accelerator_config: typing.Union[dict, str, NoneType] = None parallelism_config: typing.Optional[accelerate.parallelism_config.ParallelismConfig] = None deepspeed: typing.Union[dict, str, NoneType] = None label_smoothing_factor: float = 0.0 optim: typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch_fused' optim_args: typing.Optional[str] = None adafactor: bool = False group_by_length: bool = False length_column_name: str = 'length' report_to: typing.Union[NoneType, str, list[str]] = None project: str = 'huggingface' 
trackio_space_id: typing.Optional[str] = 'trackio' ddp_find_unused_parameters: typing.Optional[bool] = None ddp_bucket_cap_mb: typing.Optional[int] = None ddp_broadcast_buffers: typing.Optional[bool] = None dataloader_pin_memory: bool = True dataloader_persistent_workers: bool = False skip_memory_metrics: bool = True use_legacy_prediction_loop: bool = False push_to_hub: bool = False resume_from_checkpoint: typing.Optional[str] = None hub_model_id: typing.Optional[str] = None hub_strategy: typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token: typing.Optional[str] = None hub_private_repo: typing.Optional[bool] = None hub_always_push: bool = False hub_revision: typing.Optional[str] = None gradient_checkpointing: bool = True gradient_checkpointing_kwargs: typing.Union[dict[str, typing.Any], str, NoneType] = None include_inputs_for_metrics: bool = False include_for_metrics: list = <factory> eval_do_concat_batches: bool = True fp16_backend: str = 'auto' push_to_hub_model_id: typing.Optional[str] = None push_to_hub_organization: typing.Optional[str] = None push_to_hub_token: typing.Optional[str] = None mp_parameters: str = '' auto_find_batch_size: bool = False full_determinism: bool = False torchdynamo: typing.Optional[str] = None ray_scope: typing.Optional[str] = 'last' ddp_timeout: int = 1800 torch_compile: bool = False torch_compile_backend: typing.Optional[str] = None torch_compile_mode: typing.Optional[str] = None include_tokens_per_second: bool = False include_num_input_tokens_seen: typing.Union[str, bool] = False neftune_noise_alpha: typing.Optional[float] = None optim_target_modules: typing.Union[NoneType, str, list[str]] = None batch_eval_metrics: bool = False eval_on_start: bool = False use_liger_kernel: bool = False liger_kernel_config: typing.Optional[dict[str, bool]] = None eval_use_gather_object: bool = False average_tokens_across_devices: bool = True model_init_kwargs: typing.Optional[dict[str, typing.Any]] = None chat_template_path: typing.Optional[str] = None dataset_text_field: str = 'text' dataset_kwargs: typing.Optional[dict[str, typing.Any]] = None dataset_num_proc: typing.Optional[int] = None eos_token: typing.Optional[str] = None pad_token: typing.Optional[str] = None max_length: typing.Optional[int] = 1024 packing: bool = False packing_strategy: str = 'bfd' padding_free: bool = False pad_to_multiple_of: typing.Optional[int] = None eval_packing: typing.Optional[bool] = None completion_only_loss: typing.Optional[bool] = None assistant_only_loss: bool = False loss_type: str = 'nll' activation_offloading: bool = False temperature: float = 0.9 top_p: float = 0.95 top_k: int = 0 lmbda: float = 0.5 beta: float = 0.5 max_completion_length: int = 128 student_model_revision: str = 'main' teacher_model_name_or_path: typing.Optional[str] = None teacher_model_init_kwargs: typing.Optional[dict[str, typing.Any]] = None teacher_tokenizer_name_or_path: typing.Optional[str] = None disable_dropout: bool = True seq_kd: bool = False steps_per_generation: typing.Optional[int] = None use_uld_loss: bool = False use_extended_uld: bool = True uld_use_hybrid_loss: bool = False uld_hybrid_matched_weight: typing.Optional[float] = None uld_hybrid_unmatched_weight: typing.Optional[float] = None uld_crossentropy_weight: float = 0.0 uld_distillation_weight: float = 1.0 uld_student_temperature: float = 1.0 uld_teacher_temperature: float = 1.0 uld_skip_student_eos: bool = True uld_skip_teacher_eos: bool = True use_transformers_paged: bool = False use_vllm: bool = False 
vllm_mode: str = 'server' vllm_server_host: str = '0.0.0.0' vllm_server_port: int = 8001 vllm_server_timeout: float = 240.0 vllm_gpu_memory_utilization: float = 0.9 vllm_tensor_parallel_size: int = 1 vllm_guided_decoding_regex: typing.Optional[str] = None vllm_sync_frequency: int = 1 vllm_enable_sleep_mode: bool = False log_completions: bool = False log_completions_steps: int = 100 num_completions_to_print: int = 5 wandb_entity: typing.Optional[str] = None wandb_project: typing.Optional[str] = None wandb_run_group: typing.Optional[str] = None wandb_log_unique_prompts: bool = True callbacks: list = <factory> hub_model_revision: typing.Optional[str] = 'main' overwrite_hub_revision: bool = False push_to_hub_revision: bool = False trl_project: str = 'smollm3' )

Parameters

  • temperature (float, optional, defaults to 0.9) — Temperature for sampling. The higher the temperature, the more random the completions.
  • lmbda (float, optional, defaults to 0.5) — Lambda parameter that controls the student data fraction (i.e., the proportion of on-policy student-generated outputs).
  • beta (float, optional, defaults to 0.5) — Interpolation coefficient between 0.0 and 1.0 of the Generalized Jensen-Shannon Divergence loss. When beta is 0.0, the loss is the KL divergence. When beta is 1.0, the loss is the Inverse KL Divergence (see the sketch at the end of this section).
  • max_completion_length (int, optional, defaults to 128) — Maximum number of tokens to generate per completion.
  • teacher_model_name_or_path (str or None, optional, defaults to None) — Model name or path of the teacher model. If None, the teacher model will be the same as the model being trained.
  • teacher_model_init_kwargs (dict[str, Any] or None, optional, defaults to None) — Keyword arguments to pass to AutoModelForCausalLM.from_pretrained when instantiating the teacher model from a string.
  • teacher_tokenizer_name_or_path (str or None, optional, defaults to None) — Tokenizer name or path for the teacher model. If None when using ULD loss, will use the same tokenizer as the student model (not recommended for cross-tokenizer distillation).
  • disable_dropout (bool, optional, defaults to True) — Whether to disable dropout in the model.
  • seq_kd (bool, optional, defaults to False) — Whether to perform sequence-level KD (can be viewed as supervised fine-tuning on teacher-generated outputs).
  • use_uld_loss (bool, optional, defaults to False) — Whether to use Universal Logit Distillation (ULD) loss instead of Generalized Jensen-Shannon Divergence loss.
  • uld_crossentropy_weight (float, optional, defaults to 0.0) — Weight for the cross-entropy loss component in ULD loss. If 0, only ULD distillation loss is used.
  • uld_distillation_weight (float, optional, defaults to 1.0) — Weight for the distillation loss component in ULD loss.
  • uld_student_temperature (float, optional, defaults to 1.0) — Temperature for student logits in ULD loss computation.
  • uld_teacher_temperature (float, optional, defaults to 1.0) — Temperature for teacher logits in ULD loss computation.
  • uld_skip_student_eos (bool, optional, defaults to True) — Whether to skip EOS token for student in ULD loss computation.
  • uld_skip_teacher_eos (bool, optional, defaults to True) — Whether to skip EOS token for teacher in ULD loss computation.
  • use_vllm (bool, optional, defaults to False) — Whether to use vLLM for generating completions from the student model. Requires vllm to be installed.
  • vllm_mode (str, optional, defaults to "server") — Mode for student vLLM integration. Either "server" (connect to a running TRL vLLM server) or "colocate" (run vLLM in the same process).
  • vllm_server_host (str, optional, defaults to "0.0.0.0") — Host of the vLLM server for the student model (if vllm_mode="server").
  • vllm_server_port (int, optional, defaults to 8001) — Port of the vLLM server for the student model (if vllm_mode="server").
  • vllm_server_timeout (float, optional, defaults to 240.0) — Timeout for connecting to the student vLLM server (if vllm_mode="server").
  • vllm_gpu_memory_utilization (float, optional, defaults to 0.9) — GPU memory utilization for the colocated student vLLM engine (if vllm_mode="colocate"). It is recommended to set this to a low value if the student and teacher models share the same GPU.
  • vllm_tensor_parallel_size (int, optional, defaults to 1) — Tensor parallel size for the colocated student vLLM engine (if vllm_mode="colocate").
  • vllm_guided_decoding_regex (str or None, optional, defaults to None) — Regex for vLLM guided decoding for the student model.
  • vllm_sync_frequency (int, optional, defaults to 1) — Frequency (in training steps) to synchronize student model weights to vLLM engine. Set to 1 to sync after every step.
  • vllm_enable_sleep_mode (bool, optional, defaults to False) — Whether to enable sleep mode for the student vLLM engine. If set to True, the engine will enter sleep mode after each training step to save resources.

Configuration class for GOLDTrainer.

This class includes only the parameters that are specific to GOLD training. For a full list of training arguments, please refer to the TrainingArguments and SFTConfig documentation.
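
The beta interpolation referenced above can be pictured with a small sketch (illustrative only, not the exact TRL implementation): both distributions are compared against their beta-weighted mixture, with the documented KL limits handled at the endpoints.

import torch
import torch.nn.functional as F

def generalized_jsd(student_logits, teacher_logits, beta):
    # Toy sketch of the beta-interpolated divergence; not the GOLD/GKD code.
    s = F.log_softmax(student_logits, dim=-1)
    t = F.log_softmax(teacher_logits, dim=-1)
    if beta == 0.0:
        # KL(teacher || student)
        return F.kl_div(s, t, reduction="batchmean", log_target=True)
    if beta == 1.0:
        # inverse KL: KL(student || teacher)
        return F.kl_div(t, s, reduction="batchmean", log_target=True)
    # log of the mixture M = beta * teacher + (1 - beta) * student
    beta_t = torch.tensor(beta)
    m = torch.logsumexp(torch.stack([t + torch.log(beta_t), s + torch.log(1 - beta_t)]), dim=0)
    kl_teacher = F.kl_div(m, t, reduction="batchmean", log_target=True)  # KL(teacher || M)
    kl_student = F.kl_div(m, s, reduction="batchmean", log_target=True)  # KL(student || M)
    return beta * kl_teacher + (1 - beta) * kl_student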
