Transformers documentation
This model was released on 2025-09-23 and added to Hugging Face Transformers on 2025-09-15.
Qwen3-VL
Qwen3-VL is a multimodal vision-language model series, encompassing both dense and MoE variants, as well as Instruct and Thinking versions. Building upon its predecessors, Qwen3-VL delivers significant improvements in visual understanding while maintaining strong pure-text capabilities. Key architectural advancements include: an enhanced MRoPE with an interleaved layout for better spatial-temporal modeling, DeepStack integration to effectively leverage multi-level features from the Vision Transformer (ViT), and improved video understanding through text-based time alignment, evolving from T-RoPE to text timestamp alignment for more precise temporal grounding. Together, these innovations enable Qwen3-VL to achieve superior performance on complex multimodal tasks.
Model usage
import torch
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor

model = Qwen3VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen3-VL-4B-Instruct",
    dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa",
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-4B-Instruct")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)
inputs.pop("token_type_ids", None)  # drop token_type_ids if present; the model does not use them
inputs = inputs.to(model.device)  # move the input tensors onto the model device

generated_ids = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens so only the generated answer is decoded.
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)

Qwen3VLConfig
class transformers.Qwen3VLConfig
< source >( text_config = None vision_config = None image_token_id = 151655 video_token_id = 151656 vision_start_token_id = 151652 vision_end_token_id = 151653 tie_word_embeddings = False **kwargs )
Parameters
- text_config (Union[PreTrainedConfig, dict], optional, defaults to Qwen3VLTextConfig) — The config object or dictionary of the text backbone.
- vision_config (Union[PreTrainedConfig, dict], optional, defaults to Qwen3VLVisionConfig) — The config object or dictionary of the vision backbone.
- image_token_id (int, optional, defaults to 151655) — The image token index used to encode the image prompt.
- video_token_id (int, optional, defaults to 151656) — The video token index used to encode the video prompt.
- vision_start_token_id (int, optional, defaults to 151652) — The token index marking the start of a vision (image or video) prompt.
- vision_end_token_id (int, optional, defaults to 151653) — The token index marking the end of a vision (image or video) prompt.
- tie_word_embeddings (bool, optional, defaults to False) — Whether to tie the word embeddings.
This is the configuration class to store the configuration of a Qwen3VLModel. It is used to instantiate a Qwen3-VL model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of Qwen3-VL-4B-Instruct (Qwen/Qwen3-VL-4B-Instruct).
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
>>> from transformers import Qwen3VLForConditionalGeneration, Qwen3VLConfig
>>> # Initializing a Qwen3-VL style configuration
>>> configuration = Qwen3VLConfig()
>>> # Initializing a model from the Qwen3-VL-4B style configuration
>>> model = Qwen3VLForConditionalGeneration(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config

Qwen3VLTextConfig
class transformers.Qwen3VLTextConfig
< source >( vocab_size = 151936 hidden_size = 4096 intermediate_size = 22016 num_hidden_layers = 32 num_attention_heads = 32 num_key_value_heads = 32 head_dim = 128 hidden_act = 'silu' max_position_embeddings = 128000 initializer_range = 0.02 rms_norm_eps = 1e-06 use_cache = True tie_word_embeddings = False rope_theta = 5000000.0 rope_scaling = None attention_bias = False attention_dropout = 0.0 **kwargs )
Parameters
- vocab_size (int, optional, defaults to 151936) — Vocabulary size of the Qwen3VL model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling Qwen3VLModel.
- hidden_size (int, optional, defaults to 4096) — Dimension of the hidden representations.
- intermediate_size (int, optional, defaults to 22016) — Dimension of the MLP representations.
- num_hidden_layers (int, optional, defaults to 32) — Number of hidden layers in the Transformer encoder.
- num_attention_heads (int, optional, defaults to 32) — Number of attention heads for each attention layer in the Transformer encoder.
- num_key_value_heads (int, optional, defaults to 32) — Number of key/value heads used to implement Grouped Query Attention (GQA). If num_key_value_heads=num_attention_heads, the model uses Multi Head Attention (MHA); if num_key_value_heads=1, it uses Multi Query Attention (MQA); otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group's key and value head should be constructed by mean-pooling all the original heads within that group. For more details, check out this paper. If not specified, defaults to 32.
- head_dim (int, optional, defaults to 128) — The dimension of each attention head. If not specified, defaults to hidden_size // num_attention_heads.
- hidden_act (str or function, optional, defaults to "silu") — The non-linear activation function (function or string) in the decoder.
- max_position_embeddings (int, optional, defaults to 128000) — The maximum sequence length that this model might ever be used with.
- initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- rms_norm_eps (float, optional, defaults to 1e-06) — The epsilon used by the rms normalization layers.
- use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True.
- tie_word_embeddings (bool, optional, defaults to False) — Whether the model's input and output word embeddings should be tied.
- rope_theta (float, optional, defaults to 5000000.0) — The base period of the RoPE embeddings.
- rope_scaling (Dict, optional) — Dictionary containing the scaling configuration for the RoPE embeddings (see the sketch after this parameter list). NOTE: if you apply a new rope type and expect the model to work on a longer max_position_embeddings, we recommend you update this value accordingly. Expected contents:
  - rope_type (str): The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope', 'llama3'], with 'default' being the original RoPE implementation.
  - factor (float, optional): Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In most scaling types, a factor of x will enable the model to handle sequences of length x * original maximum pre-trained length.
  - original_max_position_embeddings (int, optional): Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during pretraining.
  - attention_factor (float, optional): Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention computation. If unspecified, it defaults to the value recommended by the implementation, using the factor field to infer the suggested value.
  - beta_fast (float, optional): Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear ramp function. If unspecified, it defaults to 32.
  - beta_slow (float, optional): Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear ramp function. If unspecified, it defaults to 1.
  - short_factor (list[float], optional): Only used with 'longrope'. The scaling factor to be applied to short contexts (< original_max_position_embeddings). Must be a list of numbers with length equal to the hidden size divided by the number of attention heads divided by 2.
  - long_factor (list[float], optional): Only used with 'longrope'. The scaling factor to be applied to long contexts (> original_max_position_embeddings). Must be a list of numbers with length equal to the hidden size divided by the number of attention heads divided by 2.
  - low_freq_factor (float, optional): Only used with 'llama3'. Scaling factor applied to the low frequency components of the RoPE.
  - high_freq_factor (float, optional): Only used with 'llama3'. Scaling factor applied to the high frequency components of the RoPE.
- attention_bias (bool, optional, defaults to False) — Whether to use a bias in the query, key, value and output projection layers during self-attention.
- attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
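As referenced in the rope_scaling entry above, here is an illustrative sketch of passing a scaling dictionary (a hypothetical YaRN setup, not a released configuration; whether a given rope type composes with Qwen3-VL's interleaved MRoPE depends on the checkpoint):

>>> from transformers import Qwen3VLTextConfig

>>> # Hypothetical YaRN scaling: lets the model address roughly 2x the
>>> # pre-trained context length.
>>> configuration = Qwen3VLTextConfig(
...     rope_scaling={"rope_type": "yarn", "factor": 2.0}
... )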
This is the configuration class to store the configuration of a Qwen3VLTextModel. It is used to instantiate a Qwen3-VL model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of Qwen3-VL-4B-Instruct (Qwen/Qwen3-VL-4B-Instruct).
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
>>> from transformers import Qwen3VLTextModel, Qwen3VLTextConfig
>>> # Initializing a Qwen3VL style configuration
>>> configuration = Qwen3VLTextConfig()
>>> # Initializing a model from the Qwen3-VL-7B style configuration
>>> model = Qwen3VLTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config

Qwen3VLProcessor
class transformers.Qwen3VLProcessor
< source >( image_processor = None tokenizer = None video_processor = None chat_template = None **kwargs )
Parameters
- image_processor (Qwen2VLImageProcessor, optional) — The image processor is a required input.
- tokenizer (Qwen2TokenizerFast, optional) — The tokenizer is a required input.
- video_processor (Qwen3VLVideoProcessor, optional) — The video processor is a required input.
- chat_template (str, optional) — A Jinja template which will be used to convert lists of messages in a chat into a tokenizable string.
Constructs a Qwen3VL processor which wraps a Qwen2VL image processor, a Qwen3VL video processor, and a Qwen2 tokenizer into a single processor.
Qwen3VLProcessor offers all the functionalities of Qwen2VLImageProcessor and Qwen2TokenizerFast. See __call__() and decode() for more information.
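As a minimal sketch of calling the processor directly (in practice apply_chat_template inserts the vision tokens for you; the placeholder string below follows the Qwen2-VL convention and is an assumption here):

>>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-4B-Instruct")
>>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> # The image placeholder is expanded to one token per merged vision patch.
>>> text = "<|vision_start|><|image_pad|><|vision_end|>Describe this image."
>>> inputs = processor(text=[text], images=[image], return_tensors="pt")
>>> sorted(inputs.keys())  # attention_mask, image_grid_thw, input_ids, pixel_values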
post_process_image_text_to_text
< source >( generated_outputs skip_special_tokens = True clean_up_tokenization_spaces = False **kwargs ) → list[str]
Parameters
- generated_outputs (torch.Tensor or np.ndarray) — The output of the model's generate function. The output is expected to be a tensor of shape (batch_size, sequence_length) or (sequence_length,).
- skip_special_tokens (bool, optional, defaults to True) — Whether or not to remove special tokens in the output. Argument passed to the tokenizer's batch_decode method.
- clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether or not to clean up the tokenization spaces. Argument passed to the tokenizer's batch_decode method.
- **kwargs — Additional arguments to be passed to the tokenizer's batch_decode method.
Returns
list[str]
The decoded text.
Post-process the output of the model to decode the text.
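For example, continuing from the variables built in the usage section above, decoding a generation with the processor is a one-liner (a sketch):

generated_ids = model.generate(**inputs, max_new_tokens=128)
# Decodes the full sequences, prompt included; trim the prompt tokens first
# (as in the usage example) if only the model's answer is wanted.
output_text = processor.post_process_image_text_to_text(generated_ids)
print(output_text)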
Qwen3VLVideoProcessor
class transformers.Qwen3VLVideoProcessor
< source >( **kwargs: typing_extensions.Unpack[transformers.models.qwen3_vl.video_processing_qwen3_vl.Qwen3VLVideoProcessorInitKwargs] )
Parameters
- do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the video's (height, width) dimensions to the specified size. Can be overridden by the do_resize parameter in the preprocess method.
- size (dict, optional, defaults to self.size) — Size of the output video after resizing. Can be overridden by the size parameter in the preprocess method.
- size_divisor (int, optional, defaults to self.size_divisor) — The size by which to make sure both the height and width can be divided.
- default_to_square (bool, optional, defaults to self.default_to_square) — Whether to default to a square video when resizing, if size is an int.
- resample (PILImageResampling, optional, defaults to self.resample) — Resampling filter to use if resizing the video. Only has an effect if do_resize is set to True. Can be overridden by the resample parameter in the preprocess method.
- do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether to center crop the video to the specified crop_size. Can be overridden by do_center_crop in the preprocess method.
- crop_size (dict[str, int], optional, defaults to self.crop_size) — Size of the output video after applying center_crop. Can be overridden by crop_size in the preprocess method.
- do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the video by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in the preprocess method.
- rescale_factor (int or float, optional, defaults to self.rescale_factor) — Scale factor to use if rescaling the video. Only has an effect if do_rescale is set to True. Can be overridden by the rescale_factor parameter in the preprocess method.
- do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the video. Can be overridden by the do_normalize parameter in the preprocess method.
- image_mean (float or list[float], optional, defaults to self.image_mean) — Mean to use if normalizing the video. This is a float or list of floats of length equal to the number of channels in the video. Can be overridden by the image_mean parameter in the preprocess method.
- image_std (float or list[float], optional, defaults to self.image_std) — Standard deviation to use if normalizing the video. This is a float or list of floats of length equal to the number of channels in the video. Can be overridden by the image_std parameter in the preprocess method.
- do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) — Whether to convert the video to RGB.
- video_metadata (VideoMetadata, optional) — Metadata of the video containing information about total duration, fps and total number of frames.
- do_sample_frames (bool, optional, defaults to self.do_sample_frames) — Whether to sample frames from the video before processing or to process the whole video.
- num_frames (int, optional, defaults to self.num_frames) — Maximum number of frames to sample when do_sample_frames=True.
- fps (int or float, optional, defaults to self.fps) — Target frames to sample per second when do_sample_frames=True.
- return_tensors (str or TensorType, optional) — Returns stacked tensors if set to "pt", otherwise returns a list of tensors.
- data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output video. Can be one of:
  - "channels_first" or ChannelDimension.FIRST: video in (num_channels, height, width) format.
  - "channels_last" or ChannelDimension.LAST: video in (height, width, num_channels) format.
  - Unset: Use the channel dimension format of the input video.
- input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input video. If unset, the channel dimension format is inferred from the input video. Can be one of:
  - "channels_first" or ChannelDimension.FIRST: video in (num_channels, height, width) format.
  - "channels_last" or ChannelDimension.LAST: video in (height, width, num_channels) format.
  - "none" or ChannelDimension.NONE: video in (height, width) format.
- device (torch.device, optional) — The device to process the videos on. If unset, the device is inferred from the input videos.
- return_metadata (bool, optional) — Whether to return video metadata or not.
- patch_size (int, optional, defaults to 16) — The spatial patch size of the vision encoder.
- temporal_patch_size (int, optional, defaults to 2) — The temporal patch size of the vision encoder.
- merge_size (int, optional, defaults to 2) — The merge size used when mapping vision-encoder patches to LLM tokens.
Constructs a fast Qwen3-VL video processor that dynamically resizes videos based on their original dimensions.
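A minimal sketch of running the video processor on an in-memory clip (the dummy array shape and dtype are assumptions for illustration):

>>> import numpy as np
>>> from transformers import AutoVideoProcessor

>>> video_processor = AutoVideoProcessor.from_pretrained("Qwen/Qwen3-VL-4B-Instruct")
>>> # Dummy 16-frame RGB clip in (num_frames, height, width, channels) layout.
>>> video = np.random.randint(0, 256, size=(16, 360, 640, 3), dtype=np.uint8)
>>> # The clip is already short, so frame sampling is skipped here.
>>> inputs = video_processor(videos=[video], do_sample_frames=False, return_tensors="pt")
>>> inputs["pixel_values_videos"].shape  # flattened patches; exact shape depends on the input size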
sample_frames
< source >( metadata: VideoMetadata num_frames: typing.Optional[int] = None fps: typing.Union[int, float, NoneType] = None **kwargs ) → torch.Tensor
Parameters
- metadata (VideoMetadata) — Metadata of the video containing information about total duration, fps and total number of frames.
- num_frames (int, optional) — Maximum number of frames to sample. Defaults to self.num_frames.
- fps (int or float, optional) — Target frames to sample per second. Defaults to self.fps.
Returns
torch.Tensor
Sampled video frames.
Default sampling function which uniformly samples the desired number of frames between 0 and the total number of frames.
If fps is passed along with metadata, fps frames per second are sampled uniformly. The num_frames
and fps arguments are mutually exclusive.
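A sketch of the sampling call (the VideoMetadata import path and fields shown are assumptions based on the description above; video_processor comes from the example in the class docs):

>>> from transformers.video_utils import VideoMetadata

>>> metadata = VideoMetadata(total_num_frames=300, fps=30.0, duration=10.0, video_backend="opencv")
>>> # Sample 2 frames per second of the 10 s clip; num_frames and fps are
>>> # mutually exclusive, so only one of them is passed.
>>> sampled = video_processor.sample_frames(metadata=metadata, fps=2)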
Qwen3VLVisionModel
forward
< source >( hidden_states: Tensor grid_thw: Tensor **kwargs ) → torch.Tensor
Qwen3VLTextModel
class transformers.Qwen3VLTextModel
< source >( config: Qwen3VLTextConfig )
Parameters
- config (Qwen3VLTextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The text part of Qwen3-VL. It is not a purely text-only model: DeepStack integrates visual features into the early hidden states.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[transformers.cache_utils.Cache] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None cache_position: typing.Optional[torch.LongTensor] = None visual_pos_masks: typing.Optional[torch.Tensor] = None deepstack_visual_embeds: typing.Optional[list[torch.Tensor]] = None **kwargs: typing_extensions.Unpack[transformers.modeling_flash_attention_utils.FlashAttentionKwargs] ) → transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].
- past_key_values (~cache_utils.Cache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True. Only a Cache instance is allowed as input, see our kv cache guide. If no past_key_values are passed, a DynamicCache will be initialized by default. The model will output the same cache format that is fed as input. If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don't have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length).
- inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
- use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
- cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
- visual_pos_masks (torch.Tensor of shape (batch_size, seqlen), optional) — The mask of the visual positions.
- deepstack_visual_embeds (list[torch.Tensor], optional) — The DeepStack visual embeddings, of shape (num_layers, visual_seqlen, embed_dim). The features are extracted from different visual encoder layers and fed into the decoder hidden states, following the DeepStack paper (https://arxiv.org/abs/2406.04334).
Returns
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Qwen3VLConfig) and inputs.
- last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used, only the last hidden-state of the sequences, of shape (batch_size, 1, hidden_size), is output.
- past_key_values (Cache, optional, returned when use_cache=True is passed or when config.use_cache=True) — It is a Cache instance. For more details, see our kv cache guide. Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally, if config.is_encoder_decoder=True, in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The Qwen3VLTextModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
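A random-weight smoke test of the text backbone (a sketch; only the depth is reduced, so the default rotary geometry stays valid):

>>> import torch
>>> from transformers import Qwen3VLTextConfig, Qwen3VLTextModel

>>> configuration = Qwen3VLTextConfig(num_hidden_layers=2)
>>> model = Qwen3VLTextModel(configuration)
>>> input_ids = torch.randint(0, configuration.vocab_size, (1, 8))
>>> with torch.no_grad():
...     outputs = model(input_ids=input_ids)
>>> outputs.last_hidden_state.shape
torch.Size([1, 8, 4096])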
Qwen3VLModel
class transformers.Qwen3VLModel
< source >( config )
Parameters
- config (Qwen3VLConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Qwen3-VL Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[transformers.cache_utils.Cache] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None pixel_values: typing.Optional[torch.Tensor] = None pixel_values_videos: typing.Optional[torch.FloatTensor] = None image_grid_thw: typing.Optional[torch.LongTensor] = None video_grid_thw: typing.Optional[torch.LongTensor] = None cache_position: typing.Optional[torch.LongTensor] = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) → transformers.models.qwen3_vl.modeling_qwen3_vl.Qwen3VLModelOutputWithPast or tuple(torch.FloatTensor)
Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].
- past_key_values (~cache_utils.Cache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True. Only a Cache instance is allowed as input, see our kv cache guide. If no past_key_values are passed, a DynamicCache will be initialized by default. The model will output the same cache format that is fed as input. If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don't have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length).
- inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
- pixel_values (torch.Tensor of shape (batch_size, num_channels, image_size, image_size), optional) — The tensors corresponding to the input images. Pixel values can be obtained using the image processor (Qwen2VLImageProcessor); see its __call__ method for details. Qwen3VLProcessor uses the image processor internally for processing images.
- pixel_values_videos (torch.FloatTensor of shape (batch_size, num_frames, num_channels, frame_size, frame_size), optional) — The tensors corresponding to the input videos. Pixel values for videos can be obtained using the video processor (Qwen3VLVideoProcessor); see its __call__ method for details. Qwen3VLProcessor uses the video processor internally for processing videos.
- image_grid_thw (torch.LongTensor of shape (num_images, 3), optional) — The temporal, height and width dimensions of the feature grid for each image fed to the LLM.
- video_grid_thw (torch.LongTensor of shape (num_videos, 3), optional) — The temporal, height and width dimensions of the feature grid for each video fed to the LLM.
- cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
Returns
transformers.models.qwen3_vl.modeling_qwen3_vl.Qwen3VLModelOutputWithPast or tuple(torch.FloatTensor)
A transformers.models.qwen3_vl.modeling_qwen3_vl.Qwen3VLModelOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Qwen3VLConfig) and inputs.
- last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
- past_key_values (Cache, optional, returned when use_cache=True is passed or when config.use_cache=True) — It is a Cache instance. For more details, see our kv cache guide. Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
- hidden_states (tuple[torch.FloatTensor], optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- attentions (tuple[torch.FloatTensor], optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- rope_deltas (torch.LongTensor of shape (batch_size,), optional) — The rope index difference between sequence length and multimodal rope.
The Qwen3VLModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
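A sketch of running the bare model on processed image inputs (the checkpoint name follows the Instruct variant referenced by the configs on this page):

import torch
from transformers import AutoProcessor, Qwen3VLModel

# The bare model returns hidden states rather than language-model logits.
model = Qwen3VLModel.from_pretrained(
    "Qwen/Qwen3-VL-4B-Instruct", dtype=torch.float16, device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-4B-Instruct")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_dict=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)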
Qwen3VLForConditionalGeneration
forward
< source >( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[transformers.cache_utils.Cache] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None pixel_values: typing.Optional[torch.Tensor] = None pixel_values_videos: typing.Optional[torch.FloatTensor] = None image_grid_thw: typing.Optional[torch.LongTensor] = None video_grid_thw: typing.Optional[torch.LongTensor] = None cache_position: typing.Optional[torch.LongTensor] = None logits_to_keep: typing.Union[int, torch.Tensor] = 0 **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] )
Parameters
- labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
- image_grid_thw (torch.LongTensor of shape (num_images, 3), optional) — The temporal, height and width dimensions of the feature grid for each image fed to the LLM.
- video_grid_thw (torch.LongTensor of shape (num_videos, 3), optional) — The temporal, height and width dimensions of the feature grid for each video fed to the LLM.
Example:
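A sketch mirroring the usage section above (the checkpoint name follows the Instruct variant referenced by the configs on this page):

import torch
from transformers import AutoProcessor, Qwen3VLForConditionalGeneration

model = Qwen3VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen3-VL-4B-Instruct", dtype=torch.float16, device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-4B-Instruct")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_dict=True, return_tensors="pt"
).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=128)
# Decode everything, prompt included; see the usage section for trimming.
print(processor.post_process_image_text_to_text(generated_ids))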