InternVL config to_dict() includes non-JSON-serializable fields (e.g., torch.dtype), causing PretrainedConfig.__repr__ to crash in vLLM Ray mode

#11
by ringbird

Hi OpenGVLab team,

When deploying OpenGVLab/InternVL models (e.g., InternVL2_5-8B) with vLLM (>= 0.11.1 / 0.12.x) using the Ray distributed executor (--distributed-executor-backend ray), the service may fail during startup with a config repr / JSON serialization error.

During initialization, some code path triggers repr(config) (often via structured logging, dataclass repr, or the Ray context). In Transformers, PretrainedConfig.__repr__() calls to_json_string(), which in turn calls json.dumps(config.to_dict()).

However, InternVL’s custom config to_dict() returns a dict containing non-JSON-serializable objects, e.g. torch_dtype = torch.bfloat16 (a torch.dtype).

This leads to: TypeError: Object of type dtype is not JSON serializable (or the same error for InternLM2Config).
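
For reference, here is a minimal reproduction sketch; the repo id and trust_remote_code usage below are illustrative assumptions rather than a confirmed standalone repro:

```python
# Minimal reproduction sketch: repr() on the custom config should raise the
# TypeError above, because __repr__ ends up in json.dumps(config.to_dict()).
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "OpenGVLab/InternVL2_5-8B",  # assumed repo id, used for illustration
    trust_remote_code=True,      # pulls in configuration_internvl_chat.py
)

try:
    repr(config)  # PretrainedConfig.__repr__ -> to_json_string -> json.dumps(to_dict())
except TypeError as exc:
    print(exc)  # e.g. "Object of type dtype is not JSON serializable"
```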

vLLM's mp mode may not trigger this as often, but Ray mode tends to hit these repr/serialization paths more frequently. The core issue is that to_dict() is expected to be JSON-friendly for PretrainedConfig interoperability (repr, save/load, logging).
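
For comparison, on recent Transformers versions the base PretrainedConfig.to_dict() already normalizes torch_dtype into a plain string, which is the behavior a custom override would ideally preserve; a quick check (assuming a recent transformers install):

```python
import torch
from transformers import PretrainedConfig

# The base class converts torch_dtype to a plain string inside to_dict(),
# which is what keeps repr() / save_pretrained() JSON-safe.
base = PretrainedConfig(torch_dtype=torch.bfloat16)
print(base.to_dict()["torch_dtype"])  # expected: "bfloat16"
```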

Please update InternVL’s config implementation (e.g., configuration_internvl_chat.py) to ensure to_dict() returns JSON-serializable types only, for example:

  • Convert torch.dtype values to strings (e.g., torch.bfloat16 → "bfloat16")
  • Recursively convert nested PretrainedConfig objects to plain dicts (or otherwise ensure plain Python types); a rough sketch follows below
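
As a rough illustration of the requested change (not the actual InternVL code; the sanitizing helper below is hypothetical and recursion details may need adjusting), to_dict() in configuration_internvl_chat.py could sanitize its output before handing it back to Transformers:

```python
import copy

import torch
from transformers import PretrainedConfig


def _json_safe(value):
    """Recursively convert config values into JSON-serializable Python types."""
    if isinstance(value, torch.dtype):
        return str(value).removeprefix("torch.")   # torch.bfloat16 -> "bfloat16"
    if isinstance(value, PretrainedConfig):
        return _json_safe(value.to_dict())         # nested configs -> plain dicts
    if isinstance(value, dict):
        return {k: _json_safe(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [_json_safe(v) for v in value]
    return value


def to_dict(self):
    # Sketch of a to_dict() body for the custom config class: start from the
    # instance attributes, then sanitize everything so that
    # json.dumps(config.to_dict()) inside PretrainedConfig.to_json_string()
    # cannot fail on torch.dtype or nested config objects.
    output = copy.deepcopy(self.__dict__)
    output["model_type"] = self.__class__.model_type
    return _json_safe(output)
```

With something along these lines, repr(config) and config.to_json_string() should produce plain JSON, so the Ray logging/serialization paths in vLLM would no longer crash.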

This would improve compatibility across Transformers ecosystem tools (including vLLM Ray).

Thanks!
