arxiv:2510.23538

JanusCoder: Towards a Foundational Visual-Programmatic Interface for Code Intelligence

Published on Oct 27
· Submitted by Qiushi on Oct 30
#1 Paper of the day

Abstract

JanusCoder, a unified multimodal code corpus and model family, generates code from textual and visual inputs, matching or outperforming commercial models on a range of coding tasks.

AI-generated summary

The scope of neural code intelligence is rapidly expanding beyond text-based source code to encompass the rich visual outputs that programs generate. This visual dimension is critical for advanced applications like flexible content generation and precise, program-driven editing of visualizations. However, progress has been impeded by the scarcity of high-quality multimodal code data, a bottleneck stemming from challenges in synthesis and quality assessment. To address these challenges, we make contributions from both a data and a modeling perspective. We first introduce a complete synthesis toolkit that leverages reciprocal synergies between data modalities to efficiently produce a large-scale, high-quality corpus spanning standard charts, complex interactive web UIs, and code-driven animations. Leveraging this toolkit, we construct JanusCode-800K, the largest multimodal code corpus to date. This powers the training of our models, JanusCoder and JanusCoderV, which establish a visual-programmatic interface for generating code from textual instructions, visual inputs, or a combination of both. Our unified model is a departure from existing approaches that build specialized models for isolated tasks. Extensive experiments on both text-centric and vision-centric coding tasks demonstrate the superior performance of the JanusCoder series, with our 7B- to 14B-scale models approaching or even exceeding the performance of commercial models. Furthermore, extensive analysis provides key insights into harmonizing programmatic logic with its visual expression. Our code and checkpoints are available at https://github.com/InternLM/JanusCoder.

Community


JanusCoder is a suite of open models that establishes a unified visual–programmatic interface for multimodal code intelligence. The models (JanusCoder and JanusCoderV) handle both text-centric and vision-centric tasks within a single, unified framework. This allows them to tackle a diverse range of tasks, including but not limited to chart-to-code, web UI generation and editing, and code-driven animations. Across public benchmarks, the models demonstrate superior performance, approaching or even surpassing proprietary systems.

Code available at: https://github.com/InternLM/JanusCoder


Models citing this paper 16


Datasets citing this paper 0


Spaces citing this paper 0


Collections including this paper 2