arXiv:2509.12171

Preservation of Language Understanding Capabilities in Speech-aware Large Language Models

Published on Sep 15, 2025
Abstract

C3T evaluates speech-aware large language models by assessing their language understanding capabilities through speech input, fairness across speaker categories, and robustness across text and speech modalities.

AI-generated summary

The paper presents C3T (Cross-modal Capabilities Conservation Test), a new benchmark for assessing the performance of speech-aware large language models. The benchmark combines textual tasks with a voice-cloning text-to-speech model to quantify the extent to which language understanding capabilities are preserved when the model is accessed via speech input. C3T also measures the model's fairness across different categories of speakers and its robustness across text and speech modalities.
