StoryTTS: A Highly Expressive Text-to-Speech Dataset with Rich Textual Expressiveness Annotations

S. Liu, Y. Guo, X. Chen, K. Yu. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing, 2024. ieeexplore.ieee.org
While acoustic expressiveness has long been studied in expressive text-to-speech (ETTS), the expressiveness inherent in text has received insufficient attention, especially for ETTS of artistic works. In this paper, we introduce StoryTTS, a highly expressive TTS dataset that contains rich expressiveness from both the acoustic and textual perspectives, derived from the recording of a Mandarin storytelling show. A systematic and comprehensive labeling framework is proposed for textual expressiveness. We analyze and define speech-related textual expressiveness in StoryTTS along five distinct dimensions, drawing on linguistics, rhetoric, etc. We then employ large language models, prompting them with a few manual annotation examples, to perform batch annotation. The resulting corpus contains 61 hours of consecutive and highly prosodic speech with accurate text transcriptions and rich textual expressiveness annotations. StoryTTS can therefore aid future ETTS research in fully mining the abundant intrinsic textual and acoustic features. Experiments validate that TTS models generate speech with improved expressiveness when the annotated textual labels in StoryTTS are integrated.
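The abstract describes prompting large language models with a few manually annotated examples to batch-annotate textual expressiveness. Below is a minimal sketch of how such few-shot batch annotation could look, assuming the OpenAI Python chat-completions client; the model name, dimension names, and example labels are hypothetical placeholders, not the annotation scheme defined in StoryTTS.

```python
# A minimal sketch of few-shot LLM batch annotation, assuming the OpenAI
# Python client (chat completions). Dimension names, example text, and labels
# are hypothetical placeholders, not the annotation scheme defined in StoryTTS.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical manually annotated examples used as few-shot demonstrations.
FEW_SHOT_EXAMPLES = [
    {
        "text": "An example storytelling sentence.",
        "labels": {"dimension_a": "label_1", "dimension_b": "label_2"},
    },
]

def annotate_batch(sentences: list[str], model: str = "gpt-4o") -> list[dict]:
    """Label a batch of sentences along (hypothetical) expressiveness dimensions."""
    prompt = (
        "Annotate each sentence along the textual-expressiveness dimensions, "
        "following the labeled examples. Return a JSON list with one object "
        "of labels per input sentence.\n\n"
        "Examples:\n"
        + json.dumps(FEW_SHOT_EXAMPLES, ensure_ascii=False, indent=2)
        + "\n\nSentences:\n"
        + json.dumps(sentences, ensure_ascii=False, indent=2)
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    return json.loads(response.choices[0].message.content)

# Usage sketch: labels = annotate_batch(["sentence 1", "sentence 2"])
```

In such a setup, the few-shot examples stand in for the paper's manual annotations and the batched JSON output is what would be merged back into the corpus; the actual prompt design and label taxonomy are defined in the paper itself.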