Lightweight and High-Fidelity End-to-End Text-to-Speech with Multi-Band Generation and Inverse Short-Time Fourier Transform

Masaya Kawamura1*, Yuma Shirahata2, Ryuichi Yamamoto2, Kentaro Tachibana2

1The University of Tokyo, Tokyo, Japan
2LINE Corp., Japan
*Work performed during an internship at LINE Corporation.

Accepted to ICASSP 2023

Abstract

We propose a lightweight end-to-end text-to-speech model using multi-band generation and inverse short-time Fourier transform. Our model is based on VITS, a high-quality end-to-end text-to-speech model, but adopts two changes for more efficient inference: 1) the most computationally expensive component is partially replaced with a simple inverse short-time Fourier transform, and 2) multi-band generation, with fixed or trainable synthesis filters, is used to generate waveforms. Unlike conventional lightweight models, which employ optimization or knowledge distillation separately to train two cascaded components, our method enjoys the full benefits of end-to-end optimization. Experimental results show that our model synthesized speech as natural as that synthesized by VITS, while achieving a real-time factor of 0.066 on an Intel Core i7 CPU, 4.1 times faster than VITS. Moreover, a smaller version of the model significantly outperformed a lightweight baseline model with respect to both naturalness and inference speed. Code and audio samples are available from https://github.com/MasayaKawamura/MB-iSTFT-VITS.

Figure: Architecture of multi-band iSTFT VITS (MB-iSTFT-VITS) and multi-stream iSTFT VITS (MS-iSTFT-VITS).
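To make the two changes above concrete, the following PyTorch-style sketch illustrates such a decoder output stage. It is not the official implementation: the module name MultiStreamISTFTHead, all tensor shapes, and the hyperparameters (n_fft, hop_length, kernel_size, number of streams) are illustrative assumptions. Each stream is converted to a waveform with torch.istft from predicted magnitude and phase, and the streams are merged by a trainable synthesis filter, as in the multi-stream (MS-iSTFT-VITS) variant; the multi-band (MB-iSTFT-VITS) variant would instead use a fixed PQMF synthesis filterbank.

    # Minimal sketch (not the authors' code) of a multi-stream iSTFT output stage.
    # All names, shapes, and hyperparameters below are illustrative assumptions.
    import torch
    import torch.nn as nn

    class MultiStreamISTFTHead(nn.Module):
        def __init__(self, num_streams=4, n_fft=16, hop_length=4, kernel_size=63):
            super().__init__()
            self.n_fft = n_fft
            self.hop_length = hop_length
            self.register_buffer("window", torch.hann_window(n_fft))
            # Trainable synthesis filter: zero-stuffing upsampling by
            # `num_streams` and FIR filtering fused into one transposed conv.
            self.synthesis = nn.ConvTranspose1d(
                num_streams, 1, kernel_size, stride=num_streams,
                padding=(kernel_size - num_streams) // 2, bias=False,
            )

        def forward(self, mag, phase):
            # mag, phase: (batch, streams, n_fft // 2 + 1, frames),
            # predicted by the upstream decoder network.
            b, s, f, t = mag.shape
            spec = torch.polar(mag, phase).reshape(b * s, f, t)
            # A cheap iSTFT stands in for the expensive upsampling layers.
            sub = torch.istft(
                spec, self.n_fft, hop_length=self.hop_length, window=self.window,
            ).reshape(b, s, -1)        # (batch, streams, samples per stream)
            return self.synthesis(sub)  # (batch, 1, ~samples)

    head = MultiStreamISTFTHead()
    mag = torch.rand(1, 4, 9, 100).exp()      # dummy positive magnitudes
    phase = torch.rand(1, 4, 9, 100) * 3.14   # dummy phases
    wav = head(mag, phase)
    print(wav.shape)                          # torch.Size([1, 1, 1585])

Fusing the zero-stuffing upsampling and the FIR synthesis filtering into a single transposed convolution keeps the stream-combination step cheap; because the whole head is differentiable, it can be trained end-to-end together with the rest of the model, which is the point of the design.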

Demo

This page accompanies the paper and presents examples of speech synthesized by the proposed and conventional methods. The ground-truth audio clips are from the LJ Speech dataset [1]; none of these clips were included in the training data.


Each system below has one audio sample per test sentence. The four test sentences are:

Text 1: "Weedon and Lecasser to twelve and six months respectively in Coldbath Fields."
Text 2: "There among the ruins they still live in the same kind of houses,"
Text 3: "Mrs. De Mohrenschildt thought that Oswald,"
Text 4: "carbohydrates (starch, cellulose) and fats."

Systems:

Ground truth
VITS [2]
Nix-TTS [3]
iSTFT-VITS
MB-iSTFT-VITS
MS-iSTFT-VITS
Mini-VITS
Mini-iSTFT-VITS
Mini-MB-iSTFT-VITS

References

[1] K. Ito, "The LJ Speech Dataset," https://keithito.com/LJ-Speech-Dataset/, 2017.
[2] J. Kim, J. Kong, and J. Son, "Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech," in Proc. ICML, 2021, pp. 5530–5540.
[3] R. Chevi, R. E. Prasojo, A. F. Aji, A. Tjandra, and S. Sakti, "Nix-TTS: Lightweight and end-to-end text-to-speech via module-wise distillation," in Proc. SLT, 2023, pp. 970–976.