Sinsy: A Deep Neural Network-Based Singing Voice Synthesis System
Yukiya Hono, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, and Keiichi Tokuda
Department of Computer Science, Nagoya Institute of Technology, Nagoya, Japan
Accepted to IEEE/ACM Transactions on Audio, Speech, and Language Processing (Preprint: arXiv:2102.07786)
Abstract
This paper presents Sinsy, a deep neural network (DNN)-based singing voice synthesis (SVS) system. In recent years, DNNs have been utilized in statistical parametric SVS systems, and DNN-based SVS systems have demonstrated better performance than conventional hidden Markov model-based ones. An SVS system is required to synthesize a singing voice whose pitch and timing strictly follow a given musical score, while also reproducing singing expressions that are not described in the score, such as vibrato and timing fluctuations. The proposed system is composed of four modules: a time-lag model, a duration model, an acoustic model, and a vocoder. With these modules, it can synthesize singing voices while taking such characteristics into account. To better model a singing voice, the proposed system incorporates improved pitch and vibrato modeling, as well as better training criteria, into the acoustic model. In addition, we incorporate PeriodNet, a non-autoregressive neural vocoder that is robust to pitch, to generate high-fidelity singing voice waveforms. Moreover, we propose automatic pitch correction techniques for DNN-based SVS that synthesize singing voices with correct pitch even when the training data contains out-of-tune phrases. Experimental results show that our system can synthesize singing voices with better timing, more natural vibrato, and correct pitch, and that it achieves better mean opinion scores in subjective evaluation tests.
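The abstract above describes a four-module pipeline (time-lag model, duration model, acoustic model, vocoder). Below is a minimal sketch of how such a pipeline could be wired together; all class and function names (TimeLagModel, DurationModel, AcousticModel, PeriodNetVocoderStub, synthesize_from_score) and the placeholder behaviors are illustrative assumptions, not the actual Sinsy implementation or API.

```python
# Minimal sketch of the four-module SVS pipeline described in the abstract.
# All names and placeholder behaviors are illustrative, not the Sinsy API.
import numpy as np


class TimeLagModel:
    """Predicts how far each note's actual onset deviates from the score."""
    def predict(self, score_features: np.ndarray) -> np.ndarray:
        # Placeholder: zero time-lag (notes start exactly on the beat).
        return np.zeros(len(score_features))


class DurationModel:
    """Predicts phoneme durations within each (time-lag-adjusted) note."""
    def predict(self, score_features: np.ndarray, time_lags: np.ndarray) -> np.ndarray:
        # Placeholder: a fixed 50-frame duration per phoneme.
        return np.full(len(score_features), 50, dtype=int)


class AcousticModel:
    """Maps frame-level score features to acoustic features (e.g. spectrum, F0)."""
    def predict(self, frame_features: np.ndarray) -> np.ndarray:
        # Placeholder: random 80-dimensional acoustic features per frame.
        return np.random.randn(len(frame_features), 80)


class PeriodNetVocoderStub:
    """Stands in for the neural vocoder that turns acoustic features into a waveform."""
    def synthesize(self, acoustic_features: np.ndarray, sr: int = 48000) -> np.ndarray:
        # Placeholder: silence of roughly the right length (5 ms frame shift assumed).
        return np.zeros(int(len(acoustic_features) * 0.005 * sr))


def synthesize_from_score(score_features: np.ndarray) -> np.ndarray:
    """Run the four modules in sequence: time-lag -> duration -> acoustic -> vocoder."""
    time_lags = TimeLagModel().predict(score_features)
    durations = DurationModel().predict(score_features, time_lags)
    # Expand note/phoneme-level features to frame level according to the durations.
    frame_features = np.repeat(score_features, durations, axis=0)
    acoustic_features = AcousticModel().predict(frame_features)
    return PeriodNetVocoderStub().synthesize(acoustic_features)


if __name__ == "__main__":
    dummy_score = np.random.randn(10, 32)   # 10 notes/phonemes, 32 score features each
    waveform = synthesize_from_score(dummy_score)
    print(waveform.shape)
```

In the sketch, note-level score features are expanded to frame level using the predicted durations before the acoustic model runs; a full implementation would also shift note boundaries according to the time-lag model's output.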
Audio samples (Japanese)
Comparison of Acoustic Feature Modeling
Reference | Sample 1 | Sample 2 |
---|---|---|
Natural | (audio) | (audio) |
System | Pitch norm.ᵃ | Skip connect.ᵇ | Vibratoᶜ | Criterionᵈ | Sample 1 | Sample 2 |
---|---|---|---|---|---|---|
System 1 | ✓ | ✓ | Diff-based | $\mathcal{L}$ | (audio) | (audio) |
System 2 | | | Diff-based | $\mathcal{L}$ | (audio) | (audio) |
System 3 | ✓ | | Diff-based | $\mathcal{L}$ | (audio) | (audio) |
System 4 | ✓ | ✓ | Sine-based | $\mathcal{L}$ | (audio) | (audio) |
System 5 | ✓ | ✓ | N/A | $\mathcal{L}$ | (audio) | (audio) |
System 6 | ✓ | ✓ | Diff-based | $\mathcal{L}^{\mathrm{(s)}}$ | (audio) | (audio) |
System 7 | ✓ | ✓ | Diff-based | $\mathcal{L}^{\mathrm{(d)}}$ | (audio) | (audio) |
ᵃ Pitch normalization described in Section IV-A of the paper.
ᵇ Skip connection described in Section IV-A of the paper.
ᶜ "Sine-based" denotes the sine-based vibrato modeling described in Section IV-B1; "Diff-based" denotes the difference-based vibrato modeling described in Section IV-B2.
ᵈ Training criteria $\mathcal{L}$, $\mathcal{L}^{\mathrm{(s)}}$, and $\mathcal{L}^{\mathrm{(d)}}$ are given by (8), (4), and (6) in the paper, respectively.
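As a rough illustration of the two vibrato schemes in footnote c, the sketch below extracts a difference-based vibrato component as the gap between the raw log F0 and a smoothed log F0 trajectory, and reconstructs a sine-based one from per-frame amplitude and rate parameters. This is a simplified reading of the idea, not the paper's exact formulation; the median-filter smoother, kernel size, and function names are assumptions.

```python
# Rough sketch contrasting the two vibrato representations in footnote c.
# Simplified illustration only; smoother choice and names are assumptions.
import numpy as np
from scipy.signal import medfilt


def diff_based_vibrato(log_f0: np.ndarray, kernel: int = 51) -> np.ndarray:
    """Difference-based: vibrato = raw log F0 minus a smoothed log F0 trajectory."""
    smoothed = medfilt(log_f0, kernel_size=kernel)  # assumed smoother choice
    return log_f0 - smoothed


def sine_based_vibrato(amplitude: np.ndarray, rate_hz: np.ndarray,
                       frame_shift: float = 0.005) -> np.ndarray:
    """Sine-based: vibrato is parameterized by per-frame amplitude and rate and
    reconstructed as a sinusoidal modulation of log F0."""
    phase = 2.0 * np.pi * np.cumsum(rate_hz) * frame_shift
    return amplitude * np.sin(phase)


if __name__ == "__main__":
    t = np.arange(400) * 0.005                                    # 2 s at a 5 ms frame shift
    log_f0 = np.log(440.0) + 0.05 * np.sin(2 * np.pi * 6.0 * t)   # synthetic 6 Hz vibrato
    v_diff = diff_based_vibrato(log_f0)
    v_sine = sine_based_vibrato(np.full_like(t, 0.05), np.full_like(t, 6.0))
    print(v_diff.shape, v_sine.shape)
```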

Comparison of Automatic Pitch Correction Techniques
Reference | Sample 1 | Sample 2 |
---|---|---|
Natural | (audio) | (audio) |
System | Note pitch | Prior | Sample 1 | Sample 2 |
---|---|---|---|---|
Org | Original note pitch | | (audio) | (audio) |
Org+Prior | Original note pitch | ✓ | (audio) | (audio) |
Heur | Heuristic pseudo-note pitch | | (audio) | (audio) |
Heur+Prior | Heuristic pseudo-note pitch | ✓ | (audio) | (audio) |
Bias | Pitch bias-based pseudo-note pitch | | (audio) | (audio) |
Bias+Prior | Pitch bias-based pseudo-note pitch | ✓ | (audio) | (audio) |
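To give a concrete feel for the "Bias" rows above, the sketch below derives a pitch bias-based pseudo-note pitch by shifting each score note by the average deviation (in semitones) of the singer's F0 from that note, so that the score seen during training matches out-of-tune phrases. The function name, per-note averaging, and units are assumptions for illustration, not the paper's exact procedure; at synthesis time the original note pitch from the score would be used, so the output stays in tune.

```python
# Hedged sketch of a pitch bias-based pseudo-note pitch for training data
# containing out-of-tune phrases. Names, per-note averaging, and units are
# assumptions for illustration, not the paper's exact procedure.
import numpy as np


def pseudo_note_pitch_bias(note_pitch_midi: np.ndarray,
                           sung_f0_hz: np.ndarray,
                           note_frames: list[tuple[int, int]]) -> np.ndarray:
    """Shift each score note by the mean deviation (in semitones) of the sung F0
    from that note, yielding pseudo-notes aligned with the actual singing.

    note_pitch_midi: one MIDI note number per note in the score.
    sung_f0_hz:      frame-level F0 from the training recording (0 = unvoiced).
    note_frames:     (start, end) frame range of each note.
    """
    pseudo = note_pitch_midi.astype(float)
    for i, (start, end) in enumerate(note_frames):
        f0 = sung_f0_hz[start:end]
        voiced = f0 > 0
        if not np.any(voiced):
            continue  # keep the original note pitch for unvoiced notes
        sung_midi = 69.0 + 12.0 * np.log2(f0[voiced] / 440.0)  # Hz -> MIDI note
        bias = np.mean(sung_midi - note_pitch_midi[i])          # average deviation
        pseudo[i] += bias
    return pseudo


if __name__ == "__main__":
    notes = np.array([60, 62, 64])                 # C4, D4, E4 in the score
    frames = [(0, 100), (100, 200), (200, 300)]
    # Singer is roughly 30 cents flat on the second note.
    f0 = np.concatenate([np.full(100, 261.63),
                         np.full(100, 293.66 * 2 ** (-0.3 / 12)),
                         np.full(100, 329.63)])
    print(pseudo_note_pitch_bias(notes, f0, frames))
```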

Reference
@article{hono2021sinsy,
title={Sinsy: A Deep Neural Network-Based Singing Voice Synthesis System},
author={Hono, Yukiya and Hashimoto, Kei and Oura, Keiichiro and Nankaku, Yoshihiko and Tokuda, Keiichi},
journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
year={2021},
volume={29},
pages={2803-2815},
doi={10.1109/TASLP.2021.3104165},
}