* 発表論文 - 2019 [#y958cf56]

#contents

** 論文誌 [#p77248dd]
+ Xin Wang, Shinji Takaki, and Junichi Yamagishi,  ``Neural source-filter waveform models for statistical parametric speech synthesis,'' IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 402-415, November 2019. (Full paper peer reviewed)
&publication(2019/20191128_Journal_IEEE_Xin_Wang_paper.pdf, paper);
[[link>https://ieeexplore.ieee.org/document/8915761?source=authoralert]]
+ Xin Wang, Shinji Takaki, Junichi Yamagishi, Simon King, and Keiichi Tokuda, ``A vector quantized variational autoencoder (VQ-VAE) autoregressive neural F0 model for statistical parametric speech synthesis,'' IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 157-170, October 2019. (Full paper peer reviewed)
&publication(2019/20191028_Journal_IEEE_Xin_Wang_paper.pdf, paper);
[[link>https://ieeexplore.ieee.org/document/8884734]]
+ 高木信二, ``話声の合成における応用技術 : DNNテキスト音声合成システム,'' 日本音響学会誌, vol. 75, no. 7, pp. 393-399, 2019年7月. (解説論文)
// Shinji Takaki, ``Applied technology for speech synthesis : DNN-based text-to-speech synthesis,'' The journal of the acoustical society of Japan, vol. 75, no. 7, pp. 393-399, July 2019. (Review paper)
&publication(2019/20190701_Journal_ASJ_Shinji_Takaki_paper.pdf, paper);
[[link>https://ci.nii.ac.jp/naid/40021965034/]]

** 国際会議 [#r32cf446]
+ %%%Motoki Shimada%%%, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, and Keiichi Tokuda, ``Low computational cost speech synthesis based on deep neural networks using hidden semi-Markov model structures,'' 10th ISCA Speech Synthesis Workshop (SSW10), pp. 177-182, Vienna, Austria, September, 2019. (Full paper peer reviewed)
&publication(2019/20190921_IConference_SSW_Motoki_Shimada_paper.pdf, paper);
&publication(2019/20190921_IConference_SSW_Motoki_Shimada_poster.pdf, poster);
+ %%%Takato Fujimoto%%%, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, and Keiichi Tokuda, ``Impacts of input linguistic feature representation on Japanese end-to-end speech synthesis,'' 10th ISCA Speech Synthesis Workshop (SSW10), pp. 166-171, Vienna, Austria, September, 2019. (Full paper peer reviewed)
&publication(2019/20190921_IConference_SSW_Takato_Fujimoto_paper.pdf, paper);
&publication(2019/20190921_IConference_SSW_Takato_Fujimoto_poster.pdf, poster);
+ %%%Shuhei Kato%%%, Yusuke Yasuda, Xin Wang, Erica Cooper, Shinji Takaki, and Junichi Yamagishi, ``Rakugo speech synthesis using segment-to-segment neural transduction and style tokens — toward speech synthesis for entertaining audiences,'' 10th ISCA Speech Synthesis Workshop (SSW10), pp. 111-116, Vienna, Austria, September, 2019. (Full paper peer reviewed)
&publication(2019/20190921_IConference_SSW_Kato_Shuhei_paper.pdf, paper);
//&publication(2019/20190921_IConference_SSW_Kato_Shuhei_slide.pptx, slide);
+ %%%Keiichiro Oura%%%, Kazuhiro Nakamura, Kei Hashimoto, Yoshihiko Nankaku, and Keiichi Tokuda, ``Deep neural network based real-time speech vocoder with periodic and aperiodic inputs,'' 10th ISCA Speech Synthesis Workshop (SSW10), pp. 13-18, Vienna, Austria, September, 2019. (Full paper peer reviewed)
&publication(2019/20190920_IConference_SSW_Keiichiro_Oura_paper.pdf, paper);
&publication(2019/20190920_IConference_SSW_Keiichiro_Oura_slide.pptx, slide);
+ %%%Takenori Yoshimura%%%, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, and Keiichi Tokuda, ``Speaker-dependent WaveNet-based delay-free ADPCM speech coding,'' 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 7145-7149, Brighton, UK, May, 2019. (Full paper peer reviewed)
&publication(2019/20190517_IConference_ICASSP_Takenori_Yoshimura_paper.pdf, paper);
&publication(2019/20190517_IConference_ICASSP_Takenori_Yoshimura_poster.pdf, poster);
+ %%%Yukiya Hono%%%, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, and Keiichi Tokuda, ``Singing voice synthesis based on generative adversarial networks,'' 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 6955-6959, Brighton, UK, May, 2019. (Full paper peer reviewed)
&publication(2019/20190517_IConference_ICASSP_Yukiya_Hono_paper.pdf, paper);
&publication(2019/20190517_IConference_ICASSP_Yukiya_Hono_poster.pdf, poster);

** 研究会 [#c1ab16ab]
+ %%%和田蒼汰%%%, 法野行哉, 高木信二, 橋本佳, 大浦圭一郎, 南角吉彦, 徳田恵一, 
``歌声合成におけるニューラルボコーダの比較検討,'' 音声研究会, vol. 119, no. 321, SP2019-42, pp. 85-90, 東京, 日本, 2019年12月.
//%%%Sota Wada%%%, Yukiya Hono, Shinji Takaki, Kei Hashimoto, Yoshihiko Nankaku, Keiichiro Oura, and Keiichi Tokuda, ``A comparison of neural vocoders in singing voice synthesis,'' SP, vol. 119, no. 321, SP2019-42, pp. 85-90, Tokyo, Japan, December, 2019.
&publication(2019/20191206_TReport_SP_Sota_Wada_paper.pdf, paper);
&publication(2019/20191206_TReport_SP_Sota_Wada_slide.pptx, slide);
+ %%%次井貴浩%%%, 高木信二, 橋本佳, 大浦圭一郎, 南角吉彦, 徳田恵一, 
``公共空間におけるスピーチプライバシー保護を目的とした合成音声によるサウンドマスキングの検討,'' 音声研究会, vol. 119, no. 321, SP2019-38, pp. 55-60, 東京, 日本, 2019年12月.
//%%%Takahiro Tsugui%%%, Shinji Takaki, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, and Keiichi Tokuda, ``Synthetic speech-based sound masking for privacy protection when speaking to smartphones in public space,'' SP, vol. 119, no. 321, SP2019-38, pp. 55-60, Tokyo, Japan, December, 2019.
&publication(2019/20191206_TReport_SP_Takahiro_Tsugui_paper.pdf, paper);
&publication(2019/20191206_TReport_SP_Takahiro_Tsugui_poster.pdf, poster);
+ %%%大浦圭一郎%%%, 中村和寛, 橋本佳, 南角吉彦, 徳田恵一, 
``周期・非周期信号を用いたDNNに基づくリアルタイム音声ボコーダ,'' 情報処理学会研究報告, vol. 2019-SLP-127, no. 34, 京都, 日本, 2019年6月.
//%%%Keiichiro Oura%%%, Kazuhiro Nakamura, Kei Hashimoto, Yoshihiko Nankaku, and Keiichi Tokuda, ``Deep neural network based real-time speech vocoder with periodic/aperiodic inputs,'' IPSJ SIG Technical Report, vol. 2019-SLP-127, no. 34, Kyoto, Japan, June, 2019.
&publication(2019/20190622_TReport_SLP_Keiichiro_Oura_paper.pdf, paper);
&publication(2019/20190622_TReport_SLP_Keiichiro_Oura_poster.pptx, poster);
&publication(2019/20190622_TReport_SLP_Keiichiro_Oura_abst.pptx, abst);

** 全国大会 [#na57a424]
+ %%%大浦圭一郎%%%, 高木信二, 中村和寛, 橋本佳, 南角吉彦, 徳田恵一, 
``周期・非周期信号を用いた敵対的生成ネットワークに基づくリアルタイム音声ボコーダ,'' 日本音響学会2019年秋季研究発表会, pp. 997-998, 滋賀, 日本, 2019年9月.
//%%%Keiichiro Oura%%%, Shinji Takaki, Kazuhiro Nakamura, Kei Hashimoto, Yoshihiko Nankaku, and Keiichi Tokuda, ``Generative adversarial network based real-time speech vocoder with periodic/aperiodic inputs,'' Acoustical Society of Japan 2019 Autumn Meeting, pp. 997-998, Shiga, Japan, September, 2019.
&publication(2019/20190906_DConference_ASJA_Keiichiro_Oura_paper.pdf, paper);
&publication(2019/20190906_DConference_ASJA_Keiichiro_Oura_slide.pptx, slide);
&publication(2019/20190906_DConference_ASJA_Keiichiro_Oura_abst.pdf, abst);
+ %%%中村和寛%%%, 高木信二, 橋本佳, 大浦圭一郎, 南角吉彦, 徳田恵一, 
``CNNに基づく歌声合成における計算量削減の検討,'' 日本音響学会2019年秋季研究発表会, pp. 939-940, 滋賀, 日本, 2019年9月.
//%%%Kazuhiro Nakamura%%%, Shinji Takaki, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, and Keiichi Tokuda, ``Computational complexity reduction method for CNN-based singing voice synthesis,'' Acoustical Society of Japan 2019 Autumn Meeting, pp. 939-940, Shiga, Japan, September, 2019.
&publication(2019/20190904_DConference_ASJA_Kazuhiro_Nakamura_paper.pdf, paper);
+ %%%加藤集平%%%, 安田裕介, Xin Wang, Erica Cooper, 高木信二, 山岸順一, 
``落語音声合成モデルの頑健な学習方法と発話様式の変動への対処,'' 日本音響学会2019年秋季研究発表会, pp. 957-960, 滋賀, 日本, 2019年9月.
//%%%Shuhei Kato%%%, Yusuke Yasuda, Xin Wang, Erica Cooper, Shinji Takaki, and Junichi Yamagishi, ``Robust training method for rakugo speech synthesis models and dealing with various speaking styles,'' Acoustical Society of Japan 2019 Autumn Meeting, pp. 957-960, Shiga, Japan, September, 2019.
&publication(2019/20190904_DConference_ASJA_Shuhei_Kato_paper.pdf, paper);
+ %%%村田舜馬%%%, 藤本崇人, 法野行哉, 高木信二, 橋本佳, 大浦圭一郎, 南角吉彦, 徳田恵一, 
``楽譜時間情報を用いたアテンション機構に基づく歌声合成の検討,'' 日本音響学会2019年秋季研究発表会, pp. 943-944, 滋賀, 日本, 2019年9月.
//%%%Shumma Murata%%%, Takato Fujimoto, Yukiya Hono, Shinji Takaki, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, and Keiichi Tokuda, ``A study on singing voice synthesis with attention mechanism using musical score time information,'' Acoustical Society of Japan 2019 Autumn Meeting, pp. 943-944, Shiga, Japan, September, 2019.
&publication(2019/20190904_DConference_ASJA_Shumma_Murata_paper.pdf, paper);
&publication(2019/20190904_DConference_ASJA_Shumma_Murata_slide.pptx, slide);
&publication(2019/20190904_DConference_ASJA_Shumma_Murata_abst.pdf, abst);
+ %%%島田基樹%%%, 橋本佳, 大浦圭一郎, 南角吉彦, 徳田恵一, 
``隠れセミマルコフモデルの構造を用いたDNNに基づく音声合成における計算量削減手法の検討,'' 日本音響学会2019年春季研究発表会, pp. 1071-1072, 東京, 日本, 2019年3月.
//%%%Motoki Shimada%%%, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, and Keiichi Tokuda, ``Reducing computational costs for speech synthesis based on deep neural networks using hidden semi-Markov model structures,'' Acoustical Society of Japan 2019 Spring Meeting, pp. 1071-1072, Tokyo, Japan, March, 2019.
&publication(2019/20190307_DConference_ASJS_Motoki_Shimada_paper.pdf, paper);
&publication(2019/20190307_DConference_ASJS_Motoki_Shimada_slide.pptx, slide);
&publication(2019/20190307_DConference_ASJS_Motoki_Shimada_abst.pdf, abst);
+ %%%藤本崇人%%%, 橋本佳, 大浦圭一郎, 南角吉彦, 徳田恵一, 
``日本語End-to-End音声合成における入力言語特徴量の影響,'' 日本音響学会2019年春季研究発表会, pp. 1061-1062, 東京, 日本, 2019年3月.
//%%%Takato Fujimoto%%%, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, and Keiichi Tokuda, ``Impacts of input linguistic features on Japanese end-to-end speech synthesis,'' Acoustical Society of Japan 2019 Spring Meeting, pp. 1061-1062, Tokyo, Japan, March, 2019.
&publication(2019/20190307_DConference_ASJS_Takato_Fujimoto_paper.pdf, paper);
&publication(2019/20190307_DConference_ASJS_Takato_Fujimoto_slide.pptx, slide);
&publication(2019/20190307_DConference_ASJS_Takato_Fujimoto_abst.pdf, abst);
+ %%%大浦圭一郎%%%, 中村和寛, 橋本佳, 南角吉彦, 徳田恵一, 
``周期・非周期信号から駆動するディープニューラルネットワークに基づく音声ボコーダ,'' 日本音響学会2019年春季研究発表会, pp. 1049-1052, 東京, 日本, 2019年3月. (粟屋潔学術奨励賞)
//%%%Keiichiro Oura%%%, Kazuhiro Nakamura, Kei Hashimoto, Yoshihiko Nankaku, and Keiichi Tokuda, ``Deep neural network based speech vocoder with periodic/aperiodic inputs,'' Acoustical Society of Japan 2019 Spring Meeting, pp. 1049-1052, Tokyo, Japan, March, 2019.
&publication(2019/20190307_DConference_ASJS_Keiichiro_Oura_paper.pdf, paper);
&publication(2019/20190307_DConference_ASJS_Keiichiro_Oura_slide.pptx, slide);
&publication(2019/20190307_DConference_ASJS_Keiichiro_Oura_abst.pdf, abst);
+ %%%沢田慶%%%, 坪井一菜, Xianchao Wu, Zhan Chen, 法野行哉, 橋本佳, 大浦圭一郎, 南角吉彦, 徳田恵一, 
``AI歌手りんな:ユーザ歌唱や楽譜を入力とする歌声合成システム,'' 日本音響学会2019年春季研究発表会, pp. 1041-1044, 東京, 日本, 2019年3月.
//%%%Kei Sawada%%%, Kazuna Tsuboi, Xianchao Wu, Zhan Chen, Yukiya Hono, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, and Keiichi Tokuda, ``AI Singer Rinna: a singing voice synthesis system using user's singing voice or musical score,'' Acoustical Society of Japan 2019 Spring Meeting, pp. 1041-1044, Tokyo, Japan, March, 2019.
&publication(2019/20190306_DConference_ASJS_Kei_Sawada_paper.pdf, paper);
&publication(2019/20190306_DConference_ASJS_Kei_Sawada_abst.pdf, abst);
+ %%%法野行哉%%%, 橋本佳, 大浦圭一郎, 南角吉彦, 徳田恵一, 
``敵対的生成ネットワークを用いた歌声合成の検討,'' 日本音響学会2019年春季研究発表会, pp. 1039-1040, 東京, 日本, 2019年3月.
//%%%Yukiya Hono%%%, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, and Keiichi Tokuda, ``Singing voice synthesis using generative adversarial networks,'' Acoustical Society of Japan 2019 Spring Meeting, pp. 1039-1040, Tokyo, Japan, March, 2019.
&publication(2019/20190306_DConference_ASJS_Yukiya_Hono_paper.pdf, paper);
&publication(2019/20190306_DConference_ASJS_Yukiya_Hono_slide.pptx, slide);
&publication(2019/20190306_DConference_ASJS_Yukiya_Hono_abst.pdf, abst);
+ %%%中村和寛%%%, 橋本佳, 大浦圭一郎, 南角吉彦, 徳田恵一, 
``歌声合成における CNN に基づく音声パラメータ生成手法の検討,'' 日本音響学会2019年春季研究発表会, pp. 1033-1034, 東京, 日本, 2019年3月.
//%%%Kazuhiro Nakamura%%%, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, and Keiichi Tokuda, ``CNN-based speech parameter generation for singing voice synthesis,'' Acoustical Society of Japan 2019 Spring Meeting, pp. 1033-1034, Tokyo, Japan, March, 2019.
&publication(2019/20190306_DConference_ASJS_Kazuhiro_Nakamura_paper.pdf, paper);
&publication(2019/20190306_DConference_ASJS_Kazuhiro_Nakamura_slide.pptx, slide);
&publication(2019/20190306_DConference_ASJS_Kazuhiro_Nakamura_abst.pdf, abst);
+ %%%角谷健太%%%, 橋本佳, 大浦圭一郎, 南角吉彦, 徳田恵一, 
``DNNに基づく感情音声合成のための敵対的学習の検討,'' 日本音響学会2019年春季研究発表会, pp. 1359-1360, 東京, 日本, 2019年3月.
//%%%Kenta Sumiya%%%, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, and Keiichi Tokuda, ``A study of adversarial learning for emotional speech synthesis based on deep neural networks,'' Acoustical Society of Japan 2019 Spring Meeting, pp. 1359-1360, Tokyo, Japan, March, 2019.
&publication(2019/20190307_DConference_ASJS_Kenta_Sumiya_paper.pdf, paper);
&publication(2019/20190307_DConference_ASJS_Kenta_Sumiya_slide.pptx, slide);
&publication(2019/20190307_DConference_ASJS_Kenta_Sumiya_abst.pdf, abst);

** 学位論文 [#jf469482]
+ %%%牛田光一%%%, 
``効率的な情報伝達のための音声合成システム構築法の検討,'' 
卒業論文, 名古屋工業大学, 2019年2月.
//%%%Koichi Ushida%%%, ``A construction method of speech synthesis system for efficient information transmission,'' Bachelor thesis, Nagoya institute of technology, February, 2019.
&publication(2019/20190213_Thesis_Bachelor_Koichi_Ushida_paper.pdf, paper);
&publication(2019/20190213_Thesis_Bachelor_Koichi_Ushida_slide.pptx, slide);
&publication(2019/20190213_Thesis_Bachelor_Koichi_Ushida_abst.pdf, abst);
+ %%%木下耕介%%%, 
``統計的生成モデルの構造を組み込んだニューラルネットワークに基づく画像認識,'' 
卒業論文, 名古屋工業大学, 2019年2月.
//%%%Kousuke Kinoshita%%%, ``Image recognition based on neural networks incorporating structures of statistical generative models,'' Bachelor thesis, Nagoya institute of technology, February, 2019.
&publication(2019/20190213_Thesis_Bachelor_Kosuke_Kinoshita_paper.pdf, paper);
&publication(2019/20190213_Thesis_Bachelor_Kosuke_Kinoshita_slide.pptx, slide);
&publication(2019/20190213_Thesis_Bachelor_Kosuke_Kinoshita_abst.pdf, abst);
+ %%%小林樹%%%, 
``プライバシー保護を目的とした音声変換に基づく選択的情報マスキング,'' 
卒業論文, 名古屋工業大学, 2019年2月.
//%%%Tatsuki Kobayashi%%%, ``Selective information masking based on voice conversion for privacy protection,'' Bachelor thesis, Nagoya institute of technology, February, 2019.
&publication(2019/20190213_Thesis_Bachelor_Tatsuki_Kobayashi_paper.pdf, paper);
&publication(2019/20190213_Thesis_Bachelor_Tatsuki_Kobayashi_slide.pptx, slide);
&publication(2019/20190213_Thesis_Bachelor_Tatsuki_Kobayashi_abst.pdf, abst);
+ %%%清水達也%%%, 
``Sequential VAE に基づく話者認識における入力発話長の影響に関する調査,'' 
卒業論文, 名古屋工業大学, 2019年2月.
//%%%Tatsuya Shimizu%%%, ``Influence of input utterance lengths in speaker recognition based on sequential VAE,'' Bachelor thesis, Nagoya institute of technology, February, 2019.
&publication(2019/20190213_Thesis_Bachelor_Tatsuya_Shimizu_paper.pdf, paper);
&publication(2019/20190213_Thesis_Bachelor_Tatsuya_Shimizu_slide.pptx, slide);
&publication(2019/20190213_Thesis_Bachelor_Tatsuya_Shimizu_abst.pdf, abst);
+ %%%角谷健太%%%, 
``ディープニューラルネットワークに基づく感情音声合成のための敵対的学習手法の検討,'' 
卒業論文, 名古屋工業大学, 2019年2月.
//%%%Kenta Sumiya%%%, ``Adversarial learning for emotional speech synthesis based on deep neural networks,'' Bachelor thesis, Nagoya institute of technology, February, 2019.
&publication(2019/20190213_Thesis_Bachelor_Kenta_Sumiya_paper.pdf, paper);
&publication(2019/20190213_Thesis_Bachelor_Kenta_Sumiya_slide.pptx, slide);
&publication(2019/20190213_Thesis_Bachelor_Kenta_Sumiya_abst.pdf, abst);
+ %%%佐藤優介%%%, 
``Deep Neural Networkに基づく音声合成におけるクロスリンガル話者適応,'' 
卒業論文, 名古屋工業大学, 2019年2月.
//%%%Yusuke Sato%%%, ``Cross-lingual speaker adaptation in speech synthesis based on deep neural networks,'' Bachelor thesis, Nagoya institute of technology, February, 2019.
&publication(2019/20190213_Thesis_Bachelor_Yusuke_Sato_paper.pdf, paper);
&publication(2019/20190213_Thesis_Bachelor_Yusuke_Sato_slide.pptx, slide);
&publication(2019/20190213_Thesis_Bachelor_Yusuke_Sato_abst.pdf, abst);
+ %%%中村洋太%%%, 
``深層学習に基づいた楽譜情報を入力とする楽器音合成の検討,'' 
卒業論文, 名古屋工業大学, 2019年2月.
//%%%Yota Nakamura%%%, ``Deep learning based instrumental sound synthesis from musical scores,'' Bachelor thesis, Nagoya institute of technology, February, 2019.
&publication(2019/20190213_Thesis_Bachelor_Yota_Nakamura_paper.pdf, paper);
&publication(2019/20190213_Thesis_Bachelor_Yota_Nakamura_slide.pptx, slide);
&publication(2019/20190213_Thesis_Bachelor_Yota_Nakamura_abst.pdf, abst);
+ %%%和田蒼汰%%%, 
``歌声合成用WaveNetボコーダにおける最適なモデル構造と学習条件の調査,'' 
卒業論文, 名古屋工業大学, 2019年2月.
//%%%Sota Wada%%%, ``Exploring model structures and learning conditions of WaveNet vocoder for singing voice synthesis,'' Bachelor thesis, Nagoya institute of technology, February, 2019.
&publication(2019/20190213_Thesis_Bachelor_Sota_Wada_paper.pdf, paper);
&publication(2019/20190213_Thesis_Bachelor_Sota_Wada_slide.pptx, slide);
&publication(2019/20190213_Thesis_Bachelor_Sota_Wada_abst.pdf, abst);
+ %%%丹羽純平%%%, 
``WaveNetに基づく統計的声質変換,'' 
修士論文, 名古屋工業大学, 2019年2月.
//%%%Jumpei Niwa%%%, ``Statistical voice conversion based on WaveNet,'' Master thesis, Nagoya institute of technology, February, 2019.
&publication(2019/20190206_Thesis_Master_Jumpei_Niwa_paper.pdf, paper);
&publication(2019/20190206_Thesis_Master_Jumpei_Niwa_slide.pptx, slide);
&publication(2019/20190206_Thesis_Master_Jumpei_Niwa_abst.pdf, abst);
+ %%%脇口甲太郎%%%, 
``統計モデルに基づくドライバ認知負荷のリアルタイム推定,'' 
修士論文, 名古屋工業大学, 2019年2月.
//%%%Kotaro Wakiguchi%%%, ``Real-time estimation of driver's cognitive load based on statistical models,'' Master thesis, Nagoya institute of technology, February, 2019.
&publication(2019/20190206_Thesis_Master_Kotaro_Wakiguchi_paper.pdf, paper);
&publication(2019/20190206_Thesis_Master_Kotaro_Wakiguchi_slide.pptx, slide);
&publication(2019/20190206_Thesis_Master_Kotaro_Wakiguchi_abst.pdf, abst);
+ %%%小池なつみ%%%, 
``統計的生成モデルの構造を内包したニューラルネットワークに基づく話者認識,'' 
修士論文, 名古屋工業大学, 2019年2月.
//%%%Natsumi Koike%%%, ``Speaker recognition based on neural networks including structures of statistical generative models,''  Master thesis, Nagoya institute of technology, February, 2019.
&publication(2019/20190206_Thesis_Master_Natsumi_Koike_paper.pdf, paper);
&publication(2019/20190206_Thesis_Master_Natsumi_Koike_slide.pptx, slide);
&publication(2019/20190206_Thesis_Master_Natsumi_Koike_abst.pdf, abst);
+ %%%市橋史也%%%, 
``連続値入出力に対応したWFSTに基づく階層型音声認識デコーダの開発,'' 
修士論文, 名古屋工業大学, 2019年2月.
//%%%Fumiya Ichihashi%%%, ``Development of a hierarchical speech recognition decoder based on WFST extended to continuous value inputs and outputs,'' Master thesis, Nagoya institute of technology,  February, 2019.
&publication(2019/20190206_Thesis_Master_Fumiya_Ichihashi_paper.pdf, paper);
&publication(2019/20190206_Thesis_Master_Fumiya_Ichihashi_slide.pptx, slide);
&publication(2019/20190206_Thesis_Master_Fumiya_Ichihashi_abst.pdf, abst);
+ %%%市川英嗣%%%, 
``分離型格子構造を用いたDNN-HMMハイブリッドモデルに基づく幾何学的変動に頑健な画像認識,'' 
修士論文, 名古屋工業大学, 2019年2月.
//%%%Eiji Ichikawa%%%, ``Robust image recognition against geometric variations based on DNN-HMM hybrid models using separable lattice structures,''  Master thesis, Nagoya institute of technology, February, 2019.
&publication(2019/20190206_Thesis_Master_Eiji_Ichikawa_paper.pdf, paper);
&publication(2019/20190206_Thesis_Master_Eiji_Ichikawa_slide.pptx, slide);
&publication(2019/20190206_Thesis_Master_Eiji_Ichikawa_abst.pdf, abst);
+ %%%池浦史芳%%%, 
``バス停雑音下における音声路線案内システムに適した合成音声の検討,'' 
修士論文, 名古屋工業大学, 2019年2月.
//%%%Fumiyoshi Ikeura%%%, ``Speech synthesis suitable for voice route guidance systems in noisy environments at bus stops,''  Master thesis, Nagoya institute of technology, February, 2019.
&publication(2019/20190206_Thesis_Master_Fumiyoshi_Ikeura_paper.pdf, paper);
&publication(2019/20190206_Thesis_Master_Fumiyoshi_Ikeura_slide.pptx, slide);
&publication(2019/20190206_Thesis_Master_Fumiyoshi_Ikeura_abst.pdf, abst);
+ %%%法野行哉%%%, 
``Deep Neural Networkに基づく歌声合成システムの構築,'' 
修士論文, 名古屋工業大学, 2019年2月.
//%%%Yukiya Hono%%%, ``Development of a singing voice synthesis system based on deep neural networks,''  Master thesis, Nagoya institute of technology, February, 2019.
&publication(2019/20190206_Thesis_Master_Yukiya_Hono_paper.pdf, paper);
&publication(2019/20190206_Thesis_Master_Yukiya_Hono_slide.pptx, slide);
&publication(2019/20190206_Thesis_Master_Yukiya_Hono_abst.pdf, abst);

** 講演・パネル [#ke1b3368]

+ %%%徳田恵一%%%, 
``統計的音声合成の進展と展望,'' 音声研究会, vol. 119, no. 321, SP2019-35, pp. 11-12, 東京, 日本, 2019年12月. (招待講演)
//%%%Keiichi Tokuda%%%, ``Progress and prospects of statistical speech synthesis,'' SP, vol. 119, no. 321, SP2019-35, pp. 11-12, Tokyo, Japan, December, 2019. (Invited talk)
&publication(2019/20191206_TReport_SP_Keiichi_Tokuda_paper.pdf, paper);
&publication(2019/20191206_TReport_SP_Keiichi_Tokuda_slide.pptx, slide);
+ %%%大浦圭一郎%%%,
``統計的歌声合成技術とその実用化,'' 日本AI音楽学会, 神奈川, 日本, 2019年11月. (招待講演)
//%%%Keiichiro Oura%%%, ``Statistical singing synthesis technology and its applications,'' Japan AI Music Conference, Kanagawa, Japan, November, 2019. (Invited talk)
&publication(2019/20191109_TReport_Japan_AI_Music_Conference_Keiichiro_Oura_paper.pdf, paper);
&publication(2019/20191109_TReport_Japan_AI_Music_Conference_Keiichiro_Oura_slide.pptx, slide);
+ %%%Keiichi Tokuda%%%, ``Statistical approach to speech synthesis: past, present and future,'' Interspeech 2019, Graz, Austria, September, 2019. (Keynote)
//&publication(2019/20190916_IConference_Interspeech_Keiichi_Tokuda_slide.pptx, slide);
+ %%%大浦圭一郎%%%,
``統計的パラメトリック音声合成技術とその実用化,'' 情報処理学会研究報告, vol. 2019-MUS-123, no. 11, 京都, 日本, 2019年6月. (招待講演)
//%%%Keiichiro Oura%%%, ``Statistical parametric speech synthesis and its applications,'' IPSJ SIG Technical Report, vol. 2019-MUS-123, no. 11, Kyoto, Japan, June, 2019. (Invited talk)
&publication(2019/20190623_TReport_SIGMUS_Keiichiro_Oura_paper.pdf, paper);
&publication(2019/20190623_TReport_SIGMUS_Keiichiro_Oura_slide.pptx, slide);


** 過去の発表論文 [#l837d811]
#ls2(ホーム/発表論文/,reverse);
