Takashi Nose
2020 – today
2024
- [j25] Xuecheng Niu, Akinori Ito, Takashi Nose: Scheduled Curiosity-Deep Dyna-Q: Efficient Exploration for Dialog Policy Learning. IEEE Access 12: 46940-46952 (2024)
- [j24] Xuecheng Niu, Akinori Ito, Takashi Nose: A Replaceable Curiosity-Driven Candidate Agent Exploration Approach for Task-Oriented Dialog Policy Learning. IEEE Access 12: 142640-142650 (2024)
- [j23] Rui Zhou, Takaki Koshikawa, Akinori Ito, Takashi Nose, Chia-Ping Chen: Multilingual Meta-Transfer Learning for Low-Resource Speech Recognition. IEEE Access 12: 158493-158504 (2024)
- [j22] Tetsuo Kosaka, Kazuya Saeki, Yoshitaka Aizawa, Masaharu Kato, Takashi Nose: Simultaneous Adaptation of Acoustic and Language Models for Emotional Speech Recognition Using Tweet Data. IEICE Trans. Inf. Syst. 107(3): 363-373 (2024)
- [c99] Zikai Shu, Takashi Nose, Akinori Ito: Toward Photo-Realistic Facial Animation Generation Based on Keypoint Features. ICMLC 2024: 334-339
- [c98] Rui Zhou, Akinori Ito, Takashi Nose: Character Expressions in Meta-Learning for Extremely Low Resource Language Speech Recognition. ICMLC 2024: 525-529
- [c97] Changlong Wang, Akinori Ito, Takashi Nose, Chia-Ping Chen: Evaluation of Environmental Sound Classification using Vision Transformer. ICMLC 2024: 665-669
- [i2] Xuecheng Niu, Akinori Ito, Takashi Nose: Scheduled Curiosity-Deep Dyna-Q: Efficient Exploration for Dialog Policy Learning. CoRR abs/2402.00085 (2024)
2023
- [c96] Simon Jolibois, Akinori Ito, Takashi Nose: Multimodal Expressive Embodied Conversational Agent Design. HCI (43) 2023: 244-249
2021
- [c95] Daisuke Horii, Akinori Ito, Takashi Nose: Analysis of Feature Extraction by Convolutional Neural Network for Speech Emotion Recognition. GCCE 2021: 425-426
- [c94] Yoshihiro Yamazaki, Yuya Chiba, Takashi Nose, Akinori Ito: Neural Spoken-Response Generation Using Prosodic and Linguistic Context for Conversational Systems. Interspeech 2021: 246-250
- [c93] Satsuki Naijo, Akinori Ito, Takashi Nose: Improvement of Automatic English Pronunciation Assessment with Small Number of Utterances Using Sentence Speakability. Interspeech 2021: 4473-4477
- [c92] Ryota Yahagi, Yuya Chiba, Takashi Nose, Akinori Ito: Multimodal Dialogue Response Timing Estimation Using Dialogue Context Encoder. IWSDS 2021: 133-141
2020
- [j21] Kosuke Nakamura, Takashi Nose, Yuya Chiba, Akinori Ito: A Symbol-level Melody Completion Based on a Convolutional Neural Network with Generative Adversarial Learning. J. Inf. Process. 28: 248-257 (2020)
- [j20] Jiang Fu, Yuya Chiba, Takashi Nose, Akinori Ito: Automatic assessment of English proficiency for Japanese learners without reference sentences based on deep neural network acoustic models. Speech Commun. 116: 86-97 (2020)
- [c91] Rikiya Takahashi, Takashi Nose, Yuya Chiba, Akinori Ito: Successive Japanese Lyrics Generation Based on Encoder-Decoder Model. GCCE 2020: 126-127
- [c90] Ryota Yahagi, Yuya Chiba, Takashi Nose, Akinori Ito: Incremental Response Generation Using Prefix-to-Prefix Model for Dialogue System. GCCE 2020: 349-350
- [c89] Satoru Mizuochi, Yuya Chiba, Takashi Nose, Akinori Ito: Spoken Term Detection Based on Acoustic Models Trained in Multiple Languages for Zero-Resource Language. GCCE 2020: 351-352
- [c88] Satsuki Naijo, Yuya Chiba, Takashi Nose, Akinori Ito: Analysis and Estimation of Sentence Speakability for English Pronunciation Evaluation. GCCE 2020: 353-355
- [c87] Aoi Kanagaki, Masaya Tanaka, Takashi Nose, Ryohei Shimizu, Akira Ito, Akinori Ito: CycleGAN-Based High-Quality Non-Parallel Voice Conversion with Spectrogram and WaveRNN. GCCE 2020: 356-357
- [c86] Daisuke Fujimaki, Takashi Nose, Akinori Ito: Integration of Accent Sandhi and Prosodic Features Estimation for Japanese Text-to-Speech Synthesis. GCCE 2020: 358-359
- [c85] Yoshihiro Yamazaki, Yuya Chiba, Takashi Nose, Akinori Ito: Filler Prediction Based on Bidirectional LSTM for Generation of Natural Response of Spoken Dialog. GCCE 2020: 360-361
- [c84] Takuma Hayasaka, Takashi Nose, Akinori Ito: A Study on Minimum Spectral Error Analysis of Speech. GCCE 2020: 362-363
- [c83] Takuto Fujimura, Takashi Nose, Akinori Ito: LJSing: Large-Scale Singing Voice Corpus of Single Japanese Singer. GCCE 2020: 364-365
- [c82] Shuhei Imai, Takashi Nose, Aoi Kanagaki, Satoshi Watanabe, Akinori Ito: Improving Pronunciation Clarity of Dysarthric Speech Using CycleGAN with Multiple Speakers. GCCE 2020: 366-367
- [c81] Yuya Chiba, Takashi Nose, Akinori Ito: Multi-Stream Attention-Based BLSTM with Feature Segmentation for Speech Emotion Recognition. INTERSPEECH 2020: 3301-3305
- [c80] Yoshihiro Yamazaki, Yuya Chiba, Takashi Nose, Akinori Ito: Construction and Analysis of a Multimodal Chat-talk Corpus for Dialog Systems Considering Interpersonal Closeness. LREC 2020: 443-448
2010 – 2019
2019
- [j19] Hafiyan Prafianto, Takashi Nose, Yuya Chiba, Akinori Ito: Improving human scoring of prosody using parametric speech synthesis. Speech Commun. 111: 14-21 (2019)
- [i1] Keita Ishizuka, Takashi Nose: Developing a Multi-Platform Speech Recording System Toward Open Service of Building Large-Scale Speech Corpora. CoRR abs/1912.09148 (2019)
2018
- [c79] Shunsuke Tada, Yuya Chiba, Takashi Nose, Akinori Ito: Effect of Mutual Self-Disclosure in Spoken Dialog System on User Impression. APSIPA 2018: 806-810
- [c78] Tetsuo Kosaka, Yoshitaka Aizawa, Masaharu Kato, Takashi Nose: Acoustic Model Adaptation for Emotional Speech Recognition Using Twitter-Based Emotional Speech Corpus. APSIPA 2018: 1747-1751
- [c77] Koki Katori, Yukio Nakano, Takao Imanishi, Takashi Nose, Kazutaka Hotta, Hideki Kawarai, Fumiaki Ishida, Tsuyoshi Ueno: Monitoring System for a Single Aged Person on the Basis of Electricity Use: Performance Improvement by Interpolating Watt Hour Granularity. GCCE 2018: 739-740
- [c76] Yukio Nakano, Takashi Nose, Kazutaka Hotta, Hideki Kawarai, Tsuyoshi Ueno: Monitoring system for a single aged person on the basis of electricity use - Heatstroke-prevention system. ICCE 2018: 1-3
- [c75] Jiang Fu, Yuya Chiba, Takashi Nose, Akinori Ito: Evaluation of English Speech Recognition for Japanese Learners Using DNN-Based Acoustic Models. IIH-MSP (2) 2018: 93-100
- [c74] Mai Yamanaka, Yuya Chiba, Takashi Nose, Akinori Ito: A Study on a Spoken Dialogue System with Cooperative Emotional Speech Synthesis Using Acoustic and Linguistic Information. IIH-MSP (2) 2018: 101-108
- [c73] Takashi Kimura, Takashi Nose, Shinji Hirooka, Yuya Chiba, Akinori Ito: Comparison of Speech Recognition Performance Between Kaldi and Google Cloud Speech API. IIH-MSP (2) 2018: 109-115
- [c72] Kosuke Nakamura, Takashi Nose, Yuya Chiba, Akinori Ito: Melody Completion Based on Convolutional Neural Networks and Generative Adversarial Learning. IIH-MSP (2) 2018: 116-123
- [c71] Shinya Hanabusa, Takashi Nose, Akinori Ito: Segmental Pitch Control Using Speech Input Based on Differential Contexts and Features for Customizable Neural Speech Synthesis. IIH-MSP (2) 2018: 124-131
- [c70] Sou Miyamoto, Takashi Nose, Kazuyuki Hiroshiba, Yuri Odagiri, Akinori Ito: Two-Stage Sequence-to-Sequence Neural Voice Conversion with Low-to-High Definition Spectrogram Mapping. IIH-MSP (2) 2018: 132-139
- [c69] Hiroto Aoyama, Takashi Nose, Yuya Chiba, Akinori Ito: Improvement of Accent Sandhi Rules Based on Japanese Accent Dictionaries. IIH-MSP (2) 2018: 140-148
- [c68] Takahiro Furuya, Yuya Chiba, Takashi Nose, Akinori Ito: Data Collection and Analysis for Automatically Generating Record of Human Behaviors by Environmental Sound Recognition. IIH-MSP (2) 2018: 149-156
- [c67] Toru Ishikawa, Takashi Nose, Akinori Ito: DNN-Based Talking Movie Generation with Face Direction Consideration. IIH-MSP (2) 2018: 157-164
- [c66] Haoran Wu, Yuya Chiba, Takashi Nose, Akinori Ito: Analyzing Effect of Physical Expression on English Proficiency for Multimodal Computer-Assisted Language Learning. INTERSPEECH 2018: 1746-1750
- [c65] Yukiko Kageyama, Yuya Chiba, Takashi Nose, Akinori Ito: Improving User Impression in Spoken Dialog System with Gradual Speech Form Control. SIGDIAL Conference 2018: 235-240
- [c64] Yuya Chiba, Takashi Nose, Taketo Kase, Mai Yamanaka, Akinori Ito: An Analysis of the Effect of Emotional Speech Synthesis on Non-Task-Oriented Dialogue System. SIGDIAL Conference 2018: 371-375
2017
- [j18] Yuya Chiba, Takashi Nose, Akinori Ito: Cluster-based approach to discriminate the user's state whether a user is embarrassed or thinking to an answer to a prompt. J. Multimodal User Interfaces 11(2): 185-196 (2017)
- [j17] Tomohiro Nagata, Hiroki Mori, Takashi Nose: Dimensional paralinguistic information control based on multiple-regression HSMM for spontaneous dialogue speech synthesis with robust parameter estimation. Speech Commun. 88: 137-148 (2017)
- [j16] Takashi Nose, Yusuke Arao, Takao Kobayashi, Komei Sugiura, Yoshinori Shiga: Sentence Selection Based on Extended Entropy Using Phonetic and Prosodic Contexts for Statistical Parametric Speech Synthesis. IEEE ACM Trans. Audio Speech Lang. Process. 25(5): 1107-1116 (2017)
- [c63] Yuya Chiba, Takashi Nose, Akinori Ito: Analysis of efficient multimodal features for estimating user's willingness to talk: Comparison of human-machine and human-human dialog. APSIPA 2017: 428-431
- [c62] Koki Katori, Yukio Nakano, Takashi Nose, Kazutaka Hotta, Hideki Kawarai, Tsuyoshi Ueno: Monitoring system for a single aged person on the basis of electricity use - Prototype by using smart meter. GCCE 2017: 1-2
- [c61] Yukiko Kageyama, Yuya Chiba, Takashi Nose, Akinori Ito: Collection of Example Sentences for Non-task-Oriented Dialog Using a Spoken Dialog System and Comparison with Hand-Crafted DB. HCI (29) 2017: 458-464
- [c60] Hayato Mori, Yuya Chiba, Takashi Nose, Akinori Ito: Dialog-Based Interactive Movie Recommendation: Comparison of Dialog Strategies. IIH-MSP (2) 2017: 77-83
- [c59] Shunsuke Tada, Yuya Chiba, Takashi Nose, Akinori Ito: Response Selection of Interview-Based Dialog System Using User Focus and Semantic Orientation. IIH-MSP (2) 2017: 84-90
- [c58] Yusuke Yamada, Takashi Nose, Yuya Chiba, Akinori Ito, Takahiro Shinozaki: Development and Evaluation of Julius-Compatible Interface for Kaldi ASR. IIH-MSP (2) 2017: 91-96
- [c57] Sou Miyamoto, Takashi Nose, Suzunosuke Ito, Harunori Koike, Yuya Chiba, Akinori Ito, Takahiro Shinozaki: Voice Conversion from Arbitrary Speakers Based on Deep Neural Networks with Adversarial Learning. IIH-MSP (2) 2017: 97-103
- [c56] Kosuke Nakamura, Yuya Chiba, Takashi Nose, Akinori Ito: Evaluation of Nonlinear Tempo Modification Methods Based on Sinusoidal Modeling. IIH-MSP (2) 2017: 104-111
- [c55] Kazuki Sato, Takashi Nose, Akira Ito, Yuya Chiba, Akinori Ito, Takahiro Shinozaki: A Study on 2D Photo-Realistic Facial Animation Generation Using 3D Facial Feature Points and Deep Neural Networks. IIH-MSP (2) 2017: 112-118
- [c54] Isao Miyagawa, Yuya Chiba, Takashi Nose, Akinori Ito: Detection of Singing Mistakes from Singing Voice. IIH-MSP (2) 2017: 130-136
2016
- [j15] Takashi Nose: Efficient Implementation of Global Variance Compensation for Parametric Speech Synthesis. IEEE ACM Trans. Audio Speech Lang. Process. 24(10): 1694-1704 (2016)
2015
- [j14] Takashi Nose, Misa Kanemoto, Tomoki Koriyama, Takao Kobayashi: HMM-based expressive singing voice synthesis with singing style control and robust pitch modeling. Comput. Speech Lang. 34(1): 308-322 (2015)
- [c53] Taketo Kase, Takashi Nose, Akinori Ito: On Appropriateness and Estimation of the Emotion of Synthesized Response Speech in a Spoken Dialogue System. HCI (27) 2015: 747-752
- [c52] Tsukasa Nishino, Takashi Nose, Akinori Ito: Tempo Modification of Mixed Music Signal by Nonlinear Time Scaling and Sinusoidal Modeling. IIH-MSP 2015: 146-149
- [c51] Yuki Saito, Takashi Nose, Takahiro Shinozaki, Akinori Ito: Conversion of Speaker's Face Image Using PCA and Animation Unit for Video Chatting. IIH-MSP 2015: 433-436
- [c50] Takashi Nose, Yusuke Arao, Takao Kobayashi, Komei Sugiura, Yoshinori Shiga, Akinori Ito: Entropy-based sentence selection for speech synthesis using phonetic and prosodic contexts. INTERSPEECH 2015: 3491-3495
2014
- [j13] Tomoki Koriyama, Takashi Nose, Takao Kobayashi: Statistical Parametric Speech Synthesis Based on Gaussian Process Regression. IEEE J. Sel. Top. Signal Process. 8(2): 173-183 (2014)
- [j12] Takashi Nose, Vataya Chunwijitra, Takao Kobayashi: A Parameter Generation Algorithm Using Local Variance for HMM-Based Speech Synthesis. IEEE J. Sel. Top. Signal Process. 8(2): 221-228 (2014)
- [j11] Yu Maeno, Takashi Nose, Takao Kobayashi, Tomoki Koriyama, Yusuke Ijima, Hideharu Nakajima, Hideyuki Mizuno, Osamu Yoshioka: Prosodic variation enhancement using unsupervised context labeling for HMM-based expressive speech synthesis. Speech Commun. 57: 144-154 (2014)
- [c49] Kohei Machida, Takashi Nose, Akinori Ito: Speech recognition in a home environment using parallel decoding with GMM-based noise modeling. APSIPA 2014: 1-4
- [c48] Naoto Suzuki, Takashi Nose, Yutaka Hiroi, Akinori Ito: Controlling Switching Pause Using an AR Agent for Interactive CALL System. HCI (27) 2014: 588-593
- [c47] Hafiyan Prafianto, Takashi Nose, Yuya Chiba, Akinori Ito, Kazuyuki Sato: A study on the effect of speech rate on perception of spoken easy Japanese using speech synthesis. ICAILP 2014: 476-479
- [c46] Masahito Okamoto, Takashi Nose, Akinori Ito, Takeshi Nagano: Subjective evaluation of packet loss recovery techniques for voice over IP. ICAILP 2014: 711-714
- [c45] Noriko Totsuka, Yuya Chiba, Takashi Nose, Akinori Ito: Robot: Have I done something wrong? - Analysis of prosodic features of speech commands under the robot's unintended behavior. ICAILP 2014: 887-890
- [c44] Tomoki Koriyama, Takashi Nose, Takao Kobayashi: Parametric speech synthesis based on Gaussian process regression using global variance and hyperparameter optimization. ICASSP 2014: 3834-3838
- [c43] Kazumichi Yoshida, Takashi Nose, Akinori Ito: Analysis of English Pronunciation of Singing Voices Sung by Japanese Speakers. IIH-MSP 2014: 554-557
- [c42] Takashi Nose, Takao Kobayashi: Quantized F0 Context and Its Applications to Speech Synthesis, Speech Coding, and Voice Conversion. IIH-MSP 2014: 578-581
- [c41] Daiki Nagahama, Takashi Nose, Tomoki Koriyama, Takao Kobayashi: Transform mapping using shared decision tree context clustering for HMM-based cross-lingual speech synthesis. INTERSPEECH 2014: 770-774
- [c40] Tomoki Koriyama, Hiroshi Suzuki, Takashi Nose, Takahiro Shinozaki, Takao Kobayashi: Accent type and phrase boundary estimation using acoustic and language models for automatic prosodic labeling. INTERSPEECH 2014: 2337-2341
- [c39] Takashi Nose, Akinori Ito: Analysis of spectral enhancement using global variance in HMM-based speech synthesis. INTERSPEECH 2014: 2917-2921
- [c38] Tomoki Koriyama, Takashi Nose, Takao Kobayashi: Parametric speech synthesis using local and global sparse Gaussian processes. MLSP 2014: 1-6
- [c37] Yuya Chiba, Masashi Ito, Takashi Nose, Akinori Ito: User Modeling by Using Bag-of-Behaviors for Building a Dialog System Sensitive to the Interlocutor's Internal State. SIGDIAL Conference 2014: 74-78
2013
- [j10] Takashi Nose, Takao Kobayashi: An intuitive style control technique in HMM-based expressive speech synthesis using subjective style intensity and multiple-regression global variance model. Speech Commun. 55(2): 347-357 (2013)
- [c36] Yu Maeno, Takashi Nose, Takao Kobayashi, Tomoki Koriyama, Yusuke Ijima, Hideharu Nakajima, Hideyuki Mizuno, Osamu Yoshioka: HMM-based expressive speech synthesis based on phrase-level F0 context labeling. ICASSP 2013: 7859-7863
- [c35] Hiroki Kanagawa, Takashi Nose, Takao Kobayashi: Speaker-independent style conversion for HMM-based expressive speech synthesis. ICASSP 2013: 7864-7868
- [c34] Tomoki Koriyama, Takashi Nose, Takao Kobayashi: Frame-level acoustic modeling based on Gaussian process regression for statistical nonparametric speech synthesis. ICASSP 2013: 8007-8011
- [c33] Takashi Nose, Misa Kanemoto, Tomoki Koriyama, Takao Kobayashi: A style control technique for singing voice synthesis based on multiple-regression HSMM. INTERSPEECH 2013: 378-382
- [c32] Tomoki Koriyama, Takashi Nose, Takao Kobayashi: Statistical nonparametric speech synthesis using sparse Gaussian processes. INTERSPEECH 2013: 1072-1076
- [c31] Tomohiro Nagata, Hiroki Mori, Takashi Nose: Robust estimation of multiple-regression HMM parameters for dimension-based expressive dialogue speech synthesis. INTERSPEECH 2013: 1549-1553
2012
- [j9] Vataya Chunwijitra, Takashi Nose, Takao Kobayashi: A tone-modeling technique using a quantized F0 context to improve tone correctness in average-voice-based speech synthesis. Speech Commun. 54(2): 245-255 (2012)
- [j8] Takashi Nose, Takao Kobayashi: Very low bit-rate F0 coding for phonetic vocoders using MSD-HMM with quantized F0 symbols. Speech Commun. 54(3): 384-392 (2012)
- [c30] Tomoki Koriyama, Takashi Nose, Takao Kobayashi: An F0 modeling technique based on prosodic events for spontaneous speech synthesis. ICASSP 2012: 4589-4592
- [c29] Tomoki Koriyama, Takashi Nose, Takao Kobayashi: Discontinuous Observation HMM for Prosodic-Event-Based F0 Generation. INTERSPEECH 2012: 462-465
- [c28] Vataya Chunwijitra, Takashi Nose, Takao Kobayashi: A speech parameter generation algorithm using local variance for HMM-based speech synthesis. INTERSPEECH 2012: 1151-1154
2011
- [j7] Takashi Nose, Takao Kobayashi: Speaker-independent HMM-based voice conversion using adaptive quantization of the fundamental frequency. Speech Commun. 53(7): 973-985 (2011)
- [c27] Vataya Chunwijitra, Takashi Nose, Takao Kobayashi: Tonal context labeling using quantized F0 symbols for improving tone correctness in average-voice-based speech synthesis. ICASSP 2011: 4708-4711
- [c26] Takashi Nose, Takao Kobayashi: Very low bit-rate F0 coding for phonetic vocoder using MSD-HMM with quantized F0 context. ICASSP 2011: 5236-5239
- [c25] Takashi Nose, Takao Kobayashi: A Perceptual Expressivity Modeling Technique for Speech Synthesis Based on Multiple-Regression HSMM. INTERSPEECH 2011: 109-112
- [c24] Yu Maeno, Takashi Nose, Takao Kobayashi, Yusuke Ijima, Hideharu Nakajima, Hideyuki Mizuno, Osamu Yoshioka: HMM-Based Emphatic Speech Synthesis Using Unsupervised Context Labeling. INTERSPEECH 2011: 1849-1852
- [c23] Tatsuhiko Saito, Takashi Nose, Takao Kobayashi, Yohei Okato, Akio Horii: Performance Prediction of Speech Recognition Using Average-Voice-Based Speech Synthesis. INTERSPEECH 2011: 1953-1956
- [c22] Tomoki Koriyama, Takashi Nose, Takao Kobayashi: On the Use of Extended Context for HMM-Based Spontaneous Conversational Speech Synthesis. INTERSPEECH 2011: 2657-2660
2010
- [j6] Yusuke Ijima, Takashi Nose, Makoto Tachibana, Takao Kobayashi: A Rapid Model Adaptation Technique for Emotional Speech Recognition with Style Estimation Based on Multiple-Regression HMM. IEICE Trans. Inf. Syst. 93-D(1): 107-115 (2010)
- [j5] Takashi Nose, Takao Kobayashi: A Technique for Estimating Intensity of Emotional Expressions and Speaking Styles in Speech Based on Multiple-Regression HSMM. IEICE Trans. Inf. Syst. 93-D(1): 116-124 (2010)
- [j4] Takashi Nose, Yuhei Ota, Takao Kobayashi: HMM-Based Voice Conversion Using Quantized F0 Context. IEICE Trans. Inf. Syst. 93-D(9): 2483-2490 (2010)
- [c21] Mikio Nakano, Naoto Iwahashi, Takayuki Nagai, Taisuke Sumii, Xiang Zuo, Ryo Taguchi, Takashi Nose, Akira Mizutani, Tomoaki Nakamura, Muhanmad Attamim, Hiromi Narimatsu, Kotaro Funakoshi, Yuji Hasegawa: Grounding New Words on the Physical World in Multi-Domain Human-Robot Dialogues. AAAI Fall Symposium: Dialog with Robots 2010
- [c20] Takashi Nose, Koujirou Ooki, Takao Kobayashi: HMM-based speech synthesis with unsupervised labeling of accentual context based on F0 quantization and average voice model. ICASSP 2010: 4622-4625
- [c19] Shuji Yokomizo, Takashi Nose, Takao Kobayashi: Evaluation of prosodic contextual factors for HMM-based speech synthesis. INTERSPEECH 2010: 430-433
- [c18] Tomoki Koriyama, Takashi Nose, Takao Kobayashi: Conversational spontaneous speech synthesis using average voice model. INTERSPEECH 2010: 853-856
- [c17] Takashi Nose, Takao Kobayashi: Speaker-independent HMM-based voice conversion using quantized fundamental frequency. INTERSPEECH 2010: 1724-1727
- [c16] Takashi Nose, Takao Kobayashi: HMM-based robust voice conversion using adaptive F0 quantization. SSW 2010: 80-85
2000 – 2009
2009
- [j3] Takashi Nose, Makoto Tachibana, Takao Kobayashi: HMM-Based Style Control for Expressive Speech Synthesis with Arbitrary Speaker's Voice Using Model Adaptation. IEICE Trans. Inf. Syst. 92-D(3): 489-497 (2009)
- [j2] Junichi Yamagishi, Takashi Nose, Heiga Zen, Zhen-Hua Ling, Tomoki Toda, Keiichi Tokuda, Simon King, Steve Renals: Robust Speaker-Adaptive HMM-Based Text-to-Speech Synthesis. IEEE Trans. Speech Audio Process. 17(6): 1208-1230 (2009)
- [c15] Yusuke Ijima, Makoto Tachibana, Takashi Nose, Takao Kobayashi: Emotional speech recognition based on style estimation and adaptation with multiple-regression HMM. ICASSP 2009: 4157-4160
- [c14] Yusuke Ijima, Takeshi Matsubara, Takashi Nose, Takao Kobayashi: Speaking style adaptation for spontaneous speech recognition using multiple-regression HMM. INTERSPEECH 2009: 552-555
- [c13] Takashi Nose, Junichi Adada, Takao Kobayashi: HMM-based speaker characteristics emphasis using average voice model. INTERSPEECH 2009: 2631-2634
- [c12] Ryo Taguchi, Naoto Iwahashi, Takashi Nose, Kotaro Funakoshi, Mikio Nakano: Learning lexicons from spoken utterances based on statistical model selection. INTERSPEECH 2009: 2731-2734
2008
- [c11] Junichi Yamagishi, Takashi Nose, Heiga Zen, Tomoki Toda, Keiichi Tokuda: Performance evaluation of the speaker-independent HMM-based speech synthesis system "HTS 2007" for the Blizzard Challenge 2007. ICASSP 2008: 3957-3960
- [c10] Makoto Tachibana, Shinsuke Izawa, Takashi Nose, Takao Kobayashi: Speaker and style adaptation using average voice model for style control in HMM-based speech synthesis. ICASSP 2008: 4633-4636
- [c9] Yusuke Ijima, Makoto Tachibana, Takashi Nose, Takao Kobayashi: An on-line adaptation technique for emotional speech recognition using style estimation with multiple-regression HMM. INTERSPEECH 2008: 1297-1300
- [c8] Takashi Nose, Yoichi Kato, Makoto Tachibana, Takao Kobayashi: An estimation technique of style expressiveness for emotional speech using model adaptation based on multiple-regression HSMM. INTERSPEECH 2008: 2759-2762
2007
- [j1] Takashi Nose, Junichi Yamagishi, Takashi Masuko, Takao Kobayashi: A Style Control Technique for HMM-Based Expressive Speech Synthesis. IEICE Trans. Inf. Syst. 90-D(9): 1406-1413 (2007)
- [c7] Takashi Nose, Yoichi Kato, Takao Kobayashi: A Speaker Adaptation Technique for MRHSMM-Based Style Control of Synthetic Speech. ICASSP (4) 2007: 833-836
- [c6] Takashi Nose, Yoichi Kato, Takao Kobayashi: Style estimation of speech based on multiple regression hidden semi-Markov model. INTERSPEECH 2007: 2285-2288
- [c5] Heiga Zen, Takashi Nose, Junichi Yamagishi, Shinji Sako, Takashi Masuko, Alan W. Black, Keiichi Tokuda: The HMM-based speech synthesis system (HTS) version 2.0. SSW 2007: 294-299
2006
- [c4] Takashi Nose, Junichi Yamagishi, Takao Kobayashi: A style control technique for speech synthesis using multiple regression HSMM. INTERSPEECH 2006
- [c3] Makoto Tachibana, Takashi Nose, Junichi Yamagishi, Takao Kobayashi: A technique for controlling voice quality of synthetic speech using multiple regression HSMM. INTERSPEECH 2006
1990 – 1999
1996
- [c2] Kenzi Noike, Nobuo Inui, Takashi Nose, Yoshiyuki Kotani: Generating Musical Symbols to Perform Expressively by Approximate Functions. ICMC 1996
1993
- [c1] Kenzi Noike, Nobuo Takiguchi, Takashi Nose, Yoshiyuki Kotani, Hirohiko Nisimura: Automatic Generation of Expressive Performance by using Music Structures. ICMC 1993