Kartik Audhkhasi
Person information
- affiliation: University of Southern California, Los Angeles, USA
2020 – today
- 2024
- [c65] Gowtham Ramesh, Kartik Audhkhasi, Bhuvana Ramabhadran: Task Vector Algebra for ASR Models. ICASSP 2024: 12256-12260
- [i20] Shikhar Vashishth, Harman Singh, Shikhar Bharadwaj, Sriram Ganapathy, Chulayuth Asawaroengchai, Kartik Audhkhasi, Andrew Rosenberg, Ankur Bapna, Bhuvana Ramabhadran: STAB: Speech Tokenizer Assessment Benchmark. CoRR abs/2409.02384 (2024)
- 2023
- [c64] Kartik Audhkhasi, Brian Farris, Bhuvana Ramabhadran, Pedro J. Moreno: Modular Conformer Training for Flexible End-to-End ASR. ICASSP 2023: 1-5
- [c63] Tongzhou Chen, Cyril Allauzen, Yinghui Huang, Daniel S. Park, David Rybach, W. Ronny Huang, Rodrigo Cabrera, Kartik Audhkhasi, Bhuvana Ramabhadran, Pedro J. Moreno, Michael Riley: Large-Scale Language Model Rescoring on Long-Form Data. ICASSP 2023: 1-5
- [c62] Mohammad Zeineldeen, Kartik Audhkhasi, Murali Karthick Baskar, Bhuvana Ramabhadran: Robust Knowledge Distillation from RNN-T Models with Noisy Training Labels Using Full-Sum Loss. ICASSP 2023: 1-5
- [c61] Murali Karthick Baskar, Andrew Rosenberg, Bhuvana Ramabhadran, Kartik Audhkhasi: O-1: Self-training with Oracle and 1-best Hypothesis. INTERSPEECH 2023: 77-81
- [i19] Mohammad Zeineldeen, Kartik Audhkhasi, Murali Karthick Baskar, Bhuvana Ramabhadran: Robust Knowledge Distillation from RNN-T Models With Noisy Training Labels Using Full-Sum Loss. CoRR abs/2303.05958 (2023)
- [i18] Tongzhou Chen, Cyril Allauzen, Yinghui Huang, Daniel S. Park, David Rybach, W. Ronny Huang, Rodrigo Cabrera, Kartik Audhkhasi, Bhuvana Ramabhadran, Pedro J. Moreno, Michael Riley: Large-scale Language Model Rescoring on Long-form Data. CoRR abs/2306.08133 (2023)
- [i17] Murali Karthick Baskar, Andrew Rosenberg, Bhuvana Ramabhadran, Kartik Audhkhasi: O-1: Self-training with Oracle and 1-best Hypothesis. CoRR abs/2308.07486 (2023)
- 2022
- [c60] Krishna Somandepalli, Hang Qi, Brian Eoff, Alan Cowen, Kartik Audhkhasi, Josh Belanich, Brendan Jou: Federated Learning for Affective Computing Tasks. ACII 2022: 1-8
- [c59] Kartik Audhkhasi, Yinghui Huang, Bhuvana Ramabhadran, Pedro J. Moreno: Analysis of Self-Attention Head Diversity for Conformer-based Automatic Speech Recognition. INTERSPEECH 2022: 1026-1030
- [c58] Zhong Meng, Tongzhou Chen, Rohit Prabhavalkar, Yu Zhang, Gary Wang, Kartik Audhkhasi, Jesse Emond, Trevor Strohman, Bhuvana Ramabhadran, W. Ronny Huang, Ehsan Variani, Yinghui Huang, Pedro J. Moreno: Modular Hybrid Autoregressive Transducer. SLT 2022: 197-204
- [i16] Kartik Audhkhasi, Yinghui Huang, Bhuvana Ramabhadran, Pedro J. Moreno: Analysis of Self-Attention Head Diversity for Conformer-based Automatic Speech Recognition. CoRR abs/2209.06096 (2022)
- [i15] Zhong Meng, Tongzhou Chen, Rohit Prabhavalkar, Yu Zhang, Gary Wang, Kartik Audhkhasi, Jesse Emond, Trevor Strohman, Bhuvana Ramabhadran, W. Ronny Huang, Ehsan Variani, Yinghui Huang, Pedro J. Moreno: Modular Hybrid Autoregressive Transducer. CoRR abs/2210.17049 (2022)
- 2021
- [c57] Hainan Xu, Yinghui Huang, Yun Zhu, Kartik Audhkhasi, Bhuvana Ramabhadran: Convolutional Dropout and Wordpiece Augmentation for End-to-End Speech Recognition. ICASSP 2021: 5984-5988
- [c56] Andrew Rouditchenko, Angie W. Boggust, David Harwath, Brian Chen, Dhiraj Joshi, Samuel Thomas, Kartik Audhkhasi, Hilde Kuehne, Rameswar Panda, Rogério Schmidt Feris, Brian Kingsbury, Michael Picheny, Antonio Torralba, James R. Glass: AVLnet: Learning Audio-Visual Language Representations from Instructional Videos. Interspeech 2021: 1584-1588
- [c55] Kartik Audhkhasi, Tongzhou Chen, Bhuvana Ramabhadran, Pedro J. Moreno: Mixture Model Attention: Flexible Streaming and Non-Streaming Automatic Speech Recognition. Interspeech 2021: 1812-1816
- [c54] Hainan Xu, Kartik Audhkhasi, Yinghui Huang, Jesse Emond, Bhuvana Ramabhadran: Regularizing Word Segmentation by Creating Misspellings. Interspeech 2021: 2561-2565
- 2020
- [j8] Bart Kosko, Kartik Audhkhasi, Osonde Osoba: Noise can speed backpropagation learning and deep bidirectional pretraining. Neural Networks 129: 359-384 (2020)
- [c53] George Saon, Zoltán Tüske, Kartik Audhkhasi: Alignment-Length Synchronous Decoding for RNN Transducer. ICASSP 2020: 7804-7808
- [c52] Yinghui Huang, Hong-Kwang Kuo, Samuel Thomas, Zvi Kons, Kartik Audhkhasi, Brian Kingsbury, Ron Hoory, Michael Picheny: Leveraging Unpaired Text Data for Training End-To-End Speech-to-Intent Systems. ICASSP 2020: 7984-7988
- [c51] Zoltán Tüske, George Saon, Kartik Audhkhasi, Brian Kingsbury: Single Headed Attention Based Sequence-to-Sequence Model for State-of-the-Art Results on Switchboard. INTERSPEECH 2020: 551-555
- [c50] Hong-Kwang Jeff Kuo, Zoltán Tüske, Samuel Thomas, Yinghui Huang, Kartik Audhkhasi, Brian Kingsbury, Gakuto Kurata, Zvi Kons, Ron Hoory, Luis A. Lastras: End-to-End Spoken Language Understanding Without Full Transcripts. INTERSPEECH 2020: 906-910
- [c49] Samuel Thomas, Kartik Audhkhasi, Brian Kingsbury: Transliteration Based Data Augmentation for Training Multilingual ASR Acoustic Models in Low Resource Settings. INTERSPEECH 2020: 4736-4740
- [i14] Zoltán Tüske, George Saon, Kartik Audhkhasi, Brian Kingsbury: Single headed attention based sequence-to-sequence model for state-of-the-art results on Switchboard-300. CoRR abs/2001.07263 (2020)
- [i13] Andrew Rouditchenko, Angie W. Boggust, David Harwath, Dhiraj Joshi, Samuel Thomas, Kartik Audhkhasi, Rogério Feris, Brian Kingsbury, Michael Picheny, Antonio Torralba, James R. Glass: AVLnet: Learning Audio-Visual Language Representations from Instructional Videos. CoRR abs/2006.09199 (2020)
- [i12] Hong-Kwang Jeff Kuo, Zoltán Tüske, Samuel Thomas, Yinghui Huang, Kartik Audhkhasi, Brian Kingsbury, Gakuto Kurata, Zvi Kons, Ron Hoory, Luis A. Lastras: End-to-End Spoken Language Understanding Without Full Transcripts. CoRR abs/2009.14386 (2020)
- [i11] Yinghui Huang, Hong-Kwang Kuo, Samuel Thomas, Zvi Kons, Kartik Audhkhasi, Brian Kingsbury, Ron Hoory, Michael Picheny: Leveraging Unpaired Text Data for Training End-to-End Speech-to-Intent Systems. CoRR abs/2010.04284 (2020)
2010 – 2019
- 2019
- [c48]George Saon, Zoltán Tüske, Kartik Audhkhasi, Brian Kingsbury, Michael Picheny, Samuel Thomas:
Simplified LSTMS for Speech Recognition. ASRU 2019: 547-553 - [c47]Angie W. Boggust, Kartik Audhkhasi, Dhiraj Joshi, David Harwath, Samuel Thomas, Rogério Schmidt Feris, Danny Gutfreund, Yang Zhang, Antonio Torralba, Michael Picheny, James R. Glass:
Grounding Spoken Words in Unlabeled Video. CVPR Workshops 2019: 29-32 - [c46]Shane Settle, Kartik Audhkhasi, Karen Livescu, Michael Picheny:
Acoustically Grounded Word Embeddings for Improved Acoustics-to-word Speech Recognition. ICASSP 2019: 5641-5645 - [c45]George Saon, Zoltán Tüske, Kartik Audhkhasi, Brian Kingsbury:
Sequence Noise Injected Training for End-to-end Speech Recognition. ICASSP 2019: 6261-6265 - [c44]Michael Picheny, Zoltán Tüske, Brian Kingsbury, Kartik Audhkhasi, Xiaodong Cui, George Saon:
Challenging the Boundaries of Speech Recognition: The MALACH Corpus. INTERSPEECH 2019: 326-330 - [c43]Gakuto Kurata, Kartik Audhkhasi:
Guiding CTC Posterior Spike Timings for Improved Posterior Fusion and Knowledge Distillation. INTERSPEECH 2019: 1616-1620 - [c42]Gakuto Kurata, Kartik Audhkhasi:
Multi-Task CTC Training with Auxiliary Feature Reconstruction for End-to-End Speech Recognition. INTERSPEECH 2019: 1636-1640 - [c41]Kartik Audhkhasi, George Saon, Zoltán Tüske, Brian Kingsbury, Michael Picheny:
Forget a Bit to Learn Better: Soft Forgetting for CTC-Based Automatic Speech Recognition. INTERSPEECH 2019: 2618-2622 - [c40]Samuel Thomas, Kartik Audhkhasi, Zoltán Tüske, Yinghui Huang, Michael Picheny:
Detection and Recovery of OOVs for Improved English Broadcast News Captioning. INTERSPEECH 2019: 2973-2977 - [c39]Zoltán Tüske, Kartik Audhkhasi, George Saon:
Advancing Sequence-to-Sequence Based Speech Recognition. INTERSPEECH 2019: 3780-3784 - [i10]Shane Settle, Kartik Audhkhasi, Karen Livescu, Michael Picheny:
Acoustically Grounded Word Embeddings for Improved Acoustics-to-Word Speech Recognition. CoRR abs/1903.12306 (2019) - [i9]Gakuto Kurata, Kartik Audhkhasi:
Guiding CTC Posterior Spike Timings for Improved Posterior Fusion and Knowledge Distillation. CoRR abs/1904.08311 (2019) - [i8]Michael Picheny, Zoltán Tüske, Brian Kingsbury, Kartik Audhkhasi, Xiaodong Cui, George Saon:
Challenging the Boundaries of Speech Recognition: The MALACH Corpus. CoRR abs/1908.03455 (2019) - 2018
- [j7]Rahul Gupta, Kartik Audhkhasi, Zach Jacokes, Agata Rozga, Shrikanth S. Narayanan:
Modeling Multiple Time Series Annotations as Noisy Distortions of the Ground Truth: An Expectation-Maximization Approach. IEEE Trans. Affect. Comput. 9(1): 76-89 (2018) - [c38]Kartik Audhkhasi, Brian Kingsbury, Bhuvana Ramabhadran, George Saon, Michael Picheny:
Building Competitive Direct Acoustics-to-Word Models for English Conversational Speech Recognition. ICASSP 2018: 4759-4763 - [c37]Xuesong Yang, Kartik Audhkhasi, Andrew Rosenberg, Samuel Thomas, Bhuvana Ramabhadran, Mark Hasegawa-Johnson:
Joint Modeling of Accents and Acoustics for Multi-Accent Speech Recognition. ICASSP 2018: 5989-5993 - [c36]Yinghui Huang, Abhinav Sethy, Kartik Audhkhasi, Bhuvana Ramabhadran:
Whole Sentence Neural Language Models. ICASSP 2018: 6089-6093 - [c35]Gakuto Kurata, Kartik Audhkhasi:
Improved Knowledge Distillation from Bi-Directional to Uni-Directional LSTM CTC for End-to-End Speech Recognition. SLT 2018: 411-417 - [i7]Xuesong Yang, Kartik Audhkhasi, Andrew Rosenberg, Samuel Thomas, Bhuvana Ramabhadran, Mark Hasegawa-Johnson:
Joint Modeling of Accents and Acoustics for Multi-Accent Speech Recognition. CoRR abs/1802.02656 (2018) - 2017
- [j6]Kartik Audhkhasi, Andrew Rosenberg, George Saon, Abhinav Sethy, Bhuvana Ramabhadran, Stanley F. Chen, Michael Picheny:
Recent progress in deep end-to-end models for spoken language processing. IBM J. Res. Dev. 61(4-5): 2:1-2:10 (2017) - [j5]Kartik Audhkhasi, Andrew Rosenberg, Abhinav Sethy, Bhuvana Ramabhadran, Brian Kingsbury:
End-to-End ASR-Free Keyword Search From Speech. IEEE J. Sel. Top. Signal Process. 11(8): 1351-1359 (2017) - [c34]Jia Cui, Brian Kingsbury, Bhuvana Ramabhadran, George Saon, Tom Sercu, Kartik Audhkhasi, Abhinav Sethy, Markus Nußbaum-Thom, Andrew Rosenberg:
Knowledge distillation across ensembles of multilingual models for low-resource languages. ICASSP 2017: 4825-4829 - [c33]Kartik Audhkhasi, Andrew Rosenberg, Abhinav Sethy, Bhuvana Ramabhadran, Brian Kingsbury:
End-to-end ASR-free keyword search from speech. ICASSP 2017: 4840-4844 - [c32]Andrew Rosenberg, Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran, Michael Picheny:
End-to-end speech recognition and keyword search on low-resource languages. ICASSP 2017: 5280-5284 - [c31]George Saon, Gakuto Kurata, Tom Sercu, Kartik Audhkhasi, Samuel Thomas, Dimitrios Dimitriadis, Xiaodong Cui, Bhuvana Ramabhadran, Michael Picheny, Lynn-Li Lim, Bergul Roomi, Phil Hall:
English Conversational Telephone Speech Recognition by Humans and Machines. INTERSPEECH 2017: 132-136 - [c30]Kartik Audhkhasi, Bhuvana Ramabhadran, George Saon, Michael Picheny, David Nahamoo:
Direct Acoustics-to-Word Models for English Conversational Speech Recognition. INTERSPEECH 2017: 959-963 - [i6]Kartik Audhkhasi, Andrew Rosenberg, Abhinav Sethy, Bhuvana Ramabhadran, Brian Kingsbury:
End-to-End ASR-free Keyword Search from Speech. CoRR abs/1701.04313 (2017) - [i5]George Saon, Gakuto Kurata, Tom Sercu, Kartik Audhkhasi, Samuel Thomas, Dimitrios Dimitriadis, Xiaodong Cui, Bhuvana Ramabhadran, Michael Picheny, Lynn-Li Lim, Bergul Roomi, Phil Hall:
English Conversational Telephone Speech Recognition by Humans and Machines. CoRR abs/1703.02136 (2017) - [i4]Kartik Audhkhasi, Bhuvana Ramabhadran, George Saon, Michael Picheny, David Nahamoo:
Direct Acoustics-to-Word Models for English Conversational Speech Recognition. CoRR abs/1703.07754 (2017) - [i3]Kartik Audhkhasi, Brian Kingsbury, Bhuvana Ramabhadran, George Saon, Michael Picheny:
Building competitive direct acoustics-to-word models for English conversational speech recognition. CoRR abs/1712.03133 (2017) - 2016
- [j4]Rahul Gupta, Kartik Audhkhasi, Sungbok Lee, Shrikanth S. Narayanan:
Detecting paralinguistic events in audio stream using context in features and probabilistic decisions. Comput. Speech Lang. 36: 72-92 (2016) - [j3]Kartik Audhkhasi, Osonde Osoba, Bart Kosko:
Noise-enhanced convolutional neural networks. Neural Networks 78: 15-23 (2016) - [c29]Jie Chen, Lingfei Wu, Kartik Audhkhasi, Brian Kingsbury, Bhuvana Ramabhadran:
Efficient one-vs-one kernel ridge regression for speech recognition. ICASSP 2016: 2454-2458 - [c28]Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran:
Semantic word embedding neural network language models for automatic speech recognition. ICASSP 2016: 5995-5999 - [c27]Samuel Thomas, Kartik Audhkhasi, Jia Cui, Brian Kingsbury, Bhuvana Ramabhadran:
Multilingual Data Selection for Low Resource Speech Recognition. INTERSPEECH 2016: 3853-3857 - [i2]Dmitriy Serdyuk, Kartik Audhkhasi, Philemon Brakel, Bhuvana Ramabhadran, Samuel Thomas, Yoshua Bengio:
Invariant Representations for Noisy Speech Recognition. CoRR abs/1612.01928 (2016) - 2015
- [c26]Jia Cui, Brian Kingsbury, Bhuvana Ramabhadran, Abhinav Sethy, Kartik Audhkhasi, Xiaodong Cui, Ellen Kislal, Lidia Mangu, Markus Nußbaum-Thom, Michael Picheny, Zoltán Tüske, Pavel Golik, Ralf Schlüter, Hermann Ney, Mark J. F. Gales, Kate M. Knill, Anton Ragni, Haipeng Wang, Philip C. Woodland:
Multilingual representations for low resource speech recognition and keyword search. ASRU 2015: 259-266 - [c25]Rahul Gupta, Kartik Audhkhasi, Shrikanth S. Narayanan:
A mixture of experts approach towards intelligibility classification of pathological speech. ICASSP 2015: 1986-1990 - [c24]Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran:
Diverse Embedding Neural Network Language Models. ICLR (Workshop) 2015 - 2014
- [j2]Kartik Audhkhasi, Andreas M. Zavou, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Theoretical Analysis of Diversity in an Ensemble of Automatic Speech Recognition Systems. IEEE ACM Trans. Audio Speech Lang. Process. 22(3): 711-726 (2014) - [c23]Rahul Gupta, Kartik Audhkhasi, Shrikanth S. Narayanan:
Training ensemble of diverse classifiers on feature subsets. ICASSP 2014: 2927-2931 - [c22]Naveen Kumar, Maarten Van Segbroeck, Kartik Audhkhasi, Peter Drotár, Shrikanth S. Narayanan:
Fusion of diverse denoising systems for robust automatic speech recognition. ICASSP 2014: 5557-5561 - [c21]Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran, Shrikanth S. Narayanan:
Semi-supervised term-weighted value rescoring for keyword search. ICASSP 2014: 7869-7873 - 2013
- [j1]Kartik Audhkhasi, Shrikanth S. Narayanan:
A Globally-Variant Locally-Constant Model for Fusion of Labels from Multiple Diverse Experts without Using Reference Labels. IEEE Trans. Pattern Anal. Mach. Intell. 35(4): 769-783 (2013) - [c20]Abhinav Sethy, Stanley F. Chen, Ebru Arisoy, Bhuvana Ramabhadran, Kartik Audhkhasi, Shrikanth S. Narayanan, Paul Vozila:
Joint training of interpolated exponential n-gram models. ASRU 2013: 25-30 - [c19]Kartik Audhkhasi, Osonde Osoba, Bart Kosko:
Noise benefits in backpropagation and deep bidirectional pre-training. IJCNN 2013: 1-8 - [c18]Kartik Audhkhasi, Osonde Osoba, Bart Kosko:
Noisy hidden Markov models for speech recognition. IJCNN 2013: 1-6 - [c17]Rahul Gupta, Kartik Audhkhasi, Sungbok Lee, Shrikanth S. Narayanan:
Paralinguistic event detection from speech using probabilistic time-series smoothing and masking. INTERSPEECH 2013: 173-177 - [c16]Daniel Bone, Theodora Chaspari, Kartik Audhkhasi, James Gibson, Andreas Tsiartas, Maarten Van Segbroeck, Ming Li, Sungbok Lee, Shrikanth S. Narayanan:
Classifying language-related developmental disorders from speech cues: the promise and the potential confounds. INTERSPEECH 2013: 182-186 - [c15]Kartik Audhkhasi, Andreas M. Zavou, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Empirical link between hypothesis diversity and fusion performance in an ensemble of automatic speech recognition systems. INTERSPEECH 2013: 3082-3086 - [c14]Fabrizio Morbini, Kartik Audhkhasi, Kenji Sagae, Ron Artstein, Dogan Can, Panayiotis G. Georgiou, Shrikanth S. Narayanan, Anton Leuski, David R. Traum:
Which ASR should I choose for my dialogue system? SIGDIAL Conference 2013: 394-403 - [i1]Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran, Shrikanth S. Narayanan:
Generalized Ambiguity Decomposition for Understanding Ensemble Diversity. CoRR abs/1312.7463 (2013) - 2012
- [c13]Kartik Audhkhasi, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Analyzing quality of crowd-sourced speech transcriptions of noisy audio for acoustic model adaptation. ICASSP 2012: 4137-4140 - [c12]Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran, Shrikanth S. Narayanan:
Creating ensemble of diverse maximum entropy models. ICASSP 2012: 4845-4848 - [c11]Kartik Audhkhasi, Angeliki Metallinou, Ming Li, Shrikanth S. Narayanan:
Speaker Personality Classification Using Systems Based on Acoustic-Lexical Cues and an Optimal Tree-Structured Bayesian Network. INTERSPEECH 2012: 262-265 - [c10]Fabrizio Morbini, Kartik Audhkhasi, Ron Artstein, Maarten Van Segbroeck, Kenji Sagae, Panayiotis G. Georgiou, David R. Traum, Shrikanth S. Narayanan:
A reranking approach for recognition and classification of speech input in conversational dialogue systems. SLT 2012: 49-54 - 2011
- [c9]Kartik Audhkhasi, Shrikanth S. Narayanan:
Emotion classification from speech using evaluator reliability-weighted combination of ranked lists. ICASSP 2011: 4956-4959 - [c8]Kartik Audhkhasi, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Accurate transcription of broadcast news speech using multiple noisy transcribers and unsupervised reliability metrics. ICASSP 2011: 4980-4983 - [c7]Kartik Audhkhasi, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Reliability-Weighted Acoustic Model Adaptation Using Crowd Sourced Transcriptions. INTERSPEECH 2011: 3045-3048 - 2010
- [c6]Kartik Audhkhasi, Shrikanth S. Narayanan:
Data-dependent evaluator modeling and its application to emotional valence classification from speech. INTERSPEECH 2010: 2366-2369 - [c5]Qun Feng Tan, Kartik Audhkhasi, Panayiotis G. Georgiou, Emil Ettelaie, Shrikanth S. Narayanan:
Automatic speech recognition system channel modeling. INTERSPEECH 2010: 2442-2445
2000 – 2009
- 2009
- [c4] Kartik Audhkhasi, Panayiotis G. Georgiou, Shrikanth S. Narayanan: Lattice-based lexical cues for word fragment detection in conversational speech. ASRU 2009: 568-573
- [c3] Om Deshmukh, Kundan Kandhway, Ashish Verma, Kartik Audhkhasi: Automatic evaluation of spoken english fluency. ICASSP 2009: 4829-4832
- [c2] Kartik Audhkhasi, Kundan Kandhway, Om Deshmukh, Ashish Verma: Formant-based technique for automatic filled-pause detection in spontaneous spoken english. ICASSP 2009: 4857-4860
- 2007
- [c1] Kartik Audhkhasi, Ashish Verma: Keyword Search using Modified Minimum Edit Distance Measure. ICASSP (4) 2007: 929-932
last updated on 2024-10-07 21:24 CEST by the dblp team
all metadata released as open data under CC0 1.0 license