CN112599119B - Method for establishing and analyzing mobility dysarthria voice library in big data background - Google Patents
Method for establishing and analyzing mobility dysarthria voice library in big data background
- Publication number
- CN112599119B CN112599119B CN202011546906.5A CN202011546906A CN112599119B CN 112599119 B CN112599119 B CN 112599119B CN 202011546906 A CN202011546906 A CN 202011546906A CN 112599119 B CN112599119 B CN 112599119B
- Authority
- CN
- China
- Prior art keywords
- voice
- speech
- data
- corpus
- big data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 206010013887 Dysarthria Diseases 0.000 title claims abstract description 33
- 238000000034 method Methods 0.000 title claims abstract description 29
- 241001672694 Citrus reticulata Species 0.000 claims abstract description 18
- 238000004458 analytical method Methods 0.000 claims abstract description 15
- 238000005516 engineering process Methods 0.000 claims abstract description 13
- 238000007405 data analysis Methods 0.000 claims abstract description 11
- 238000013461 design Methods 0.000 claims abstract description 10
- 239000000463 material Substances 0.000 claims description 27
- 230000006870 function Effects 0.000 claims description 11
- 238000002360 preparation method Methods 0.000 claims description 6
- 238000011835 investigation Methods 0.000 claims description 5
- 230000007246 mechanism Effects 0.000 claims description 4
- 238000000605 extraction Methods 0.000 claims description 3
- 238000012549 training Methods 0.000 claims description 3
- 230000000630 rising effect Effects 0.000 claims description 2
- 230000011218 segmentation Effects 0.000 claims description 2
- 238000002372 labelling Methods 0.000 claims 11
- 230000000142 dyskinetic effect Effects 0.000 claims 2
- 238000013518 transcription Methods 0.000 claims 1
- 230000035897 transcription Effects 0.000 claims 1
- 208000012902 Nervous system disease Diseases 0.000 abstract description 4
- 208000025966 Neurological disease Diseases 0.000 abstract description 4
- 230000008901 benefit Effects 0.000 abstract description 4
- 238000003745 diagnosis Methods 0.000 abstract description 2
- 238000005259 measurement Methods 0.000 abstract description 2
- 238000011160 research Methods 0.000 description 20
- 238000012545 processing Methods 0.000 description 15
- 238000010276 construction Methods 0.000 description 7
- 230000008569 process Effects 0.000 description 7
- 230000002354 daily effect Effects 0.000 description 5
- 230000015572 biosynthetic process Effects 0.000 description 4
- 208000018737 Parkinson disease Diseases 0.000 description 3
- 238000011161 development Methods 0.000 description 3
- 230000018109 developmental process Effects 0.000 description 3
- 208000037265 diseases, disorders, signs and symptoms Diseases 0.000 description 3
- 230000006872 improvement Effects 0.000 description 3
- 230000033764 rhythmic process Effects 0.000 description 3
- 206010011878 Deafness Diseases 0.000 description 2
- 208000032041 Hearing impaired Diseases 0.000 description 2
- 208000006011 Stroke Diseases 0.000 description 2
- 238000003759 clinical diagnosis Methods 0.000 description 2
- 208000035475 disorder Diseases 0.000 description 2
- 239000003814 drug Substances 0.000 description 2
- 229940079593 drug Drugs 0.000 description 2
- 230000003203 everyday effect Effects 0.000 description 2
- 201000006417 multiple sclerosis Diseases 0.000 description 2
- 210000003205 muscle Anatomy 0.000 description 2
- 238000012827 research and development Methods 0.000 description 2
- 238000012706 support-vector machine Methods 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 102000014461 Ataxins Human genes 0.000 description 1
- 108010078286 Ataxins Proteins 0.000 description 1
- 206010008025 Cerebellar ataxia Diseases 0.000 description 1
- 208000009415 Spinocerebellar Ataxias Diseases 0.000 description 1
- 208000030886 Traumatic Brain injury Diseases 0.000 description 1
- 206010002026 amyotrophic lateral sclerosis Diseases 0.000 description 1
- 201000004562 autosomal dominant cerebellar ataxia Diseases 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 208000029028 brain injury Diseases 0.000 description 1
- 210000003169 central nervous system Anatomy 0.000 description 1
- 206010008129 cerebral palsy Diseases 0.000 description 1
- 208000030251 communication disease Diseases 0.000 description 1
- 230000006378 damage Effects 0.000 description 1
- 230000006735 deficit Effects 0.000 description 1
- 201000010099 disease Diseases 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 230000036541 health Effects 0.000 description 1
- 230000001660 hyperkinetic effect Effects 0.000 description 1
- 230000003483 hypokinetic effect Effects 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 238000005065 mining Methods 0.000 description 1
- 230000001575 pathological effect Effects 0.000 description 1
- 230000007170 pathology Effects 0.000 description 1
- 210000001428 peripheral nervous system Anatomy 0.000 description 1
- 238000007781 pre-processing Methods 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 230000029058 respiratory gaseous exchange Effects 0.000 description 1
- 239000000523 sample Substances 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 230000001148 spastic effect Effects 0.000 description 1
- 208000027765 speech disease Diseases 0.000 description 1
- 238000003786 synthesis reaction Methods 0.000 description 1
- 230000001960 triggered effect Effects 0.000 description 1
- 230000003313 weakening effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/65—Clustering; Classification
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/04—Segmentation; Word boundary detection
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/34—Adaptation of a single recogniser for parallel processing, e.g. by use of multiple processors or cloud computing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/66—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computational Linguistics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Theoretical Computer Science (AREA)
- Public Health (AREA)
- Epidemiology (AREA)
- Biomedical Technology (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Physics (AREA)
- Pathology (AREA)
- Primary Health Care (AREA)
- Electrically Operated Instructional Devices (AREA)
- Machine Translation (AREA)
Abstract
The invention relates to a method for establishing and analyzing a motor dysarthria speech library in the context of big data, comprising the following steps: design of the pronunciation text; speech recording; analysis of the acoustic parameters of the speech files; establishment of a database management system; and data analysis with big data technology. The invention aims to study the speech characteristics of patients with motor dysarthria caused by neurological diseases. Relying on the advantages of an open network platform, it can carry out measurement covering large-scale groups and collect related information, establish speech libraries of Mandarin, dialects, healthy speakers and patients, and, on this basis, build a lexicon that supports the diagnosis of patients with motor dysarthria.
Description
Technical field
The invention relates to a method for establishing and analyzing a motor dysarthria speech library in the context of big data.
Background art
(1) Current status of research on motor dysarthria:
Motor dysarthria refers to a group of speech disorders caused by impaired muscle control resulting from damage to the central or peripheral nervous system. It typically manifests as slowed, weakened, imprecise and uncoordinated movement of the speech-related musculature, and may also affect respiration, resonance, laryngeal phonation, articulation and prosody; clinically it is often simply called dysarthria. Common causes of motor dysarthria include traumatic brain injury, cerebral palsy, amyotrophic lateral sclerosis, multiple sclerosis, stroke, Parkinson's disease and spinocerebellar ataxia. According to neuroanatomical and speech-acoustic characteristics, dysarthria can be classified into flaccid, spastic, ataxic, hypokinetic, hyperkinetic and mixed types. Among communication disorders associated with brain injury, the incidence of dysarthria is as high as 54%. Clinical examination of voice, resonance, prosody and related aspects can reflect the speech-acoustic characteristics of dysarthria both subjectively and objectively, which helps to provide targeted treatment and to elucidate the speech-acoustic pathological mechanism of dysarthria comprehensively and scientifically.
There are few domestic or foreign reports on the overall incidence of motor dysarthria. Miller et al. studied 125 patients with Parkinson's disease and found that 69.6% had mean speech intelligibility below that of normal controls, with 51.2% more than one standard deviation lower, indicating a high prevalence of dysarthria in Parkinson's disease. Bogousslavsky et al. screened 1,000 first-time stroke patients and found that up to 46% had speech impairment, of whom 12.4% were diagnosed with dysarthria. Hartelius et al. likewise found an incidence of dysarthria of 51% among patients with multiple sclerosis. These figures show that the incidence of dysarthria is relatively high. There is currently no unified domestic method for assessing dysarthria, and no dedicated standard for assessing motor dysarthria; most practitioners use the Frenchay Dysarthria Assessment (or a modified version) together with the China Rehabilitation Research Center dysarthria checklist, with clinical or rehabilitation physicians examining, scoring, recording and rating the degree and type of dysarthria.
(2) Current status of domestic speech-database research:
With the development of information technology and computer science, speech technology has made interaction between machines and natural human language possible. Whether the goal is speech synthesis, speech recognition or speaker recognition, research must rely on the construction of a high-quality back-end speech corpus. Speech databases abroad are relatively mature, and research on Chinese speech databases has also advanced rapidly over the past decade or so, with speech databases established in different linguistic and cultural contexts. The construction of speech databases for motor dysarthria, however, is still at the research stage.
Domestic research on the assessment of articulatory and phonological function has focused mainly on subjective assessment, and only a few researchers distinguish the concepts of articulation and phonology. Huang Zhaoming et al. proposed the Chinese Articulation Ability Test Word List, which contains 50 characters; by evaluating a subject's articulation of these 50 characters, a speech rehabilitation therapist can comprehensively assess the subject's articulation of 21 initials and 4 tones, while the subject's phonemic-contrast ability is assessed through 18 phonemic contrasts and 37 minimal pairs. Chen Sanding et al. evaluated the Mandarin initials, finals and tones of 50 deaf children, revealed the developmental pattern of articulation and phonology in Mandarin-speaking deaf children, and further proposed the speech-rehabilitation principles of "early, sequential, fault-tolerant and consolidating". Dr. Zhang Jing of East China Normal University studied the main error patterns of hearing-impaired children in consonant articulation, analyzed their causes, and proposed a corresponding consonant-phoneme treatment framework for hearing-impaired children.
(3) Current status of big data research in the medical field:
A popular definition of big data is: data that exceeds the capture, storage, processing and analysis capabilities of typical database software tools. Big data differs from traditional concepts such as ultra-large-scale or massive data in that it has four basic characteristics: volume, variety, velocity and value. Kayyali et al. studied the impact of big data on the U.S. medical industry and pointed out that its value to healthcare will become increasingly significant over time. Big data in the medical field currently comes mainly from pharmaceutical companies, clinical diagnosis data, patient treatment records, health management and social-network data. Drug development, for example, is a data-intensive process: even for small and medium-sized enterprises, the data from a single drug-development project exceeds a terabyte. Hospital data also grows very quickly: a single dual-source CT examination of one patient produces about 3,000 images, roughly 1.5 GB of imaging data, and a standard pathology image is nearly 5 GB; together with visit records, electronic medical records and other data, the volume grows rapidly every day. Research methods based on the analysis of massive data have prompted reflection on scientific methodology: new findings can be obtained by directly analyzing and mining massive data without direct contact with the research subjects, which may give rise to a new model of scientific research.
The establishment of a speech corpus is a tedious and complicated task, and its later refinement still leaves issues to be addressed, such as making full use of the existing inter-word tone-sandhi rules so that sandhi and the neutral tone are reflected as realistically as possible. Where material is insufficient, the utilization of the existing corpus can be improved at the preprocessing stage. For these reasons the speech library should be an open database, so that it can be extended and modified at any time and thus gradually improved. Because speech conditions vary widely, the construction of any specific speech corpus will encounter a variety of difficulties; the issues discussed here are only one exploration of how to build such a corpus, in the hope of providing data support for speech research and of playing a useful role in developing the language and improving the speech corpus.
In addition, the sheer volume of data is undoubtedly a major advantage of network big data analysis, but guaranteeing the quality of massive data, and cleaning, managing and analyzing it, are also major technical difficulties addressed by this work. Massive network data is multi-source and heterogeneous, interactive, time-sensitive, bursty and noisy; as a result, network big data, although of great value, is also noisy and of low value density. This poses a huge challenge to ensuring data quality in network big data analysis.
Summary of the invention
The present invention provides a method for establishing and analyzing a motor dysarthria speech library in the context of big data. The technical problem it addresses is that, although the large volume of data is a major advantage of network big data analysis, guaranteeing the quality of massive data and cleaning, managing and analyzing it remain major technical difficulties.
In order to solve the above technical problems, the present invention adopts the following solution:
A method for establishing and analyzing a motor dysarthria speech library in the context of big data, comprising the following steps: Step 1: design of the pronunciation text;
Step 2: speech recording;
Step 3: annotation of the speech files;
Step 4: analysis of the acoustic parameters of the speech files;
Step 5: establishment of a database management system;
Step 6: data analysis with big data technology.
Preferably, the data analysis with big data technology in step 6 is based on a speech classification mechanism on the Hadoop platform and comprises the following sub-steps:
Step 61: collect a plurality of patient speech files, segment and annotate the speech, build a speech database, analyze the extracted acoustic parameters, and obtain effective features for speech classification;
Step 62: on the Hadoop platform, use the Map function to subdivide the big-data speech classification problem, solve the speech classification of the sub-problems in parallel and in a distributed manner across multiple nodes, and obtain the corresponding speech classification results;
Step 63: finally, use the Reduce function to combine the speech classification results of the sub-problems, so as to meet the online requirements of big-data speech classification.
Preferably, the design of the pronunciation text in step 1 includes the selection of the pronunciation text, and the selection principles of the corpus of the pronunciation text include one or more of the following:
a. The single characters in the corpus should cover all phonological phenomena as far as possible, so as to reflect the phonological characteristics of different patients' speech better and more conveniently;
b. The vocabulary in the corpus is based on the commonly used word lists for Chinese language surveys, so it can easily be compared with Mandarin Chinese;
c. The sentences in the corpus are mainly obtained from conversations with patients on several related topics, so they better match the real situations faced by speech recognition; the "several related topics" include daily-life topics or medical-history topics, for example asking about the time of first onset and the medical history.
d. The sentences in the corpus are complete in both content and semantics, so they can reflect the prosodic information of a sentence as far as possible;
e. Triphones are not selected by category, which effectively alleviates the problem of sparse training data.
Preferably, the design of the pronunciation text in step 1 also includes the compilation of the pronunciation text, and the compilation principles include one or more of the following:
a. Single-character part: the initials, finals and tones listed in the survey word list, together with some common characters, are used as the main recording material of this speech library;
b. Vocabulary part: based on, but not limited to, a four-thousand-word list, relevant words are recorded according to existing conclusions about the relevant phonology, striving to reflect its phonetic characteristics comprehensively, including segmental and suprasegmental features; for some distinctive phonetic phenomena, example words can be added to reflect them. "Words recorded according to existing conclusions about the relevant phonology" means common words summarized from the sounds used in the language, their combination rules, and the characteristics of rhythm and intonation.
"Distinctive phonetic phenomena" refers to sounds that are easily mispronounced in dialects, for example dental and retroflex sibilants that are hard to tell apart, or no distinction between f and h.
c. Sentence-material part: the amount of material is determined by each speaker's command of the language; the selection should ensure that the coverage of the material is as wide as possible while remaining representative. "Representative" here means sentences of general applicability that embody the linguistic characteristics of motor dysarthria.
d. Natural-conversation part: on topics of daily life, in the form of answering questions and free conversation, 20-40 minutes of speech material is recorded from each speaker, covering everyday colloquial words that differ from their Mandarin equivalents, which the speaker is asked to say in dialect.
Preferably, the speech recording in step 2 includes the selection of speakers. The selection principle is to choose native speakers who articulate clearly, speak at a moderate rate ("moderate rate" meaning about 120-150 characters per minute), use the local language fluently and are willing to cooperate actively with the survey; their language environment should be relatively stable and they should have a certain level of education. Alternatively or additionally, the speech recording includes speech acquisition by means of a speech collector, carried out in two ways: one is reading aloud with a prompt text, where the prompt is written material in Chinese which the speaker converts into his or her own native language and reads aloud; the other is natural speech, where the speaker uses the prompts to tell folk stories, describe ethnic living conditions, or hum local folk songs.
Preferably, the acoustic parameter analysis of the speech files in step 4 includes the speech annotation of the speech library. The basic annotation includes the segmentation and alignment of the initial and final of each syllable, together with the labeling of initials, finals and tones, and consists of two parts. The first part is text annotation, i.e., Chinese characters plus pinyin transcription: the speech information is recorded in Chinese characters so that it can be used by the recognition system and can also provide material for linguistic research; the text annotation must record the basic textual information as well as paralinguistic phenomena, which can be represented by general paralinguistic symbols. The second part is syllable annotation: Mandarin syllables are annotated with standard Mandarin syllable labels, and the annotation is tonal; in the tone labels, 0 denotes the neutral tone, 1 the first (high level) tone, 2 the second (rising) tone, 3 the third (falling-rising) tone, and 4 the fourth (falling) tone.
Preferably, the acoustic parameter analysis of the speech files in step 4 also includes the extraction of acoustic parameters. First, the recorded speech is segmented and silent segments are removed, so that the objects of analysis are single characters, words, phrases, sentences and dialogue; then the start and end points of the speech signal are determined in the speech waveform data and the speech is labeled; finally, the corresponding fundamental-frequency and formant acoustic parameters are obtained with an autocorrelation algorithm.
Preferably, the establishment of the database management system in step 5 includes the selection of the database, and an easily implemented SQL database management system is selected.
A big-data speech classification method based on the Hadoop platform comprises the following steps: construct a speech library using the establishment method described above; on the basis of this speech library and the Hadoop platform, use the Map function to subdivide the big-data speech classification problem, solve the speech classification of the sub-problems in parallel and in a distributed manner across multiple nodes, and obtain the corresponding speech classification results; finally, use the Reduce function to combine the speech classification results of the sub-problems, so as to meet the online requirements of big-data speech classification.
The specific steps are as follows:
(1) The client submits a speech classification task to the JobTracker of the Hadoop platform, and the JobTracker copies the speech feature data to the local distributed file system;
(2) The speech classification task is initialized and placed in the task queue, and the JobTracker assigns tasks to the corresponding nodes, i.e., the TaskTrackers, according to the processing capability of each node;
(3) Each TaskTracker, according to its assigned task, uses a support vector machine to fit the relationship between the speech features to be classified and the speech feature library, and obtains the corresponding speech category;
(4) The corresponding speech category is saved to the local disk as a key/value pair;
(5) Intermediate speech classification results with the same key/value are merged, the merged results are passed to Reduce for processing to obtain the speech classification results, and the results are written to the distributed file system;
(6) The JobTracker clears the task status, and the user obtains the speech classification results from the distributed file system.
The method for establishing and analyzing a motor dysarthria speech library in the context of big data has the following beneficial effects:
(1) The invention aims to study the speech characteristics of patients with motor dysarthria caused by neurological diseases. Relying on the advantages of an open network platform, it can carry out measurement covering large-scale groups, collect related information, and establish speech libraries of Mandarin, dialects, healthy speakers and patients; on this basis it builds a lexicon that supports the diagnosis of patients with motor dysarthria.
(2) As the speech library is continuously expanded, the invention ultimately builds a rich data resource center organized by Mandarin, dialect, medical history, disease severity and other information, providing patients with neurological diseases with a means of autonomous online diagnosis, assisting doctors in clinical diagnosis and treatment, and providing a rich and accurate data platform for quantifying the condition of neurological diseases.
(3) On the basis of the speech library and the Hadoop platform, the invention uses the Map function to subdivide the big-data speech classification problem, solves the speech classification of the sub-problems in parallel and in a distributed manner across multiple nodes, and obtains the corresponding speech classification results; finally, the Reduce function is used to combine the speech classification results of the sub-problems, so as to meet the online requirements of big-data speech classification.
Description of the drawings
Figure 1: Example of the phonetic annotation of "bao" in an embodiment of the invention.
Figure 2: Formant data of the speech "bao" in an embodiment of the invention.
Figure 3: Basic framework of the Hadoop platform in an embodiment of the invention.
Figure 4: Big-data speech classification flow of the invention based on the Hadoop platform.
Detailed description of the embodiments
The invention is further described below with reference to Figures 1 to 4:
The speech library consists of an unvoiced-sound library, a voiced-sound library, a tone library, a speech synthesis program, and a Chinese-character-to-pinyin conversion program.
1. Establishment of the unvoiced-sound library:
Given the characteristics of unvoiced sounds, and in order to improve the quality of synthesized speech, the unvoiced-sound library is built by direct sampling: the unvoiced portion preceding the voiced segment of each pinyin combination is sampled to form the unvoiced library. Because the unvoiced part actually occupies only a small fraction of a syllable, the unvoiced library extracted from the 400-odd toneless syllables takes up very little storage space.
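The patent does not specify how the unvoiced portion preceding the voiced segment is located before it is sampled into the unvoiced library. The sketch below illustrates one plausible heuristic based on short-time energy and zero-crossing rate; the frame length, the thresholds and the WAV-file input are illustrative assumptions, not part of the patented method.

```python
import numpy as np
from scipy.io import wavfile  # assumed WAV input; any audio loader would do

def unvoiced_prefix(path, frame_ms=20):
    """Return the unvoiced onset of a syllable recording (heuristic sketch).

    Unvoiced frames tend to have low energy and a high zero-crossing rate;
    the prefix is taken to end at the first frame that looks voiced.
    """
    sr, x = wavfile.read(path)
    x = x.astype(np.float64)
    x /= (np.abs(x).max() + 1e-9)                    # normalise amplitude
    n = int(sr * frame_ms / 1000)                    # samples per frame
    for k in range(0, (len(x) - n) // n + 1):
        f = x[k * n:(k + 1) * n]
        energy = np.mean(f ** 2)
        zcr = np.mean(np.abs(np.diff(np.sign(f)))) / 2
        if energy > 0.01 and zcr < 0.15:             # assumed voicing thresholds
            return x[:k * n]                         # everything before the first voiced frame
    return x                                         # no voiced frame found

# usage sketch: sample the unvoiced prefix of each toneless syllable recording
# prefix = unvoiced_prefix("bao.wav")
```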
2. Establishment of the voiced-sound library:
Voiced sounds are synthesized by the voiced-sound synthesis program from the VTFR of the corresponding voiced sound. The voiced-sound library is in fact composed of the VTFRs of the various voiced sounds: the VTFR extraction program extracts the VTFR of each voiced sound in turn, and the VTFRs of all voiced sounds are stored together with the voiced-sound synthesis program in one data package to form the voiced library. Each extracted VTFR is just a single curve, so the voiced library constructed in this way occupies very little space.
The establishment of the speech corpus of the invention mainly comprises the following processes: design of the pronunciation text; speech recording; parameter analysis of the speech files; establishment of a database management system; and data analysis with big data technology.
1. Design of the pronunciation text
1.1 Selection of the pronunciation text:
How to select the corpus material is the key to building the corpus. To ensure that construction proceeds in an orderly and effective way and that the quality of the corpus is guaranteed, the selection principles must be studied and formulated before construction begins. The selection principles of this speech corpus are: first, the single characters in the corpus should cover all phonological phenomena as far as possible, so that the phonological characteristics of the dialect can be reflected better and more conveniently; second, the vocabulary is based on the commonly used word lists for Chinese language surveys, so it can easily be compared with Mandarin Chinese; third, the sentences are mainly selected from spoken material, so they better match the real situations faced by speech recognition; fourth, the sentences are complete in both content and semantics, so they can reflect the prosodic information of a sentence as far as possible; fifth, triphones are not selected by category, which effectively alleviates the problem of sparse training data.
1.2 Compilation of the pronunciation text:
The compilation of the pronunciation text is one of the key steps in establishing a speech database. The pronunciation material is determined according to the text-selection principles and consists of four parts. The first is the single-character part: the initials, finals and tones listed in the survey word list, together with some common characters, are used as the main recording material of this speech library. The second is the vocabulary part: based on, but not limited to, a four-thousand-word list, relevant words are recorded according to existing conclusions about the relevant phonology, striving to reflect its phonetic characteristics comprehensively, including segmental and suprasegmental features; for some distinctive phonetic phenomena, example words can be added to reflect them. The third is the sentence-material part: the amount of material is determined by each speaker's command of the language, and the selection should ensure that the coverage of the material is as wide as possible while remaining representative. The fourth is the natural-conversation part: on topics of daily life, in the form of answering questions and free conversation, about half an hour of speech material is recorded from each speaker, covering everyday colloquial words that differ from their Mandarin equivalents, which the speaker is asked to say in dialect.
2. Speech recording
2.1 Selection of speakers:
The principle for selecting speakers is to choose native speakers who articulate clearly, speak at a moderate rate, use the local language fluently and are willing to cooperate actively with the survey; their language environment should be relatively stable and they should have a certain level of education.
2.2 Speech acquisition:
The manner of speaking during recording directly determines the use of the speech library. Because of the particular nature of the material collected, two methods are used depending on the research purpose: one is reading aloud with a prompt text, where the prompt is written material in Chinese which the speaker converts into his or her own native language and reads aloud; the other is natural speech, where the speaker may use the prompts to tell folk stories, describe ethnic living conditions, or hum local folk songs.
3. Parameter analysis of the speech files:
After the pronunciation text has been recorded, the speech data needs to be analyzed and processed to obtain the various characteristics of the speech signal. This is the key to the design of the speech corpus and the necessary foundation for later speech processing. The invention focuses on studying speech information, so the basic attributes of the speech waveform must be annotated and the relevant acoustic parameters extracted.
3.1 Information annotation of the speech library:
Speech annotation uses the Praat software and follows the Chinese segmental annotation system SAMPA-C for hierarchical annotation. The annotation of the speech library comprises two parts, text annotation and tonal-syllable annotation; the speech "bao" is taken as an example, as shown in Figure 1.
The first part is text annotation, i.e., Chinese characters plus pinyin transcription: the speech information is recorded in Chinese characters so that it can be used by the recognition system and can also provide material for linguistic research. The text annotation must record the basic textual information as well as paralinguistic phenomena, which can be represented by general paralinguistic symbols.
The second part is syllable annotation: Mandarin syllables are annotated with standard Mandarin syllable labels, and the annotation is tonal. In the tone labels, 0 denotes the neutral tone, 1 the first (high level) tone, 2 the second (rising) tone, 3 the third (falling-rising) tone, and 4 the fourth (falling) tone.
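As a minimal illustration of the two-part annotation just described, the record below shows how the text tier and the tonal-syllable tier of a third-tone syllable "bao" might be represented in code. The field names, the example character and the time alignment are assumptions made purely for illustration and are not a format defined by the patent or by SAMPA-C.

```python
# Hypothetical two-tier annotation record for the syllable shown in Figure 1.
TONE_NAMES = {0: "neutral", 1: "high level", 2: "rising", 3: "falling-rising", 4: "falling"}

annotation = {
    "text_tier": {"hanzi": "宝", "pinyin": "bao3"},       # characters + pinyin transcription
    "syllable_tier": {
        "initial": "b", "final": "ao", "tone": 3,         # tonal syllable annotation
        "start_s": 0.12, "end_s": 0.43,                   # time alignment of the syllable (seconds)
    },
}
print(TONE_NAMES[annotation["syllable_tier"]["tone"]])    # -> "falling-rising"
```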
3.2 Extraction of acoustic parameters:
For the recorded speech signal, the acoustic parameters of each segment must also be extracted. In practice, the recorded speech is first segmented and silent segments are removed, so that each object of analysis is a single word; then the start and end points of the speech signal are determined in the waveform data and the range of the final is marked; finally, the corresponding fundamental-frequency and formant data are obtained with an autocorrelation algorithm, as illustrated for the speech "bao" in Figure 2.
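To make the autocorrelation step concrete, the sketch below estimates the fundamental frequency of a single voiced frame with plain NumPy. The frame length, the lag search range and the voicing threshold are assumed values; a complete implementation would also track F0 over time and estimate the formants (for example by LPC analysis), which is omitted here.

```python
import numpy as np

def f0_autocorr(frame, sr, fmin=60.0, fmax=400.0):
    """Estimate the F0 of one voiced frame with the autocorrelation method (sketch)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # one-sided autocorrelation
    lo, hi = int(sr / fmax), int(sr / fmin)                        # plausible pitch-lag range
    lag = lo + int(np.argmax(ac[lo:hi]))                           # strongest periodicity
    if ac[lag] < 0.3 * ac[0]:                                      # assumed voicing threshold
        return 0.0                                                 # treat the frame as unvoiced
    return sr / lag

# usage sketch: a 40 ms frame taken from the final of "bao"
# sr, x = ...                                   # load the segmented waveform
# print(f0_autocorr(x[2000:2000 + int(0.04 * sr)], sr))
```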
4. Establishment of the database management system:
4.1 Selection of the database
As for the choice of database, the speech library needs to store a large amount of speech waveform data, which is characterized by large volume and variable length, with relatively low requirements for transaction processing and recovery, security and network support. An easily implemented SQL database management system can therefore be used.
4.2 Establishment of the database management system
The database management system of the speech corpus needs to store four kinds of material: first, speaker attribute data, such as the speaker's age, gender, education, command of Chinese and use of the mother tongue; second, pronunciation text material, i.e., the speakers' pronunciation texts together with the corresponding dialect pronunciations, Mandarin IPA transcriptions and other textual material; third, the actual speech data, mainly the raw data of the recorded speech waveforms; fourth, acoustic analysis parameter data, i.e., the acoustic parameters extracted from the processed speech waveforms.
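A minimal sketch of how these four kinds of material could be laid out in an SQL database is given below, using Python's built-in sqlite3 module. The table and column names are illustrative assumptions; a production system might store the waveforms as files referenced by path rather than as BLOBs.

```python
import sqlite3

schema = """
CREATE TABLE IF NOT EXISTS speaker (            -- 1. speaker attribute data
    speaker_id INTEGER PRIMARY KEY,
    age INTEGER, gender TEXT, education TEXT,
    mandarin_level TEXT, native_language_use TEXT
);
CREATE TABLE IF NOT EXISTS prompt_text (        -- 2. pronunciation text material
    text_id INTEGER PRIMARY KEY,
    hanzi TEXT, dialect_transcription TEXT, mandarin_ipa TEXT
);
CREATE TABLE IF NOT EXISTS recording (          -- 3. raw speech waveform data
    recording_id INTEGER PRIMARY KEY,
    speaker_id INTEGER REFERENCES speaker(speaker_id),
    text_id INTEGER REFERENCES prompt_text(text_id),
    sample_rate INTEGER, waveform BLOB
);
CREATE TABLE IF NOT EXISTS acoustic_params (    -- 4. extracted acoustic parameters
    recording_id INTEGER REFERENCES recording(recording_id),
    f0_mean REAL, f1 REAL, f2 REAL, f3 REAL
);
"""

conn = sqlite3.connect("dysarthria_corpus.db")
conn.executescript(schema)
conn.commit()
```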
5. Data analysis with big data technology
Big data refers to data sets whose scale so far exceeds the capabilities of traditional database software tools for acquisition, storage, management and analysis; it is characterized by massive volume, fast data flow, diverse data types and low value density. The strategic significance of big data technology lies not in holding vast amounts of data but in the professional processing of the meaningful data it contains. In other words, if big data is compared to an industry, the key to profitability is to improve the "processing capability" of the data and to realize its "added value" through that processing. In lexicon construction, the value of big data technology lies in evaluating the quality of the phonetic elements in the lexicon through targeted analysis and study of the data, thereby making the lexicon more complete.
The lexicon is shared through a network platform to facilitate testing by different groups of people, and at the same time more data samples are obtained to enrich the speech library. In the future, more targeted lexicons for patients with motor dysarthria can be built for different regions and dialects, providing richer and more reliable data samples for the subsequent automatic classification and grading of the condition.
As shown in Figure 3, a speech classification mechanism based on the Hadoop platform is proposed. First, a large amount of speech data is collected, a speech database is constructed, and effective features for speech classification are extracted; then, on the Hadoop platform, the Map function is used to subdivide the big-data speech classification problem, the sub-problems are solved in parallel and in a distributed manner across multiple nodes, and the corresponding speech classification results are obtained; finally, the Reduce function is used to combine the speech classification results of the sub-problems, so as to meet the online requirements of big-data speech classification.
As shown in Figure 4, the specific steps of the big-data speech classification flow based on the Hadoop platform are as follows:
(1) The client submits a speech classification task to the JobTracker of the Hadoop platform, and the JobTracker copies the speech feature data to the local distributed file system;
(2) The speech classification task is initialized and placed in the task queue, and the JobTracker assigns tasks to the corresponding nodes, i.e., the TaskTrackers, according to the processing capability of each node;
(3) Each TaskTracker, according to its assigned task, uses a support vector machine to fit the relationship between the speech features to be classified and the speech feature library, and obtains the corresponding speech category;
(4) The corresponding speech category is saved to the local disk as a key/value pair;
(5) Intermediate speech classification results with the same key/value are merged, the merged results are passed to Reduce for processing to obtain the speech classification results, and the results are written to the distributed file system;
(6) The JobTracker clears the task status, and the user obtains the speech classification results from the distributed file system.
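As a rough illustration of how steps (3)-(5) might look in code, the sketch below shows a Hadoop-Streaming-style script: the mapper classifies each utterance's feature vector with a pre-trained support vector machine and emits category/utterance pairs, and the reducer merges the records that share the same key. The tab-separated input format, the pickled model file and the use of scikit-learn's SVC are assumptions, not details specified in the patent.

```python
#!/usr/bin/env python3
"""Hadoop Streaming sketch: run as `python svm_stream.py map` or `python svm_stream.py reduce`."""
import sys, pickle
from itertools import groupby
import numpy as np

def mapper():
    # stdin lines: "utterance_id<TAB>f1,f2,...,fn"   (assumed feature-file format)
    with open("svm_model.pkl", "rb") as fh:           # assumed pre-trained sklearn.svm.SVC
        svm = pickle.load(fh)
    for line in sys.stdin:
        utt_id, feats = line.rstrip("\n").split("\t")
        x = np.array([float(v) for v in feats.split(",")]).reshape(1, -1)
        label = svm.predict(x)[0]                     # step (3): classify one utterance
        print(f"{label}\t{utt_id}")                   # step (4): emit key = category, value = utterance

def reducer():
    # Hadoop sorts mapper output by key, so records of one category arrive together
    rows = (line.rstrip("\n").split("\t") for line in sys.stdin)
    for label, group in groupby(rows, key=lambda kv: kv[0]):
        utts = [u for _, u in group]
        print(f"{label}\t{len(utts)}\t{','.join(utts)}")  # step (5): merged result per category

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

With Hadoop Streaming the same file would be supplied as both the mapper and the reducer command; the JobTracker/TaskTracker scheduling described in steps (1), (2) and (6) is handled by the platform itself.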
The invention has been described above by way of example with reference to the accompanying drawings. Obviously, the implementation of the invention is not limited to the manner described above; any improvement that adopts the method concept and technical solution of the invention, or any direct application of that concept and technical solution to other situations without improvement, falls within the scope of protection of the invention.
Claims (4)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010395558X | 2020-05-12 | ||
CN202010395558 | 2020-05-12 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112599119A CN112599119A (en) | 2021-04-02 |
CN112599119B true CN112599119B (en) | 2023-12-15 |
Family
ID=75200795
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011546906.5A Active CN112599119B (en) | 2020-05-12 | 2020-12-24 | Method for establishing and analyzing mobility dysarthria voice library in big data background |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112599119B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN113450777A (en) * | 2021-05-28 | 2021-09-28 | East China Normal University | End-to-end sound barrier voice recognition method based on comparison learning |
- CN113889096A (en) * | 2021-09-16 | 2022-01-04 | Beijing Jietong Huasheng Technology Co., Ltd. | Method and device for analyzing sound library training data |
- CN114566248B (en) * | 2022-01-18 | 2025-04-01 | East China Normal University | A method for intelligently pushing Chinese pronunciation training programs |
- CN114999468A (en) * | 2022-05-20 | 2022-09-02 | Hebei University of Science and Technology | Speech recognition algorithm and device for aphasia patients based on speech features |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6067520A (en) * | 1995-12-29 | 2000-05-23 | Lee And Li | System and method of recognizing continuous mandarin speech utilizing chinese hidden markou models |
CN102799684A (en) * | 2012-07-27 | 2012-11-28 | 成都索贝数码科技股份有限公司 | Video-audio file catalogue labeling, metadata storage indexing and searching method |
CN103405217A (en) * | 2013-07-08 | 2013-11-27 | 上海昭鸣投资管理有限责任公司 | System and method for multi-dimensional measurement of dysarthria based on real-time articulation modeling technology |
CN105740397A (en) * | 2016-01-28 | 2016-07-06 | 广州市讯飞樽鸿信息技术有限公司 | Big data parallel operation-based voice mail business data analysis method |
CN106128450A (en) * | 2016-08-31 | 2016-11-16 | 西北师范大学 | The bilingual method across language voice conversion and system thereof hidden in a kind of Chinese |
CN110111780A (en) * | 2018-01-31 | 2019-08-09 | 阿里巴巴集团控股有限公司 | Data processing method and server |
-
2020
- 2020-12-24 CN CN202011546906.5A patent/CN112599119B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6067520A (en) * | 1995-12-29 | 2000-05-23 | Lee And Li | System and method of recognizing continuous mandarin speech utilizing chinese hidden markou models |
CN102799684A (en) * | 2012-07-27 | 2012-11-28 | 成都索贝数码科技股份有限公司 | Video-audio file catalogue labeling, metadata storage indexing and searching method |
CN103405217A (en) * | 2013-07-08 | 2013-11-27 | 上海昭鸣投资管理有限责任公司 | System and method for multi-dimensional measurement of dysarthria based on real-time articulation modeling technology |
CN105740397A (en) * | 2016-01-28 | 2016-07-06 | 广州市讯飞樽鸿信息技术有限公司 | Big data parallel operation-based voice mail business data analysis method |
CN106128450A (en) * | 2016-08-31 | 2016-11-16 | 西北师范大学 | The bilingual method across language voice conversion and system thereof hidden in a kind of Chinese |
CN110111780A (en) * | 2018-01-31 | 2019-08-09 | 阿里巴巴集团控股有限公司 | Data processing method and server |
Also Published As
Publication number | Publication date |
---|---|
CN112599119A (en) | 2021-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112599119B (en) | Method for establishing and analyzing mobility dysarthria voice library in big data background | |
Myles et al. | Using information technology to support empirical SLA research | |
CN109841231B (en) | Early AD (AD) speech auxiliary screening system for Chinese mandarin | |
Munson et al. | The phonetics of sex and gender | |
CN119301697A (en) | Multimodal system and method for mental health assessment based on speech with emotional stimulation | |
French et al. | Forensic speech science | |
WO2021147363A1 (en) | Text-based major depressive disorder recognition method | |
CN109727608A (en) | A kind of ill voice appraisal procedure based on Chinese speech | |
CN107456208A (en) | The verbal language dysfunction assessment system and method for Multimodal interaction | |
Liu et al. | AI recognition method of pronunciation errors in oral English speech with the help of big data for personalized learning | |
Procter et al. | Cultural competency in voice evaluation: considerations of normative standards for sociolinguistically diverse voices | |
CN113571088A (en) | Difficult airway assessment method and device based on deep learning voiceprint recognition | |
Coro et al. | Automatic detection of potentially ineffective verbal communication for training through simulation in neonatology | |
Ali et al. | Development and analysis of speech emotion corpus using prosodic features for cross linguistics | |
CN114916921A (en) | A rapid speech cognitive assessment method and device | |
CN114999468A (en) | Speech recognition algorithm and device for aphasia patients based on speech features | |
Brown | Phonetic cues and the perception of gender and sexual orientation | |
CN111583914B (en) | Big data voice classification method based on Hadoop platform | |
Lin et al. | Classifying speech intelligibility levels of children in two continuous speech styles | |
Alsulaiman | Arabic fluency assessment: Procedures for assessing stuttering in arabic preschool children | |
Lai et al. | Intonation and voice quality of Northern Appalachian English: A first look | |
Nunes | Whispered speech segmentation based on Deep Learning | |
Priyadharshini et al. | Natural language processing (nlp) based phonetic insights for improving voice recognition and synthesis | |
He et al. | Automatic detection of consonant omission in cleft palate speech | |
He et al. | Research on Teacher Classroom Teaching Speech Emotion Recognition Based on LSTM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |