
CN115862595B - Intelligent voice control method and system based on big data and readable storage medium - Google Patents


Info

Publication number
CN115862595B
CN115862595B (application number CN202310174868.2A)
Authority
CN
China
Prior art keywords
voice
vehicle
user
data
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310174868.2A
Other languages
Chinese (zh)
Other versions
CN115862595A
Inventor
朱磊 (Zhu Lei)
姚国庆 (Yao Guoqing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhitang Technology Beijing Co ltd
Original Assignee
Zhitang Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhitang Technology Beijing Co ltd filed Critical Zhitang Technology Beijing Co ltd
Priority to CN202310174868.2A
Publication of CN115862595A
Application granted
Publication of CN115862595B
Legal status: Active
Anticipated expiration

Landscapes

  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

The invention discloses an intelligent voice control method and system based on big data, and a readable storage medium. The method comprises the following steps: acquiring voice data of a user side to identify the current user attribute, and determining a target audio to play based on the user attribute, so that a vehicle performs voice interaction with the user side; in the process of voice interaction between the vehicle and the user side, identifying the current driving state of the vehicle and switching the audio response mechanism based on the driving state; and, in the process of voice interaction between the vehicle and the user side, carrying out language attribution recognition based on the voice data so as to switch and broadcast the voice version of the target audio. The invention can adjust the voice interaction response mechanism according to different driving states of the vehicle, so that occupants trapped in the vehicle can be kept conscious when the vehicle is in danger; in addition, voice interaction can be carried out in the dialect corresponding to the voices heard in the vehicle, improving user experience and demonstrating the superiority of big data processing.

Description

Intelligent voice control method and system based on big data and readable storage medium
Technical Field
The invention relates to the technical field of big data and intelligent control, in particular to an intelligent voice control method, an intelligent voice control system and a readable storage medium based on big data.
Background
With the development of the times and the progress of science and technology, the automobile has entered thousands of households and brought great convenience to people's daily lives; with the continuous development of new-energy vehicles, the automotive field still has broad prospects for development.
With the rise of self-driving travel, automobiles have also become places for people to rest. At present, however, when an automobile performs voice interaction with a user, it can only interact in a standardized way using factory-default audio, and cannot be adjusted in a targeted way for different users.
Disclosure of Invention
The invention aims to provide an intelligent voice control method and system based on big data, and a readable storage medium, which can adjust the voice interaction response mechanism according to different driving states of a vehicle, so that occupants trapped in the vehicle can be kept conscious when the vehicle is in danger; in addition, voice interaction can be carried out in the dialect corresponding to the voices heard in the vehicle, improving user experience and demonstrating the superiority of big data processing.
The first aspect of the invention provides an intelligent voice control method based on big data, which comprises the following steps:
acquiring voice data of a user side to identify current user attributes, and determining target audio to play based on the user attributes so as to enable a vehicle to perform voice interaction with the user side;
in the process of voice interaction between the vehicle and the user side, the current driving state of the vehicle is identified, and an audio response mechanism is switched based on the driving state;
in the process of voice interaction between the vehicle and the user side, carrying out language attribution recognition based on the voice data so as to switch and broadcast the voice version of the target audio;
and when the vehicle and the user side do not perform voice interaction, performing volume value and speech-rate analysis based on the voice data of the user side, so as to switch and broadcast the voice version of the target audio.
In this solution, acquiring the voice data of the user side to identify the current user attribute, so as to determine the target audio for broadcasting based on the user attribute, specifically includes:
voiceprint recognition is carried out based on the voice data so as to obtain user attributes corresponding to the current user side;
and determining the target audio preset by the current user side based on the user attribute and the setting data, wherein the target audio corresponds to at least one user attribute.
In this solution, the determining, based on the user attribute matching setting data, the target audio preset by the current user side specifically includes:
matching an attribute database in the setting data based on the user attributes, wherein,
when the attribute database is used for successfully identifying the attribute of the current user, extracting corresponding audio to determine target audio preset by the current user side, wherein the target audio comprises user setting audio or factory standard audio.
In this scheme, identifying the current driving state of the vehicle in the process of voice interaction between the vehicle and the user side, and switching the audio response mechanism based on the driving state, specifically includes:
identifying a driving state of the vehicle, the driving state including at least a vehicle running state, a vehicle collision state, and a vehicle parking state, wherein,
switching a first audio response mechanism based on the vehicle running state, wherein the first audio response mechanism is a passive response mechanism;
switching a second audio response mechanism based on the vehicle collision status, the second audio response mechanism being an active response mechanism;
and switching a third audio response mechanism based on the vehicle parking state, wherein the third audio response mechanism is an active and passive combined response mechanism.
In this scheme, carrying out language attribution recognition based on the voice data in the process of voice interaction between the vehicle and the user side, so as to switch and broadcast the voice version of the target audio, specifically includes:
the voice data is acquired to carry out voiceprint parameter identification so as to identify the language attribution, wherein,
if the identification is successful, identifying a target dialect based on the language attribution, and updating the voice version based on the target dialect so as to switch the playing sound of the current target audio;
if the recognition fails, the current positioning data of the vehicle is obtained, and the target dialect is recognized based on the positioning data, so that the voice version is updated based on the target dialect to switch the playing sound of the current target audio.
In this scheme, performing volume value and speech-rate analysis based on the voice data of the user side when the vehicle and the user side do not perform voice interaction, so as to switch and broadcast the voice version of the target audio, specifically includes:
extracting voice data of the user side in the vehicle to perform volume threshold analysis to obtain a highest volume value;
extracting voice data of the user side in the vehicle to perform speech-rate threshold analysis to obtain an average speech rate;
and analyzing based on the highest volume value and the average speech rate, wherein when the highest volume value is greater than or equal to a preset volume threshold and the average speech rate is greater than or equal to a preset speech-rate threshold, the voice version of the target audio is switched and broadcast as the emotion-adjustment voice version.
The second aspect of the present invention also provides an intelligent voice control system based on big data, comprising a memory and a processor, wherein the memory comprises an intelligent voice control method program based on big data, and the intelligent voice control method program based on big data realizes the following steps when being executed by the processor:
acquiring voice data of a user side to identify current user attributes, and determining target audio to play based on the user attributes so as to enable a vehicle to perform voice interaction with the user side;
in the process of voice interaction between the vehicle and the user side, the current driving state of the vehicle is identified, and an audio response mechanism is switched based on the driving state;
in the process of voice interaction between the vehicle and the user side, carrying out language attribution recognition based on the voice data so as to switch and broadcast the voice version of the target audio;
and when the vehicle and the user side do not perform voice interaction, performing volume value and speech-rate analysis based on the voice data of the user side, so as to switch and broadcast the voice version of the target audio.
In this solution, acquiring the voice data of the user side to identify the current user attribute, so as to determine the target audio for broadcasting based on the user attribute, specifically includes:
voiceprint recognition is carried out based on the voice data so as to obtain user attributes corresponding to the current user side;
and determining the target audio preset by the current user side based on the user attribute and the setting data, wherein the target audio corresponds to at least one user attribute.
In this solution, the determining, based on the user attribute matching setting data, the target audio preset by the current user side specifically includes:
matching an attribute database in the setting data based on the user attributes, wherein,
when the attribute database is used for successfully identifying the attribute of the current user, extracting corresponding audio to determine target audio preset by the current user side, wherein the target audio comprises user setting audio or factory standard audio.
In this scheme, identifying the current driving state of the vehicle in the process of voice interaction between the vehicle and the user side, and switching the audio response mechanism based on the driving state, specifically includes:
identifying a driving state of the vehicle, the driving state including at least a vehicle running state, a vehicle collision state, and a vehicle parking state, wherein,
switching a first audio response mechanism based on the vehicle running state, wherein the first audio response mechanism is a passive response mechanism;
switching a second audio response mechanism based on the vehicle collision status, the second audio response mechanism being an active response mechanism;
and switching a third audio response mechanism based on the vehicle parking state, wherein the third audio response mechanism is an active and passive combined response mechanism.
In this scheme, carrying out language attribution recognition based on the voice data in the process of voice interaction between the vehicle and the user side, so as to switch and broadcast the voice version of the target audio, specifically includes:
the voice data is acquired to carry out voiceprint parameter identification so as to identify the language attribution, wherein,
if the identification is successful, identifying a target dialect based on the language attribution, and updating the voice version based on the target dialect so as to switch the playing sound of the current target audio;
if the recognition fails, the current positioning data of the vehicle is obtained, and the target dialect is recognized based on the positioning data, so that the voice version is updated based on the target dialect to switch the playing sound of the current target audio.
In this scheme, performing volume value and speech-rate analysis based on the voice data of the user side when the vehicle and the user side do not perform voice interaction, so as to switch and broadcast the voice version of the target audio, specifically includes:
extracting voice data of the user side in the vehicle to perform volume threshold analysis to obtain a highest volume value;
extracting voice data of the user side in the vehicle to perform speech-rate threshold analysis to obtain an average speech rate;
and analyzing based on the highest volume value and the average speech rate, wherein when the highest volume value is greater than or equal to a preset volume threshold and the average speech rate is greater than or equal to a preset speech-rate threshold, the voice version of the target audio is switched and broadcast as the emotion-adjustment voice version.
A third aspect of the present invention provides a computer-readable storage medium in which a big-data-based intelligent voice control method program is embodied; when executed by a processor, the program implements the steps of the big-data-based intelligent voice control method described in any of the above.
According to the intelligent voice control method and system based on big data and the readable storage medium, corresponding audio can be played in response to different interaction data so as to better fit the user's driving experience. The voice interaction response mechanism can be adjusted according to different driving states of the vehicle, so that occupants trapped in the vehicle can be kept conscious when the vehicle is in danger. Voice interaction can be carried out in the dialect corresponding to the voices heard in the vehicle, improving user experience and demonstrating the superiority of big data processing. Moreover, when the emotions of the people in the vehicle are recognized as running high, the voice version is adjusted to calm the occupants, reducing the occurrence of dangerous driving and road rage.
Drawings
FIG. 1 shows a flow chart of an intelligent voice control method based on big data of the present invention;
FIG. 2 shows a block diagram of an intelligent voice control system based on big data in accordance with the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, in the case of no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
Fig. 1 shows a flowchart of an intelligent voice control method based on big data.
As shown in fig. 1, the present application discloses an intelligent voice control method based on big data, which includes the following steps:
S102, acquiring voice data of a user side to identify the current user attribute, and determining a target audio to play based on the user attribute, so that the vehicle performs voice interaction with the user side;
S104, recognizing the current driving state of the vehicle in the process of voice interaction between the vehicle and the user side, and switching the audio response mechanism based on the driving state;
S106, in the process of voice interaction between the vehicle and the user side, carrying out language attribution recognition based on the voice data so as to switch and broadcast the voice version of the target audio;
S108, when the vehicle and the user side do not perform voice interaction, performing volume value and speech-rate analysis based on the voice data of the user side, so as to switch and broadcast the voice version of the target audio.
It should be noted that, in this embodiment, voice data of the user side is first acquired for voiceprint recognition to obtain the corresponding user attribute, so that different target audios can be retrieved for different user attributes during voice interaction. For example, if user A has set a preferred target audio in the vehicle settings, then when user A is recognized, voice interaction is carried out using the audio set by user A, improving user experience. Further, in the process of voice interaction between the vehicle and the user side, the current driving state of the vehicle is identified and the audio response mechanism is switched based on the driving state. The audio response mechanisms include a first, a second, and a third audio response mechanism, which differ in their active and passive response settings, so that the appropriate mechanism is switched in automatically according to the driving state. For example, the first audio response mechanism is a passive response mechanism: while the vehicle is running, the vehicle performs voice interaction only after user A speaks the wake phrase, avoiding distraction; the second audio response mechanism is an active response mechanism: when the vehicle has an accident, the vehicle speaks first, so as to keep a trapped occupant conscious. In addition, in the process of voice interaction, language attribution recognition is carried out on the voice data so as to switch and broadcast the voice version of the target audio. For example, when user A drives to a country lane and the navigation cannot describe the road clearly, a local villager may be asked for directions and may give voice guidance while riding in the vehicle; however, some villagers cannot speak or understand Mandarin, so voice recognition is carried out on the villager's speech and the vehicle's head unit switches and broadcasts the voice version of the target audio in the corresponding dialect, adapting to different language requirements. Finally, when the vehicle and the user side are not in voice interaction, volume value and speech-rate analysis is performed on the voice data of the user side: if the volume and speech rate indicate that the emotions of the people in the vehicle are running high (for example, during an argument), the broadcast voice version is switched to an emotion-adjustment version to calm the occupants and reduce the problems caused by dangerous driving.
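The four steps S102–S108 above can be sketched as a single dispatch function. This is a minimal illustrative sketch only; all names, keys, thresholds, and state labels are hypothetical and are not specified by the patent.

```python
# Hypothetical sketch of the S102-S108 control flow. The voice_data keys,
# thresholds, and mechanism labels are assumptions for illustration.

def control_step(voice_data, interacting, driving_state, profiles):
    """Run one pass of the big-data voice control loop."""
    # S102: identify the user attribute from the voiceprint, pick target audio
    attribute = profiles.get(voice_data["voiceprint"], "factory")
    actions = {"target_audio": f"{attribute}_audio"}

    if interacting:
        # S104: switch the audio response mechanism by driving state
        mechanisms = {"running": "passive", "collision": "active",
                      "parked": "active_passive"}
        actions["mechanism"] = mechanisms[driving_state]
        # S106: pick a dialect voice version from the language attribution
        actions["voice_version"] = voice_data.get("dialect", "mandarin")
    else:
        # S108: monitor volume and speech rate for emotional state
        agitated = (voice_data["volume"] >= 70 and voice_data["rate"] >= 5.0)
        actions["voice_version"] = "soothing" if agitated else "mandarin"
    return actions
```

Used this way, a collision while user A is interacting yields the active mechanism and user A's preset audio, while loud, fast speech outside an interaction yields the soothing voice version.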
According to an embodiment of the present invention, the method for acquiring voice data of a user terminal to identify a current user attribute, so as to determine a target audio based on the user attribute for broadcasting specifically includes:
voiceprint recognition is carried out based on the voice data so as to obtain user attributes corresponding to the current user side;
and determining target audio preset by the current user side based on the user attribute matching setting data, wherein the target audio is at least for one user attribute.
It should be noted that, in this embodiment, voiceprint recognition is performed based on the voice data to determine whether the current voiceprint belongs to a historical driver. If the voiceprint of user A belongs to a historical driver, the user attribute corresponding to user A can be recognized, so that the target audio set by user A can be obtained by matching based on that user attribute. If user A is a first-time driver, the output target audio is the factory standard audio; likewise, if the user attribute corresponding to user A is not matched to any audio set by user A, the output target audio is also the factory standard audio.
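The lookup with its factory-default fallback can be sketched as follows; the database shape and function name are hypothetical illustrations, not the patent's data structures.

```python
def select_target_audio(voiceprint, attribute_db):
    """Return the preset audio for a recognized voiceprint, or the
    factory-standard audio for first-time drivers / unset preferences."""
    attribute = attribute_db.get(voiceprint)      # historical driver lookup
    if attribute is None:
        return "factory_standard_audio"           # first-time driver
    # historical driver, but may not have set a preferred audio
    return attribute.get("preset_audio") or "factory_standard_audio"
```

Both failure modes described above (unknown voiceprint, and known voiceprint with no preset) fall back to the same factory audio.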
According to an embodiment of the present invention, the determining the target audio preset at the current user terminal based on the user attribute matching setting data specifically includes:
matching an attribute database in the setting data based on the user attributes, wherein,
when the attribute database is used for successfully identifying the attribute of the current user, extracting corresponding audio to determine target audio preset by the current user side, wherein the target audio comprises user setting audio or factory standard audio.
It should be noted that, in this embodiment, as described in the foregoing embodiment, when matching the attribute database based on the user attribute succeeds, it indicates that user A is a historical driver, and the target audio can be obtained from the audio set by user A; when matching the attribute database fails, it indicates that user A is a first-time driver, and the output target audio is the factory standard audio.
According to the embodiment of the invention, in the process of voice interaction between the vehicle and the user side, the current driving state of the vehicle is identified, and the audio response mechanism is switched based on the driving state, which specifically comprises the following steps:
identifying a driving state of the vehicle, the driving state including at least a vehicle running state, a vehicle collision state, and a vehicle parking state, wherein,
switching a first audio response mechanism based on the vehicle running state, wherein the first audio response mechanism is a passive response mechanism;
switching a second audio response mechanism based on the vehicle collision state, the second audio response mechanism being an active response mechanism;
and switching a third audio response mechanism based on the vehicle parking state, wherein the third audio response mechanism is an active and passive combined response mechanism.
It should be noted that, in this embodiment, the vehicle should play a good role in protecting the driver and passengers. For example, when a vehicle collides, collision information is conventionally sent out at the first moment to inform the vehicle brand's service side that the vehicle has had an accident and needs timely rescue, and some brands have customer service personnel actively call the occupants; however, because of the relaying of information and the scheduling of customer service personnel, the efficiency of this traditional help-seeking mode is low. In this embodiment, when the driving state is recognized as the vehicle collision state, the audio response mechanism of voice interaction is switched to the second audio response mechanism, which is an active response mechanism, so that the target audio is used to call out to the driver at the first moment. For example, when user A is trapped alone in a single-vehicle accident, the call-out can be based on the audio set by user A, including audio of user A's relatives simulated by AI, so as to better stimulate user A's spirit and keep user A awake while awaiting rescue. The first audio response mechanism is a passive response mechanism: voice interaction is performed only when the driver speaks the wake phrase. To ensure the completeness of the scheme, the vehicle parking state should also be described: in this embodiment it is a parked state entered from a normal running state, for example when user A drives the vehicle and then parks. In the parking state, the vehicle can respond to the user's voice interaction requests and can also actively offer suggestions to the user based on big-data screening, so as to improve the user's driving experience (see the description below for specific suggestions).
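The driving-state-to-mechanism mapping described above can be sketched as a small table; the enum and label names below are hypothetical illustrations.

```python
from enum import Enum

class DrivingState(Enum):
    RUNNING = "running"
    COLLISION = "collision"
    PARKED = "parked"

# Mapping per the embodiment: running -> passive (respond only to a wake
# phrase), collision -> active (call out first to keep occupants conscious),
# parked -> combined active/passive.
RESPONSE_MECHANISM = {
    DrivingState.RUNNING: "passive",
    DrivingState.COLLISION: "active",
    DrivingState.PARKED: "active_passive",
}

def switch_mechanism(state: DrivingState) -> str:
    """Return the audio response mechanism for the identified driving state."""
    return RESPONSE_MECHANISM[state]
```

A table-driven switch keeps the mapping auditable and makes adding further driving states a one-line change.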
According to an embodiment of the present invention, in the process of performing voice interaction between a vehicle and a user terminal, language attribution recognition is performed based on the voice data so as to switch and broadcast a voice version of the target audio, which specifically includes:
the voice data is acquired to carry out voiceprint parameter identification so as to identify the language attribution, wherein,
if the identification is successful, identifying a target dialect based on the language attribution, and updating the voice version based on the target dialect so as to switch the playing sound of the current target audio;
if the recognition fails, the current positioning data of the vehicle is obtained, and the target dialect is recognized based on the positioning data, so that the voice version is updated based on the target dialect to switch the playing sound of the current target audio.
It should be noted that, in this embodiment, when a dialect needs to be identified, the corresponding language attribution can be recognized based on the user's voice data, so that the corresponding target dialect can be obtained from the language attribution and the voice version switched to change the playing sound of the current target audio. For example, while user A is driving, the voice data of a villager, user B, is recognized; if the language attribution is successfully identified as the Hakka region, Hakka is taken as the target dialect, the voice version is updated, and the playing sound of the current target audio is switched, realizing dialect communication between the vehicle and user B. For instance, user A may not speak Hakka, so during navigation the vehicle can query user B directly to identify the navigation destination. Further, because pronunciation may differ between sub-regions of the same dialect, recognition of user B's language attribution may fail; in that case, the current positioning data of the vehicle is obtained and the target dialect is identified based on the positioning data, so that the voice version is updated to switch the playing sound of the current target audio, and several candidate voices can also be offered for selection so that the chosen voice suits the region where user B is located.
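The two-stage identification (voiceprint first, positioning fallback) can be sketched as below; representing a failed recognition as `None` and the region-to-dialect table are illustrative assumptions.

```python
def pick_dialect(voiceprint_dialect, gps_region, region_dialects):
    """Identify the target dialect from voiceprint parameters; fall back
    to the vehicle's positioning data when recognition fails (None)."""
    if voiceprint_dialect is not None:        # voiceprint recognition succeeded
        return voiceprint_dialect
    # recognition failed: use the dialect mapped to the current GPS region,
    # defaulting to Mandarin if the region has no mapping
    return region_dialects.get(gps_region, "mandarin")
```

The positioning fallback is what makes the scheme robust to sub-regional pronunciation differences within one dialect family.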
According to an embodiment of the present invention, when the vehicle and the user terminal do not perform voice interaction, the voice value and the speech rate analysis are performed based on the voice data of the user terminal, so as to switch and broadcast the voice version of the target audio, which specifically includes:
extracting voice data of a user side in the vehicle to perform volume value threshold analysis to obtain a volume highest value;
extracting voice data of a user side in the vehicle to perform speech speed threshold analysis to obtain a speech speed average value;
and analyzing based on the highest volume value and the average speech speed value, wherein when the highest volume value is greater than or equal to a preset volume threshold and the average speech speed value is greater than or equal to the preset speech speed threshold, switching and broadcasting the voice version of the target audio to be the emotion adjustment voice version.
It should be noted that, in this embodiment, when the vehicle and the user side are not in voice interaction, the voice data of the user side can be acquired for volume value and speech-rate analysis, so as to recognize that the emotions of the driver and passengers are running high and switch the broadcast voice version to one that soothes emotion, reducing the problem of dangerous driving. The judgment condition requires that both the highest volume value and the average speech rate be satisfied: specifically, when the highest volume value is greater than or equal to the preset volume threshold and the average speech rate is greater than or equal to the preset speech-rate threshold, it is determined that the current occupants are in a highly emotional state, and in subsequent voice interaction the voice version of the target audio is switched to the emotion-adjustment voice version.
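The dual-threshold condition can be sketched as follows; the sample format and the default threshold values are hypothetical, since the patent leaves the presets unspecified.

```python
def is_agitated(samples, volume_threshold=75.0, rate_threshold=5.0):
    """Check both conditions from the claim: the peak volume AND the mean
    speech rate must each reach their preset thresholds."""
    peak_volume = max(s["volume_db"] for s in samples)
    mean_rate = sum(s["words_per_sec"] for s in samples) / len(samples)
    return peak_volume >= volume_threshold and mean_rate >= rate_threshold
```

Requiring both signals, rather than either one, avoids misclassifying a single loud exclamation or naturally fast speech as agitation.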
It should be noted that, when the target dialect is identified based on the positioning data, a regional language selection can be performed so as to provide multiple voices for the user to select, which specifically includes: analyzing the positioning data together with population big data, and screening the corresponding dialects for output based on population ranking.
It should be noted that, since multiple ethnic groups may live in the same region, the population numbers obtained from demographic data analysis can be used as the ranking of target dialects, so that the output target dialects provide multiple voices for the user to select, adapting to different users.
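The population-based ordering is a straightforward sort; the function name and population figures below are illustrative assumptions.

```python
def rank_dialects(region_population):
    """Order candidate dialects for a region by speaker population,
    largest first, so the user can pick among several voices."""
    return [dialect for dialect, _ in sorted(region_population.items(),
                                             key=lambda kv: kv[1],
                                             reverse=True)]
```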
It should be noted that actively offering suggestions to the user based on big-data screening specifically includes:
outputting a recommendation based on the time data in combination with the positioning data of the current vehicle, wherein,
outputting dining voice to actively suggest based on the positioning data when the time data is in the dining time period;
and outputting playing voice to actively suggest based on the positioning data when the time data is not in the dining time period.
It should be noted that, in this embodiment, after the vehicle is parked, active advice may be performed by combining time data and positioning data, for example, when the time data is within a dining time period, active advice is performed by outputting dining voice based on the positioning data, and advice content such as "dingdong, dining time is up, and the following meal is recommended to you; when the time data is not in the dining time period, playing voice is output to conduct active advice based on the positioning data, advice content such as dingdong is displayed, people can see which local bars can play nearby, and different information about the periphery of the current position can be provided for a user to promote user experience through the active advice in a parking state.
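The time-and-position suggestion logic above can be sketched as follows; the dining windows and suggestion wording are hypothetical, since the patent does not fix them:

```python
from datetime import time

# Hypothetical dining windows; the patent leaves the concrete periods unspecified.
DINING_PERIODS = [(time(11, 0), time(13, 0)), (time(17, 30), time(19, 30))]

def active_suggestion(now, nearby_restaurants, nearby_attractions):
    """Pick the suggestion category from the current time, then fill it
    from positioning-based points of interest."""
    in_dining = any(start <= now <= end for start, end in DINING_PERIODS)
    if in_dining:
        return "Ding-dong, it is dinner time. Recommended: " + ", ".join(nearby_restaurants)
    return "Ding-dong, nearby places to visit: " + ", ".join(nearby_attractions)
```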
It should be noted that, after switching to the third audio response mechanism, the method further includes identifying a threshold distribution of the current parking duration, which specifically includes:
identifying a threshold distribution of the current parking duration, the threshold distribution comprising a first threshold range, a second threshold range and a third threshold point, wherein,
when the parking duration is within the first threshold range and reaches a first threshold point, a first active suggestion is started;
when the parking duration is within the second threshold range and reaches a second threshold point, a second active suggestion is started, wherein the second active suggestion updates its content relative to the first active suggestion;
and when the parking duration reaches the third threshold point, a third active suggestion is started.
It should be noted that, in this embodiment, when the vehicle is parked, the active suggestion may be updated based on the parking duration. Threshold ranges correspond to the parking duration, for example a first threshold range (0 min, 10 min) and a second threshold range [10 min, 30 min], with a first threshold point of 7 min, a second threshold point of 22 min and a third threshold point of 36 min. As time passes after the vehicle is parked, when the parking duration reaches 7 min, the first active suggestion is started, for example "Ding-dong, dear owner, is there anything I can help with?"; when the parking duration reaches 22 min, the second active suggestion is started, for example "Ding-dong, dear owner, is there anything I can help with? Nearby restaurants, scenic spots and parks are listed for reference"; and when the parking duration reaches 36 min, the third active suggestion is started, for example "Ding-dong, dear owner, please respond immediately". It should be explained that the threshold points can be set in advance, and may be adjusted or disabled according to the user's settings.
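A minimal sketch of the parking-duration threshold logic, using the example threshold points from the text (7, 22 and 36 minutes); the function and constant names are hypothetical:

```python
# Threshold points from the worked example: the first lies in (0, 10) min,
# the second in [10, 30) min, and the third is a standalone point.
THRESHOLD_POINTS = [(7, "first"), (22, "second"), (36, "third")]

def due_suggestions(parking_minutes):
    """Return which active suggestions have been triggered so far,
    in the order their threshold points were reached."""
    return [name for point, name in THRESHOLD_POINTS if parking_minutes >= point]
```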
It is worth mentioning that, after the third active suggestion is started when the parking duration reaches the third threshold point, the method further includes:
judging whether feedback data is successfully acquired within a preset time period after the third active suggestion is played; wherein,
if the feedback data is not successfully acquired, a danger alarm is issued and an emergency contact is notified based on the user attributes;
and if the feedback data is successfully acquired, a danger value is judged based on the feedback data so as to decide whether to issue a danger alarm.
It should be noted that, in this embodiment, after the third active suggestion is played, feedback data of the user terminal is to be acquired within a preset time period, where the feedback data may be touch data or voice data. If the feedback data is not successfully acquired, a danger alarm is issued: the in-vehicle alarm is started to alert the driver and passengers, and, based on the user attributes, an emergency contact is notified that the current vehicle is in danger, such as the vehicle being lost or the occupants being at risk. If the feedback data is successfully acquired, a danger value is judged based on the feedback data to decide whether to alarm: when SOS touch data is acquired, the danger value indicates that the current driver or passenger is in a dangerous state; likewise, when the voice data contains preset danger words, such as "help" or "save me", it indicates that the current driver or passenger is in a dangerous state and an alarm is also issued.
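The feedback-evaluation branch above might be sketched as follows; the feedback representation and the danger-word list are hypothetical illustrations:

```python
# Hypothetical danger words; the text gives "help" / "save me" as examples.
DANGER_WORDS = {"help", "save me", "sos"}

def evaluate_feedback(feedback):
    """feedback is None when nothing was received in the preset window,
    otherwise a dict such as {"type": "touch", "value": "sos"} or
    {"type": "voice", "value": "please help me"}."""
    if feedback is None:
        # No response at all: alarm and notify the emergency contact.
        return "alarm_and_notify_emergency_contact"
    if feedback["type"] == "touch" and feedback["value"] == "sos":
        return "danger_alarm"
    if feedback["type"] == "voice" and any(w in feedback["value"].lower() for w in DANGER_WORDS):
        return "danger_alarm"
    return "no_alarm"
```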
FIG. 2 shows a block diagram of an intelligent voice control system based on big data in accordance with the present invention.
As shown in fig. 2, the invention discloses an intelligent voice control system based on big data, which comprises a memory and a processor, wherein the memory comprises an intelligent voice control method program based on big data, and the intelligent voice control method program based on big data realizes the following steps when being executed by the processor:
acquiring voice data of a user side to identify current user attributes, and determining target audio to play based on the user attributes so as to enable a vehicle to perform voice interaction with the user side;
in the process of voice interaction between the vehicle and the user side, the current driving state of the vehicle is identified, and an audio response mechanism is switched based on the driving state;
in the process of voice interaction between the vehicle and the user side, carrying out language attribution recognition based on the voice data so as to switch and broadcast the voice version of the target audio;
and when the vehicle and the user terminal are not engaged in voice interaction, performing volume value and speech rate analysis based on the voice data of the user terminal, so as to switch the voice version of the broadcast target audio.
It should be noted that, in this embodiment, voice data of the user terminal may first be acquired for voice recognition so as to obtain the corresponding user attributes, and different target audios are then obtained based on the different user attributes for voice interaction. For example, user A sets a preferred target audio in the vehicle settings; when user A is recognized, voice interaction is conducted using the audio set by user A, which improves the user experience. Further, during voice interaction between the vehicle and the user terminal, the current driving state of the vehicle is identified and the audio response mechanism is switched based on that driving state. The audio response mechanisms specifically include a first, a second and a third audio response mechanism, which differ in their active and passive response settings, so that the corresponding mechanism can be switched automatically according to the driving state of the vehicle. For example, the first audio response mechanism is a passive response mechanism: while the vehicle is running, voice interaction takes place only after user A speaks the wake-up phrase, so that the driver is not disturbed during normal driving. The second audio response mechanism is an active response mechanism: when the vehicle is in a collision state, the vehicle actively initiates voice interaction so as to keep a trapped occupant conscious while waiting for rescue. Further, during voice interaction between the vehicle and the user terminal, language attribution recognition is performed based on the voice data so as to switch the voice version of the broadcast target audio. For example, when user A drives onto a country lane and the navigation cannot clearly describe the road, a local villager may be asked for directions and may give voice guidance while riding in the vehicle; however, when the villager cannot speak or understand Mandarin, voice recognition can be performed on the villager's speech so that the vehicle head unit switches the voice version of the broadcast target audio, thereby adapting to different language requirements. In addition, when the vehicle and the user terminal are not engaged in voice interaction, volume value and speech rate analysis is performed based on the voice data of the user terminal so as to switch the voice version of the broadcast target audio; this means that even without active interaction, the volume and speech rate of the voice data can be recognized to detect that user A's emotion may be running high, and the voice version is then switched to a soothing one so as to reduce the risk of dangerous driving caused by heightened emotion.
According to an embodiment of the present invention, acquiring voice data of the user terminal to identify the current user attributes, so as to determine the target audio for broadcasting based on the user attributes, specifically includes:
performing voiceprint recognition based on the voice data so as to obtain the user attributes corresponding to the current user terminal;
and determining the target audio preset by the current user terminal based on matching the user attributes against setting data, wherein each target audio corresponds to at least one user attribute.
It should be noted that, in this embodiment, voiceprint recognition is performed based on the voice data to determine whether the current voiceprint belongs to a historical driver. If user A's voiceprint matches a historical driver, the user attributes corresponding to user A can be recognized, and the target audio set by user A can then be obtained by matching those attributes. If user A is driving the vehicle for the first time, the output target audio is the factory standard audio; likewise, if user A's attributes cannot be matched to a user-set target audio, the output target audio is also the factory standard audio.
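A minimal sketch of the voiceprint-to-target-audio matching, assuming the attribute database is a simple mapping from voiceprint identity to user settings (all identifiers are hypothetical):

```python
FACTORY_STANDARD_AUDIO = "factory_standard"

def determine_target_audio(voiceprint_id, attribute_database):
    """attribute_database maps known voiceprint ids to the audio each
    user has configured. First-time drivers, and users without a
    configured audio, fall back to the factory standard audio."""
    user_settings = attribute_database.get(voiceprint_id)
    if user_settings is None:          # first-time driver: no match in the database
        return FACTORY_STANDARD_AUDIO
    return user_settings.get("target_audio", FACTORY_STANDARD_AUDIO)
```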
According to an embodiment of the present invention, determining the target audio preset by the current user terminal based on matching the user attributes against the setting data specifically includes:
matching an attribute database in the setting data based on the user attributes, wherein,
when the current user attributes are successfully identified in the attribute database, the corresponding audio is extracted to determine the target audio preset by the current user terminal, the target audio comprising user-set audio or factory standard audio.
It should be noted that, in this embodiment, as described above, when matching the attribute database based on the user attributes succeeds, user A is a historical driver and the target audio can be obtained from the audio set by user A; when the matching fails, user A is a first-time driver and the output target audio is the factory standard audio.
According to an embodiment of the present invention, identifying the current driving state of the vehicle during voice interaction between the vehicle and the user terminal, and switching the audio response mechanism based on the driving state, specifically includes:
identifying the driving state of the vehicle, the driving state including at least a vehicle running state, a vehicle collision state and a vehicle parking state, wherein,
switching to a first audio response mechanism based on the vehicle running state, the first audio response mechanism being a passive response mechanism;
switching to a second audio response mechanism based on the vehicle collision state, the second audio response mechanism being an active response mechanism;
and switching to a third audio response mechanism based on the vehicle parking state, the third audio response mechanism being a combined active-passive response mechanism.
It should be noted that, in this embodiment, the vehicle plays an important role in protecting the driver and passengers. For example, when the vehicle collides, collision information is typically sent out immediately to inform the vehicle brand's service side that the current vehicle has had an accident and needs timely rescue; some vehicles actively route the information to customer-service personnel, who then contact the occupants by telephone. However, because of the information relay and personnel scheduling involved, this traditional way of calling for help is inefficient. In this embodiment, when the driving state is recognized as a vehicle collision state, the audio response mechanism of voice interaction is switched to the second audio response mechanism, which is an active response mechanism, so that the target audio calls out to the driver at the first moment. For example, when user A is trapped alone in a single-vehicle accident, the call can be made based on the audio set by user A, including AI-simulated voices of user A's relatives, so as to better rouse user A's spirit and keep user A awake while waiting to be rescued. The first audio response mechanism is a passive response mechanism: voice interaction takes place only when the driver speaks the wake-up phrase. For completeness of the scheme, the vehicle parking state is also covered; in this embodiment the parking state refers to a stop initiated from a normal running state. For example, when user A parks the vehicle, the user's voice interaction requests can still be answered in the parking state, and suggestions can be actively offered to the user based on big-data screening so as to improve the user's driving experience; see the following description for the specific suggestions.
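The mapping from driving state to response mechanism described above can be sketched as a simple lookup (the state names and mechanism labels here are hypothetical placeholders, not taken from the patent):

```python
RESPONSE_MECHANISMS = {
    "running":   "passive",          # first mechanism: responds to the wake-up phrase only
    "collision": "active",           # second mechanism: the vehicle speaks first
    "parked":    "active_passive",   # third mechanism: both combined
}

def switch_mechanism(driving_state):
    """Map the recognized driving state to its audio response mechanism."""
    try:
        return RESPONSE_MECHANISMS[driving_state]
    except KeyError:
        raise ValueError(f"unrecognized driving state: {driving_state}")
```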
According to an embodiment of the present invention, performing language attribution recognition based on the voice data during voice interaction between the vehicle and the user terminal, so as to switch the voice version of the broadcast target audio, specifically includes:
acquiring the voice data and performing voiceprint parameter identification so as to identify the language attribution, wherein,
if the identification succeeds, a target dialect is identified based on the language attribution, and the voice version is updated based on the target dialect so as to switch the playing sound of the current target audio;
and if the identification fails, the current positioning data of the vehicle is acquired and the target dialect is identified based on the positioning data, so that the voice version is updated based on the target dialect to switch the playing sound of the current target audio.
It should be noted that, in this embodiment, when a dialect is being identified, the corresponding language attribution may be identified based on the user's voice data, so that the corresponding target dialect can be obtained from that attribution and the voice version switched to change the playing sound of the current target audio. For example, while user A is driving, the voice data of passenger B is recognized and the language attribution is successfully identified as the Hakka-speaking region; the corresponding Hakka dialect is taken as the target dialect, the voice version is updated, and the playing sound of the current target audio is switched, so that dialect communication between the vehicle and the Hakka-speaking passenger B is achieved. For instance, user A may not speak Hakka, and during navigation passenger B can then be queried in Hakka to identify the navigation destination. Further, because the pronunciation of the same dialect may differ across regions, identification of passenger B's language may fail; in that case the current positioning data of the vehicle is acquired and the target dialect is identified based on the positioning data, so that the voice version is updated and the playing sound of the current target audio is switched, with several local dialects offered for the user to select where appropriate.
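The fallback from voiceprint-based dialect identification to positioning-based identification can be sketched as follows (all names are hypothetical):

```python
def choose_target_dialect(voiceprint_dialect, positioning_dialects):
    """voiceprint_dialect is the dialect recognized from the speaker's
    voiceprint parameters, or None when recognition failed;
    positioning_dialects are the location-based candidates (already
    ranked by the population-ranking step). Fall back to the
    location-based candidates only when voiceprint recognition fails."""
    if voiceprint_dialect is not None:
        return [voiceprint_dialect]
    return positioning_dialects
```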
According to an embodiment of the present invention, when the vehicle and the user terminal are not engaged in voice interaction, performing volume value and speech rate analysis based on the voice data of the user terminal, so as to switch the voice version of the broadcast target audio, specifically includes:
extracting the voice data of the user terminal in the vehicle for volume threshold analysis to obtain the highest volume value;
extracting the voice data of the user terminal in the vehicle for speech rate threshold analysis to obtain the average speech rate value;
and analyzing the highest volume value and the average speech rate value, wherein when the highest volume value is greater than or equal to a preset volume threshold and the average speech rate value is greater than or equal to a preset speech rate threshold, the voice version of the target audio is switched for broadcast to the emotion-adjustment voice version.
It should be noted that, in this embodiment, when the vehicle and the user terminal are not engaged in voice interaction, the voice data of the user terminal may still be acquired for volume and speech-rate analysis, so as to recognize that a driver or passenger is in a heightened emotional state and switch the broadcast voice version to one that soothes emotion, thereby reducing the risk of dangerous driving. Both judgment conditions must be satisfied: specifically, when the highest volume value is greater than or equal to the preset volume threshold and the average speech rate value is greater than or equal to the preset speech rate threshold, the current driver or passenger is determined to be in a heightened emotional state, and at the next voice interaction the voice version of the target audio is switched to the emotion-soothing voice version.
It should be noted that, when the target dialect is identified based on the positioning data, a regional language selection may be performed so as to provide multiple dialect voices for the user to choose from, which specifically includes: analyzing the positioning data together with population big data, and screening the corresponding dialects for output based on a population-size ranking.
It should be noted that, since multiple ethnic groups may live in the same region, the dialects may be ranked by population size according to the demographic big-data analysis, and the ranked target dialects are output so as to offer multiple voices for the user to select, thereby adapting to different users.
It should be noted that actively offering suggestions to the user based on big-data screening specifically includes:
outputting a suggestion based on the time data in combination with the positioning data of the current vehicle, wherein,
when the time data falls within a dining time period, outputting a dining voice to make an active suggestion based on the positioning data;
and when the time data does not fall within a dining time period, outputting a leisure voice to make an active suggestion based on the positioning data.
It should be noted that, in this embodiment, after the vehicle is parked, active suggestions may be made by combining the time data and the positioning data. For example, when the time data falls within a dining time period, a dining voice is output based on the positioning data, with content such as "Ding-dong, it is dinner time; the following restaurants are recommended to you"; when the time data does not fall within a dining time period, a leisure voice is output based on the positioning data, with content such as "Ding-dong, let us see which nearby places are worth visiting". Through such active suggestions in the parking state, different information about the surroundings of the current position can be provided to the user, improving the user experience.
It should be noted that, after switching to the third audio response mechanism, the method further includes identifying a threshold distribution of the current parking duration, which specifically includes:
identifying a threshold distribution of the current parking duration, the threshold distribution comprising a first threshold range, a second threshold range and a third threshold point, wherein,
when the parking duration is within the first threshold range and reaches a first threshold point, a first active suggestion is started;
when the parking duration is within the second threshold range and reaches a second threshold point, a second active suggestion is started, wherein the second active suggestion updates its content relative to the first active suggestion;
and when the parking duration reaches the third threshold point, a third active suggestion is started.
It should be noted that, in this embodiment, when the vehicle is parked, the active suggestion may be updated based on the parking duration. Threshold ranges correspond to the parking duration, for example a first threshold range (0 min, 10 min) and a second threshold range [10 min, 30 min], with a first threshold point of 7 min, a second threshold point of 22 min and a third threshold point of 36 min. As time passes after the vehicle is parked, when the parking duration reaches 7 min, the first active suggestion is started, for example "Ding-dong, dear owner, is there anything I can help with?"; when the parking duration reaches 22 min, the second active suggestion is started, for example "Ding-dong, dear owner, is there anything I can help with? Nearby restaurants, scenic spots and parks are listed for reference"; and when the parking duration reaches 36 min, the third active suggestion is started, for example "Ding-dong, dear owner, please respond immediately". It should be explained that the threshold points can be set in advance, and may be adjusted or disabled according to the user's settings.
It is worth mentioning that, after the third active suggestion is started when the parking duration reaches the third threshold point, the method further includes:
judging whether feedback data is successfully acquired within a preset time period after the third active suggestion is played; wherein,
if the feedback data is not successfully acquired, a danger alarm is issued and an emergency contact is notified based on the user attributes;
and if the feedback data is successfully acquired, a danger value is judged based on the feedback data so as to decide whether to issue a danger alarm.
It should be noted that, in this embodiment, after the third active suggestion is played, feedback data of the user terminal is to be acquired within a preset time period, where the feedback data may be touch data or voice data. If the feedback data is not successfully acquired, a danger alarm is issued: the in-vehicle alarm is started to alert the driver and passengers, and, based on the user attributes, an emergency contact is notified that the current vehicle is in danger, such as the vehicle being lost or the occupants being at risk. If the feedback data is successfully acquired, a danger value is judged based on the feedback data to decide whether to alarm: when SOS touch data is acquired, the danger value indicates that the current driver or passenger is in a dangerous state; likewise, when the voice data contains preset danger words, such as "help" or "save me", it indicates that the current driver or passenger is in a dangerous state and an alarm is also issued.
A third aspect of the present invention provides a computer-readable storage medium having embodied therein a big data based intelligent speech control method program which, when executed by a processor, implements the steps of a big data based intelligent speech control method as described in any of the above.
According to the intelligent voice control method, system and readable storage medium based on big data, corresponding audios can be played in response to different interaction data so as to better fit the user's driving experience. In addition, the corresponding voice interaction response mechanism can be adjusted according to different driving states of the vehicle, so that a trapped occupant of the vehicle can be kept conscious when the vehicle is in danger. Furthermore, voice interaction can be conducted in the dialect corresponding to the voices in the vehicle, improving user experience and the quality of big-data processing; and when a heightened emotional state of an occupant is recognized, the voice version is adjusted to soothe the occupant's emotion, reducing the occurrence of dangerous driving and road rage.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in practice, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling or communicative connection between the components shown or discussed may be achieved through some interfaces, and the indirect coupling or communicative connection between devices or units may be electrical, mechanical or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; can be located in one place or distributed to a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be completed by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes any medium capable of storing program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
Alternatively, the above-described integrated units of the present invention may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in essence or a part contributing to the prior art in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, ROM, RAM, magnetic or optical disk, or other medium capable of storing program code.

Claims (8)

1. The intelligent voice control method based on big data is characterized by comprising the following steps:
acquiring voice data of a user side to identify current user attributes, and determining target audio to play based on the user attributes so as to enable a vehicle to perform voice interaction with the user side;
in the process of voice interaction between a vehicle and a user side, the current driving state of the vehicle is identified, and an audio response mechanism is switched based on the driving state, which specifically comprises the following steps: identifying a driving state of the vehicle, wherein the driving state at least comprises a vehicle running state, a vehicle collision state and a vehicle parking state; switching a first audio response mechanism based on the vehicle running state, the first audio response mechanism being a passive response mechanism; switching a second audio response mechanism based on the vehicle collision state, the second audio response mechanism being an active response mechanism; switching a third audio response mechanism based on the vehicle parking state, the third audio response mechanism being a combined active-passive response mechanism; after switching to the third audio response mechanism, identifying a threshold distribution of the current parking duration, which specifically includes: identifying a threshold distribution of the current parking duration, wherein the threshold distribution comprises a first threshold range, a second threshold range and a third threshold point; when the parking duration is within the first threshold range and reaches a first threshold point, starting a first active suggestion; when the parking duration is within the second threshold range and reaches a second threshold point, starting a second active suggestion, wherein the second active suggestion updates its content based on the first active suggestion; when the parking duration reaches the third threshold point, starting a third active suggestion; after the third active suggestion is played, judging whether feedback data is successfully acquired within a preset time period; if the feedback data is not successfully acquired, issuing a danger alarm and notifying an emergency contact
based on user attributes; and if the feedback data is successfully acquired, judging a danger value based on the feedback data so as to issue a danger alarm;
In the process of voice interaction between the vehicle and the user side, carrying out language attribution recognition based on the voice data so as to switch and broadcast the voice version of the target audio;
and when the vehicle and the user terminal are not engaged in voice interaction, performing volume-value and speech-rate analysis based on the voice data of the user terminal so as to switch the voice version in which the target audio is broadcast.
2. The intelligent voice control method based on big data according to claim 1, wherein acquiring the voice data of the user terminal to identify the current user attribute, so as to determine the target audio to be broadcast based on the user attribute, specifically comprises:
performing voiceprint recognition based on the voice data to obtain the user attribute corresponding to the current user terminal;
and determining the target audio preset by the current user terminal by matching setting data based on the user attribute, wherein at least one target audio is set for each user attribute.
3. The intelligent voice control method based on big data according to claim 2, wherein determining the target audio preset by the current user terminal by matching setting data based on the user attribute specifically comprises:
matching an attribute database in the setting data based on the user attribute, wherein,
when the attribute database successfully identifies the current user attribute, extracting the corresponding audio to determine the target audio preset by the current user terminal, wherein the target audio comprises user-set audio or factory-standard audio.
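The attribute-database lookup of claim 3 reduces to a match-or-fallback selection. The database shape, attribute names, and file names below are assumptions made for illustration:

```python
# Sketch of claim 3: match the voiceprint-derived user attribute against a
# settings database; fall back to factory-standard audio when the user has
# not configured a custom target audio.

FACTORY_STANDARD_AUDIO = "factory_standard.wav"  # hypothetical default asset

def resolve_target_audio(user_attribute: str, attribute_db: dict[str, str]) -> str:
    """Return the user-set audio if the attribute matches, else factory audio."""
    return attribute_db.get(user_attribute, FACTORY_STANDARD_AUDIO)
```

For example, with `{"owner": "owner_greeting.wav"}` as the database, the attribute `"owner"` resolves to the user-set greeting while any unrecognized attribute falls back to the factory-standard audio.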
4. The intelligent voice control method based on big data according to claim 1, wherein, in the process of voice interaction between the vehicle and the user terminal, performing language-attribution recognition based on the voice data so as to switch the voice version in which the target audio is broadcast specifically comprises:
acquiring the voice data to perform voiceprint-parameter recognition so as to identify the language attribution, wherein,
if the recognition succeeds, identifying a target dialect based on the language attribution, and updating the voice version based on the target dialect so as to switch the playback voice of the current target audio;
if the recognition fails, acquiring current positioning data of the vehicle, and identifying the target dialect based on the positioning data, so that the voice version is updated based on the target dialect to switch the playback voice of the current target audio.
5. The intelligent voice control method based on big data according to claim 1, wherein, when the vehicle and the user terminal are not engaged in voice interaction, performing volume-value and speech-rate analysis based on the voice data of the user terminal so as to switch the voice version in which the target audio is broadcast specifically comprises:
extracting the voice data of the user terminal in the vehicle to perform volume-value threshold analysis to obtain a volume highest value;
extracting the voice data of the user terminal in the vehicle to perform speech-rate threshold analysis to obtain a speech-rate average value;
and analyzing based on the volume highest value and the speech-rate average value, wherein, when the volume highest value is greater than or equal to a preset volume threshold and the speech-rate average value is greater than or equal to a preset speech-rate threshold, switching the broadcast voice version of the target audio to an emotion-adjustment voice version.
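The dual-threshold test of claim 5 is a simple conjunction over the peak volume and mean speech rate. The threshold values and units (dB, words per minute) below are illustrative assumptions:

```python
# Sketch of claim 5: switch to an emotion-adjustment voice version only when
# BOTH the volume highest value and the speech-rate average value meet their
# preset thresholds.

VOLUME_THRESHOLD_DB = 70.0          # assumed preset volume threshold
SPEECH_RATE_THRESHOLD_WPM = 200.0   # assumed preset speech-rate threshold

def choose_voice_version(volumes_db: list[float], rates_wpm: list[float]) -> str:
    """Return 'emotion_adjustment' when both thresholds are met, else 'default'."""
    peak_volume = max(volumes_db)                 # volume highest value
    mean_rate = sum(rates_wpm) / len(rates_wpm)   # speech-rate average value
    if peak_volume >= VOLUME_THRESHOLD_DB and mean_rate >= SPEECH_RATE_THRESHOLD_WPM:
        return "emotion_adjustment"
    return "default"
```

Requiring both conditions (rather than either) avoids switching voice versions on a momentary shout or a single fast utterance alone.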
6. An intelligent voice control system based on big data, characterized by comprising a memory and a processor, wherein the memory stores an intelligent voice control method program based on big data which, when executed by the processor, implements the following steps:
acquiring voice data of a user side to identify current user attributes, and determining target audio to play based on the user attributes so as to enable a vehicle to perform voice interaction with the user side;
in the process of voice interaction between the vehicle and the user terminal, identifying the current driving state of the vehicle, and switching an audio response mechanism based on the driving state, which specifically comprises: identifying the driving state of the vehicle, wherein the driving state at least comprises a vehicle driving state, a vehicle collision state and a vehicle parking state; switching to a first audio response mechanism based on the vehicle driving state, the first audio response mechanism being a passive response mechanism; switching to a second audio response mechanism based on the vehicle collision state, the second audio response mechanism being an active response mechanism; switching to a third audio response mechanism based on the vehicle parking state, the third audio response mechanism being a combined active and passive response mechanism; after switching to the third audio response mechanism, identifying a threshold distribution of the current parking duration, which specifically comprises: identifying the threshold distribution of the current parking duration, wherein the threshold distribution comprises a first threshold range, a second threshold range and a third threshold point; when the parking duration lies within the first threshold range and reaches the first threshold value, starting a first active suggestion; when the parking duration lies within the second threshold range and reaches the second threshold value, starting a second active suggestion, wherein the second active suggestion updates the content of the first active suggestion; when the parking duration reaches the third threshold value, starting a third active suggestion; after the third active suggestion is played, judging whether feedback data is successfully acquired within a preset time period; if the feedback data is not successfully acquired, raising a danger alarm and notifying an emergency contact based on the user attributes; if the feedback data is successfully acquired, judging a danger value based on the feedback data so as to raise a danger alarm accordingly;
In the process of voice interaction between the vehicle and the user terminal, performing language-attribution recognition based on the voice data so as to switch the voice version in which the target audio is broadcast;
and when the vehicle and the user terminal are not engaged in voice interaction, performing volume-value and speech-rate analysis based on the voice data of the user terminal so as to switch the voice version in which the target audio is broadcast.
7. The intelligent voice control system based on big data according to claim 6, wherein, in the process of voice interaction between the vehicle and the user terminal, performing language-attribution recognition based on the voice data so as to switch the voice version in which the target audio is broadcast specifically comprises:
acquiring the voice data to perform voiceprint-parameter recognition so as to identify the language attribution, wherein,
if the recognition succeeds, identifying a target dialect based on the language attribution, and updating the voice version based on the target dialect so as to switch the playback voice of the current target audio;
if the recognition fails, acquiring current positioning data of the vehicle, and identifying the target dialect based on the positioning data, so that the voice version is updated based on the target dialect to switch the playback voice of the current target audio.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises an intelligent voice control method program based on big data which, when executed by a processor, implements the steps of the intelligent voice control method based on big data according to any one of claims 1 to 5.
CN202310174868.2A 2023-02-28 2023-02-28 Intelligent voice control method and system based on big data and readable storage medium Active CN115862595B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310174868.2A CN115862595B (en) 2023-02-28 2023-02-28 Intelligent voice control method and system based on big data and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310174868.2A CN115862595B (en) 2023-02-28 2023-02-28 Intelligent voice control method and system based on big data and readable storage medium

Publications (2)

Publication Number Publication Date
CN115862595A CN115862595A (en) 2023-03-28
CN115862595B true CN115862595B (en) 2023-05-16

Family

ID=85659314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310174868.2A Active CN115862595B (en) 2023-02-28 2023-02-28 Intelligent voice control method and system based on big data and readable storage medium

Country Status (1)

Country Link
CN (1) CN115862595B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025137998A1 (en) * 2023-12-28 2025-07-03 深圳市锐明技术股份有限公司 Vehicle and voice escort method and apparatus therefor, and storage medium
CN120386877A (en) * 2025-06-27 2025-07-29 北京云链金汇数字科技有限公司 User behavior analysis and precision marketing methods based on model drive

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003125454A (en) * 2001-10-12 2003-04-25 Honda Motor Co Ltd Driving situation dependent call control system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9493130B2 (en) * 2011-04-22 2016-11-15 Angel A. Penilla Methods and systems for communicating content to connected vehicle users based detected tone/mood in voice input
US10137902B2 (en) * 2015-02-12 2018-11-27 Harman International Industries, Incorporated Adaptive interactive voice system
CN109189980A (en) * 2018-09-26 2019-01-11 三星电子(中国)研发中心 The method and electronic equipment of interactive voice are carried out with user
CN111976732A (en) * 2019-05-23 2020-11-24 上海博泰悦臻网络技术服务有限公司 Vehicle control method and system based on vehicle owner emotion and vehicle-mounted terminal
CN115691470A (en) * 2021-07-29 2023-02-03 上海擎感智能科技有限公司 Dialect-based vehicle-mounted interaction method, device and system
CN115520205B (en) * 2022-09-28 2023-05-19 润芯微科技(江苏)有限公司 Method, system and device for adjusting voice interaction based on driving state

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003125454A (en) * 2001-10-12 2003-04-25 Honda Motor Co Ltd Driving situation dependent call control system

Also Published As

Publication number Publication date
CN115862595A (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN110660397B (en) Dialogue system, vehicle and method for controlling a vehicle
CN115862595B (en) Intelligent voice control method and system based on big data and readable storage medium
US10931772B2 (en) Method and apparatus for pushing information
US11498573B2 (en) Pacification method, apparatus, and system based on emotion recognition, computer device and computer readable storage medium
CN109712615A (en) System and method for detecting the prompt in dialogic voice
US8005668B2 (en) Adaptive confidence thresholds in telematics system speech recognition
US20140303966A1 (en) Communication system and terminal device
US20130293367A1 (en) Criteria-Based Audio Messaging in Vehicles
CN106816149A (en) The priorization content loading of vehicle automatic speech recognition system
US20090249323A1 (en) Address book sharing system and method for non-verbally adding address book contents using the same
CN105702254A (en) Voice control system based on mobile terminal and voice control method thereof
CN110203154A (en) Recommended method, device, electronic equipment and the computer storage medium of vehicle functions
DE102014111816A1 (en) Vehicle telematics unit and method for operating this
CN110286745A (en) Dialog process system, the vehicle with dialog process system and dialog process method
CN107818788A (en) Remote speech identification on vehicle
CN105551484B (en) Selective noise suppressed during automatic speech recognition
CN108447488A (en) Enhance voice recognition tasks to complete
US7986974B2 (en) Context specific speaker adaptation user interface
DE102018126525A1 (en) In-vehicle system, procedure and storage medium
CN113538852A (en) Fatigue driving reminding method and vehicle-mounted terminal
CN112172712A (en) Cabin service method and cabin service system
CN117437912A (en) Speech recognition processing method and electronic equipment
JP7274901B2 (en) AGENT DEVICE, CONTROL METHOD OF AGENT DEVICE, AND PROGRAM
CN106847316A (en) Car machine audio control method and its control system
CN109947925A (en) On-vehicle machines people's natural language self-learning method, computer installation and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant