
CN119559944A - Vehicle intelligent control method, device, vehicle and storage medium - Google Patents


Info

Publication number
CN119559944A
CN119559944A
Authority
CN
China
Prior art keywords
vehicle
voice
gesture
target
sound source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411740980.9A
Other languages
Chinese (zh)
Inventor
罗应渝
刘兵
农云飞
杨孛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongfeng Nissan Passenger Vehicle Co
Original Assignee
Dongfeng Nissan Passenger Vehicle Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongfeng Nissan Passenger Vehicle Co filed Critical Dongfeng Nissan Passenger Vehicle Co
Priority to CN202411740980.9A priority Critical patent/CN119559944A/en
Publication of CN119559944A publication Critical patent/CN119559944A/en
Pending legal-status Critical Current


Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a vehicle intelligent control method and device, a vehicle and a storage medium, relating to the technical field of vehicle control. The method comprises: determining voice information in the vehicle through a voice acquisition module, and determining a sound source position and a voice instruction according to the voice information; when the voice instruction is a target instruction, acquiring in-vehicle image data collected by an image acquisition module based on the sound source position; performing target gesture recognition on the in-vehicle image data by using a preset recognition function to obtain a recognition result; and, when the recognition result is that the target gesture is recognized, controlling the vehicle according to the target gesture, the voice instruction and the sound source position. The scheme requires no additional depth information: gesture recognition of in-vehicle users is performed by multiplexing the existing image acquisition module, which reduces the hardware cost of in-vehicle recognition while improving the recognition effect.

Description

Intelligent control method and device for vehicle, vehicle and storage medium
Technical Field
The application relates to the technical field of vehicle control, in particular to an intelligent vehicle control method and device, a vehicle and a storage medium.
Background
Existing in-cabin interaction mainly comprises voice control, key control and touch-screen control. This design differs from real social interaction, and the difference makes human-computer interaction feel inflexible to the user. Therefore, to improve the interactive experience, current vehicle control uses a TOF (Time of Flight) camera for 3D gesture recognition and combines the recognized information with voice, so as to control the vehicle intelligently. However, this control method requires adding a TOF camera, which increases hardware cost.
Disclosure of Invention
The application mainly aims to provide a vehicle intelligent control method, a vehicle intelligent control device, a vehicle and a storage medium, and aims to solve the technical problem that existing vehicle control must incorporate a TOF camera, which results in high cost.
In order to achieve the above object, the present application provides a vehicle intelligent control method, which is applied to a vehicle, wherein a voice acquisition module and an image acquisition module are provided on the vehicle, and the method comprises:
Determining voice information in the vehicle through the voice acquisition module, and determining a sound source position and a voice instruction according to the voice information;
When the voice instruction is a target instruction, acquiring in-vehicle image data acquired by the image acquisition module based on the sound source position;
Performing target gesture recognition by using a preset recognition function based on the in-vehicle image data to obtain a recognition result;
and when the recognition result is that the target gesture is recognized, controlling the vehicle according to the target gesture, the voice command and the sound source position.
In an embodiment, when the recognition result is that a target gesture is recognized, the step of controlling the vehicle according to the target gesture, the voice command and the sound source position includes:
When the recognition result is that a target gesture is recognized, determining a gesture direction according to the target gesture;
Determining a target component according to the gesture direction and the sound source position;
and controlling the target component according to the voice command.
In one embodiment, the step of controlling the target component according to the voice command includes:
Determining key instruction words according to the voice instructions;
acquiring a vehicle component state;
determining a target action on the target component according to the key command words and the vehicle component state;
And controlling the vehicle according to the target action.
In an embodiment, before the step of performing target gesture recognition by using a preset recognition function based on the in-vehicle image data to obtain a recognition result, the method further includes:
Acquiring historical hand data, performing image learning through the historical hand data, constructing a two-dimensional hand model, and converting the two-dimensional hand model into a first function;
Obtaining historical gesture posture data according to the first function and the historical hand data, and carrying out finger pointing gesture learning according to the historical gesture posture data to construct a two-dimensional gesture recognition model, and converting the two-dimensional gesture recognition model into a second function;
Obtaining historical finger pointing gesture data according to the second function and the historical gesture data, learning a pointing direction according to the historical finger pointing gesture data, constructing a two-dimensional pointing model, and converting the two-dimensional pointing model into a third function;
And obtaining a preset recognition function through the third function.
In an embodiment, before the step of acquiring the in-vehicle image data acquired by the image acquisition module based on the sound source position when the voice command is the target command, the method further includes:
analyzing the voice command to determine whether the voice command is a vehicle control command;
When the voice command is the vehicle control command, determining whether vehicle component information exists in the vehicle control command;
When the vehicle component information does not exist in the vehicle control instruction, determining whether a preset reference word exists in the vehicle control instruction;
and when the preset reference word exists in the vehicle control instruction, determining the voice instruction as a target instruction.
In an embodiment, after the step of performing target gesture recognition according to the in-vehicle image data by using a preset recognition function to obtain a recognition result, the method further includes:
generating voice prompt information when the recognition result is that the target gesture is not recognized;
Prompting the voice prompt information through the voice acquisition module, and returning to the step of determining the voice information in the vehicle through the voice acquisition module and determining the sound source position and the voice instruction according to the voice information.
In an embodiment, the voice acquisition module comprises a microphone array comprising a plurality of microphones;
the step of determining the sound source position from the speech information comprises:
obtaining the time difference between each microphone in the microphone array according to a sound source positioning strategy and the voice information;
Calculating the angle and distance between the sound source and the microphone array according to the time difference;
and determining the sound source position through the angle and the distance.
In addition, in order to achieve the above object, the present application also provides a vehicle intelligent control device, including:
the determining module is used for determining the voice information in the vehicle through the voice acquisition module and determining the sound source position and the voice instruction according to the voice information;
the acquisition module is used for acquiring the image data in the vehicle through the image acquisition module based on the sound source position when the voice instruction is a target instruction;
The recognition module is used for recognizing the target gesture by using a preset recognition function based on the image data in the vehicle to obtain a recognition result;
and the control module is used for controlling the vehicle according to the target gesture, the voice command and the sound source position when the recognition result is that the target gesture is recognized.
In addition, in order to achieve the above object, the application also proposes a vehicle comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program being configured to implement the steps of the vehicle intelligent control method as described above.
In addition, in order to achieve the above object, the present application also proposes a storage medium, which is a computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps of the vehicle intelligent control method as described above.
Furthermore, to achieve the above object, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the vehicle intelligent control method as described above.
The vehicle intelligent control method of the application determines voice information in the vehicle through the voice acquisition module and determines a sound source position and a voice instruction according to the voice information; when the voice instruction is a target instruction, it acquires in-vehicle image data collected by the image acquisition module based on the sound source position; it performs target gesture recognition on the in-vehicle image data by using a preset recognition function to obtain a recognition result; and, when the recognition result is that the target gesture is recognized, it controls the vehicle according to the target gesture, the voice instruction and the sound source position.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a vehicle intelligent control method according to an embodiment of the present application;
FIG. 2 is a schematic diagram showing the distribution of devices in a vehicle according to an embodiment of the intelligent control method of the present application;
FIG. 3 is a schematic diagram of coordinates of a sound source in a vehicle according to an embodiment of the intelligent control method of the present application;
FIG. 4 is a schematic flow chart of a second embodiment of the intelligent control method for a vehicle according to the present application;
FIG. 5 is a schematic flow chart of a third embodiment of a vehicle intelligent control method according to the present application;
FIG. 6 is a schematic flow chart of a fourth embodiment of a vehicle intelligent control method according to the present application;
FIG. 7 is a schematic flow chart of an embodiment of a vehicle intelligent control method according to the present application;
FIG. 8 is a schematic block diagram of a vehicle intelligent control device according to an embodiment of the present application;
Fig. 9 is a schematic diagram of a vehicle structure of a hardware operating environment related to a vehicle intelligent control method according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the technical solution of the present application and are not intended to limit the present application.
For a better understanding of the technical solution of the present application, the following detailed description will be given with reference to the drawings and the specific embodiments.
The method comprises: determining voice information in the vehicle through a voice acquisition module, and determining a sound source position and a voice instruction according to the voice information; when the voice instruction is a target instruction, acquiring in-vehicle image data through an image acquisition module based on the sound source position; performing target gesture recognition on the in-vehicle image data by using a preset recognition function to obtain a recognition result; and, when the recognition result is that the target gesture is recognized, controlling the vehicle according to the target gesture, the voice instruction and the sound source position.
In the prior art, when a user wakes up the voice assistant, an array microphone in the cabin locates the voice initiator (the user) through a sound source localization algorithm, a TOF camera is activated, and recognition of the voice initiator is performed as the user finishes speaking. The TOF camera emits continuous light pulses and receives the light returned from the object; the distance of the object is obtained by measuring the flight (round-trip) time of the light pulses, 3D scene data is constructed by combining multiple distance points through an algorithm, and the pointing direction of gestures is thereby accurately distinguished. A new TOF (Time of Flight) camera is needed, which increases hardware cost; crucially, because the key point is the newly added TOF sensor hardware, the function cannot be delivered to existing vehicle models through an OTA upgrade.
The present application provides a solution that does not require a new TOF camera, but instead multiplexes the OMS (Occupant Monitoring System) camera for gesture recognition. Similar functions are realized in software without any increase in hardware cost, and an existing vehicle can be upgraded through OTA.
It should be noted that the execution body of this embodiment may be any computing device having data processing, network communication and program running functions, such as a tablet computer, a personal computer or a mobile phone, or an electronic device that can implement the above functions, such as the vehicle-mounted system in a vehicle. This embodiment and the following embodiments are described taking the vehicle-mounted system as an example.
Based on this, an embodiment of the present application provides a vehicle intelligent control method, and referring to fig. 1, fig. 1 is a schematic flow chart of a first embodiment of the vehicle intelligent control method of the present application.
In this embodiment, the vehicle intelligent control method is applied to a vehicle, and a voice acquisition module and an image acquisition module are disposed on the vehicle, and the vehicle intelligent control method includes steps S10 to S40:
And S10, determining voice information in the vehicle through a voice acquisition module, and determining the sound source position and the voice instruction according to the voice information.
It should be noted that the vehicle is mainly provided with a vehicle-mounted system, a voice acquisition module and an in-cabin monitoring camera (OMS). The vehicle-mounted system may include an OMS control module, a voice recognition module, a vehicle control module and the like, and the OMS may be mounted on the front windshield of the vehicle or at the vehicle-mounted central control system. The voice acquisition module comprises a plurality of microphones to collect voice information of users in the vehicle; in particular, it may comprise a microphone array. As shown in fig. 2, which is a distribution schematic diagram of devices in the vehicle, the microphones (microphone 1, microphone 2, microphone 3 and microphone 4) are arranged at various positions in the vehicle, and the OMS camera, the vehicle camera and the T-BOX (wireless gateway) are arranged on the center console.
In a specific implementation, after the vehicle is started, each microphone of the voice acquisition module is activated and keeps collecting audio signals until the vehicle is powered down. Each microphone sends the collected audio signals to the voice recognition module over A2B (Automotive Audio Bus). When a user wakes up the voice recognition function, the voice recognition module determines the sound source position through an internal link and sends it to the OMS control module.
Therefore, the voice information in the vehicle can be collected through the microphone and analyzed, so that the sound source position and the voice command can be determined.
In a possible implementation manner, the determining the sound source position according to the voice information in the step S10 may include steps a11 to a13:
and step A11, obtaining the time difference between each microphone in the microphone array according to the sound source positioning strategy and the voice information.
It can be understood that the sound source localization strategy is the TDOA (Time Difference of Arrival) sound source localization technique: the time difference between the arrival of the voice signal at two microphones is obtained through TDOA and the collected voice information.
And step A12, calculating the angle and the distance of the sound source from the microphone array according to the time difference.
In implementations, the angle and distance of the sound source from the microphone array within the vehicle may be calculated from the time difference, and may include in particular the angle and distance of the sound source from each microphone.
And step A13, determining the sound source position through the angle and the distance.
It should be understood that the position of the sound source in the vehicle can be determined from a specific angle and distance, as shown in fig. 3, which is a schematic view of the coordinates of the sound source in the vehicle. In this embodiment a four-seat vehicle is taken as an example, and the coordinates of the sound source azimuths are x_11, x_12, x_13 and x_14, respectively.
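Steps A11 to A13 can be illustrated with the following sketch. It is not from the patent: the function names, the two-microphone far-field geometry and the assumed speed of sound are illustrative assumptions, and it estimates the inter-microphone time difference by cross-correlation before converting it to a source bearing.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumption)

def tdoa_cross_correlation(sig_a, sig_b, sample_rate):
    """Estimate how much sig_a lags sig_b (seconds) from the peak of
    their cross-correlation (step A11)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(sig_b) - 1)
    return lag_samples / sample_rate

def bearing_from_tdoa(tau, mic_spacing):
    """Far-field bearing (radians, relative to broadside) of a source
    for one microphone pair separated by mic_spacing metres (step A12)."""
    s = np.clip(SPEED_OF_SOUND * tau / mic_spacing, -1.0, 1.0)
    return float(np.arcsin(s))
```

With a bearing from each microphone pair and the known array geometry, the bearings can be intersected to estimate the distance as well, giving the in-vehicle sound source position of step A13.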
After sound source localization, the sound source position may be used to drive the OMS to collect image data.
And step S20, when the voice instruction is a target instruction, acquiring the image data in the vehicle, which is acquired by the image acquisition module, based on the sound source position.
It can be understood that, after the voice instruction is obtained, semantic analysis may be performed on it to determine its specific meaning. A target instruction is an instruction that belongs to the vehicle control instructions, contains a reference word, and does not name specific in-vehicle component information; whether the voice instruction is a target instruction can therefore be determined by analyzing it.
When the voice command is not the target command, the voice command can be directly sent out, and command forwarding and verification are performed, so that the vehicle is controlled through the voice command.
If the voice command is a target command, that is, the voice command needs to be controlled, the action of the user is combined, and thus the vehicle is controlled according to the action and the voice synthesis, and the image data in the vehicle, which is acquired by the image acquisition module, can be acquired based on the sound source position.
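The three checks that make an instruction a target instruction (it is a vehicle control instruction, it names no concrete component, and it contains a reference word) can be sketched as follows. The word lists are illustrative stand-ins for the embodiment's semantic analysis, not part of the patent.

```python
# Hypothetical vocabulary; a real system would use the semantic analyzer's output.
VEHICLE_CONTROL_WORDS = {"open", "close", "turn on", "turn off", "start", "stop"}
COMPONENT_NAMES = {"window", "sunroof", "air conditioner", "trunk", "seat", "lamp"}
REFERENCE_WORDS = {"this", "that", "here", "there"}

def is_target_instruction(command: str) -> bool:
    """True when the command is a vehicle control command that contains a
    reference word but names no concrete component, so a gesture is
    needed to resolve the target."""
    text = command.lower()
    if not any(w in text for w in VEHICLE_CONTROL_WORDS):
        return False          # not a vehicle control instruction at all
    if any(c in text for c in COMPONENT_NAMES):
        return False          # component named explicitly: no gesture needed
    return any(r in text for r in REFERENCE_WORDS)
```

Simple substring matching is used only for brevity; it would mis-handle words that contain a keyword as a substring.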
The in-vehicle image data is image data at the sound source position acquired by the image acquisition module. The image data acquired by the image acquisition module can be screened according to the sound source position, so that the image data in the vehicle can be obtained.
And step S30, carrying out target gesture recognition by using a preset recognition function based on the in-vehicle image data to obtain a recognition result.
It should be understood that the preset recognition function may be a function for recognizing a hand, a gesture and a direction of a user in a vehicle, and after determining in-vehicle image data corresponding to a sound source position, whether the gesture and gesture specific information are made to the user in the image may be further performed, so as to obtain a recognition result.
It should be noted that, the recognition result may be that the target gesture is recognized or not recognized, and the target gesture is a valid gesture, that is, a gesture having a specific direction or a general direction.
It should be understood that if the target gesture is not recognized, the processing needs to be performed in time, for example, reminding or recognition is performed again, so as to avoid error in recognition.
In a possible implementation manner, the step S30 further includes steps S31 to S32:
and S31, when the recognition result is that the target gesture is not recognized, generating voice prompt information.
It will be appreciated that if the recognition result is that no valid gesture is recognized, the user may not have performed a gesture, or may have performed one that was not captured by the camera. A voice prompt may therefore be generated, for example "Your gesture was not recognized, please gesture again" or "Your gesture was not recognized, please confirm whether to continue the operation".
And step S32, prompting the voice prompt information through the voice acquisition module, and returning to the step of determining the voice information in the vehicle through the voice acquisition module and determining the sound source position and the voice instruction according to the voice information.
In a specific implementation, the voice prompt information may be sent to the microphone array, or to the microphone corresponding to the sound source position, to play the voice prompt and determine whether the user wants to continue. If the user continues, the user's audio data can be collected again to obtain the sound source position and the voice instruction, and whether a valid gesture is recognized is judged again. If the user does not want to continue, the interaction ends.
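The retry interaction just described — prompt, re-collect, re-judge, and end when the user declines — can be sketched as the following loop. The function name, callbacks and the bounded round count are hypothetical; the real callbacks would be the voice and OMS modules of the embodiment.

```python
def gesture_interaction(recognize_gesture, prompt_user, collect_voice, max_rounds=3):
    """Return a recognized gesture, or None when the user cancels or the
    round limit is reached (the limit is an added safeguard, not from the
    patent)."""
    for _ in range(max_rounds):
        gesture = recognize_gesture()           # runs the preset recognition function
        if gesture is not None:
            return gesture                      # valid gesture: proceed to control
        prompt_user("Gesture not recognized; please gesture again or say 'cancel'.")
        if collect_voice() == "cancel":
            return None                         # user declined: end the interaction
    return None
```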
And S40, when the recognition result is that the target gesture is recognized, controlling the vehicle according to the target gesture, the voice command and the sound source position.
If the recognition result is that a valid gesture of the user is recognized, the vehicle may be controlled according to the recognized gesture, the user's voice instruction and the sound source position; in particular, a specific component in the vehicle may be controlled, such as a component near the user at the sound source position.
For example, the user's intention is determined by recognition based on the user's effective gesture, and the target control part is determined based on the user's voice command, the user's intention, and the sound source position, thereby controlling the target control part.
This embodiment provides a vehicle intelligent control method applied to a vehicle on which a voice acquisition module and an image acquisition module are arranged. The method determines voice information in the vehicle through the voice acquisition module and determines a sound source position and a voice instruction from it; when the voice instruction is a target instruction, it acquires in-vehicle image data collected by the image acquisition module based on the sound source position; it performs target gesture recognition on the in-vehicle image data with a preset recognition function to obtain a recognition result; and, when the recognition result is that the target gesture is recognized, it controls the vehicle according to the target gesture, the voice instruction and the sound source position. The scheme needs no additional depth information: gesture recognition of in-vehicle users is performed by multiplexing the image acquisition module, which reduces the hardware cost of in-vehicle recognition and improves the recognition effect.
In the second embodiment of the present application, the same or similar content as in the first embodiment of the present application may be referred to the above description, and will not be repeated. On this basis, referring to fig. 4, step S40 includes steps S401 to S403:
and S401, determining a gesture direction according to the target gesture when the recognition result is that the target gesture is recognized.
If the recognition result is that the target gesture is recognized, this proves that the user has issued a gesture instruction, and a gesture direction is obtained from the user's valid gesture; the gesture direction is the direction of the user's gesture as analyzed from the in-vehicle image data. Specifically, when the user's gesture is recognized through the preset recognition function, the gesture direction can be recognized directly.
In implementations, the gesture direction can be a left, right, up, or down direction at the sound source location.
And step S402, determining a target component according to the gesture direction and the sound source position.
In a specific implementation, the components within the vehicle may be partitioned in advance to obtain a set of components {Target_1}, where {Target_1} contains all identifiable vehicle components, including but not limited to windows, sunroof, air conditioner, atmosphere lights, trunk, fragrance, seat heating, seat ventilation and seat massage.
After determining the gesture direction, the target component, i.e., the component to be controlled, may be determined according to the sound source position and the gesture direction.
For example, if the gesture direction is left and the sound source position is x_11, the target component is determined to be the auxiliary driving window; if the gesture direction is right and the sound source position is x_21, the target component is determined to be the rear left side window, as shown in table 1 below, which lists the components in the vehicle.
TABLE 1
{Target_1} ID | Component | {Target_1} ID | Component
T_11 | Main driving window | T_51 | Trunk
T_12 | Auxiliary driving window | T_61 | Fragrance atmosphere
T_13 | Rear left side window (main driving rear window) | T_71 | Main driving seat heating
T_14 | Rear right side window (auxiliary driving rear window) | T_72 | Auxiliary driving seat heating
T_21 | Sunroof | T_73 | Rear left side seat heating
T_31 | Front-row lower side air outlet | T_74 | Rear right side seat heating
T_32 | Rear-row lower side air outlet | T_81 | Main driving seat ventilation
T_33 | Main driving air conditioner air outlet | T_82 | Auxiliary driving seat ventilation
T_34 | Auxiliary driving air conditioner air outlet | T_83 | Rear left side seat ventilation
T_35 | Rear left air conditioner air outlet | T_84 | Rear right side seat ventilation
T_36 | Rear right air conditioner air outlet | T_91 | Main driving seat massage
T_37 | Vehicle-mounted refrigerator electric door | T_92 | Auxiliary driving seat massage
T_41 | Atmosphere lamp | T_93 | Rear left side seat massage
T_42 | Central reading lamp | T_94 | Rear right side seat massage
T_43 | Rear row reading lamp | |
As shown in tables 2 and 3, tables 2 and 3 are target component tables corresponding to gesture orientations and sound source positions.
TABLE 2
TABLE 3
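A minimal sketch of the (sound source position, gesture direction) lookup of step S402 follows. It uses the two example pairs stated in the text plus one hypothetical entry; the full mapping is defined by tables 2 and 3, whose contents are not reproduced here.

```python
# Keys are (sound source position, gesture direction); values are {Target_1} IDs
# from table 1. Only the first two rows come from the text's examples.
COMPONENT_BY_POSITION_AND_DIRECTION = {
    ("x_11", "left"):  "T_12",  # from the text: left gesture at x_11 -> auxiliary driving window
    ("x_21", "right"): "T_13",  # from the text: right gesture at x_21 -> rear left side window
    ("x_11", "up"):    "T_21",  # hypothetical: pointing up -> sunroof
}

def resolve_target_component(sound_source_position, gesture_direction):
    """Return the {Target_1} ID of the component to control, or None when
    the pair is not covered by the mapping."""
    return COMPONENT_BY_POSITION_AND_DIRECTION.get(
        (sound_source_position, gesture_direction))
```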
And step S403, controlling the target component according to the voice command.
It will be appreciated that the specific component may be controlled in response to the user's voice instruction: for example, if the voice instruction is "open", the target component is opened, and if the voice instruction is "close", the target component is closed.
In one possible implementation, step S403 may include steps B11 to B14:
and step B11, determining key instruction words according to the voice instruction.
In particular implementations, voice instructions may be analyzed to determine key instruction words, such as open, close, etc. instruction words.
And step B12, acquiring the state of the vehicle part.
It will be appreciated that all component states within the vehicle are acquired. For example, if a component's state is already off and the user intends to turn it off, no further action is needed; knowing the vehicle component state thus allows the user's instruction to be answered more quickly.
And step B13, determining a target action on the target component according to the key instruction words and the vehicle component state.
It should be noted that whether the target component actually needs to be actuated may be determined from the specific instruction and the vehicle component state. For example, if the key instruction word is "open", the target component is the sunroof, and the sunroof state is completely open, then the target action is no action, and a voice prompt that the sunroof is already open may be generated. If the key instruction word is "open", the target component is the sunroof, and the sunroof is not completely open, then the target action is to open the sunroof.
And step B14, controlling the vehicle according to the target action.
In a specific implementation, after determining the target action, the vehicle may be controlled according to the target action, thereby completing the interaction with the user.
TABLE 4
As shown in table 4, table 4 relates key instruction words, vehicle component state, gesture direction, sound source position, direction component representation, target component and target action. For example, suppose the key instruction word is "open" or "start", the gesture direction is left, the sound source position is x_11 and the target component is the auxiliary driving window: if the component state is closed or the window is not fully open, the target action is to open the auxiliary driving window; if the window is already fully open and the auxiliary driving air conditioner air outlet is open, the target action is a voice prompt that the auxiliary driving window is already open. If instead the target component is the auxiliary driving air conditioner air outlet, the auxiliary driving window is fully open and the air outlet is closed, then the target action is to open the auxiliary driving air conditioner air outlet.
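The decision pattern of table 4 — actuate when the component state allows it, otherwise answer with a voice prompt — can be sketched as follows. The keyword sets, state names and return shape are illustrative assumptions, not the patent's exact table.

```python
OPENING_WORDS = {"open", "start", "turn on"}   # hypothetical key instruction words
CLOSING_WORDS = {"close", "stop", "turn off"}

def decide_target_action(keyword, component, component_state):
    """Return ('actuate', ...) when the component can move in the
    commanded direction, ('prompt', ...) when it is already there,
    following the pattern of table 4."""
    if keyword in OPENING_WORDS:
        if component_state == "fully_open":
            return ("prompt", f"{component} is already open")
        return ("actuate", f"open {component}")
    if keyword in CLOSING_WORDS:
        if component_state == "closed":
            return ("prompt", f"{component} is already closed")
        return ("actuate", f"close {component}")
    return ("none", "")
```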
According to the embodiment, when the recognition result is that the target gesture is recognized, the gesture direction is determined according to the target gesture, the target component is determined according to the gesture direction and the sound source position, and the target component is controlled according to the voice command. The gesture direction is rapidly determined through the target gesture, so that a target component to be controlled is determined according to the gesture direction and the sound source position, and the control effect is improved.
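The gesture-direction/sound-source resolution summarized above can likewise be sketched as a table lookup; the sound source position keys and component names below are assumed examples, not values from the patent:

```python
# Illustrative lookup of the target component from the sound source position
# and the recognized gesture direction; keys and names are assumed examples.

COMPONENT_MAP = {
    ("x_11", "left"): "front passenger window",
    ("x_11", "up"): "sunroof",
    ("x_21", "right"): "driver-side window",
}

def resolve_target_component(sound_source, gesture_direction):
    """Return the component the speaker is pointing at, or None if unmapped."""
    return COMPONENT_MAP.get((sound_source, gesture_direction))

print(resolve_target_component("x_11", "left"))  # front passenger window
```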
In the third embodiment of the present application, for content that is the same as or similar to the first embodiment, reference may be made to the above description, which will not be repeated here. On this basis, referring to fig. 5, before step S30, the vehicle intelligent control method further includes steps S21 to S24:
and S21, acquiring historical hand data, performing image learning through the historical hand data, constructing a two-dimensional hand model, and converting the two-dimensional hand model into a first function.
It should be noted that, in order to improve the accuracy and efficiency of gesture recognition, visual training may be performed on the OMS camera so that a two-dimensional hand model, a two-dimensional gesture, and the two-dimensional pointing direction of a finger can be recognized rapidly. The content of the OMS visual training is mainly the recognition of valid gestures.
Therefore, during training, historical hand data may be collected first, where the historical hand data is hand data covering different ages, sexes and ethnicities, and image learning is performed on the historical hand data. The learning target is that 10000 samples can be recognized through the OMS camera with a recognition rate of 99.5%. A two-dimensional hand model is thereby constructed, and the two-dimensional hand model obtained through visual training is converted into the function f_hand(); the first function is the function f_hand().
Step S22, historical gesture posture data are obtained according to the first function and the historical hand data, finger pointing gesture learning is conducted according to the historical gesture posture data, a two-dimensional gesture recognition model is built, and the two-dimensional gesture recognition model is converted into a second function.
In a specific implementation, different gesture data can be recognized based on the function f_hand() to obtain historical gesture posture data, where the historical gesture posture data includes various gesture postures. Image learning is performed on these gesture postures, with emphasis on the finger-pointing gesture. The learning target is that 10000 samples can be recognized through the OMS camera, with a recognition rate of at least 99.5% and a false recognition rate below 0.5%. A two-dimensional gesture recognition model is thereby constructed through training and converted into the second function f_gesture().
And S23, obtaining historical finger pointing gesture data according to the second function and the historical gesture data, learning a pointing direction according to the historical finger pointing gesture data, constructing a two-dimensional pointing model, and converting the two-dimensional pointing model into a third function.
In a specific implementation, on the basis of the second function f_gesture(), the pointing direction of the finger-pointing gesture can be learned; that is, different finger pointing gesture data are obtained according to the historical gesture posture data and the second function, and the pointing direction of the finger-pointing gesture is learned, with the learning range limited to only four directions: up, down, left and right. The learning target is that 10000 samples can be recognized through the OMS camera, with a recognition rate of at least 99% and a false recognition rate below 0.5%. A two-dimensional pointing model is thereby constructed, and the visually trained two-dimensional pointing model of the finger is converted into the third function f_direction().
And step S24, obtaining a preset recognition function through the third function.
In a specific implementation, the third function may be written into the OMS control module when the vehicle performs an OTA upgrade through the T-BOX. When image data is acquired from the OMS camera, the acquired data is compared with the function f_direction(); if the judgment result is yes, the next judgment logic is entered, and if the judgment result is no, a voice prompt such as "I did not see your gesture, please try again" is given.
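The chain of the three trained functions described in steps S21 to S24 can be sketched as follows; the recognizers here are stand-in stubs operating on a dictionary "frame" (an assumption made for this sketch), whereas in the patent they are models produced by OMS visual training:

```python
# Stand-in stubs for the three trained recognizers (f_hand, f_gesture,
# f_direction), chained in the order the embodiment describes. A real system
# would run the OMS-trained models on camera frames instead of dictionaries.

def f_hand(frame):
    return frame.get("hand_visible", False)      # is a hand detected at all?

def f_gesture(frame):
    return frame.get("gesture")                  # e.g. "point", "fist", None

def f_direction(frame):
    direction = frame.get("direction")
    return direction if direction in ("up", "down", "left", "right") else None

def preset_recognition(frame):
    """Return a pointing direction, or None (which triggers the voice prompt)."""
    if not f_hand(frame):
        return None
    if f_gesture(frame) != "point":
        return None
    return f_direction(frame)

print(preset_recognition({"hand_visible": True, "gesture": "point", "direction": "left"}))
```

Staging the checks this way mirrors the training order: each stage only runs once the cheaper preceding stage has succeeded, which is why a failed check can immediately fall back to the "I did not see your gesture" prompt.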
According to this embodiment, historical hand data is obtained and image learning is performed on it to construct a two-dimensional hand model, which is converted into a first function; historical gesture posture data is obtained according to the first function and the historical hand data, finger-pointing gesture learning is performed on it to construct a two-dimensional gesture recognition model, which is converted into a second function; historical finger pointing gesture data is obtained according to the second function and the historical gesture posture data, pointing direction learning is performed on it to construct a two-dimensional pointing model, which is converted into a third function; and the preset recognition function is obtained from the third function. The OMS camera is used for visual training, and the training content mainly comprises three items: two-dimensional hand model recognition, two-dimensional gesture recognition, and two-dimensional pointing (up, down, left, right) recognition of the finger. Therefore, the gesture information of the user can be recognized quickly through the preset recognition function, and the response speed of controlling the internal components of the vehicle is improved.
In the fourth embodiment of the present application, for content that is the same as or similar to the first embodiment, reference may be made to the above description, which will not be repeated here. On this basis, referring to fig. 6, before step S20, the vehicle intelligent control method further includes steps S11 to S14:
And S11, analyzing the voice command to determine whether the voice command is a vehicle control command.
It should be noted that the voice command may be parsed according to certain parsing rules, for example keyword splitting and keyword extraction, to determine whether the voice command is a vehicle control command. The set {A_1} may be written into the voice recognition module in advance, where {A_1} includes all recognizable valid functional voice commands (covering vehicle control, multimedia, telephone, navigation, AI chat, etc.), and the subset {B_1} contains the voice commands classified as vehicle control commands; a voice command matching {B_1} is a vehicle control command.
If the voice command is not a vehicle control command, it is determined whether the voice command belongs to another voice function, and if so, that voice function is controlled directly according to the voice command.
And step S12, when the voice command is the vehicle control command, determining whether vehicle component information exists in the vehicle control command.
It should be understood that if the voice command is a vehicle control command, whether the vehicle component information exists in the vehicle control command can be further determined, and if the vehicle component information exists, the specific component can be directly controlled according to the vehicle control command without a subsequent gesture recognition process.
For example, if the voice command is "please open the driver's window", the vehicle component information in the voice command is the driver's window, and the driver's window can be controlled to open directly according to the voice command.
And step S13, when the vehicle component information does not exist in the vehicle control instruction, determining whether a preset reference word exists in the vehicle control instruction.
It will be appreciated that if no vehicle component information is present in the vehicle control command, it may be further determined whether the vehicle control command includes a preset reference word, including but not limited to words such as "this" and "that". The set {Pr_1} may be written into the voice recognition module in advance, where {Pr_1} contains all recognizable preset reference words.
When the judgment result of whether the voice command is a vehicle control command is Y, judgment logic 2 for judging whether vehicle component information exists in the command may be performed. If the voice control command matches the set {Target_1} (the set of vehicle component words), the judgment result is N and the specific vehicle component is controlled directly according to the voice control command; if the voice control command matches the set {Pr_1} but contains no word from {Target_1}, the judgment result is Y and the next judgment logic is entered; otherwise, the voice control command is invalid voice information, the interaction fails, and the voice terminal shields recognition of the command.
And S14, when the preset reference word exists in the vehicle control instruction, determining the voice instruction as a target instruction.
In a specific implementation, if a preset reference word exists in the vehicle control instruction, the voice instruction is determined to be the target instruction, and then the next judgment logic can be entered, namely whether the target gesture is identified.
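The judgment chain of steps S11 to S14 can be sketched as a series of set-membership tests; the four word sets below are tiny illustrative placeholders for the sets {A_1}, {B_1}, {Target_1} and {Pr_1} pre-written into the voice recognition module:

```python
# Illustrative membership tests for the judgment chain of steps S11-S14.
# The sets are small placeholders for the sets written into the voice module.

A_1 = {"open this", "open the window", "play music"}     # all valid commands
B_1 = {"open this", "open the window"}                   # vehicle-control subset
TARGET_1 = {"window", "sunroof", "air outlet"}           # component words
PR_1 = {"this", "that"}                                  # preset reference words

def classify(command):
    if command not in A_1:
        return "invalid"                  # interaction fails, command shielded
    if command not in B_1:
        return "other voice function"     # multimedia, navigation, AI chat...
    if any(word in command for word in TARGET_1):
        return "direct control"           # component named: no gesture needed
    if any(word in command.split() for word in PR_1):
        return "target instruction"       # proceed to gesture recognition
    return "invalid"

print(classify("open this"))        # target instruction
print(classify("open the window"))  # direct control
```

Only a "target instruction" result drives the OMS gesture path; every other branch is resolved by voice alone, which is what keeps the common case fast.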
According to this embodiment, the voice command is parsed to determine whether it is a vehicle control command; when the voice command is a vehicle control command, it is determined whether vehicle component information exists in the vehicle control command; when no vehicle component information exists in the vehicle control command, it is determined whether a preset reference word exists in the vehicle control command; and when a preset reference word exists in the vehicle control command, the voice command is determined to be a target command. Whether a voice command is a target command can thus be determined rapidly according to specific recognition rules; if the voice command is not a target command, no subsequent gesture recognition operation is needed, which improves the response speed to the user's voice command.
To facilitate understanding of the implementation flow of the vehicle intelligent control method of the first embodiment, referring to fig. 7, fig. 7 provides a schematic flow diagram of the vehicle intelligent control method. Specifically, upon 01: voice wake-up, 02: audio data collection is performed, and 03: sound source localization is performed according to the collected audio data; after sound source localization succeeds, 13: the OMS is driven and 14: image data collection is performed through the OMS. Meanwhile, after sound source localization, 04: the instruction is received, 05: semantic parsing is performed, and 06: judgment logic 1 (whether the command is a vehicle control instruction) is entered; 07: if no, other voice functions are recognized; 08: if yes, judgment logic 2 (whether component information is absent from the vehicle control instruction and a reference word is present) is entered. 09: when no reference word is present in the vehicle control instruction, 10: the vehicle control instruction is issued, 11: the instruction is forwarded and verified, 12: the instruction is executed, 19: the result is voice-prompted, and 20: judgment logic 4 (whether the user requires continued operation) is entered; 21: if no, the interaction ends. When component information is absent from the vehicle control instruction and a reference word is present, judgment logic 3 (whether a valid gesture is recognized) is entered; if no valid gesture is recognized, a voice prompt is given and the flow returns to instruction reception.
It should be noted that the foregoing examples are only for understanding the present application, and do not constitute a limitation of the intelligent control method of the vehicle of the present application, and that many simple variations based on this technical concept are within the scope of the present application.
The present application also provides a vehicle intelligent control device, referring to fig. 8, the vehicle intelligent control device includes:
The determining module 10 is configured to determine, by using the voice collecting module, voice information in the vehicle, and determine a sound source position and a voice command according to the voice information.
And the acquisition module 20 is used for acquiring the image data in the vehicle through the image acquisition module based on the sound source position when the voice command is a target command.
And the recognition module 30 is used for performing target gesture recognition by using a preset recognition function based on the in-vehicle image data to obtain a recognition result.
And the control module 40 is configured to control the vehicle according to the target gesture, the voice command, and the sound source position when the recognition result is that the target gesture is recognized.
The vehicle intelligent control device provided by the application adopts the vehicle intelligent control method in the above embodiment, and can therefore solve the technical problem of high cost caused by the fact that existing vehicle control needs to be combined with a TOF camera. Compared with the prior art, the beneficial effects of the vehicle intelligent control device provided by the application are the same as those of the vehicle intelligent control method provided by the above embodiment, and other technical features of the vehicle intelligent control device are the same as those disclosed by the method of the above embodiment, so the description is omitted here.
In an embodiment, the control module 40 is further configured to determine a gesture direction according to the target gesture when the recognition result is that the target gesture is recognized, determine a target component according to the gesture direction and the sound source position, and control the target component according to the voice command.
In one embodiment, the control module 40 is further configured to determine a keyword according to the voice command, obtain a status of a vehicle component, determine a target action on the target component according to the keyword and the status of the vehicle component, and control the vehicle according to the target action.
In an embodiment, the recognition module 30 is further configured to obtain historical hand data, perform image learning according to the historical hand data, construct a two-dimensional hand model, convert the two-dimensional hand model into a first function, obtain historical gesture posture data according to the first function and the historical hand data, perform finger pointing gesture learning according to the historical gesture posture data, construct a two-dimensional gesture recognition model, convert the two-dimensional gesture recognition model into a second function, obtain historical finger pointing gesture data according to the second function and the historical gesture posture data, perform pointing direction learning according to the historical finger pointing gesture data, construct a two-dimensional pointing model, convert the two-dimensional pointing model into a third function, and obtain a preset recognition function according to the third function.
In an embodiment, the collecting module 20 is further configured to parse the voice command to determine whether the voice command is a vehicle control command, determine whether vehicle component information exists in the vehicle control command when the voice command is the vehicle control command, determine whether a preset reference word exists in the vehicle control command when the vehicle component information does not exist in the vehicle control command, and determine that the voice command is a target command when the preset reference word exists in the vehicle control command.
In an embodiment, the recognition module 30 is further configured to generate a voice prompt message when the recognition result is that the target gesture is not recognized, prompt the voice prompt message through the voice acquisition module, and return the voice message in the vehicle responded through the voice acquisition module, and determine the sound source position and the voice command according to the voice message.
In one embodiment, the voice acquisition module comprises a microphone array, and the microphone array comprises a plurality of microphones; the determining module 10 is further configured to obtain the time difference between the microphones in the microphone array according to a sound source localization strategy and the voice information, calculate the angle and distance of the sound source relative to the microphone array according to the time difference, and determine the sound source position according to the angle and the distance.
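The time-difference calculation the determining module relies on can be illustrated, for a single microphone pair under a far-field assumption, as follows; the microphone spacing and speed of sound are assumed values, and a production system would combine several pairs to obtain both angle and distance:

```python
# Far-field angle-of-arrival from the time difference between two microphones:
# the path difference c*dt equals d*sin(theta). Spacing and speed are assumed.

import math

SPEED_OF_SOUND = 343.0   # m/s at roughly 20 degrees C (assumed)
MIC_SPACING = 0.10       # m between the two microphones (assumed)

def arrival_angle(time_difference_s):
    """Angle of the sound source from the array broadside, in degrees."""
    ratio = SPEED_OF_SOUND * time_difference_s / MIC_SPACING
    ratio = max(-1.0, min(1.0, ratio))   # clamp against measurement noise
    return math.degrees(math.asin(ratio))

print(round(arrival_angle(0.0), 1))      # zero delay puts the source broadside
```

With angles from two or more non-collinear pairs, the position can then be triangulated, which is how an angle-plus-distance estimate of the kind described above becomes a seat-level sound source position.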
The application provides a vehicle which comprises at least one processor and a memory in communication connection with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the intelligent vehicle control method in the first embodiment.
Referring now to fig. 9, a schematic diagram of a vehicle suitable for use in implementing an embodiment of the present application is shown. The vehicle in the embodiment of the present application may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (Personal Digital Assistant), a PAD (tablet), a PMP (Portable Media Player), or an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and a stationary terminal such as a digital TV or a desktop computer. The vehicle illustrated in fig. 9 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present application.
As shown in fig. 9, the vehicle may include a processing device 1001 (e.g., a central processing unit or a graphics processor) that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage device 1003 into a random access memory (RAM) 1004. Various programs and data required for vehicle operation are also stored in the RAM 1004. The processing device 1001, the ROM 1002, and the RAM 1004 are connected to each other by a bus 1005. An input/output (I/O) interface 1006 is also connected to the bus. In general, the following devices may be connected to the I/O interface 1006: an input device 1007 such as a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, or gyroscope; an output device 1008 including a liquid crystal display (LCD), a speaker, a vibrator, and the like; a storage device 1003 including a magnetic tape, a hard disk, and the like; and a communication device 1009. The communication device 1009 may allow the vehicle to communicate with other devices wirelessly or by wire to exchange data. While a vehicle having various systems is illustrated in the figures, it should be understood that not all illustrated systems are required to be implemented or provided; more or fewer systems may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through a communication device, or installed from the storage device 1003, or installed from the ROM 1002. The above-described functions defined in the method of the disclosed embodiment of the application are performed when the computer program is executed by the processing device 1001.
The vehicle provided by the application adopts the vehicle intelligent control method in the embodiment, and can solve the technical problem of high cost caused by the fact that the existing vehicle control needs to be combined with a TOF camera. Compared with the prior art, the beneficial effects of the vehicle provided by the application are the same as those of the intelligent control method of the vehicle provided by the embodiment, and other technical features of the vehicle are the same as those disclosed by the method of the embodiment, and are not repeated here.
It is to be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
The present application provides a computer-readable storage medium having computer-readable program instructions (i.e., a computer program) stored thereon for performing the vehicle intelligent control method in the above-described embodiments.
The computer-readable storage medium provided by the present application may be, for example, a USB flash drive, but is not limited thereto, and may be any electric, magnetic, optical, electromagnetic, infrared, or semiconductor system or device, or any combination of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to electrical wiring, fiber optic cable, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable storage medium may be included in the vehicle or may exist alone without being incorporated in the vehicle.
The computer readable storage medium is loaded with one or more programs, when the one or more programs are executed by a vehicle, the vehicle is enabled to determine voice information in the vehicle through a voice acquisition module and determine a sound source position and a voice command according to the voice information, when the voice command is a target command, image data in the vehicle is acquired through an image acquisition module based on the sound source position, target gesture recognition is conducted through a preset recognition function based on the image data in the vehicle to obtain a recognition result, and when the recognition result is that the target gesture is recognized, the vehicle is controlled according to the target gesture, the voice command and the sound source position.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present application may be implemented in software or in hardware. In some cases, the name of a module does not constitute a limitation on the module itself.
The readable storage medium provided by the application is a computer readable storage medium, and the computer readable storage medium stores computer readable program instructions (namely computer programs) for executing the intelligent control method of the vehicle, so that the technical problem of high cost caused by the fact that the existing vehicle control needs to be combined with a TOF camera can be solved. Compared with the prior art, the beneficial effects of the computer readable storage medium provided by the application are the same as those of the intelligent control method for the vehicle provided by the embodiment, and are not repeated here.
The application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of a vehicle intelligent control method as described above.
The computer program product provided by the application can solve the technical problem that the cost is high because the existing vehicle control needs to be combined with the TOF camera. Compared with the prior art, the beneficial effects of the computer program product provided by the application are the same as those of the intelligent control method for the vehicle provided by the embodiment, and are not repeated here.
The foregoing description is only a partial embodiment of the present application, and is not intended to limit the scope of the present application, and all the equivalent structural changes made by the description and the accompanying drawings under the technical concept of the present application, or the direct/indirect application in other related technical fields are included in the scope of the present application.

Claims (10)

1. The intelligent control method for the vehicle is characterized in that the intelligent control method for the vehicle is applied to the vehicle, a voice acquisition module and an image acquisition module are arranged on the vehicle, and the method comprises the following steps:
Determining voice information in the vehicle through the voice acquisition module, and determining a sound source position and a voice instruction according to the voice information;
When the voice instruction is a target instruction, acquiring in-vehicle image data acquired by the image acquisition module based on the sound source position;
Performing target gesture recognition by using a preset recognition function based on the in-vehicle image data to obtain a recognition result;
and when the recognition result is that the target gesture is recognized, controlling the vehicle according to the target gesture, the voice command and the sound source position.
2. The method of claim 1, wherein the step of controlling the vehicle according to the target gesture, the voice command, and the sound source position when the recognition result is that the target gesture is recognized comprises:
When the recognition result is that a target gesture is recognized, determining a gesture direction according to the target gesture;
Determining a target component according to the gesture direction and the sound source position;
and controlling the target component according to the voice command.
3. The method of claim 2, wherein the step of controlling the target component in accordance with the voice command comprises:
Determining key instruction words according to the voice instructions;
acquiring a vehicle component state;
determining a target action on the target component according to the key command words and the vehicle component state;
And controlling the vehicle according to the target action.
4. The method of claim 1, wherein the step of performing target gesture recognition using a preset recognition function based on the in-vehicle image data, before the step of obtaining a recognition result, further comprises:
Acquiring historical hand data, performing image learning through the historical hand data, constructing a two-dimensional hand model, and converting the two-dimensional hand model into a first function;
Obtaining historical gesture posture data according to the first function and the historical hand data, and carrying out finger pointing gesture learning according to the historical gesture posture data to construct a two-dimensional gesture recognition model, and converting the two-dimensional gesture recognition model into a second function;
obtaining historical finger pointing gesture data according to the second function and the historical gesture posture data, learning a pointing direction according to the historical finger pointing gesture data, constructing a two-dimensional pointing model, and converting the two-dimensional pointing model into a third function;
and obtaining the preset recognition function through the third function.
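The three-stage pipeline of claim 4 can be visualized as a chain of functions: image, then hand keypoints, then gesture class, then pointing direction. The stubs below are stand-ins for models trained on historical data; only the chaining is meaningful, every heuristic inside them is an assumption:

```python
def first_function(image):
    """2-D hand model stand-in: extract hand keypoints from an in-vehicle image."""
    return image.get("hand_keypoints", [])

def second_function(keypoints):
    """2-D gesture recognition model stand-in: is this a finger-pointing gesture?"""
    return "pointing" if len(keypoints) >= 2 else "none"

def third_function(keypoints):
    """2-D pointing model stand-in: direction from the first to the last keypoint."""
    (x0, _), (x1, _) = keypoints[0], keypoints[-1]
    return "left" if x1 < x0 else "right"

def preset_recognition_function(image):
    """Chain the three learned functions: image -> hand -> gesture -> direction."""
    keypoints = first_function(image)
    if second_function(keypoints) != "pointing":
        return None  # recognition result: target gesture not recognized
    return third_function(keypoints)
```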
5. The method of claim 1, wherein before the step of acquiring the in-vehicle image data acquired by the image acquisition module based on the sound source position when the voice command is a target command, the method further comprises:
analyzing the voice command to determine whether the voice command is a vehicle control command;
when the voice command is the vehicle control command, determining whether vehicle component information exists in the vehicle control command;
when the vehicle component information does not exist in the vehicle control command, determining whether a preset reference word exists in the vehicle control command;
and when the preset reference word exists in the vehicle control command, determining the voice command as a target command.
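The filtering cascade of claim 5 decides when a gesture is needed to disambiguate an utterance. The word lists below are assumed examples; a production system would use a natural-language-understanding model rather than keyword sets:

```python
# Assumed vocabularies, for illustration only.
CONTROL_VERBS = {"open", "close", "raise", "lower"}
COMPONENT_WORDS = {"window", "sunroof", "door", "seat"}
PRESET_REFERENCE_WORDS = {"this", "that", "here"}

def is_target_command(text):
    """True when the utterance is a vehicle control command that names no
    component but contains a reference word, so a gesture must disambiguate."""
    words = set(text.lower().split())
    if not words & CONTROL_VERBS:
        return False  # not a vehicle control command
    if words & COMPONENT_WORDS:
        return False  # component named explicitly; no gesture needed
    return bool(words & PRESET_REFERENCE_WORDS)
```

For example, "open this" passes all three checks, while "open the window" is handled by voice alone.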
6. The method according to claim 1, wherein after the step of performing target gesture recognition using a preset recognition function according to the in-vehicle image data to obtain a recognition result, the method further comprises:
generating voice prompt information when the recognition result is that the target gesture is not recognized;
outputting the voice prompt information, and returning to the step of determining the voice information in the vehicle through the voice acquisition module and determining the sound source position and the voice command according to the voice information.
7. The method of any one of claims 1 to 6, wherein the speech acquisition module comprises a microphone array comprising a plurality of microphones;
the step of determining the sound source position from the speech information comprises:
obtaining the time differences of arrival between the microphones in the microphone array according to a sound source positioning strategy and the voice information;
calculating the angle and distance between the sound source and the microphone array according to the time differences;
and determining the sound source position through the angle and the distance.
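The angle computation in claim 7 is commonly based on time difference of arrival (TDOA). A far-field sketch for a single microphone pair is below; estimating distance as well, as the claim requires, would need additional microphone pairs and triangulation, which is omitted here:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def tdoa_angle_deg(delta_t, mic_spacing):
    """Far-field direction of arrival for one microphone pair:
    theta = asin(c * dt / d), measured from broadside, in degrees."""
    ratio = SPEED_OF_SOUND * delta_t / mic_spacing
    ratio = max(-1.0, min(1.0, ratio))  # clamp against measurement noise
    return math.degrees(math.asin(ratio))
```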
8. An intelligent control device for a vehicle, the device comprising:
the determining module is used for determining the voice information in the vehicle through the voice acquisition module and determining the sound source position and the voice command according to the voice information;
the acquisition module is used for acquiring the in-vehicle image data through the image acquisition module based on the sound source position when the voice command is a target command;
the recognition module is used for performing target gesture recognition by using a preset recognition function based on the in-vehicle image data to obtain a recognition result;
and the control module is used for controlling the vehicle according to the target gesture, the voice command and the sound source position when the recognition result is that the target gesture is recognized.
9. A vehicle, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the vehicle intelligent control method according to any one of claims 1 to 7.
10. A storage medium, characterized in that the storage medium is a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the vehicle intelligent control method according to any one of claims 1 to 7.
CN202411740980.9A 2024-11-29 2024-11-29 Vehicle intelligent control method, device, vehicle and storage medium Pending CN119559944A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411740980.9A CN119559944A (en) 2024-11-29 2024-11-29 Vehicle intelligent control method, device, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN119559944A true CN119559944A (en) 2025-03-04

Family

ID=94744841

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination