WO2020241906A1 - Method for controlling a device using voice recognition and device implementing the same - Google Patents
Method for controlling a device using voice recognition and device implementing the same
- Publication number
- WO2020241906A1 (PCT/KR2019/006252)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- voice recognition
- recognition device
- voice
- control message
- location information
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/32—Multiple recognisers used in sequence or in parallel; Score combination systems therefor, e.g. voting systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L12/2816—Controlling appliance services of a home automation network by calling their functionalities
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Definitions
- the present invention relates to a method for controlling a device through voice recognition based on a user's location among a plurality of devices, and a technology for implementing the same.
- a method of controlling a device is presented in which a user's command utterance position is identified through collaboration of a plurality of home appliances arranged in a space.
- a configuration in which a plurality of voice recognition devices are disposed so that user location information can be determined based on an input voice, and a method for implementing the same, are presented.
- the present specification proposes a method of controlling devices by calculating the user's location when a command is uttered.
- An apparatus for controlling another device using voice recognition includes a voice input unit and a control unit that analyzes a voice command input by the voice input unit to identify a second voice recognition device to execute the command, and generates a control message to be transmitted to the second voice recognition device.
- the apparatus for controlling another device by using voice recognition includes location information of the first voice recognition device in the control message.
- An apparatus for controlling another device using voice recognition includes a voice file received by a voice input unit in the control message.
- An apparatus for controlling another device using voice recognition further includes a command database unit in which commands corresponding to functions of a plurality of voice recognition devices are stored.
- An apparatus operating using voice recognition includes a communication unit receiving a control message from a first voice recognition device and a control unit controlling the function providing unit according to the control message.
- the control unit of the device determines that the user is located near the first voice recognition device, and controls the function providing unit so that the function indicated by the control message is performed.
- a method of controlling a device using voice recognition involves a first voice recognition device and a third voice recognition device receiving a voice input, and a second voice recognition device providing a function corresponding to the received voice command.
- the first voice recognition device transmits a first control message corresponding to the voice command input by its voice input unit.
- the third voice recognition device transmits a second control message corresponding to the voice command received by its voice input unit. The communication unit of the second voice recognition device receives the first control message and the second control message, and the control unit of the second voice recognition device provides the function indicated by the first control message and the second control message based on location information of the first voice recognition device and the third voice recognition device.
- a plurality of voice recognition devices are arranged so that the devices can determine user location information based on the input voice.
- each device may operate by calculating the user's location when a command is uttered.
- the robot cleaner may provide map information to accurately determine the user's location.
- FIG. 1 shows a configuration of a speech recognition apparatus according to an embodiment of the present invention.
- FIG. 2 shows a configuration in which speech recognition devices are arranged according to an embodiment of the present invention.
- FIG. 3 is a view showing a process of controlling a device based on a user's location using location information held by a robot cleaner according to an embodiment of the present invention.
- FIG. 4 is a diagram illustrating a process of processing a voice command input by a talker by a plurality of voice recognition apparatuses according to an embodiment of the present invention.
- FIG. 5 shows a process in which each device outputs an operation sound to share its location according to an embodiment of the present invention.
- FIG. 6 shows an operation process of an air conditioner or air purifier according to an embodiment of the present invention.
- FIG. 7 shows an operation process of a TV according to an embodiment of the present invention.
- in describing components, terms such as first, second, A, B, (a), and (b) may be used. These terms serve only to distinguish one component from another, and the nature, order, sequence, or number of the components is not limited by these terms.
- when a component is described as being "connected", "coupled", or "linked" to another component, the component may be directly connected or linked to that other component, but it is to be understood that another component may be "interposed" between them, or that the components may be "connected", "coupled", or "linked" through another component.
- components may be subdivided and described for convenience of description, but these components may be implemented in one device or module, or one component may be implemented divided into a plurality of devices or modules.
- devices: household appliances and the like that are arranged in a specific space and perform a predetermined function
- speech recognition devices: devices among these that perform speech recognition
- two or more devices disposed in a specific space may transmit and receive control messages using a communication function.
- the user may operate the devices by speaking a command set in the device with a voice.
- the command indicates the operation of the device and may be configured in various ways according to the classification of the device.
- the command may control the on/off of the device or instruct to perform a specific function configured in the device. This can be configured in various ways for each device.
- the command can be subdivided into a start command and a control command.
- the control command is a command that controls the operation of the device.
- the control command corresponds to a command that controls the function of a device for each device, such as "turn on", "turn off", and "weak wind".
- A start command informs the device that a control command will follow, so that the device can receive the control command.
- the start command may be the device's categorical name ("TV", "Radio", "Refrigerator"), the device's brand ("Huissen", "Trom"), or an interjection or conversational word ("Hey", "Here").
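The subdivision above can be sketched as a simple wake-word splitter; a minimal illustration only, with an assumed word list that is not taken from the patent.

```python
# Hypothetical sketch: split an utterance into a start command (wake word)
# and a control command. The wake-word list is illustrative.
START_COMMANDS = {"hi, lg", "tv", "radio", "refrigerator", "hey", "here"}

def split_command(utterance: str):
    """Return (start_command, control_command); start is None if absent."""
    text = utterance.lower().strip()
    # Try longer wake words first so "hi, lg" wins over shorter prefixes.
    for wake in sorted(START_COMMANDS, key=len, reverse=True):
        if text.startswith(wake):
            return wake, text[len(wake):].lstrip(" ,")
    return None, text

print(split_command("Hi, LG, turn on the air conditioner"))
```

The device would then match only the control-command part against its command database.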
- a number of devices may be disposed in the user's living space. Accordingly, in this specification, microphone arrays mounted on a plurality of voice recognition devices are used: when a user utters a command, the user's location among the voice recognition devices is identified and a nearby device is activated, so that home appliances can be automatically controlled using the location information of the user and the devices.
- the voice recognition apparatus may directly operate the voice recognition apparatus by receiving a command uttered by the user, or may instruct the target device to operate by checking a target device included in the command.
- the voice input unit 110 receives a voice.
- the function providing unit 130 provides a predetermined function. For example, when the voice recognition device is a refrigerator, the function providing unit 130 provides the refrigerating/freezing functions of the refrigerating and freezing chambers. When the voice recognition device is an air conditioner, the function providing unit 130 discharges air and controls the amount and direction of the discharged air. When the voice recognition device is an air purifier, the function providing unit 130 provides a function of inhaling air and a function of purifying the inhaled air.
- the communication unit 180 may communicate with another device or an external cloud server.
- the cloud server may provide location information of devices or may convert input voice commands into text.
- when the controller 150 cannot provide a voice recognition function by itself, the cloud server may provide a voice recognition function for an input voice command.
- the control unit 150 of the first voice recognition device analyzes the voice command received by the voice input unit 110 to identify the second voice recognition device that will perform the command. Then, the controller 150 of the first voice recognition device generates a control message to be transmitted to the second voice recognition device.
- the communication unit 180 of the first voice recognition device transmits the control message to the second voice recognition device, or to a plurality of devices including the second voice recognition device. When transmitting to multiple devices, it can be transmitted in a broadcasting method.
- the communication unit 180 of the second voice recognition device receives a control message from the first voice recognition device.
- the control unit 150 of the second voice recognition device controls the function providing unit 130 according to the control message.
- the control message includes location information of the first voice recognition device.
- the control message includes relative location information of the first voice recognition device generated based on location information of the second voice recognition device.
- the control unit 150 of the first voice recognition device includes the location information (or the above-described relative location information) of the first voice recognition device in the control message. That is, the control message is generated to include location information.
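A control message of this kind can be sketched as a small data structure; the field names and values below are assumptions for illustration, not taken from the patent.

```python
# Minimal sketch of a control message carrying the sender's location
# and input volume (the volume field anticipates FIG. 4). All field
# names are hypothetical.
from dataclasses import dataclass, asdict

@dataclass
class ControlMessage:
    sender_id: str          # first voice recognition device
    target_id: str          # second voice recognition device
    command: str            # control command extracted from the utterance
    sender_location: tuple  # (x, y) position of the sender on a shared map
    input_volume: float     # loudness of the voice the sender received

msg = ControlMessage("refrigerator", "air_conditioner",
                     "power_on", (2.0, 5.5), 5.0)
print(asdict(msg))
```

The receiver can then extract `sender_location` directly instead of querying a server.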
- the controller 150 of the second voice recognition device may extract location information (or the above-described relative location information) of the first voice recognition device from the received control message.
- when the control unit 150 of the first voice recognition device does not separately include the location information (or the above-described relative location information) of the first voice recognition device in the control message, the control unit 150 of the second voice recognition device may receive the location information of the first voice recognition device, or map information including that location information, from a server or an external device.
- the command database unit 170 stores commands corresponding to functions of a plurality of voice recognition devices. In addition to the command corresponding to the function of the voice recognition device, commands corresponding to the functions of other devices may be further stored. In this case, the voice recognition device can check which device the input command targets.
- the controller 150 of the first voice recognition device may generate a control message by extracting identification information and control information of the second voice recognition device from a voice command input using the command database unit 170.
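The lookup in the command database unit 170 can be sketched as a table from recognized commands to a target device and its control information; the table entries below are illustrative assumptions.

```python
# Hypothetical sketch of the command database: map a recognized control
# command to (target device identification, control information).
COMMAND_DB = {
    "turn on the air conditioner": ("air_conditioner", "power_on"),
    "turn off the air conditioner": ("air_conditioner", "power_off"),
    "weak wind": ("air_conditioner", "fan_low"),
    "turn on the tv": ("tv", "power_on"),
}

def resolve(command: str):
    """Return (target device id, control info), or None if unknown."""
    return COMMAND_DB.get(command.lower().strip())

print(resolve("Turn on the air conditioner"))
```

A miss (None) would correspond to a command the device cannot route to any known appliance.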
- when the input command is a command for the device itself, the voice recognition device provides the function corresponding to the command. In this case, the location of the user who uttered the command may be confirmed through the voice recognition device or another device.
- when the input command targets another device, the voice recognition device may transmit the command to the device that is to perform it.
- the interface unit 190 outputs information such as sound, text, or graphics to the outside or receives a control signal from an external remote controller.
- FIG. 2 shows a configuration in which speech recognition devices are arranged according to an embodiment of the present invention.
- the space is divided into a wall and a door, and the refrigerator 100a is arranged in the kitchen.
- An air purifier 100b, an AI speaker 100c, an air conditioner 100d, and a robot cleaner 100e are placed in the living room.
- the user (i.e., the talker) 1 stands near the refrigerator. All of 100a to 100e are voice recognition devices that perform a voice recognition function.
- the talker 1 inputs a command saying "Hi, LG, turn on the air conditioner". If the command is subdivided, "Hi, LG" is a start command and "turn on the air conditioner" is a control command.
- the command of the talker 1 is input to the voice input unit 110 of the refrigerator 100a.
- the refrigerator 100a receives a command by voice, and the refrigerator 100a analyzes the input command to perform voice recognition.
- the refrigerator 100a confirms that the talker 1 has commanded the operation of the air conditioner.
- the refrigerator 100a transmits a control message indicating that the operation of the air conditioner has been instructed to one or more of the air conditioner 100d or other voice recognition devices 100b, 100c, and 100e.
- the control message includes information that the talker 1 is near the refrigerator 100a.
- the refrigerator 100a may directly transmit a control message to the air conditioner 100d.
- the refrigerator 100a may transmit the control message in a broadcasting method, and all voice recognition devices in the space may receive it.
- the refrigerator 100a may transmit the input command to the AI speaker 100c.
- the air conditioner 100d may store the position of the refrigerator in advance, or may receive the position of the refrigerator through the AI speaker 100c or the robot cleaner 100e. For example, while the robot cleaner 100e moves and cleans the space, each home appliance or the arrangement state of the space may be stored.
- a map reflecting the locations of devices may be generated.
- the location of the refrigerator may be stored in the map.
- for example, a user may in the past have input a command to the robot cleaner to "clean the refrigerator area", placing the robot cleaner near the refrigerator 100a.
- a specific point may be set as the refrigerator 100a on the map generated by the robot cleaner from the user or from the outside.
- since the robot cleaner 100e secures spatial information based on the location information of objects in the previously scanned space, the robot cleaner 100e can hold the location of the refrigerator 100a and the location of the air conditioner 100d.
- the robot cleaner 100e may provide to the air conditioner 100d the location information of the refrigerator 100a, or the position of the refrigerator 100a relative to the air conditioner 100d.
- the air conditioner 100d may check the user's location information using the location information of the refrigerator 100a and control the wind direction and the amount of air to send air to the user.
- in FIG. 3, the system consists of a first voice recognition device 100i that receives the voice input (such as the refrigerator 100a), the robot cleaner 100e, and a second voice recognition device 100w that performs the function (such as the air conditioner 100d).
- the first voice recognition device 100i receives from the talker 1 a voice command instructing it to control the second voice recognition device 100w (S11). Then, the input command is converted to generate a control message (S12). The first voice recognition device 100i transmits the control message to other devices (S13). In this case, the control message may be transmitted in a broadcasting method or 1:1 to the second voice recognition device 100w.
- the first voice recognition apparatus 100i or the second voice recognition apparatus 100w may receive location information of the first voice recognition apparatus 100i in advance from the robot cleaner 100e (S10a, S10b).
- the control message may include location information of the first voice recognition device 100i.
- the robot cleaner 100e may check the transmission state of S13 and transmit the location of the first voice recognition device 100i to the second voice recognition device 100w (S10c).
- the cloud server 300 may transmit location information of each voice recognition device to the voice recognition devices.
- the second voice recognition device 100w operates based on the control message and the location of the first voice recognition device (S14). For example, when the control message generated from the command "Turn on the air conditioner" is transmitted from the first voice recognition device 100i, the second voice recognition device 100w sends air toward the first voice recognition device 100i.
- the cloud server 300 may store location information of each voice recognition device and provide it to the voice recognition devices again. In this case, the cloud server 300 may provide the positions of other voice recognition devices as relative location information based on each voice recognition device. Alternatively, the cloud server 300 may transmit map information including location information of each voice recognition device, and the communication unit 180 of the voice recognition devices may receive it.
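The relative-location service described above can be sketched as follows; the device names and coordinates are illustrative assumptions, not values from the patent.

```python
# Sketch: given a map of absolute device positions, report every other
# device's position relative to a requesting base device, as the cloud
# server 300 is described as doing. All positions are hypothetical.
DEVICE_MAP = {
    "refrigerator": (2.0, 5.5),
    "air_conditioner": (7.0, 3.0),
    "robot_cleaner": (4.0, 1.0),
}

def relative_locations(base_device: str):
    """Positions of all other devices relative to `base_device`."""
    bx, by = DEVICE_MAP[base_device]
    return {name: (x - bx, y - by)
            for name, (x, y) in DEVICE_MAP.items() if name != base_device}

print(relative_locations("air_conditioner"))
```

Each device then needs no global frame of its own; it interprets all positions in its local coordinates.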
- FIG. 4 is a diagram illustrating a process of processing a voice command input by a talker by a plurality of voice recognition devices according to an embodiment of the present invention.
- the volume of the command input to the refrigerator 100a disposed closest to the talker is greater than the volume of the command input to the air purifier 100b disposed farther away.
- the size of the input voice is included in the control message.
- the first voice recognition apparatus 100i and the third voice recognition apparatus 100j receive voice commands (S21i and S21j). Each device converts the voice command to generate a control message. At this time, each of the devices 100i and 100j includes in its control message information on the loudness of the voice it received (S22i and S22j).
- Each of the devices 100i and 100j transmits a control message to the second voice recognition device 100w (S23i and S23j).
- the second voice recognition apparatus 100w operates based on the control message and the volume of the voice included in the control message (S24).
- for example, when the volume of the sound received by the first voice recognition device 100i is the largest (say, 5), the second voice recognition device 100w determines that the user uttered near the first voice recognition device 100i.
- the second voice recognition device 100w may set the first voice recognition device 100i as the user's location and perform a predetermined function included in the control message.
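Step S24 can be sketched as choosing, among the received control messages, the sender that heard the command loudest; the message tuples and volumes below are illustrative.

```python
# Sketch of S24: treat the location of the sender with the highest
# input volume as the user's location. Values are hypothetical.
def locate_user(messages):
    """messages: list of (sender_id, sender_location, input_volume)."""
    sender_id, location, _ = max(messages, key=lambda m: m[2])
    return sender_id, location

msgs = [("refrigerator", (2.0, 5.5), 5.0),   # first device, loudest
        ("air_purifier", (6.0, 1.0), 2.0)]   # third device, quieter
print(locate_user(msgs))
```

With more than two messages the same rule applies; ties would need the interpolation described later for FIG. 4.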
- the second voice recognition device 100w may receive and store location information of each device through processes S10a, S10b, and S10c of FIG. 3.
- the second voice recognition device 100w may receive and store location information of each device from the AI speaker 100c or a cloud server disposed outside.
- the control message includes location information of the first and third voice recognition devices.
- the control message includes relative location information of the first and third voice recognition devices generated based on the location information of the second voice recognition device.
- the second voice recognition apparatus 100w may receive location information in advance from the robot cleaner 100e.
- the control message in FIGS. 3 and 4 may include a voice file received by the voice input unit of the first voice recognition device 100i or the third voice recognition device 100j.
- the stored voice file may be generated by the first voice recognition device 100i or the third voice recognition device 100j and included in the control message.
- the first voice recognition apparatus 100i and the third voice recognition apparatus 100j may each transmit a control message to the second voice recognition apparatus 100w.
- the first voice recognition apparatus 100i transmits a first control message corresponding to a voice command input by the voice input unit 110 of the first voice recognition apparatus 100i (S23i).
- the third voice recognition apparatus 100j transmits a second control message corresponding to the voice command received by the voice input unit 110 of the third voice recognition apparatus 100j (S23j).
- the communication unit 180 of the second voice recognition apparatus 100w receives the first control message and the second control message. Then, the control unit 150 of the second voice recognition device 100w provides the function indicated by the first control message and the second control message based on the location information of the first voice recognition device 100i and the third voice recognition device 100j (S24).
- the control unit 150 of the second voice recognition device 100w can extract the location information of the first voice recognition device 100i from the first control message, and the location information of the third voice recognition device 100j from the second control message.
- the user's location can be identified more accurately by using the two pieces of location information. In particular, when the volume information of the voice included in the control messages is compared, the user's location can be identified accurately.
- the refrigerator is the first voice recognition device 100i
- the air purifier is the third voice recognition device 100j
- the voices received by the two devices are at the same level.
- the control unit 150 of the air conditioner determines that the talker is located between the air purifier and the refrigerator, and can control the function providing unit 130 to direct the wind toward the middle point between the two devices.
- FIG. 5 shows a process in which each device outputs an operation sound to share its location according to an embodiment of the present invention.
- the first voice recognition device 100i, the second voice recognition device 100w, and the third voice recognition device 100j respectively broadcast identification information of the device and output operation sounds (S31, S32, S33). Each device stores identification information of other devices and direction or distance information for the input sound.
- when the voice input unit 110 installed in the second voice recognition device 100w is a directional microphone, it can be confirmed that the first voice recognition device 100i is disposed in the left direction.
- when voice input units 110 are disposed on both the left and right sides of the second voice recognition device 100w, the second voice recognition device 100w may determine the location of the first voice recognition device 100i by using the difference in the loudness of the sound input to the two voice input units 110.
- the distance can be calculated according to the size of the input operation sound.
- the first voice recognition device 100i may also include the volume of its operation sound when notifying other devices.
- the control unit 150 of each device compares the original sound level of the operation sound output by the other device with the level of the operation sound input to its voice input unit 110. Using this, the control unit 150 may calculate the distance to the other device that output the operation sound; the received loudness decreases as the distance increases.
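This distance estimate can be sketched as below. The patent states only the inverse relationship between distance and received loudness; the 1/r free-field decay and the reference distance used here are assumptions for illustration.

```python
# Sketch: infer distance from the ratio of the advertised source level
# to the measured level, assuming received level falls off as 1/r
# (a hypothetical decay model; the patent does not specify one).
def estimate_distance(source_level: float, received_level: float,
                      ref_distance: float = 1.0) -> float:
    """Distance at which a source measured at `ref_distance` with
    `source_level` would be heard at `received_level`."""
    return ref_distance * source_level / received_level

print(estimate_distance(source_level=8.0, received_level=2.0))  # → 4.0
```

In practice the decay model would need calibration per room, since walls and furniture attenuate sound unevenly.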
- each device may store information on the locations or distances of other devices. For example, devices may check the locations or distances of other devices using the map generated by the robot cleaner 100e, or using operation sounds.
- when the first voice recognition device 100i receives a voice command from the user, it can transmit the voice command to the second voice recognition device 100w, which is to actually operate, to instruct the operation.
- the second voice recognition apparatus 100w performs a function by reflecting information that the user who inputs the voice command was in the vicinity of the first voice recognition apparatus 100i.
- the location information of the user is important.
- when the voice input unit 110 is a single microphone array, the direction of the user can be determined using the map generated by the robot cleaner 100e, and the voice recognition devices can make an accurate position determination based on this.
- the robot cleaner 100e stores the corresponding point as a point at which the user's voice was input.
- the voice recognition apparatus may estimate the location of the talker by using the voice input unit 110 of various devices providing voice services, for example, a microphone.
- the voice recognition apparatus accurately checks the location information of the user by combining the location information of each home appliance and the location of the talker, and performs an operation based on the location of the user. Therefore, it is possible to increase the convenience of use of the device and improve the performance, and automatic control of the devices becomes possible.
- the voice input unit 110, for example a microphone array, mounted on voice recognition devices disposed at various locations enables each voice recognition device to acquire location information of the user.
- the voice input unit 110 disposed in the voice recognition device is a directional microphone, direction information of a voice input by each voice recognition device may be included in the control message.
- the voice recognition device receiving the control message may calculate the location information of the user by collecting direction information.
- each device outputs an operation sound (notification sound) of the product at a certain point in time, and other devices that receive the input may calculate the location of the device.
- each device can calculate its own location information and location information of another device by sharing map information that the robot cleaner learns and generates during the driving process.
- the devices can operate with directionality when the above-described embodiment is applied.
- the voice recognition device may automatically adjust the volume of the feedback voice comment according to the distance from the user or the location of the user.
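The automatic feedback-volume adjustment mentioned above can be sketched as a simple distance-to-volume mapping; the thresholds and volume steps are illustrative assumptions, not values from the patent.

```python
# Sketch: pick a volume step for spoken feedback based on the user's
# estimated distance. Thresholds and levels are hypothetical.
def feedback_volume(distance_m: float) -> int:
    """Return a volume step (1..10) for the feedback voice comment."""
    if distance_m < 1.0:
        return 3    # user is close: speak quietly
    if distance_m < 4.0:
        return 6    # mid-range
    return 10       # user is far: speak at full volume

print(feedback_volume(2.5))
```

A continuous mapping (e.g. proportional to distance) would work equally well; the step function just keeps the sketch readable.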
- FIG 6 shows an operation process of an air conditioner or air purifier according to an embodiment of the present invention.
- the first voice recognition device 100i receives a voice command (S41).
- the first voice recognition device 100i generates a control message according to the above-described process and transmits it to the second voice recognition device 100w (S42).
- the controller 150 of the second voice recognition device 100w determines that the user is located near the first voice recognition device 100i, and starts the operation targeting the location of the first voice recognition device 100i (S43). Accordingly, the control unit 150 of the second voice recognition device 100w controls the function providing unit 130 to provide the function indicated by the control message. For example, the control unit 150 of the second voice recognition device 100w may adjust the direction and intensity of the wind.
- when the interface unit 190 of the second voice recognition device 100w receives a direction change control signal within a certain time (for example, within 3 to 10 seconds), it is determined that the user has changed the direction of the wind. Then, the control unit 150 of the second voice recognition device 100w corrects the location information of the first voice recognition device 100i toward the changed direction (S44). This covers the situation in which the user adjusts the direction of the air conditioner/air purifier again after speaking.
- FIG. 7 shows an operation process of a TV according to an embodiment of the present invention.
- S41 and S42 are as described in FIG. 6.
- the control unit 150 of the second voice recognition device 100w calculates the distance to the first voice recognition device 100i (S53). When the distance between the two devices is less than a certain standard, the operation indicated by the control message is performed (S54).
- the TV placed in the living room calculates the distance to the refrigerator (the first voice recognition device) using the received control message or previously received location information. If the distance is determined to be within a certain distance, the TV placed in the living room turns on according to an instruction included in the control message.
- the TV arranged in bedroom 4 calculates the distance to the refrigerator (the first voice recognition device) using the received control message or previously received location information. If the distance is determined to be greater than the predetermined distance, the TV placed in bedroom 4 does not follow the instruction included in the control message.
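The distance-gated decision of S53–S54 can be sketched as a simple threshold test. The function name, coordinate representation, and the 5-meter threshold are illustrative assumptions, not values from the description.

```python
import math

def should_follow_instruction(own_location, source_location, max_distance=5.0):
    """Sketch of S53-S54: a TV follows the instruction in a control
    message only when the commanding device (e.g. the refrigerator)
    lies within a threshold distance of the TV."""
    distance = math.dist(own_location, source_location)
    return distance <= max_distance

# The living-room TV is close to the refrigerator, so it turns on;
# the TV in bedroom 4 is beyond the threshold and ignores the message.
living_room_tv = should_follow_instruction((3.0, 4.0), (0.0, 0.0))   # distance 5.0
bedroom4_tv = should_follow_instruction((10.0, 10.0), (0.0, 0.0))    # distance ~14.1
```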
- the present invention is not necessarily limited to these embodiments, and all of the constituent elements may also be selectively combined and operated within the scope of the present invention.
- each of the components may be implemented as independent hardware, or some or all of the components may be selectively combined and implemented as a computer program having a program module that performs some or all of the combined functions in one or more pieces of hardware. Codes and code segments constituting such a computer program may be easily inferred by those skilled in the art.
- such a computer program may be stored in a computer-readable storage medium and read and executed by a computer, thereby implementing an embodiment of the present invention.
- the storage medium of the computer program includes magnetic recording media, optical recording media, and storage media including semiconductor recording elements.
- the computer program implementing an embodiment of the present invention includes a program module that is transmitted in real time through an external device.
Landscapes
- Engineering & Computer Science (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Acoustics & Sound (AREA)
- Computational Linguistics (AREA)
- Theoretical Computer Science (AREA)
- Automation & Control Theory (AREA)
- Signal Processing (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Selective Calling Equipment (AREA)
- User Interface Of Digital Computer (AREA)
- Air Conditioning Control Device (AREA)
- Telephonic Communication Services (AREA)
Abstract
Description
Claims (16)
- A device for controlling another device using voice recognition, the device being a first voice recognition device that receives voice input, comprising: a voice input unit that receives voice; a function providing unit that provides a function; a controller that analyzes a voice command received by the voice input unit, identifies a second voice recognition device to perform the command, and generates a control message to be transmitted to the second voice recognition device; and a communication unit that transmits the control message to the second voice recognition device or to a plurality of devices including the second voice recognition device.
- The device for controlling another device using voice recognition according to claim 1, wherein the controller generates the control message including location information of the first voice recognition device.
- The device for controlling another device using voice recognition according to claim 2, wherein the controller generates relative location information of the first voice recognition device based on location information of the second voice recognition device, and the control message includes the relative location information.
- The device for controlling another device using voice recognition according to claim 1, wherein the control message includes a voice file received by the voice input unit.
- The device according to claim 1, operating using voice recognition, further comprising a command database unit in which commands corresponding to the functions of a plurality of voice recognition devices are stored, wherein the controller generates the control message by extracting identification information and control information of the second voice recognition device from the input voice command using the command database unit.
- A device that operates using voice recognition, the device being a second voice recognition device that operates upon receiving voice input, comprising: a voice input unit that receives voice; a function providing unit that provides a function; a communication unit that receives a control message from a first voice recognition device; and a controller that controls the function providing unit according to the control message.
- The device that operates using voice recognition according to claim 6, wherein the controller extracts location information of the first voice recognition device from the control message.
- The device that operates using voice recognition according to claim 6, wherein the communication unit receives, from a cloud server or an external device, location information of the first voice recognition device or map information including the location information.
- The device that operates using voice recognition according to claim 7, wherein the controller extracts, from the control message, relative location information of the first voice recognition device generated based on location information of the second voice recognition device.
- The device that operates using voice recognition according to claim 6, wherein the control message includes a voice file received by the voice input unit of the first voice recognition device.
- The device that operates using voice recognition according to claim 6, wherein, when the second voice recognition device is an air purifier or an air conditioner, the controller determines that a user is located near the first voice recognition device and controls the function providing unit to provide a function indicated by the control message.
- The device that operates using voice recognition according to claim 6, wherein, when the second voice recognition device is a TV, the controller determines whether to provide a function indicated by the control message according to the distance to the first voice recognition device.
- A method for controlling a device using voice recognition, in a system comprising a first voice recognition device and a third voice recognition device that receive voice input, and a second voice recognition device that provides a function corresponding to the received voice command, the method comprising: transmitting, by the first voice recognition device, a first control message corresponding to a voice command received by the voice input unit of the first voice recognition device; transmitting, by the third voice recognition device, a second control message corresponding to a voice command received by the voice input unit of the third voice recognition device; receiving, by the communication unit of the second voice recognition device, the first control message and the second control message; and providing, by the controller of the second voice recognition device, functions indicated by the first control message and the second control message based on location information of the first voice recognition device and the second voice recognition device.
- The method for controlling a device using voice recognition according to claim 13, wherein the controller of the second voice recognition device extracts location information of the first voice recognition device from the first control message and extracts location information of the third voice recognition device from the second control message.
- The method for controlling a device using voice recognition according to claim 13, wherein the communication unit receives, from a cloud server or an external device, location information of the first voice recognition device and the third voice recognition device or map information including the location information.
- The method for controlling a device using voice recognition according to claim 13, further comprising: outputting, by the interface unit of the first voice recognition device, a first operating sound; outputting, by the interface unit of the second voice recognition device, a second operating sound; and calculating, by the controller of the second voice recognition device, the distance between the first voice recognition device and the second voice recognition device using the loudness of the first operating sound.
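The loudness-based distance calculation recited in the last claim can be sketched under a simple free-field attenuation model (6 dB loss per doubling of distance). The attenuation model, reference level, and function name are assumptions for illustration; the claims do not specify how the distance is computed from the operating sound.

```python
def estimate_distance_m(measured_db, reference_db=60.0, reference_m=1.0):
    """Sketch of estimating the distance to another device from the
    loudness of its operating sound, assuming the sound is emitted at
    reference_db when measured at reference_m and attenuates with the
    inverse-distance law: r = r0 * 10 ** ((L0 - L) / 20)."""
    return reference_m * 10 ** ((reference_db - measured_db) / 20.0)
```

Under this model a reading 6 dB below the reference level corresponds to roughly twice the reference distance.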
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021569147A JP7335979B2 (ja) | 2019-05-24 | 2019-05-24 | 音声認識を用いて装置を制御する方法、及びこれを具現する装置 |
KR1020257016693A KR20250076689A (ko) | 2019-05-24 | 2019-05-24 | 음성인식을 이용하여 장치를 제어하는 방법 및 이를 구현하는 장치 |
PCT/KR2019/006252 WO2020241906A1 (ko) | 2019-05-24 | 2019-05-24 | 음성인식을 이용하여 장치를 제어하는 방법 및 이를 구현하는 장치 |
EP19930394.2A EP3979238B1 (en) | 2019-05-24 | 2019-05-24 | Method for controlling device by using voice recognition, and device implementing same |
US17/613,420 US20220254344A1 (en) | 2019-05-24 | 2019-05-24 | Method for controlling device using voice recognition and device implementing the same |
KR1020217028549A KR102813950B1 (ko) | 2019-05-24 | 2019-05-24 | 음성인식을 이용하여 장치를 제어하는 방법 및 이를 구현하는 장치 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/KR2019/006252 WO2020241906A1 (ko) | 2019-05-24 | 2019-05-24 | 음성인식을 이용하여 장치를 제어하는 방법 및 이를 구현하는 장치 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020241906A1 true WO2020241906A1 (ko) | 2020-12-03 |
Family
ID=73552795
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/006252 WO2020241906A1 (ko) | 2019-05-24 | 2019-05-24 | 음성인식을 이용하여 장치를 제어하는 방법 및 이를 구현하는 장치 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20220254344A1 (ko) |
EP (1) | EP3979238B1 (ko) |
JP (1) | JP7335979B2 (ko) |
KR (2) | KR102813950B1 (ko) |
WO (1) | WO2020241906A1 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114968166A (zh) * | 2021-02-26 | 2022-08-30 | 华为技术有限公司 | 语音交互的方法与电子设备 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12087283B2 (en) * | 2020-11-12 | 2024-09-10 | Samsung Electronics Co., Ltd. | Electronic apparatus and controlling method thereof |
KR20230050111A (ko) * | 2021-10-07 | 2023-04-14 | 삼성전자주식회사 | 전자 장치 및 그 제어 방법 |
KR102778750B1 (ko) * | 2021-12-13 | 2025-03-12 | 한국광기술원 | 로봇 청소기를 이용한 인공지능 스피커의 음성 인식 강화 시스템 및 그 방법 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20150043058A (ko) * | 2013-10-14 | 2015-04-22 | 주식회사 케이티 | 리모콘 서비스를 제공하는 이동 로봇 및 이동 로봇의 리모콘 서비스 제공 방법 |
US20150194152A1 (en) * | 2014-01-09 | 2015-07-09 | Honeywell International Inc. | Far-field speech recognition systems and methods |
WO2018137872A1 (en) * | 2017-01-30 | 2018-08-02 | Philips Lighting Holding B.V. | A controller for controlling a plurality of light sources. |
US20180300103A1 (en) * | 2014-12-22 | 2018-10-18 | Intel Corporation | Connected device voice command support |
KR101972545B1 (ko) * | 2018-02-12 | 2019-04-26 | 주식회사 럭스로보 | 음성 명령을 통한 위치 기반 음성 인식 시스템 |
Family Cites Families (59)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU568801B1 (en) * | 1986-05-21 | 1988-01-07 | Mitsubishi Denki Kabushiki Kaisha | Control system for room air conditioner |
JP2001145180A (ja) * | 1999-11-10 | 2001-05-25 | Nec Corp | 電化製品集中制御システム及び遠隔制御装置 |
JP2003330483A (ja) * | 2002-05-09 | 2003-11-19 | Matsushita Electric Ind Co Ltd | 音声入力装置、被制御機器及び音声制御システム |
KR100571837B1 (ko) * | 2004-03-05 | 2006-04-17 | 삼성전자주식회사 | 자율주행기기의 주행제어방법 및 장치 |
US8972052B2 (en) * | 2004-07-07 | 2015-03-03 | Irobot Corporation | Celestial navigation system for an autonomous vehicle |
TW200833152A (en) * | 2007-01-31 | 2008-08-01 | Bluepacket Comm Co Ltd | Multimedia switching system |
US20100293502A1 (en) * | 2009-05-15 | 2010-11-18 | Lg Electronics Inc. | Mobile terminal equipped with multi-view display and method of controlling the mobile terminal |
KR101741583B1 (ko) * | 2009-11-16 | 2017-05-30 | 엘지전자 주식회사 | 로봇 청소기 및 그의 제어 방법 |
US8635058B2 (en) * | 2010-03-02 | 2014-01-21 | Nilang Patel | Increasing the relevancy of media content |
JP4987103B2 (ja) * | 2010-07-05 | 2012-07-25 | シャープ株式会社 | 空気調節機 |
US9240111B2 (en) * | 2010-10-06 | 2016-01-19 | Microsoft Technology Licensing, Llc | Inferring building metadata from distributed sensors |
US8957847B1 (en) * | 2010-12-28 | 2015-02-17 | Amazon Technologies, Inc. | Low distraction interfaces |
JP2014020670A (ja) * | 2012-07-18 | 2014-02-03 | Mitsubishi Electric Corp | 空気調和機の室内機 |
KR102071575B1 (ko) * | 2013-04-23 | 2020-01-30 | 삼성전자 주식회사 | 이동로봇, 사용자단말장치 및 그들의 제어방법 |
WO2015011819A1 (ja) * | 2013-07-25 | 2015-01-29 | 三菱電機株式会社 | 脱臭装置 |
US9438440B2 (en) * | 2013-07-29 | 2016-09-06 | Qualcomm Incorporated | Proximity detection of internet of things (IoT) devices using sound chirps |
KR20150104311A (ko) * | 2014-03-05 | 2015-09-15 | 엘지전자 주식회사 | 로봇 청소기 및 그의 제어방법 |
CN104123333B (zh) * | 2014-03-17 | 2017-02-15 | 腾讯科技(深圳)有限公司 | 用于位置共享的数据处理方法和装置 |
KR102146462B1 (ko) * | 2014-03-31 | 2020-08-20 | 삼성전자주식회사 | 음성 인식 시스템 및 방법 |
DE202014006891U1 (de) * | 2014-07-02 | 2014-10-22 | Christian Stroetmann | Datenbrille |
JP2016024212A (ja) * | 2014-07-16 | 2016-02-08 | ソニー株式会社 | 情報処理装置、情報処理方法およびプログラム |
US10789041B2 (en) * | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
KR20160048492A (ko) * | 2014-10-24 | 2016-05-04 | 엘지전자 주식회사 | 로봇 청소기 및 이의 제어 방법 |
US9812126B2 (en) * | 2014-11-28 | 2017-11-07 | Microsoft Technology Licensing, Llc | Device arbitration for listening devices |
KR20160065574A (ko) * | 2014-12-01 | 2016-06-09 | 엘지전자 주식회사 | 로봇 청소기 및 그의 제어방법 |
US10126406B2 (en) * | 2014-12-02 | 2018-11-13 | Qualcomm Incorporated | Method and apparatus for performing ultrasonic presence detection |
US9521496B2 (en) * | 2015-02-12 | 2016-12-13 | Harman International Industries, Inc. | Media content playback system and method |
KR101659037B1 (ko) * | 2015-02-16 | 2016-09-23 | 엘지전자 주식회사 | 로봇 청소기, 이를 포함하는 원격 제어 시스템 및 이의 제어 방법 |
US10591575B2 (en) * | 2015-04-22 | 2020-03-17 | Hisep Technology Ltd. | Direction finding system device and method |
CN105187282B (zh) * | 2015-08-13 | 2018-10-26 | 小米科技有限责任公司 | 智能家居设备的控制方法、装置、系统及设备 |
US9996316B2 (en) * | 2015-09-28 | 2018-06-12 | Amazon Technologies, Inc. | Mediation of wakeword response for multiple devices |
US9653075B1 (en) * | 2015-11-06 | 2017-05-16 | Google Inc. | Voice commands across devices |
US10610146B1 (en) * | 2015-12-21 | 2020-04-07 | Dp Technologies, Inc. | Utilizing wearable devices in an internet of things environment |
US10097919B2 (en) * | 2016-02-22 | 2018-10-09 | Sonos, Inc. | Music service selection |
DK179415B1 (en) * | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
PL3387331T3 (pl) * | 2016-07-15 | 2021-02-22 | Versuni Holding B.V. | Ukierunkowane dostarczanie czystego powietrza |
US11435888B1 (en) * | 2016-09-21 | 2022-09-06 | Apple Inc. | System with position-sensitive electronic device interface |
KR102668731B1 (ko) * | 2016-10-03 | 2024-05-24 | 구글 엘엘씨 | 디바이스 토폴로지에 기초한 음성 명령 프로세싱 |
US10181323B2 (en) * | 2016-10-19 | 2019-01-15 | Sonos, Inc. | Arbitration-based voice recognition |
KR101952414B1 (ko) * | 2016-10-25 | 2019-02-26 | 엘지전자 주식회사 | 청소기 및 그 제어방법 |
US10349224B2 (en) * | 2017-01-24 | 2019-07-09 | Essential Products, Inc. | Media and communications in a connected environment |
KR101925034B1 (ko) * | 2017-03-28 | 2018-12-04 | 엘지전자 주식회사 | 스마트 컨트롤링 디바이스 및 그 제어 방법 |
US10670285B2 (en) * | 2017-04-20 | 2020-06-02 | Trane International Inc. | Personal comfort variable air volume diffuser |
KR20180118461A (ko) * | 2017-04-21 | 2018-10-31 | 엘지전자 주식회사 | 음성 인식 장치 및 음성 인식 방법 |
KR102392297B1 (ko) | 2017-04-24 | 2022-05-02 | 엘지전자 주식회사 | 전자기기 |
KR102025391B1 (ko) * | 2017-05-15 | 2019-09-25 | 네이버 주식회사 | 사용자의 발화 위치에 따른 디바이스 제어 |
DK179560B1 (en) * | 2017-05-16 | 2019-02-18 | Apple Inc. | FAR-FIELD EXTENSION FOR DIGITAL ASSISTANT SERVICES |
KR101968725B1 (ko) * | 2017-05-19 | 2019-04-12 | 네이버 주식회사 | 음성요청에 대응하는 정보 제공을 위한 미디어 선택 |
KR101966253B1 (ko) * | 2017-06-09 | 2019-04-05 | 네이버 주식회사 | 무빙 디바이스를 이용하여 사용자의 위치 및 공간에 알맞은 정보를 제공하는 방법 및 장치 |
US10983753B2 (en) * | 2017-06-09 | 2021-04-20 | International Business Machines Corporation | Cognitive and interactive sensor based smart home solution |
DE102017113279A1 (de) * | 2017-06-16 | 2018-12-20 | Vorwerk & Co. Interholding Gmbh | System aus mindestens einem Haushaltsgerät, mindestens einem sich selbsttätig fortbewegenden Reinigungsgerät und einer Steuereinrichtung |
US11424947B2 (en) * | 2017-08-02 | 2022-08-23 | Lenovo (Singapore) Pte. Ltd. | Grouping electronic devices to coordinate action based on context awareness |
US11233782B2 (en) * | 2017-10-04 | 2022-01-25 | Resilience Magnum IP, LLC | Single node network connectivity for structure automation functionality |
KR102421824B1 (ko) * | 2017-10-17 | 2022-07-19 | 삼성전자주식회사 | 외부 장치를 이용하여 음성 기반 서비스를 제공하기 위한 전자 장치, 외부 장치 및 그의 동작 방법 |
WO2019148074A1 (en) * | 2018-01-26 | 2019-08-01 | Alexander Lawrence Reeder | Smart air vent |
US10412556B1 (en) * | 2018-03-15 | 2019-09-10 | Capital One Services, Llc | Dynamic re-configuration of a user interface based on location information |
US20200020165A1 (en) * | 2018-07-12 | 2020-01-16 | Bao Tran | Smart device |
US11183183B2 (en) * | 2018-12-07 | 2021-11-23 | Sonos, Inc. | Systems and methods of operating media playback systems having multiple voice assistant services |
US11086582B1 (en) * | 2019-04-30 | 2021-08-10 | Amazon Technologies, Inc. | System for determining positional relationships between display devices |
- 2019
- 2019-05-24 WO PCT/KR2019/006252 patent/WO2020241906A1/ko active IP Right Grant
- 2019-05-24 EP EP19930394.2A patent/EP3979238B1/en active Active
- 2019-05-24 KR KR1020217028549A patent/KR102813950B1/ko active Active
- 2019-05-24 US US17/613,420 patent/US20220254344A1/en active Pending
- 2019-05-24 JP JP2021569147A patent/JP7335979B2/ja active Active
- 2019-05-24 KR KR1020257016693A patent/KR20250076689A/ko active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20150043058A (ko) * | 2013-10-14 | 2015-04-22 | 주식회사 케이티 | 리모콘 서비스를 제공하는 이동 로봇 및 이동 로봇의 리모콘 서비스 제공 방법 |
US20150194152A1 (en) * | 2014-01-09 | 2015-07-09 | Honeywell International Inc. | Far-field speech recognition systems and methods |
US20180300103A1 (en) * | 2014-12-22 | 2018-10-18 | Intel Corporation | Connected device voice command support |
WO2018137872A1 (en) * | 2017-01-30 | 2018-08-02 | Philips Lighting Holding B.V. | A controller for controlling a plurality of light sources. |
KR101972545B1 (ko) * | 2018-02-12 | 2019-04-26 | 주식회사 럭스로보 | 음성 명령을 통한 위치 기반 음성 인식 시스템 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114968166A (zh) * | 2021-02-26 | 2022-08-30 | 华为技术有限公司 | 语音交互的方法与电子设备 |
EP4290362A4 (en) * | 2021-02-26 | 2024-07-17 | Huawei Technologies Co., Ltd. | VOICE INTERACTION METHOD AND ELECTRONIC DEVICE |
Also Published As
Publication number | Publication date |
---|---|
EP3979238A4 (en) | 2023-04-26 |
US20220254344A1 (en) | 2022-08-11 |
KR102813950B1 (ko) | 2025-05-27 |
EP3979238B1 (en) | 2025-04-02 |
EP3979238A1 (en) | 2022-04-06 |
JP7335979B2 (ja) | 2023-08-30 |
KR20250076689A (ko) | 2025-05-29 |
KR20210116671A (ko) | 2021-09-27 |
JP2022534692A (ja) | 2022-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020241906A1 (ko) | 음성인식을 이용하여 장치를 제어하는 방법 및 이를 구현하는 장치 | |
WO2019156272A1 (ko) | 음성 명령을 통한 위치 기반 음성 인식 시스템 | |
WO2018147687A1 (en) | Method and apparatus for managing voice-based interaction in internet of things network system | |
WO2016035933A1 (ko) | 디스플레이 장치 및 그의 동작 방법 | |
WO2018208026A1 (ko) | 수신된 음성 입력의 입력 음량에 기반하여 출력될 소리의 출력 음량을 조절하는 사용자 명령 처리 방법 및 시스템 | |
WO2018008885A1 (ko) | 영상처리장치, 영상처리장치의 구동방법 및 컴퓨터 판독가능 기록매체 | |
CN111971647B (zh) | 语音识别设备、语音识别设备的协作系统和语音识别设备的协作方法 | |
EP3900310A1 (en) | Method for location inference of iot device, server, and electronic device supporting the same | |
US20190020493A1 (en) | Apparatus, system and method for directing voice input in a controlling device | |
WO2014107076A1 (en) | Display apparatus and method of controlling a display apparatus in a voice recognition system | |
WO2014196769A1 (ko) | 음성 향상 방법 및 그 장치 | |
JP3838029B2 (ja) | 音声認識を用いた機器制御方法および音声認識を用いた機器制御システム | |
JP2019159306A (ja) | ファーフィールド音声制御デバイス及びファーフィールド音声制御システム | |
JP2018013545A (ja) | 音声対話装置および発話制御方法 | |
WO2016198132A1 (en) | Communication system, audio server, and method for operating a communication system | |
WO2019160388A1 (ko) | 사용자의 발화를 기반으로 컨텐츠를 제공하는 장치 및 시스템 | |
WO2015126008A1 (ko) | 음향조명기기의 밸런스 조절 제어 방법 | |
WO2022097970A1 (ko) | 전자장치 및 그 제어방법 | |
WO2021206281A1 (en) | Electronic device and operation method thereof | |
WO2020138943A1 (ko) | 음성을 인식하는 장치 및 방법 | |
US11399226B2 (en) | Voice input apparatus | |
WO2023121231A1 (en) | Computer implemented method for determining false positives in a wakeup-enabled device, corresponding device and system | |
IT201800010269A1 (it) | Unità interna per impianti citofonici o videocitofonici con funzioni di assistente vocale | |
WO2022108190A1 (ko) | 전자장치 및 그 제어방법 | |
WO2022065733A1 (ko) | 전자장치 및 그 제어방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19930394 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20217028549 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2021569147 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2019930394 Country of ref document: EP Effective date: 20220103 |
|
WWG | Wipo information: grant in national office |
Ref document number: 2019930394 Country of ref document: EP |