CN110737840A - Voice control method and display device - Google Patents
Voice control method and display device
- Publication number
- CN110737840A CN110737840A CN201911008347.XA CN201911008347A CN110737840A CN 110737840 A CN110737840 A CN 110737840A CN 201911008347 A CN201911008347 A CN 201911008347A CN 110737840 A CN110737840 A CN 110737840A
- Authority
- CN
- China
- Prior art keywords
- display
- voice
- instruction
- target search
- resource
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9538—Presentation of query results
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
Abstract
The invention provides a voice control method and a display device. The method comprises: receiving voice input from an audio receiving element and generating a voice search instruction according to the voice; sending the voice search instruction input by a user to a server, wherein the voice search instruction carries a target search word, and the target search word is used for adjusting the display order of tabs when there are at least two service types; receiving a display instruction returned by the server based on the voice search instruction, the display instruction being generated according to the display order of the tabs; and, in response to the display instruction, sequentially displaying the tabs in the tab display area of a resource display interface according to the adjusted display order.
Description
Technical Field
Embodiments of the invention relate to the technical field of voice recognition, and in particular to a voice control method and a display device.
Background
Because there are many intersections and correlations among services, a target search word may correspond to resource information of multiple service types.
For this reason, when the resource information is displayed to a user, it can be classified by service type into corresponding tab pages in a preset tab page list.
However, the services relevant to different target search words differ, while the tab pages display the resource information of the corresponding service types in a fixed order. As a result, the display order of the resource information often does not match the user's search intention, and the user's target resource may be placed in a late position. This greatly increases the time the user needs to find the target resource in the tab page list and degrades the user experience.
Disclosure of Invention
Embodiments of the invention provide a voice control method and a display device, aiming to solve the problem in the prior art that the display order of resource information often does not match the user's search intention.
A first aspect of the embodiments of the present invention provides a voice control method, including:
receiving voice input from an audio receiving element, and generating a voice search instruction according to the voice;
sending the voice search instruction input by a user to a server, wherein the voice search instruction carries a target search word, the target search word is used for adjusting the display order of tabs when there are at least two service types, and different target search words correspond to different tab display orders;
receiving a first display instruction returned by the server based on the voice search instruction, the first display instruction being generated according to the display order of the tabs;
and, in response to the first display instruction, sequentially displaying the tabs in a tab display area in the resource display interface according to the adjusted display order.
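The device-side steps of this first aspect can be sketched as follows. This is only an illustrative sketch: the instruction format and the field names (`target_search_word`, `tab_order`) are assumptions for illustration, not the patent's actual protocol.

```python
import json

def make_voice_search_instruction(recognized_text: str) -> str:
    """Wrap recognized speech into a voice search instruction that
    carries the target search word (field names are assumptions)."""
    return json.dumps({
        "type": "voice_search",
        "target_search_word": recognized_text,
    })

def apply_display_instruction(display_instruction: dict) -> list:
    """Handle the first display instruction returned by the server:
    return the tabs in the adjusted order in which the tab display
    area of the resource display interface should render them."""
    return list(display_instruction.get("tab_order", []))
```

In use, the device would send the serialized instruction to the server and pass the returned instruction to `apply_display_instruction` before rendering the tab display area.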
A second aspect of the embodiments of the present invention provides a voice control method, including:
receiving a voice search instruction sent by a display device, wherein the voice search instruction is generated according to voice input through an audio receiving element in the display device, and the voice search instruction carries a target search word;
acquiring resource information of the service type corresponding to the target search word;
in response to there being at least two service types, adjusting the display order of tabs according to the target search word, wherein each tab is used for loading resource information of one service type;
generating a first display instruction according to the display order of the tabs;
and pushing the first display instruction to the display device, wherein the first display instruction is used for instructing the display device to sequentially display the tabs in the tab display area in the resource display interface according to the adjusted display order.
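As a rough illustration of these server-side steps, the sketch below promotes the tabs of service types hinted at by the target search word. The service types, the keyword-to-service mapping, and the whitespace split are all invented placeholders standing in for whatever mapping and word-segmentation the embodiments actually use.

```python
DEFAULT_TAB_ORDER = ["video", "music", "app", "education"]

# Assumed mapping from segmented query words to the service type they suggest.
KEYWORD_SERVICE_HINTS = {"song": "music", "movie": "video", "lesson": "education"}

def adjust_tab_order(target_search_word: str) -> list:
    """Adjust the display order of tabs according to the target search
    word: hinted service types come first, the rest keep their default
    order. Splitting on whitespace stands in for real word segmentation."""
    hinted = []
    for word in target_search_word.lower().split():
        service = KEYWORD_SERVICE_HINTS.get(word)
        if service and service not in hinted:
            hinted.append(service)
    remaining = [t for t in DEFAULT_TAB_ORDER if t not in hinted]
    return hinted + remaining

def build_first_display_instruction(target_search_word: str) -> dict:
    """Generate the first display instruction pushed to the display device."""
    return {"type": "display", "tab_order": adjust_tab_order(target_search_word)}
```

A query hinting at music would thus surface the music tab first, while a query with no hints leaves the default order unchanged.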
A third aspect of the embodiments of the present invention provides a display device, including:
a display configured to present a user interface, the user interface comprising a selector indicating that an item is selected, the position of the selector in the user interface being movable by user input so as to cause a different item to be selected;
a controller in communication with the display screen, the controller configured to:
receiving voice input from an audio receiving element, and generating a voice search instruction according to the voice;
sending the voice search instruction input by a user to a server, wherein the voice search instruction carries a target search word, the target search word is used for adjusting the display order of tabs when there are at least two service types, and different target search words correspond to different tab display orders;
receiving a first display instruction returned by the server based on the voice search instruction, the first display instruction being generated according to the display order of the tabs;
and, in response to the first display instruction, sequentially displaying the tabs in a tab display area in the resource display interface according to the adjusted display order.
A fourth aspect of the embodiments of the present invention provides a server, including:
a memory and a processor;
the memory for storing executable instructions of the processor;
the processor is configured to: receiving a voice search instruction sent by display equipment, wherein the voice search instruction is generated according to voice input by an audio receiving element in the display equipment, and the voice search instruction carries a target search word;
acquiring resource information of a service type corresponding to the target search word;
responding to that the service types are not less than two, adjusting the display sequence of labels according to the target search terms, wherein each label is used for loading service type resource information;
generating th display instructions according to the display sequence of the labels;
and pushing the th display instruction to the display device, wherein the th display instruction is used for instructing the display device to sequentially display the labels in the label display area in the resource display interface according to the adjusted display sequence.
A fifth aspect of the embodiments of the present invention provides a storage medium having stored thereon a computer program for performing the method of the first aspect.
A sixth aspect of the embodiments of the present invention provides a storage medium having stored thereon a computer program for performing the method of the second aspect.
According to the voice control method and display device, the display device receives voice input from the audio receiving element, generates a voice search instruction according to the voice, and sends the voice search instruction input by the user to the server. The voice search instruction carries a target search word; the target search word is used for adjusting the display order of the tabs when there are at least two service types, and different target search words correspond to different tab display orders. The display device then receives a first display instruction returned by the server based on the voice search instruction and, in response to the first display instruction, sequentially displays the tabs in the tab display area in the resource display interface according to the adjusted display order.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present invention or of the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment of the present application;
fig. 2 is a block diagram of a hardware configuration of a display device 200 according to an embodiment of the present application;
fig. 3 is a block diagram of a hardware configuration of a control device 100 according to an embodiment of the present application;
fig. 4 is a schematic functional configuration diagram of a display device 200 according to an embodiment of the present application;
fig. 5a is a schematic diagram of a software configuration in a display device 200 according to an embodiment of the present application;
fig. 5b is a schematic configuration diagram of an application program in the display device 200 according to an embodiment of the present application;
fig. 6 is a signaling interaction diagram of a voice control method provided in an embodiment of the present application;
fig. 7a is a schematic diagram of a voice wake-up interface provided in an embodiment of the present application;
FIG. 7b is a schematic diagram of a search results interface according to an embodiment of the present application;
fig. 8 is a schematic diagram illustrating a display principle of a tab display area according to an embodiment of the present application;
fig. 9a is a schematic interface diagram of a display device provided in an embodiment of the present application;
fig. 9b is a schematic interface diagram of another display device provided in an embodiment of the present application;
FIG. 10 is a flow chart illustrating an exemplary voice control method provided by an embodiment of the present application;
fig. 11 is a signaling interaction diagram of another voice control method provided by an embodiment of the present application;
fig. 12 is a signaling interaction diagram of yet another voice control method provided by an embodiment of the present application;
fig. 13 is a schematic diagram illustrating a display principle of text provided by an embodiment of the present application;
fig. 14 is a schematic interface diagram of another display device provided in an embodiment of the present application;
fig. 15 is a schematic structural diagram of a display device according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the exemplary embodiments of the present application clearer, the technical solutions in the exemplary embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described exemplary embodiments are only some embodiments of the present application, not all of them.
Moreover, while the disclosure has been presented in terms of an exemplary embodiment or several examples, it should be appreciated that individual aspects of the disclosure can each constitute a complete solution on their own.
It should be understood that the terms "first", "second", "third", and the like in the description, the claims, and the accompanying drawings of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a series of elements is not necessarily limited to the explicitly listed elements, but may include other elements not expressly listed or inherent to such a product or device.
The term "module," as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
The term "remote control" as used herein refers to a component of an electronic device (such as the display device disclosed herein) that can control the device wirelessly, typically over a short distance. It generally uses infrared and/or radio frequency (RF) signals and/or Bluetooth to connect to the electronic device, and may also include WiFi, wireless USB, motion sensors, and the like. For example, a hand-held touch remote control replaces most of the physical built-in hard keys of a typical remote control device with a touch-screen user interface.
The term "gesture" as used herein refers to a user action, such as a hand-shape change or hand movement, used to express a desired idea, action, purpose, or result.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus provided in an embodiment of the present application. As shown in fig. 1, a user may operate the display device 200 through a mobile terminal 300 and a control device 100.
The control device 100 may be a remote controller, which may control the display device 200 wirelessly or in a wired manner, including through infrared protocol communication, Bluetooth protocol communication, and other short-distance communication manners. The user may input a user command through a key on the remote controller, voice input, control panel input, and the like to control the display apparatus 200. For example, the user can input a corresponding control command through a volume up/down key, a channel control key, up/down/left/right movement keys, a voice input key, a menu key, a power on/off key, and the like on the remote controller, to implement the function of controlling the display device 200.
In some embodiments, mobile terminals, tablets, computers, laptops, and other smart devices may also be used to control the display device 200, for example through an application running on the smart device.
For example, the mobile terminal 300 may install a software application associated with the display device 200 and implement connection and communication through a network communication protocol, so as to achieve control operation and data communication. For example, a control instruction protocol may be established between the mobile terminal 300 and the display device 200, the remote-control keyboard may be synchronized to the mobile terminal 300, and the function of controlling the display device 200 may be implemented by controlling a user interface on the mobile terminal 300; audio and video content displayed on the mobile terminal 300 may also be transmitted to the display device 200 to implement a synchronous display function.
As also shown in FIG. 1, the display device 200 may be in data communication with a server 400 via a variety of communication means. The display device 200 may be communicatively coupled via a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various content and interactions to the display device 200. By way of example, the display device 200 may receive software program updates by sending and receiving information, interact with an electronic program guide (EPG), or access a remotely stored digital media library. The server 400 may be one server, a group of servers, or one or more types of servers. Other web service content, such as video-on-demand and announcement services, may also be provided via the server 400.
The display device 200 may be a liquid crystal display, an OLED display, or a projection display device. The specific display device type, size, resolution, and the like are not limited, and those skilled in the art will appreciate that the display device 200 may vary in performance and configuration as desired.
In addition to the broadcast receiving television function, the display device 200 may additionally provide an intelligent network television function with computer support. Examples include a network television, a smart television, an Internet Protocol television (IPTV), and the like.
Fig. 2 is a block diagram of a hardware configuration of a display device 200 according to an embodiment of the present disclosure. As shown in fig. 2, the display device 200 includes a controller 210, a tuner-demodulator 220, a communication interface 230, a detector 240, an input/output interface 250, a video processor 260-1, an audio processor 260-2, a display 280, an audio output 270, a memory 290, a power supply, and an infrared receiver.
The display 280 is configured to receive image signals input from the video processor 260-1 and to display video content and images as well as the components of a menu control interface. The display 280 includes a display screen assembly for presenting pictures and a driving assembly for driving image display. The displayed video content may come from broadcast signals received through wired or wireless communication protocols, or from various image contents sent by a network server and received through a network communication protocol.
Meanwhile, the display 280 also displays a user-manipulation UI interface that is generated in the display apparatus 200 and used to control the display apparatus 200.
Depending on the type of the display 280, a driving assembly for driving the display is also included; or, in the case that the display 280 is a projection display, a projection device and a projection screen may be further included.
The communication interface 230 is a component for communicating with an external device or an external server according to various communication protocol types. For example: the communication interface 230 may be a Wifi chip 231, a bluetooth communication protocol chip 232, a wired ethernet communication protocol chip 233, or other network communication protocol chips or near field communication protocol chips, and an infrared receiver (not shown).
The display apparatus 200 may establish transmission and reception of control signals and data signals with an external control apparatus or a content providing apparatus through the communication interface 230. An infrared receiver is an interface device for receiving infrared control signals from the control apparatus 100 (e.g., an infrared remote controller).
The detector 240 is a component used by the display device 200 to collect signals from the external environment or for interaction with the outside. The detector 240 includes a light receiver 242, a sensor for collecting the intensity of ambient light, so that display parameters can be adaptively changed according to the collected ambient light.
The image acquisition device 241, such as a camera, may be used to acquire an external environment scene and to acquire attributes of the user or interact with the user through gestures; it can adaptively change display parameters and recognize user gestures, so as to implement interaction with the user.
In other exemplary embodiments, the detector 240 may include a temperature sensor or the like, which may be used to adjust the color temperature of the image displayed by the display device 200 by sensing the ambient temperature. For example, the display device 200 may be adjusted to display a cooler tone when the ambient temperature is high, or a warmer tone when the ambient temperature is low.
In other exemplary embodiments, the detector 240 may include a sound collector or the like, such as a microphone, which may be used to receive the user's voice, including a voice signal containing control instructions by which the user controls the display device 200, or to collect ambient sound for identifying the type of ambient scene, so that the display device 200 can adapt to ambient noise.
The input/output interface 250, under the control of the controller 210, controls data transmission between the display device 200 and other external devices, such as receiving video and audio signals or command instructions from an external device.
The input/output interface 250 may include, but is not limited to, any one or more of interfaces such as a high-definition multimedia interface (HDMI) 251, an analog or digital high-definition component input interface 253, a composite video input interface 252, a USB input interface 254, an RGB port (not shown), and the like.
In other exemplary embodiments, the I/O interface 250 may form a composite I/O interface with multiple interfaces as described above.
The tuner-demodulator 220 receives broadcast television signals through a wired or wireless receiving mode, may perform modulation and demodulation processing such as amplification, frequency mixing, and resonance, and demodulates, from a plurality of wireless or wired broadcast television signals, the television audio/video signals carried in the television channel frequency selected by the user, as well as EPG data signals.
The tuner-demodulator 220 is responsive to the television signal frequency selected by the user and to the television signal carried on that frequency, as selected by the user and controlled by the controller 210.
The tuner-demodulator 220 may receive signals in different ways according to the broadcasting system of the television signal, such as terrestrial broadcasting, cable broadcasting, satellite broadcasting, or internet broadcasting, and may perform digital or analog demodulation according to the modulation type.
In other exemplary embodiments, the tuner-demodulator 220 may be located in an external device, such as an external set-top box, so that the set-top box outputs television audio/video signals after modulation and demodulation, and these are input into the display device 200 through the input/output interface 250.
The video processor 260-1 is configured to receive an external video signal and perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to the standard codec protocol of the input signal, so as to obtain a signal that can be displayed or played on the display device 200 directly.
Illustratively, the video processor 260-1 includes a demultiplexing module, a video decoding module, an image synthesizing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module is used for demultiplexing the input audio/video data stream. For example, if an MPEG-2 stream is input, the demultiplexing module demultiplexes it into a video signal and an audio signal.
And the video decoding module is used for processing the video signal after demultiplexing, including decoding, scaling and the like.
The image synthesis module, such as a graphics generator, is used for superimposing and mixing the GUI signal generated according to user input with the scaled video image, so as to generate an image signal for display.
The frame rate conversion module is configured to convert the frame rate of the input video, for example converting a 60 Hz frame rate into a 120 Hz or 240 Hz frame rate, usually by means of frame interpolation.
The display formatting module is used for converting the received video output signal after frame rate conversion into a signal conforming to the display format, such as an output RGB data signal.
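The modules above form a simple processing chain. Purely as an illustration (the stage functions below are toy stand-ins, not the patent's actual signal processing), the chain can be modeled as composed stages, with frame-rate conversion shown as naive frame repetition rather than true interpolation:

```python
def frame_rate_convert(frames, factor=2):
    """Toy frame-rate up-conversion by repeating each frame; a real
    interpolation-frame mode would synthesize intermediate frames."""
    out = []
    for f in frames:
        out.extend([f] * factor)
    return out

def run_pipeline(frames, stages):
    """Pass decoded frames through each processing stage in order,
    mirroring the demultiplex -> decode -> synthesize -> frame-rate
    convert -> display-format chain described above."""
    for stage in stages:
        frames = stage(frames)
    return frames
```

For example, running two decoded frames through a single frame-rate stage with factor 2 models doubling 60 Hz content to 120 Hz.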
The audio processor 260-2 is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform noise reduction, digital-to-analog conversion, amplification processing, and the like to obtain an audio signal that can be played in the speaker.
In other exemplary embodiments, the video processor 260-1 may comprise one or more chips, and the audio processor 260-2 may also comprise one or more chips.
And, in other exemplary embodiments, the video processor 260-1 and the audio processor 260-2 may be separate chips, or may be integrated together with the controller 210 in one or more chips.
The audio output 272 receives the sound signal output from the audio processor 260-2 under the control of the controller 210. In addition to the speaker 272 carried by the display device 200 itself, it includes an external sound output terminal 274 that can output to a sound-producing device of an external device, such as an external sound interface or an earphone interface.
The power supply provides power support for the display device 200 from the power input from the external power source under the control of the controller 210. The power supply may include a built-in power supply circuit installed inside the display device 200, or may be a power supply interface installed outside the display device 200 to provide an external power supply for the display device 200.
A user input interface for receiving an input signal of a user and then transmitting the received user input signal to the controller 210. The user input signal may be a remote controller signal received through an infrared receiver, and various user control signals may be received through the network communication module.
For example, when the user inputs a user command through the remote controller 100 or the mobile terminal 300, the user input interface passes the input to the controller 210, and the display device 200 responds to the user input through the controller 210.
In some embodiments, the user may enter user commands on a graphical user interface (GUI) displayed on the display 280, in which case the user input interface receives the user input commands through the GUI. Alternatively, the user may enter user commands by making specific sounds or gestures, in which case the user input interface receives the user input commands by recognizing the sounds or gestures through sensors.
The controller 210 controls the operation of the display apparatus 200 and responds to the user's operation through various software control programs stored in the memory 290.
As shown in FIG. 2, the controller 210 includes a RAM 213 and a ROM 214, a graphics processor 216, a CPU processor 212, a communication interface 218, such as a first interface 218-1 through an nth interface 218-n, and a communication bus, wherein the RAM 213, the ROM 214, the graphics processor 216, the CPU processor 212, and the communication interface 218 are connected via the bus.
The ROM 214 stores instructions for various system boots. When the display apparatus 200 starts power-on upon receipt of the power-on signal, the CPU processor 212 executes the system boot instructions in the ROM, copies the operating system stored in the memory 290 to the RAM 213, and starts running the operating system. After the start-up of the operating system is completed, the CPU processor 212 copies the various application programs in the memory 290 to the RAM 213, and then starts running the various application programs.
A graphics processor 216 for generating various graphics objects, such as: icons, operation menus, user input instruction display graphics, and the like. The graphics processor comprises an arithmetic unit, which performs operations on the various interactive instructions input by the user and displays various objects according to their display attributes, and a renderer, which generates the various graphics objects based on the arithmetic unit's results and displays the rendered result on the display 280.
A CPU processor 212 for executing operating system and application program instructions stored in memory 290. And executing various application programs, data and contents according to various interactive instructions received from the outside so as to finally display and play various audio and video contents.
In exemplary embodiments, the CPU processor 212 may include a plurality of processors. The plurality of processors may include a main processor and one or more sub-processors: the main processor performs operations of the display apparatus 200 in a pre-power-up mode and/or displays a screen in the normal mode, while the one or more sub-processors handle operations in a standby mode or the like.
The controller 210 may control the overall operation of the display apparatus 200. For example: in response to receiving a user command for selecting a UI object to be displayed on the display 280, the controller 210 may perform an operation related to the object selected by the user command.
Here, the object may be any selectable object, such as a hyperlink or an icon. Operations related to the selected object include, for example, displaying an operation connected to a hyperlinked page, document, or image, or executing the program corresponding to an icon. The user command for selecting a UI object may be a command input through various input means (e.g., a mouse, a keyboard, a touch pad, etc.) connected to the display apparatus 200, or a voice command corresponding to speech spoken by the user.
The memory 290 stores various software modules for driving the display device 200, such as: a basic module, a detection module, a communication module, a display control module, a browser module, various service modules, and the like.
The basic module is a bottom-layer software module for signal communication among the various hardware components in the display device 200 and for sending processing and control signals to the upper-layer modules. The detection module collects various information from the various sensors or the user input interface, and the management module performs digital-to-analog conversion and analysis management.
For example: the voice recognition module comprises a voice analysis module and a voice instruction database module. The display control module controls the display 280 to display image content, and may be used to play information such as multimedia image content and UI interfaces. The communication module performs control and data communication with external devices. The browser module performs data communication with browsing servers. The service modules provide various services, including the various application programs.
Meanwhile, the memory 290 is also used to store visual effect maps and the like for receiving external data and user data, images of respective items in various user interfaces, and a focus object.
Fig. 3 is a block diagram of a configuration of the control device 100 according to an embodiment of the present disclosure. As shown in fig. 3, the control device 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory 190, and a power supply 180.
The control device 100 is configured to control the display device 200: it receives an input operation instruction from the user and converts the operation instruction into an instruction that the display device 200 can recognize and respond to, serving as an interaction intermediary between the user and the display device 200. For example: when the user operates the channel up/down keys on the control device 100, the display device 200 responds with the channel up/down operation.
In some embodiments, the control device 100 may be a smart device. For example, the control device 100 may have various applications installed to control the display device 200 as desired by the user.
In some embodiments, as shown in FIG. 1, the mobile terminal 300 or another intelligent electronic device may function similarly to the control device 100 after an application for manipulating the display device 200 is installed. For example, by installing the application, the user can operate the various function keys or virtual buttons of the graphical user interface available on the mobile terminal 300 or other intelligent electronic device to implement the functions of the physical keys of the control device 100.
The controller 110 includes a processor 112, a RAM 113 and a ROM 114, a communication interface, and a communication bus. The controller 110 controls the operation of the control device 100, communication and coordination among the internal components, and external and internal data processing functions.
The communication interface 130 enables communication of control signals and data signals with the display device 200 under the control of the controller 110; for example, a received user input signal is transmitted to the display device 200. The communication interface 130 may include at least one of a WiFi chip, a Bluetooth module, an NFC module, or other near field communication modules.
The user input/output interface 140 includes at least one of a microphone 141, a touch pad 142, a sensor 143, keys 144, or other input interfaces. For example, the user can input a user command by voice, touch, gesture, pressing, and the like; the input interface converts the received analog signal into a digital signal, converts the digital signal into the corresponding command signal, and sends it to the display device 200.
For example, in the case of the radio frequency signal interface, the user input instruction needs to be converted into a digital signal, and then is modulated according to a radio frequency control signal modulation protocol, and then is sent to the display device 200 by a radio frequency sending terminal.
In some embodiments, the control device 100 includes at least one of the communication interface 130 and an output interface. The communication interface 130, such as a WiFi, Bluetooth, or NFC module, is configured in the control device 100, and the user input command can be encoded through the WiFi protocol, Bluetooth protocol, or NFC protocol and transmitted to the display device 200.
A memory 190 stores various operation programs, data, and applications for driving and controlling the control device 100 under the control of the controller 110. The memory 190 may store various control signal commands input by the user.
And a power supply 180 for providing operational power support to the various elements of the control device 100 under the control of the controller 110. The power supply 180 may include a battery and associated control circuitry.
Fig. 4 is a schematic view illustrating a functional configuration of the display device 200 according to an embodiment of the present invention. As shown in fig. 4, the memory 290 stores an operating system, application programs, content, user data, and the like, and, under the control of the controller 210, performs various operations for driving the system operation of the display device 200 and responding to the user.
The memory 290 is specifically configured to store an operating program for driving the controller 210 in the display device 200, and to store various application programs installed in the display device 200, various application programs downloaded by a user from an external device, various graphical user interfaces related to the applications, various objects related to the graphical user interfaces, user data information, and internal data of various supported applications. The memory 290 is used to store system software such as an OS kernel, middleware, and applications, and to store input video data and audio data, and other user data.
The memory 290 is specifically used for storing drivers and related data such as the audio/video processors 260-1 and 260-2, the display 280, the communication interface 230, the tuning demodulator 220, the input/output interface of the detector 240, and the like.
In some embodiments, the memory 290 may store software and/or programs representing software programs for an Operating System (OS), including, for example, a kernel, middleware, an Application Programming Interface (API), and/or application programs. Illustratively, the kernel may control or manage system resources, or functions implemented by other programs (such as the middleware, the API, or the applications), and the kernel may provide interfaces to allow the middleware, the API, or the applications to access the controller to control or manage system resources.
Illustratively, the memory 290 includes a broadcast reception module 2901, a channel control module 2902, a volume control module 2903, an image control module 2904, a display control module 2905, an audio control module 2906, an external command identification module 2907, a communication control module 2908, a light reception module 2909, a power control module 2910, an operating system 2911, other application programs 2912, a browser module, and the like. By running the various software programs in the memory 290, the controller 210 performs functions such as broadcast television signal reception and demodulation, television channel selection control, volume selection control, image control, display control, audio control, external command identification, communication control, light signal reception, power control, a software manipulation platform supporting various functions, and browser functions.
Fig. 5a is a block diagram of a configuration of the software system in the display device 200 according to an embodiment of the present application.
As shown in FIG. 5a, the operating system 2911, which includes executing operating software for handling various basic system services and for performing hardware-related tasks, acts as an intermediary for data processing between application programs and hardware components. In some embodiments, the operating system kernel may include a series of software for managing display device hardware resources and providing services to other programs or software code.
In other embodiments, a portion of the operating system kernel may contain one or more device drivers, which may be sets of software code in the operating system that help operate or control the devices or hardware associated with the display device.
The accessibility module 2911-1 is configured to modify or access the application program to achieve accessibility and operability of the application program for displaying content.
A communication module 2911-2 for connection to other peripherals via associated communication interfaces and a communication network.
The user interface module 2911-3 is configured to provide an object for displaying a user interface, so that each application program can access the object, and user operability can be achieved.
Control applications 2911-4 for controllable process management, including runtime applications and the like.
The event transmission system 2914 may be implemented within the operating system 2911 or within the application 2912. In some embodiments, it is implemented in both the operating system 2911 and the application 2912. It listens for various user input events and, according to the various event designations, carries out one or more sets of predefined operations in response to the identification of various types of events or sub-events.
The event monitoring module 2914-1 is configured to monitor an event or a sub-event input by the user input interface.
The event identification module 2914-2 holds the event definitions for the various user input interfaces, identifies the various events or sub-events, and transmits them to the process that executes their corresponding set or sets of handlers.
The events or sub-events refer to inputs from one or more sensors in the display device 200 and inputs from external control devices (such as the control device 100), for example: various sub-events of voice input, gesture input via gesture recognition, and sub-events of remote-control key command input from the control device. Illustratively, the one or more sub-events from the remote controller take various forms, including but not limited to one of, or a combination of, pressing the up/down/left/right keys, the confirm key, and other key presses, as well as non-physical key operations such as moving, pressing, and releasing.
The interface layout manager 2913 directly or indirectly receives the user input events or sub-events monitored by the event transmission system 2914 and updates the layout of the user interface, including but not limited to the position of each control or sub-control in the interface, the size, position, and level of the container, and other execution operations related to the layout of the interface.
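The listen-and-dispatch behavior of the event transmission system described above can be sketched as a simple handler registry; all class, method, and event names below are illustrative assumptions, since the patent does not specify an implementation:

```python
# Minimal sketch of the event-listening flow described above; names are
# illustrative, not taken from the patent.

class EventTransmissionSystem:
    def __init__(self):
        # event type -> list of handler callables (predefined operation sets)
        self._handlers = {}

    def register(self, event_type, handler):
        """Associate a predefined set of operations with an event type."""
        self._handlers.setdefault(event_type, []).append(handler)

    def dispatch(self, event_type, payload):
        """Identify the event and run every handler registered for it."""
        return [handler(payload) for handler in self._handlers.get(event_type, [])]

system = EventTransmissionSystem()
system.register("remote_key", lambda key: f"key pressed: {key}")
system.register("voice", lambda text: f"voice input: {text}")
result = system.dispatch("remote_key", "confirm")  # ["key pressed: confirm"]
```

An interface layout manager in this sketch would simply be one more handler registered for the event types whose layout it updates.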
Fig. 5b is a schematic diagram of a configuration of the application layer in the display device 200 according to an embodiment of the present disclosure. As shown in fig. 5b, the application layer 2912 contains the various applications that can be executed on the display device 200. These may include, but are not limited to, one or more applications, such as a live TV application, a video-on-demand application, a media center application, an application center, a game application, and the like.
For example, a live television application may provide television signals using input from cable television, wireless broadcasts, satellite services, or other types of live television services, and the live television application may display video of the live television signals on the display device 200.
A video-on-demand application may provide video from different storage sources. Unlike live television applications, video on demand provides a video display from some storage source. For example, the video on demand may come from a server side of the cloud storage, from a local hard disk storage containing stored video programs.
The media center application program can provide various applications for playing multimedia contents. For example, a media center, which may be other than live television or video on demand, may provide services that a user may access to various images or audio through a media center application.
The application may be a game or some other application that is associated with a computer system or other device but can run on the smart television.
During the use of each program, the user inevitably needs to search for resources. In some cases, the user's search command may be input through an audio receiving element (e.g., a microphone) in the user input interface 140; in other cases, the user may input it through the keys 144 in the user input interface 140, such as the keys of a remote control.
Further, the item may represent an interface or a collection of interfaces on which the display device 200 is connected to an external device, or may represent a name of an external device connected to the display device, or the like. Such as: a signal source input interface set, or an HDMI interface, a USB interface, a PC terminal interface, etc.
In the prior art, voice search instructions input by users are mainly parsed and searched through a semantic analysis method, so as to obtain the resource information corresponding to the target search words in the voice search instructions. However, when a target search word corresponds to multiple service types, the display order of the resource information may not match the user's actual search intention.
To solve the above problem, an embodiment of the present application provides a voice control method to reduce the possibility that the display order of the resource information does not conform to the search intention of the user.
The technical solutions of the embodiments of the present application will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 6 is a signaling interaction diagram of a voice control method provided in an embodiment of the present application. The present embodiment relates to the process of determining resource information according to a voice search instruction, and takes a display device and a server as an example to explain the method. As shown in fig. 6, the method includes:
step S101, the display device receives the voice input from the audio receiving element and generates a voice searching instruction according to the voice.
The audio receiving element may be, for example, a microphone, among others.
In some embodiments, the user inputs voice through the microphone in the user input interface 140, and the display device generates a voice search instruction from the input voice. In other embodiments, the display device converts the input voice into text data, sends the text data to the voice server for parsing through the communication interface 230, receives the parsing result fed back by the voice server, and generates the voice search instruction according to the parsing result.
Fig. 7a is a schematic diagram of a voice wake-up interface provided in an embodiment of the present application. As shown in fig. 7a, in some embodiments, the user input interface is a remote controller. After the user presses the voice key of the remote controller, the remote controller sends a first key value and/or a first Bluetooth instruction to the display device, and the television presents a first voice interaction interface according to the received first key value and/or first Bluetooth instruction. At this time, the first voice interaction interface may be superimposed on the previous interface in a floating-layer manner.
FIG. 7b is a diagram of a search result interface provided in an embodiment of the present application. After the user inputs voice, the search interface presented in response to the voice is shown in FIG. 7b, which depicts the user interface presented by the interface layout manager 2913 in the display device 200 when a search result is displayed, according to an exemplary embodiment.
For example, the different view display regions may be distinguished by different background colors, or identified visually by border lines. A view display region may also have an invisible border; in that case only the associated items within a circumscribed area are displayed on the screen, sharing the same change attributes of size and/or arrangement, with the circumscribed area delimited by the border of the view partition.
In some embodiments, the search term display area is used to display the search instruction input by the user. The search instruction input by the user can be displayed on the left side of the search term display area, with recommended search terms displayed on the right side, or vice versa.
In some embodiments, in the interactive interface, the tab display area is located below the search term display area (if any) or above the resource display area, and displays the tabs in the tab order determined by the server. The tab with the largest weight is displayed at the left side of the area, and the tabs are arranged from left to right by weight, from largest to smallest. When the tabs exceed one line, the additional lines display the tabs whose weights are smaller than those in the first line.
In some embodiments, the resource display area is at the bottom and contains empty slots distributed in a matrix, which load the service resources corresponding to the currently selected tab.
Step S102: the display device sends the voice search instruction to the server.
The voice search instruction carries a target search word.
In this embodiment, the display device and the server both have a communication function and can interact with each other. The display device can record voice input by the user, generate a voice search instruction, and send the voice search instruction to the server through the communication interface. For example, the display device can accept voice input via a mobile phone connected to the display device; or via a remote controller connected to the display device; or through a recording component of its own.
In some embodiments, for the entered voice, the display device may generate the voice search instruction using a local database, or may first convert the voice into text locally and send the text to the voice server, which generates the voice instruction.
The language of the voice search instruction input by the user is not limited in the embodiments of the application; it can be, for example, Chinese, English, French, and so on.
The target search word can also be understood as a search keyword: a keyword term in the voice search instruction, which can be a noun, such as yoga or sunset, a place name, such as Beijing or Moscow, or the name of an audio-visual work.
Step S103, the server obtains the resource information of the service type corresponding to the target search term.
In this step, the server may access a resource library in which resource information of different service types is stored, and each target search word corresponds to resource information of at least one service type. For example, "yoga" corresponds to resource information of a shopping service and resource information of a movie service, while "apple" corresponds to resource information of a shopping service, resource information of a movie service, and resource information of a music service.
In some embodiments, the server may store in advance the correspondence between target search words and resource service types, and the resource information of the service type corresponding to the target search word can be determined according to this correspondence. In other embodiments, for uncertain user input, the service type corresponding to the target search word may be determined through a deep learning model mapping target search words to resource service types.
As business types become increasingly crossed and related, one target search word may correspond to multiple business types. Illustratively, "apple" may correspond to resource information of the movie type, the music type, and the shopping type, while "yoga" may correspond to resource information of the movie type and the shopping type. It should be noted that the number of resource information items acquired for each business type may be one or more; the number of resource information items is not limited in the embodiments of the application.
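As a minimal sketch of step S103, assuming the preset correspondence table described above (rather than the deep learning model alternative), the lookup might look like the following; all table entries, names, and resource titles are illustrative:

```python
# Hypothetical correspondence between target search words and service
# types, plus a toy resource library keyed by (word, service type).

SERVICE_TYPES = {
    "yoga": ["movie", "shopping"],
    "apple": ["movie", "music", "shopping"],
}

RESOURCE_LIBRARY = {
    ("yoga", "movie"): ["Kungfu Yoga", "Love Yoga"],
    ("yoga", "shopping"): ["yoga mat", "yoga clothes"],
    ("apple", "movie"): ["example movie about apples"],
    ("apple", "music"): ["example song about apples"],
    ("apple", "shopping"): ["example apple product"],
}

def get_resource_info(target_search_word):
    """Return resource info grouped by service type for a search word."""
    types = SERVICE_TYPES.get(target_search_word, [])
    return {t: RESOURCE_LIBRARY.get((target_search_word, t), []) for t in types}
```

Calling `get_resource_info("apple")` yields resource lists for all three of its service types, matching the "apple" example in the text.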
In some embodiments, determining the service type corresponding to the target search word and searching the resource library according to the target search word may also be two parallel threads. The media resources in the resource library cover many service types, and the media resources corresponding to the target search word may correspond to only one service type, or to two or more service types.
Step S104: in response to the number of service types being not less than two, the server adjusts the display order of the tags according to the target search word, where each tag is used to load the resource information of one service type.
In this step, the resources of different service types may be placed under different service label types or each resource may have a label of a service type. And the server acquires the resource information corresponding to the target search word from the resources of the service type. If the target search word corresponds to at least two service types, the server also needs to adjust the display sequence of the tags according to the target search word.
The tags may be divided according to the service types, for example, the tags may be divided into music tags, shopping tags and novel tags, and the resource information of the service type corresponding to the target search term acquired by the server may be mapped into the corresponding tag according to the corresponding service type. For example: the movie and television type resource information can be mapped into movie and television labels, the music type resource information can be mapped into music labels, the shopping type resource information can be mapped into shopping labels, and the novel type resource information can be mapped into novel labels.
In some embodiments, the server may adjust the display order of the tags according to the weights of the at least two service types corresponding to the target search word.
In some embodiments, a mapping relationship between the target search word and the weights of its corresponding service types may be preset, and the server adjusts the display order of the tags according to this preset mapping relationship.
Illustratively, the target keyword yoga corresponds to resource information of two business types, namely a movie type and a shopping type, the weight of the movie type is preset to be 5, the weight of the shopping type is preset to be 3, and correspondingly, as the weight of the movie type is larger than the weight of the shopping type, the label of the movie type can be arranged in front of the label of the shopping type.
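The weight-based ordering in this example can be sketched as follows, using the example weights from the text (movie = 5, shopping = 3); the function name and the assumption that weights are preset per service type are illustrative:

```python
# Sketch of the weight-based tag ordering in step S104. The weight table
# is assumed to be preset on the server, as described in the text.

TAG_WEIGHTS = {"movie": 5, "shopping": 3}

def order_tags(service_types, weights=TAG_WEIGHTS):
    """Sort tags by descending weight so heavier tags display first (leftmost)."""
    return sorted(service_types, key=lambda t: weights.get(t, 0), reverse=True)

ordered = order_tags(["shopping", "movie"])  # ["movie", "shopping"]
```

Because the movie weight (5) exceeds the shopping weight (3), the movie tag sorts first and is therefore arranged in front of the shopping tag, as in the "yoga" example above.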
In still other embodiments, the server may also determine the weight of the business type corresponding to the target search term by the dependency between the non-target keyword and the target keyword in the voice search instruction.
In step S105, the server generates a first display instruction according to the display order of the tags.
In this step, when the server adjusts the display order of the tags, it may further generate a first display instruction. The first display instruction is sent to the display device to instruct the display device to display the tags, and it includes the display order of the tags.
In some embodiments, the server may further obtain the addresses corresponding to the resource information of the service types corresponding to the tags, and generate the first display instruction according to the tags with the adjusted display order and those addresses.
Step S106: the server pushes the first display instruction to the display device, where the first display instruction is used to instruct the display device to sequentially display the tags in the tag display area of the resource display interface according to the adjusted display order.
In step S106, after the server obtains the resource information of the service types corresponding to the target search word and adjusts the display order of the tabs, the first display instruction may be pushed to the display device, so that the display device sequentially displays the tabs in the tab display area according to the adjusted display order.
Fig. 8 is a schematic diagram illustrating the display principle of the tag display area according to an embodiment of the present application. In some embodiments, as shown in fig. 8, the data obtained by the server in response to the target search word includes TAB data and search result data. The TAB data includes the tags to be returned and their order; the search result data includes the service data corresponding to all the tags in the TAB data and the mapping relationship between the data and the tags. The server may package the search result data and the TAB data into JavaScript Object Notation (JSON) format and send them to the display device. After receiving the JSON data, the display device parses the TAB data and the search result data, displays the TAB data in the tag display area in the order determined by the server, and loads the service data (illustratively, including poster information) corresponding to the tag at the focus position into the slots of the resource display area, so as to display different resources of that service type in different slots.
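A hypothetical shape for the JSON payload described above might look like the following; the patent fixes only the overall structure (TAB data plus search result data with a data-to-tag mapping), so every field name and value here is an assumption:

```python
import json

# Illustrative server-side payload: tags in server-determined order, plus
# per-tag service data (including poster information) keyed by tag name.
payload = {
    "tab_data": ["movie", "shopping"],
    "search_result_data": {
        "movie": [{"title": "Kungfu Yoga", "poster": "https://example.com/a.jpg"}],
        "shopping": [{"title": "yoga mat", "poster": "https://example.com/b.jpg"}],
    },
}

encoded = json.dumps(payload)                     # server: package as JSON
decoded = json.loads(encoded)                     # display device: parse
focus_tab = decoded["tab_data"][0]                # tag at the focus position
slots = decoded["search_result_data"][focus_tab]  # data loaded into the slots
```

The display device would then render `tab_data` in the tag display area in order and fill the resource display area's slots from `slots`.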
In some embodiments, the search result data contains address information corresponding to each resource, and a slot can display the poster of a resource by loading its address information, where the poster includes the display picture and title of the resource.
Step S107: the display device sequentially displays the tags in the tag display area of the resource display interface according to the adjusted display order.
The resource display interface may include a list display area and a resource display area. The tag at the top of the sequence is set to the default selected tag.
In some possible embodiments, the tab display area displayed by the display device may include only the tabs of the service types for which resource information was acquired in the voice search. In other possible embodiments, the tab list displayed by the display device may include the tabs of all service types: a tab for which resource information was acquired may display the specific number of resource items, and a tab for which no resource information was acquired may display a count of zero.
Correspondingly, the server respectively acquires the resource information of the movie type corresponding to the yoga and the resource information of the shopping type corresponding to the yoga, maps the resource information of the shopping type into the label of the shopping type, maps the resource information of the movie resource type into the label of the movie type, and arranges the label of the movie type before the label of the shopping type according to the weight.
Fig. 9a is an interface schematic diagram of a display device provided in an embodiment of the present application, and fig. 9b is an interface schematic diagram of another display device provided in an embodiment of the present application. As shown in fig. 9a and 9b, taking the target search terms "yoga" and "yoga teaching" as examples, the resource information related to them is displayed on the interface of the display device through the tag display area. The tags in the tag display area may be, for example, movie, education, shopping platform 1, application, shopping platform 2, and the like, and when the user clicks different tags, the display device displays the resource information corresponding to the selected tag in the resource display area.
In some embodiments, a label may use the same field as the service type, or a different field. When the target search word is "yoga", the corresponding media assets in the media (media resource) library include multiple items, for example "kungfu yoga", "love yoga", etc. belonging to the movie and television service; "yoga tutorial", "follow me learning yoga", etc. belonging to the education service; "yoga clothes", "yoga mat", etc. belonging to the shopping service; the APP resources "yoga everyday", "yoga income", etc. belonging to the application service; and "yoga clothes", "yoga mat", etc. belonging to the shopping service of a second shopping platform.
In some embodiments, the first display instruction includes the tag data, the search result data of the media assets, and the sequence of the tags, where the order of the tags is determined by the server according to the service positioning of the target search term. For example, when the target search term is positioned in the movie service, or when the movie service has the highest probability, the tag corresponding to the movie service is ranked first; the remaining tags may be arranged randomly, arranged according to the magnitude of the service probability (the smaller the probability, the later the position), or arranged according to the user's historical search habits (the lower the frequency of use, the later the position), none of which affects the first tag determined by the service positioning. In some embodiments, the display sequence corresponding to "yoga" is: movie, education, shopping platform 1, application, shopping platform 2.
In an alternative embodiment, if the first display instruction includes the address corresponding to the resource information of the service type corresponding to each tag, the display device displays the resource information of the service type corresponding to the selected tag in the resource display area of the resource display interface according to the address and the selected tag.
Correspondingly, the display device can also receive an instruction input by a user and play or display the corresponding resource according to the identifier in the instruction. Taking a smart television as an example: resource information is displayed on the smart television through the tag display area, and the user can switch between tags with a remote controller, whereupon the resource display area loads the resource information corresponding to the selected tag. If a resource that the user wants to play or display appears, the user can input an instruction to the smart television through the remote controller, for example a "down" key value, so that the focus moves from the tag display area to a vacant position in the resource display area. If the smart television then receives a confirmation instruction input by the user, the resource corresponding to the vacancy at the focus can be obtained from the server according to the identifier in the instruction and then played or displayed.
According to the voice control method provided by the embodiment of the present application, the display device receives voice input from the audio receiving element and generates a voice search instruction according to the voice. The display device then sends the voice search instruction input by the user to the server, wherein the voice search instruction carries a target search word, the target search word is used for adjusting the display order of the labels when the number of service types is not less than two, and different display orders of the labels correspond to different target search words. The display device then receives the first display instruction returned by the server based on the voice search instruction, and in response to the first display instruction, sequentially displays the labels in the label display area of the resource display interface according to the adjusted display order.
Fig. 10 is a schematic flow chart of a voice control method provided in an embodiment of the present application. This embodiment relates to the specific process of how the server adjusts the display order of the tab pages, and is described with the server as the main execution body. As shown in Fig. 10, on the basis of the above embodiment, the method includes:
Step S201: receiving a voice search instruction sent by a display device, wherein the voice search instruction carries a target search word.
Step S202, acquiring resource information of the service type corresponding to the target search term.
The technical terms, technical effects, technical features, and alternative embodiments of steps S201-S202 can be understood with reference to steps S102-S103 shown in fig. 6, and repeated content will not be described herein.
Step S203: in response to the number of service types being not less than two, acquiring the weight of each service type corresponding to the target search word.
In some embodiments, the server may obtain the weight of each service type corresponding to the target search term according to the target search term and a preset mapping relationship between search terms and service-type weights.
In this embodiment, the server may store a mapping relationship between a preset search word and a weight of a service type in advance, and when performing a voice search, the server may find the weight of each service type corresponding to the target search word from the pre-stored mapping relationship.
Illustratively, if the voice search instruction includes the target search word "computer", the server stores in advance a mapping relationship between the search word "computer" and the weights of the movie type, the education type and the shopping type. Based on this, the server can directly obtain the weight 1 of the movie type, the weight 2 of the education type and the weight 3 of the shopping type corresponding to the target search word "computer" from the database.
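The pre-stored mapping described above can be sketched as a simple lookup table. The words, service-type names, and weight values below are illustrative assumptions, not taken from the patent:

```python
# Hypothetical pre-stored mapping from search words to per-service-type
# weights; the entries and values here are illustrative only.
WEIGHT_MAP = {
    "computer": {"movie": 1, "education": 2, "shopping": 3},
    "yoga": {"movie": 3, "education": 2, "shopping": 1},
}

def service_weights(target_search_word):
    """Look up the weight of each service type for a target search word;
    return an empty dict when no mapping has been stored."""
    return WEIGHT_MAP.get(target_search_word, {})
```

With this table, the server answers the "computer" query directly from the database-like mapping, with no per-query analysis.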
In this embodiment, the mapping relationship between the target search word and the weights of the service types may be used as attribute information of the target search word. In some embodiments, for uncertain user input, the weight of the service type corresponding to the target search word may be determined through a target-search-word-to-service-type deep learning model, and when the target search word corresponds to multiple service types, the corresponding service types may be sorted according to the weights.
In some embodiments, the voice search instruction includes at least one non-target search term in addition to the target search term, and the server obtains the weight of each service type corresponding to the target search term based on the dependency between the target search term and the at least one non-target search term.
In this embodiment, determining the weight of each service type from the target search term alone may cause deviation, so the non-target search terms may be used to assist in determining the weights of the service types.
A non-target search word is a word in the voice search instruction other than the target search word, and it can generally assist the target search word in locating the service type. Non-target search terms may be verbs, such as "see", "buy", and "learn", or nouns, such as "director", "concert", and "handbag".
In one optional embodiment, a modifier rule may be configured so that a non-target search term forms a fixed grammatical relationship with the target search term, and this fixed grammatical relationship is used as the dependency between the target search term and the non-target search term.
Illustratively, the voice search instruction is "cheap mobile phone", where the target search word is "mobile phone" and the non-target search word is "cheap"; according to the dependency between "mobile phone" and "cheap", the weight of the shopping service is determined to be greater than or equal to the weight of the movie service. Accordingly, the server may set the weight of the shopping service to 2 and the weight of the movie service to 1. In addition, the non-target search term "cheap" can also assist the target keyword in searching for related resource information under the corresponding service type.
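A minimal sketch of such a modifier rule follows. The rule table and the weight-boost policy are assumptions made for illustration, not specified by the patent:

```python
# Hypothetical table mapping modifier words to the service type they
# indicate; "cheap mobile phone" should favor the shopping service.
MODIFIER_RULES = {
    "cheap": "shopping",
    "learn": "education",
}

def apply_modifier_rule(non_target_word, weights):
    """Raise the weight of the service type implied by the modifier so
    that it ranks at least as high as every other service type."""
    service = MODIFIER_RULES.get(non_target_word)
    if service is not None:
        weights[service] = max(weights.values(), default=0) + 1
    return weights
```

Starting from equal weights for the movie and shopping services, the modifier "cheap" leaves the shopping service ranked first; a word with no rule leaves the weights untouched.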
In some embodiments, the target search term is a term whose degree of match with the title of a resource is higher than a preset threshold.
In another alternative embodiment, a verb rule may be configured so that a non-target search term forms a fixed grammatical relationship with the target search term, and this fixed grammatical relationship is used as the dependency between the target search term and the non-target search term.
Illustratively, the voice search instruction is "hear apples", wherein the target search term is "apples", and the non-target search term is "hear", and the weight of the music service is determined to be greater than the weight of the video service according to the dependency between "apples" and "hear". Accordingly, the server may set the weight of the music service to 2 and the weight of the movie service to 1.
In this embodiment, the voice search instruction may include a plurality of non-target search terms. Accordingly, the weight of each service type may be determined by considering the dependency relationships between the plurality of non-target search terms and the target search term.
Illustratively, the voice search instruction contains the target search word "which is the sheng xiao-mei" and the non-target search words "I want", "see", and "Zhonghanliang". First, the target search word "which is the sheng xiao-mei" is determined to correspond to the video service. Then, the dependencies between the three non-target search words "I want", "see", and "Zhonghanliang" and the target search word are determined; the weights a1, a2, and a3 of the video service are determined from these dependencies respectively, and the three weights are summed to obtain the total weight a of the video service. Correspondingly, the target search word "which is the sheng xiao-mei" is also determined to correspond to the music service; the dependencies between the same three non-target search words and the target search word are determined, the weights b1, b2, and b3 of the music service are determined from these dependencies respectively, and the three weights are summed to obtain the total weight b of the music service.
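The summation above (a = a1 + a2 + a3, b = b1 + b2 + b3) can be sketched as follows; the per-dependency weight values in the example are placeholders:

```python
def total_service_weights(weights_by_service):
    """Sum the weights contributed by each (non-target word, target word)
    dependency, one list of contributions per service type."""
    return {service: sum(parts) for service, parts in weights_by_service.items()}
```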
In some embodiments, the correspondence between a non-target search word and a service type may be used as attribute information of the non-target search word. In some embodiments, for uncertain user input, the weight of the service type corresponding to the non-target search word may be determined through a non-target-search-word-to-service-type deep learning model, and when the non-target search word corresponds to multiple service types, the corresponding service types may be sorted according to the weights.
In some embodiments, the non-target search terms and the corresponding click data may be counted over big data to determine the weight of the service type corresponding to each non-target search term.
Step S204: adjusting the display order of the labels according to the weight of each service type corresponding to the target search word.
In this step, once the server has determined the weight of each service type corresponding to the target search word, the tags may be sorted from large to small according to the weight of the corresponding service type, so that the display order of the tab pages is adjusted according to the sorting result.
In some embodiments, the server may locate a target service type according to the weight of each service type corresponding to the target search term, where the target service type has the largest weight, and adjust the display order of the tab pages according to the target service type, so that the tag corresponding to the target service type is located at the head of the tab page list.
Illustratively, suppose the weight of the music type corresponding to "apple" is 3, the weight of the movie type is 2, and the weight of the shopping type is 1. Since the music type has the highest weight, it can be located as the target type and its tag arranged at the top.
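The sorting in the "apple" example can be sketched as a descending sort over the weight map:

```python
def order_tags(weights):
    """Return the service-type tags sorted so the largest weight comes
    first; the top tag becomes the target service type."""
    return sorted(weights, key=weights.get, reverse=True)
```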
In some embodiments, after the tag of the target type is arranged in the first position, the subsequent tags may be arranged according to their weights from large to small; in other embodiments, the subsequent tags may be arranged randomly; in still other embodiments, the tag corresponding to a service type to be recommended may be placed in the second position.
For example, as shown in Fig. 9a and Fig. 9b, in the tag display area in Fig. 9a the "movie" tag is located first, followed in order by tags such as "education", "shopping platform 1", "application", and "shopping platform 2". As shown in Fig. 9a, the default focus after a search is on the tag located at the top, so the resource information corresponding to the "movie" tag is loaded into each vacant position in the resource display area.
Step S205: generating the first display instruction according to the display order of the labels.
Step S206: pushing the first display instruction to the display device, wherein the first display instruction is used for instructing the display device to sequentially display the labels in the label display area of the resource display interface according to the adjusted display order.
The technical terms, technical effects, technical features, and alternative embodiments of steps S205-S206 can be understood with reference to steps S105-S106 shown in fig. 6, and repeated descriptions thereof will not be repeated here.
According to the voice control method provided by the embodiment of the present application, the server obtains the weight of each service type corresponding to the target search word and adjusts the display order of the labels accordingly. By arranging the label of the target service type with the largest weight in the first position, the resource information of the service type most closely associated with the target search word can be displayed preferentially, which reduces the possibility that the display order of the resource information does not match the user's search intention, shortens the time the user needs to find the target resource in the label display area, and improves the user experience.
Fig. 11 is a signaling interaction diagram of another voice control method provided in an embodiment of the present application, which relates to the specific process of how the server acquires the resource information corresponding to the target search word. The embodiment is described with the server as an example. As shown in Fig. 11, on the basis of the foregoing embodiment, the method includes:
step S301, the display device receives the voice input from the audio receiving element and generates a voice searching instruction according to the voice.
Step S302, the display device sends a voice search instruction to the server.
Step S303, the server obtains the resource information of the service type corresponding to the target search term.
The technical terms, technical effects, technical features, and alternative embodiments of steps S301 to S303 can be understood with reference to steps S101 to S103 shown in fig. 6, and repeated content will not be described herein.
Step S304: in response to the number of service types being one, the server pushes a second display instruction to the display device, wherein the second display instruction is used for instructing the display device to display the label in the label display area of the resource display interface and, according to the address and the label, to display the resource information of the service type corresponding to the label in the resource display area of the resource display interface.
In this step, if the target search word in the voice search instruction corresponds to only one service type, the display order of the tab pages does not need to be adjusted, and the second display instruction can be pushed to the display device directly, so that the display device displays the resource information of the service type corresponding to the target search word.
For example, if the target search word in the voice search instruction is "yoga" and the attribute information of the target search word "yoga" only corresponds to the movie service, the server may directly push the second display instruction to the display device, so that the display device displays the resource of the movie service related to "yoga".
In some embodiments, if the target search term corresponds to only one service type, then after the display device parses the received JSON data, the TAB data contains only one tag, so only that tag is shown in the tag display area; and since the focus is on this tag by default, the vacant positions in the resource display area load the resource information corresponding to it.
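The parsing described above might look like the following sketch. The JSON field names ("tabs", "resources") are assumptions, since the patent does not specify the payload schema:

```python
import json

def visible_tabs(raw_json):
    """Parse a display-instruction payload: only tags present in the TAB
    data are shown, the first tag is selected by default, and its
    resource information is what the resource display area loads."""
    data = json.loads(raw_json)
    tabs = data.get("tabs", [])
    selected = tabs[0] if tabs else None
    return tabs, data.get("resources", {}).get(selected, [])
```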
In some embodiments, if the target search term corresponds to only one service type and there is only one piece of resource information under that service type, the display device may also directly display or play the resource corresponding to the resource information.
In the voice control method provided by the embodiment of the present application, the server, in response to the number of service types being one, pushes a second display instruction to the display device, where the second display instruction is used to instruct the display device to display the resource information of the service type corresponding to the target search word through the tag corresponding to that service type, so that when the target search word corresponds to only one service type, the display device can directly present the corresponding resource information to the user.
Fig. 12 is a signaling interaction diagram of another voice control method provided in an embodiment of the present application, which relates to the specific process of how to accurately acquire the target search word. The embodiment is described taking a display device, a voice server, and a data server as examples. As shown in Fig. 12, on the basis of the above embodiment, the method includes:
step S401, the display device receives the voice input from the audio receiving element, and sends the voice to the voice server.
Step S402, the voice server generates a text corresponding to the voice according to the voice.
In steps S401 and S402, the display apparatus may acquire the voice input by the user through a microphone on the remote controller or a microphone on the display apparatus body. And then, the natural language input by the user and acquired by the display equipment is sent to a voice server, and the voice server converts the voice into a corresponding text.
It should be noted that the embodiment of the present application does not limit how the speech is converted into the corresponding text; any existing conversion method may be used.
Step S403, the voice server pushes a third display instruction to the display device, where the third display instruction is used to instruct the display device to display a text corresponding to the voice.
Fig. 13 is a schematic view of the text display principle provided by an embodiment of the present application, and Fig. 14 is a schematic view of an interface of another display device provided by an embodiment of the present application. For example, as shown in Fig. 13, after receiving the third display instruction sent by the voice server, the display device may create a layout file for the text corresponding to the voice search instruction, then load the layout file and initialize the text control in the layout file, and finally display the text corresponding to the voice search instruction.
In some embodiments, the voice server and the data server may be the same server, and the third display instruction may be sent to the display device after the search is completed, at the same time as the first display instruction.
The embodiment of the present application does not limit the page on which the display device displays the text corresponding to the voice search instruction: the text may be displayed on the voice search page, or on the page containing the tag display area after the search is finished.
By displaying the text corresponding to the voice search instruction on the display device, the user can judge whether the voice recognition is accurate. When the voice recognition is inaccurate, the display device may re-transmit the voice search instruction to the server after receiving a re-recognition instruction input by the user.
Step S404: the display device generates a voice search instruction according to the text and sends the voice search instruction to the data server.
In some embodiments, the data server and the voice server may be different servers; in that case the user's voice needs to be parsed by the voice server, which returns the text to the display device, and the display device then sends it to the data server.
In some embodiments, the voice search instruction further includes information such as the ID of the display device, so that the data server can accurately feed the search result back to the display device.
Step S405, the data server performs word segmentation processing on the text to obtain a target search word.
In this step, after obtaining the text corresponding to the voice search instruction, the data server may perform word segmentation on the text, thereby obtaining the target search word.
It should be noted that the word segmentation method is not limited in the embodiments of the present application; in an alternative implementation, the forward maximum matching method may be selected.
For example, the voice search instruction input by the user is "I want to see yoga". After the voice search instruction is converted into text, the data server may decompose "I want to see yoga" into "I", "want", "see", and "yoga" using the forward maximum matching method. Then, from the target-search-term list pre-stored in the data server, it can be determined that the target search term corresponding to the voice search instruction is "yoga".
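The forward maximum matching step can be sketched as follows. Here segmentation runs over a list of already-tokenized words with a toy dictionary, whereas a real implementation would operate on raw characters with a full lexicon:

```python
# Toy dictionary; the multi-word entry lets the matcher prefer the
# longest match starting at each position.
DICTIONARY = {"i", "want", "see", "yoga", "mobile phone"}

def forward_max_match(words, max_len=3):
    """Greedy forward maximum matching: at each position take the longest
    dictionary entry, falling back to a single word when nothing matches."""
    tokens, i = [], 0
    while i < len(words):
        for size in range(min(max_len, len(words) - i), 0, -1):
            candidate = " ".join(words[i:i + size])
            if candidate in DICTIONARY or size == 1:
                tokens.append(candidate)
                i += size
                break
    return tokens
```

With this dictionary, "cheap mobile phone" segments into "cheap" and the compound "mobile phone", illustrating why the longest match is tried first.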
Step S406, the data server obtains the resource information of the service type corresponding to the target search term.
Step S407: in response to the number of service types being not less than two, the data server adjusts the display order of the tags according to the target search word, each tag being used to display the resource information of one service type.
Step S408: the data server generates the first display instruction according to the display order of the labels.
Step S409: the data server pushes the first display instruction to the display device, wherein the first display instruction is used for instructing the display device to sequentially display the labels in the label display area of the resource display interface according to the adjusted display order.
Step S410: the display device sequentially displays the labels in the label display area of the resource display interface according to the adjusted display order.
The technical terms, technical effects, technical features and optional embodiments of steps S406 to S410 can be understood by referring to steps S103 to S107 shown in fig. 6, and repeated contents will not be described herein.
The voice control method provided by the embodiment of the application obtains the text corresponding to the voice search instruction, performs word segmentation processing on the text, and obtains the target search word and the attribute information of the target search word, thereby determining the resource information set. By improving the accuracy of the target search terms, the possibility that the search results do not match the user's intent is reduced.
It will be understood by those skilled in the art that all or part of the steps of the above method embodiments may be implemented by hardware driven by program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The storage medium may be any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
Fig. 15 is a schematic structural diagram of a display device provided in an embodiment of the present application. The display device may be implemented by software, hardware, or a combination of both to execute the above voice control method. As shown in Fig. 15, the display device includes:
a display 51 configured to display a user interface, the user interface further including a selector indicating that an item is selected, the position of the selector in the user interface being movable by user input to cause a different item to be selected;
a controller 52 in communication with the display screen, the controller configured to:
receiving voice input from the audio receiving element, and generating a voice search instruction according to the voice;
sending the voice search instruction input by the user to a server, wherein the voice search instruction carries a target search word, the target search word is used for adjusting the display order of the labels when the number of service types is not less than two, and different display orders of the labels correspond to different target search words;
receiving the first display instruction returned by the server based on the voice search instruction, wherein the first display instruction is generated according to the display order of the tags;
and responding to the first display instruction by sequentially displaying the labels in the label display area of the resource display interface according to the adjusted display order.
In an alternative embodiment, the first display instruction includes the address corresponding to the resource information of the service type corresponding to the tag;
the controller 52 is specifically configured to display the resource information of the service type corresponding to the selected label in the resource display area of the resource display interface according to the address and the selected label.
In an alternative embodiment, if the number of service types is one, the controller 52 is further configured to:
receive a second display instruction pushed by the server and, in response to the second display instruction, display the label in the label display area of the resource display interface and display the resource information of the service type corresponding to the label in the resource display area of the resource display interface according to the address and the label.
In an alternative embodiment, the controller 52 is specifically configured to:
send the voice to a voice server;
receive the text returned by the voice server, the text being generated by the voice server according to the voice;
and generate the voice search instruction according to the text.
In an alternative embodiment, the controller 52 is further configured to:
receive a third display instruction pushed by the voice server and, in response to the third display instruction, display the text corresponding to the voice in a search word display area of the resource display interface, wherein the search word display area, the label display area, and the resource display area are arranged in order from top to bottom.
In an alternative embodiment, the tab at the top of the sequence is set as the default selected tab.
The display device provided in the embodiment of the present application may perform the actions of the display device in the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 16 is a schematic structural diagram of a server according to an embodiment of the present disclosure. As shown in Fig. 16, the electronic device may include at least one processor 61 and a memory 62; Fig. 16 takes an electronic device with one processor as an example.
The memory 62 is used for storing a program. In particular, the program may include program code including computer operating instructions.
The memory 62 may comprise a high-speed RAM memory and may also include a non-volatile memory, such as at least one disk memory.
The processor 61 is configured to execute computer-executable instructions stored in the memory 62 to implement the server-side voice control method described above.
The processor 61 may be a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
Alternatively, in a specific implementation, if the communication interface, the memory 62, and the processor 61 are implemented independently, they may be connected to each other via a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on, but this does not mean that there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the communication interface, the memory 62, and the processor 61 are integrated on one chip, they may communicate through an internal interface.
The present invention further provides a computer-readable storage medium, which may be any medium that can store program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. In particular, the computer-readable storage medium stores program instructions for the method at the terminal side or the method at the second terminal side.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (16)
1. A voice control method, characterized in that the method comprises:
receiving voice input from an audio receiving element, and generating a voice search instruction according to the voice;
sending a voice search instruction input by a user to a server, wherein the voice search instruction carries a target search word, the target search word is used for adjusting the display order of labels when the number of service types is not less than two, and different display orders of the labels correspond to different target search words;
receiving th display instructions returned by the server based on the voice search instructions, the th display instructions being generated according to the display order of the tags;
and responding to the display instruction, and sequentially displaying the labels in a label display area in the resource display interface according to the adjusted display sequence.
2. The method according to claim 1, wherein the first display instruction contains an address corresponding to the resource information of the service type corresponding to each tag;
after the tags are sequentially displayed in the tag display area in the resource display interface according to the adjusted display order, the method further comprises:
according to the address and a selected tag, displaying resource information of the service type corresponding to the selected tag in a resource display area in the resource display interface.
3. The method according to claim 2, wherein if there is only one service type, after the sending of the voice search instruction input by the user to the server, the method further comprises:
receiving a second display instruction pushed by the server, wherein, in response to the second display instruction, the display device displays the tag in the tag display area in the resource display interface and, according to the address and the tag, displays the resource information of the service type corresponding to the tag in the resource display area in the resource display interface.
4. The method of claim 3, wherein the generating a voice search instruction according to the voice comprises:
sending the voice to a voice server;
receiving text returned by the voice server, wherein the text is generated by the voice server according to the voice;
and generating the voice search instruction according to the text.
5. The method of claim 4, wherein after the sending of the voice to the voice server, the method further comprises:
receiving a third display instruction pushed by the voice server, wherein, in response to the third display instruction, the display device displays the text corresponding to the voice in a search-word display area in the resource display interface, and the search-word display area, the tag display area and the resource display area are arranged in order from top to bottom.
6. The method of any one of the preceding claims, wherein the tag at the head of the display order is set as the default selected tag.
7. A voice control method, characterized in that the method comprises:
receiving a voice search instruction sent by a display device, wherein the voice search instruction is generated according to voice input by an audio receiving element in the display device, and the voice search instruction carries a target search word;
acquiring resource information of the service type corresponding to the target search word;
in response to there being no fewer than two service types, adjusting the display order of tags according to the target search word, wherein each tag is used for loading resource information of one service type;
generating a first display instruction according to the display order of the tags;
and pushing the first display instruction to the display device, wherein the first display instruction is used for instructing the display device to sequentially display the tags in a tag display area in a resource display interface according to the adjusted display order.
8. The method of claim 7, wherein the adjusting the display order of the tags according to the target search word comprises:
adjusting the display order of the tags according to the weights of the at least two service types corresponding to the target search word.
9. The method of claim 7, wherein the generating a first display instruction according to the display order of the tags comprises:
generating the first display instruction according to the tags after the display order is adjusted and the addresses corresponding to the resource information of the service types corresponding to the tags.
10. The method of claim 8, wherein the adjusting the display order of the tags according to the weights of the at least two service types corresponding to the target search word comprises:
acquiring the weight of each service type corresponding to the target search word;
and adjusting the display order of the tags according to the weight of each service type corresponding to the target search word.
11. The method of claim 10, wherein the adjusting the display order of the tags according to the weight of each service type corresponding to the target search word comprises:
positioning a target service type according to the weight of each service type corresponding to the target search word, wherein the weight of the target service type is the largest;
and adjusting the display order of the tags according to the target service type so that the tag corresponding to the target service type is at the head.
12. The method according to claim 10 or 11, wherein the voice search instruction further includes at least one non-target search word in addition to the target search word, and the acquiring the weight of each service type corresponding to the target search word comprises:
acquiring the weight of each service type corresponding to the target search word based on the dependency relationship between the target search word and the at least one non-target search word.
13. The method according to claim 10 or 11, wherein the acquiring the weight of each service type corresponding to the target search word comprises:
acquiring the weight of each service type corresponding to the target search word according to the target search word and a preset mapping relation between search words and service-type weights.
14. The method according to claim 9, wherein after the acquiring of the resource information of the service type corresponding to the target search word, the method further comprises:
in response to there being only one service type, pushing a second display instruction to the display device, wherein the second display instruction is used for instructing the display device to display the tag in a tag display area in a resource display interface and, according to the address and the tag, to display the resource information of the service type corresponding to the tag in the resource display area in the resource display interface.
15. The method of claim 7, wherein after the receiving of the voice search instruction sent by the display device, the method further comprises:
acquiring text corresponding to the voice search instruction according to the voice search instruction;
and performing word segmentation processing on the text to obtain the target search word.
16. A display device, comprising:
a display configured to display a user interface, the user interface further comprising a selector indicating that an item is selected, wherein the position of the selector in the user interface is movable by user input so as to cause a different item to be selected;
a controller in communication with the display, the controller configured to:
receive voice input from an audio receiving element, and generate a voice search instruction according to the voice;
send the voice search instruction input by a user to a server, wherein the voice search instruction carries a target search word, the target search word is used for adjusting the display order of tags when there are no fewer than two service types, and different display orders of the tags correspond to different target search words;
receive a first display instruction returned by the server based on the voice search instruction, the first display instruction being generated according to the display order of the tags;
and in response to the first display instruction, sequentially display the tags in a tag display area in a resource display interface according to the adjusted display order.
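The server-side logic of claims 7 to 15 can be sketched as follows. This is a hypothetical illustration, not an implementation from the patent: the function names (`segment`, `order_tags`, `build_display_instruction`), the `WEIGHTS` mapping, and the placeholder resource addresses are all invented for the example. It shows word segmentation to extract a target search word, a preset search-word-to-service-type-weight mapping (claim 13), and reordering tags so the highest-weight service type comes first (claims 10 and 11).

```python
def segment(text):
    # Stand-in for real word segmentation (claim 15); here it simply
    # splits on whitespace.
    return text.split()

# Hypothetical preset mapping between search words and per-service-type
# weights (claim 13). Real systems would derive these from data.
WEIGHTS = {
    "movie": {"video": 0.9, "music": 0.1, "app": 0.2},
    "song":  {"video": 0.2, "music": 0.9, "app": 0.1},
}

def order_tags(target_word, tags):
    # Reorder tags by descending weight so the tag for the service type
    # with the largest weight is at the head (claims 10-11).
    weights = WEIGHTS.get(target_word, {})
    return sorted(tags, key=lambda t: weights.get(t, 0.0), reverse=True)

def build_display_instruction(text, tags):
    # Assemble a "first display instruction": the adjusted tag order plus
    # a (placeholder) resource address per tag (claim 9).
    words = segment(text)
    target = next((w for w in words if w in WEIGHTS), words[0])
    ordered = order_tags(target, tags)
    return {
        "tags": ordered,
        "addresses": {t: f"/resources/{t}" for t in ordered},
    }

instr = build_display_instruction("play a movie tonight", ["music", "app", "video"])
print(instr["tags"])  # "movie" weights the video type highest, so "video" leads
```

Under this sketch, the display device would then render `instr["tags"]` left to right in the tag display area and use `instr["addresses"]` to load the resource area when a tag is selected.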
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310856095.6A CN117056622A (en) | 2019-10-22 | 2019-10-22 | Voice control method and display device |
CN201911008347.XA CN110737840B (en) | 2019-10-22 | 2019-10-22 | Voice control method and display device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911008347.XA CN110737840B (en) | 2019-10-22 | 2019-10-22 | Voice control method and display device |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310856095.6A Division CN117056622A (en) | 2019-10-22 | 2019-10-22 | Voice control method and display device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110737840A true CN110737840A (en) | 2020-01-31 |
CN110737840B CN110737840B (en) | 2023-07-28 |
Family
ID=69270891
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911008347.XA Active CN110737840B (en) | 2019-10-22 | 2019-10-22 | Voice control method and display device |
CN202310856095.6A Pending CN117056622A (en) | 2019-10-22 | 2019-10-22 | Voice control method and display device |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310856095.6A Pending CN117056622A (en) | 2019-10-22 | 2019-10-22 | Voice control method and display device |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN110737840B (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111324800A (en) * | 2020-02-12 | 2020-06-23 | 腾讯科技(深圳)有限公司 | Business item display method and device and computer readable storage medium |
CN111552794A (en) * | 2020-05-13 | 2020-08-18 | 海信电子科技(武汉)有限公司 | Prompt language generation method, device, equipment and storage medium |
CN112004157A (en) * | 2020-08-11 | 2020-11-27 | 海信电子科技(武汉)有限公司 | Multi-round voice interaction method and display equipment |
CN112000820A (en) * | 2020-08-10 | 2020-11-27 | 海信电子科技(武汉)有限公司 | Media asset recommendation method and display device |
CN112165641A (en) * | 2020-09-22 | 2021-01-01 | Vidaa美国公司 | Display device |
CN112185339A (en) * | 2020-09-30 | 2021-01-05 | 深圳供电局有限公司 | Voice synthesis processing method and system for power supply intelligent client |
CN112417271A (en) * | 2020-11-09 | 2021-02-26 | 杭州讯酷科技有限公司 | Intelligent construction method of system with field recommendation |
CN112883225A (en) * | 2021-02-02 | 2021-06-01 | 聚好看科技股份有限公司 | Media resource searching and displaying method and equipment |
CN112989238A (en) * | 2020-10-21 | 2021-06-18 | 深圳市乐讯科技有限公司 | Method for rapidly presenting page based on user habits |
CN113077858A (en) * | 2021-03-19 | 2021-07-06 | 海信视像科技股份有限公司 | Control method of display device control, display device and server |
CN113158004A (en) * | 2021-04-29 | 2021-07-23 | 北京字节跳动网络技术有限公司 | Data search processing method and device, electronic equipment and storage medium |
CN113490041A (en) * | 2021-06-30 | 2021-10-08 | Vidaa美国公司 | Voice function switching method and display device |
CN113542900A (en) * | 2020-04-22 | 2021-10-22 | 聚好看科技股份有限公司 | Media information display method and display equipment |
CN113542899A (en) * | 2020-04-22 | 2021-10-22 | 聚好看科技股份有限公司 | Information display method, display device and server |
CN113593559A (en) * | 2021-07-29 | 2021-11-02 | 海信视像科技股份有限公司 | Content display method, display equipment and server |
CN113707145A (en) * | 2021-08-26 | 2021-11-26 | 海信视像科技股份有限公司 | Display device and voice search method |
CN113805738A (en) * | 2020-06-12 | 2021-12-17 | 海信视像科技股份有限公司 | User-defined setting method and starting method of control key and display device |
CN114372214A (en) * | 2020-10-15 | 2022-04-19 | 海信电子科技(武汉)有限公司 | Display device, server and content display method |
US12056326B2 (en) | 2020-09-22 | 2024-08-06 | VIDAA USA, Inc. | Display apparatus |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102833610A (en) * | 2012-09-24 | 2012-12-19 | 北京多看科技有限公司 | Program selection method, apparatus and digital television terminal |
CN102929924A (en) * | 2012-09-20 | 2013-02-13 | 百度在线网络技术(北京)有限公司 | Method and device for generating word selecting searching result based on browsing content |
WO2013138603A1 (en) * | 2012-03-16 | 2013-09-19 | Google Inc. | Providing information prior to downloading resources |
US20140052450A1 (en) * | 2012-08-16 | 2014-02-20 | Nuance Communications, Inc. | User interface for entertainment systems |
US20140196091A1 (en) * | 2013-01-07 | 2014-07-10 | Samsung Electronics Co., Ltd. | Server and method for controlling server |
CN104462510A (en) * | 2014-12-22 | 2015-03-25 | 北京奇虎科技有限公司 | Search method and device based on user search intention |
CN104462262A (en) * | 2014-11-21 | 2015-03-25 | 北京奇虎科技有限公司 | Method and device for achieving voice search and browser client side |
CN104462576A (en) * | 2014-12-29 | 2015-03-25 | 北京奇虎科技有限公司 | Method and device for comprehensive music search based on label pages |
CN105320706A (en) * | 2014-08-05 | 2016-02-10 | 阿里巴巴集团控股有限公司 | Processing method and device of search result |
CN106303667A (en) * | 2016-07-29 | 2017-01-04 | 乐视控股(北京)有限公司 | Voice search method and device, terminal unit |
CN106469210A (en) * | 2016-09-02 | 2017-03-01 | 腾讯科技(深圳)有限公司 | The methods of exhibiting of media categories label and device |
CN109271533A (en) * | 2018-09-21 | 2019-01-25 | 深圳市九洲电器有限公司 | A kind of multimedia document retrieval method |
CN109618206A (en) * | 2019-01-24 | 2019-04-12 | 青岛海信电器股份有限公司 | The method and display equipment at presentation user interface |
CN110309266A (en) * | 2019-07-05 | 2019-10-08 | 拉扎斯网络科技(上海)有限公司 | Object searching method and device, electronic equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
J. -W. JEONG & D. -H. LEE: "Inferring search intents from remote control movement patterns: a new content search method for smart TV", IEEE TRANSACTIONS ON CONSUMER ELECTRONICS * |
王琳等: "一种实现智能电视语音搜索的方案", 电信科学 * |
Also Published As
Publication number | Publication date |
---|---|
CN110737840B (en) | 2023-07-28 |
CN117056622A (en) | 2023-11-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110737840B (en) | Voice control method and display device | |
CN111314789B (en) | Display device and channel positioning method | |
CN112163086B (en) | Multi-intention recognition method and display device | |
CN111405318B (en) | Video display method and device and computer storage medium | |
CN112463269B (en) | User interface display method and display equipment | |
CN110659010A (en) | Picture-in-picture display method and display equipment | |
CN111625716B (en) | Media asset recommendation method, server and display device | |
CN111787376B (en) | Display device, server and video recommendation method | |
CN111770370A (en) | Display device, server and media asset recommendation method | |
CN111914134B (en) | A kind of association recommendation method, intelligent device and service device | |
CN112165641A (en) | Display device | |
CN112380420A (en) | Searching method and display device | |
CN112135170A (en) | Display device, server and video recommendation method | |
CN111866568B (en) | Display device, server and video collection acquisition method based on voice | |
CN111083538A (en) | Background image display method and device | |
CN112162809B (en) | Display device and user collection display method | |
CN113542899B (en) | Information display method, display device and server | |
CN118445485A (en) | Display device and voice searching method | |
KR101714661B1 (en) | Method for data input and image display device thereof | |
CN112929717B (en) | Focus management method and display device | |
CN111950288B (en) | Entity labeling method in named entity recognition and intelligent device | |
CN113542900B (en) | Media information display method and display equipment | |
CN112199560B (en) | Search method of setting items and display equipment | |
CN113573127B (en) | Method for adjusting channel control sequencing and display equipment | |
CN114372214A (en) | Display device, server and content display method |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218. Applicant after: Hisense Visual Technology Co., Ltd. Address before: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218. Applicant before: QINGDAO HISENSE ELECTRONICS Co., Ltd. |
| GR01 | Patent grant | |