CN112965654A - Gesture recognition feedback method and device - Google Patents
- Publication number
- CN112965654A (application number CN202110224675.4A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- page
- gesture information
- element set
- information
- Legal status: Pending (the legal status is an assumption, not a legal conclusion; no legal analysis has been performed)
Classifications
- G06F3/04883 — GUI interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
- G06F3/0425 — Digitisers characterised by opto-electronic transducing means, using a single imaging device (e.g. a video camera) for tracking object positions with respect to an imaged reference surface
- G06F3/0482 — Interaction with lists of selectable items, e.g. menus
- G06F3/04847 — Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
- G06F3/0485 — Scrolling or panning
- G06Q30/0643 — Electronic shopping [e-shopping] user interfaces graphically representing goods, e.g. 3D product representation
- G06Q50/12 — Hotels or restaurants
- G06F3/04886 — Touch-screen GUI interaction techniques that partition the display area into independently controllable areas, e.g. virtual keyboards or menus
- G06F9/451 — Execution arrangements for user interfaces
Abstract
The invention discloses a gesture recognition feedback method and device. The method comprises: detecting gesture information of a user in a business place; acquiring the layout state of the current page, the page comprising a first element set and a second element set; determining, according to the layout state of the page and the gesture information, the item elements in the first element set to be added to the second element set; and adding the item element to the second element set according to the gesture information. With this technical scheme, a user can conveniently move item elements from one set to another by gesture, which enriches the ways in which item elements can be added, effectively increases interaction with the user, better meets users' interaction needs, and improves user experience and engagement.
Description
This application is a divisional application of the invention patent application filed on February 27, 2018, with application number 201810161137.3, entitled "Gesture recognition feedback method and device".
Technical Field
The present invention relates to the field of internet technology, and in particular to a gesture recognition feedback method and device.
Background
With the continuous development of science and technology, electronic devices such as mobile phones, computers, televisions and projectors have become widely used and greatly enrich people's daily lives. In the prior art, when a user needs to operate an electronic device, the user mostly issues operation instructions through a touch screen, a keyboard, a mouse, a joystick, a remote control or a switch — for example, sliding on a touch screen to scroll its displayed content, or using a remote control to change the channel or adjust the playback volume of a television. After receiving the operation instruction, the electronic device provides feedback accordingly. However, such operation methods do not meet users' interaction needs well, and the operation process remains cumbersome, making operation inconvenient for the user.
Disclosure of Invention
In view of the above, the present invention provides a gesture recognition feedback method and apparatus that overcome, or at least partially solve, the above problems.
According to an aspect of the present invention, there is provided a gesture recognition feedback method, including:
detecting gesture information of a user in a business place;
acquiring the layout state of the current page; the page comprises a first element set and a second element set;
determining, according to the layout state of the page and the gesture information, the item elements in the first element set to be added to the second element set;
and adding the item element to the second element set according to the gesture information.
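The steps above can be sketched in code. The following is a minimal, hypothetical Python sketch — the function names, the dictionary shapes for the page and the gesture information, and the hit-test logic are all illustrative assumptions, not part of the patent text:

```python
def detect_gesture():
    # Stand-in for camera/touch detection (assumed shape): returns the
    # gesture's relative position, trajectory and direction.
    return {"position": (120, 40),
            "trajectory": [(120, 80), (120, 40)],
            "direction": "forward"}

def gesture_feedback(page, gesture):
    """Move the item element the gesture points at from the first
    element set into the second element set."""
    # Determine, from the page layout, which item the gesture covers.
    target = None
    gx, gy = gesture["position"]
    for item, (x0, y0, x1, y1) in page["layout"].items():
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            target = item
            break
    # Add the matched item element to the second element set.
    if target is not None and target in page["first_set"]:
        page["second_set"].append(target)
    return target
```

A page whose layout places a "noodles" item under the gesture would thus see that item appended to its second element set.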
Further, determining the item elements in the first element set to be added to the second element set according to the layout state of the page and the gesture information further comprises:
determining the position information of each item element in the first element set on the page according to the layout state of the page;
and determining the item elements in the first element set to be added to the second element set according to the gesture relative position information in the gesture information.
Further, adding the item element to the second element set according to the gesture information further comprises:
adding the item elements in the first element set to the second element set according to the gesture trajectory in the gesture information.
Further, before adding an item element to the second element set according to the gesture information, the method further comprises: determining, according to the gesture trajectory and/or gesture direction in the gesture information, whether the gesture information satisfies a preset adding condition;
accordingly, adding the item elements to the second element set according to the gesture information specifically comprises: adding the item elements to the second element set according to the gesture information if the preset adding condition is satisfied.
Further, the preset adding condition includes: the gesture movement distance falls within a preset distance range and/or the gesture direction falls within a preset direction range.
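A minimal sketch of such a preset adding condition check, under assumptions not stated in the patent: the trajectory is a list of (x, y) points, "forward" is taken as the negative-y direction, and the distance and angle thresholds are purely illustrative:

```python
import math

def meets_add_condition(trajectory, min_dist=50.0, max_dist=500.0,
                        allowed_angle=45.0):
    """Return True if the gesture's total movement distance lies within
    [min_dist, max_dist] and its overall direction is within
    allowed_angle degrees of 'forward' (negative y). All thresholds are
    illustrative assumptions."""
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    dx, dy = x1 - x0, y1 - y0
    dist = math.hypot(dx, dy)
    if not (min_dist <= dist <= max_dist):
        return False
    # Angle between the movement vector and the forward axis (0, -1).
    angle = math.degrees(math.atan2(dx, -dy))
    return abs(angle) <= allowed_angle
```

A short forward swipe of 150 pixels passes the check; a 10-pixel twitch fails on distance, and a purely sideways swipe fails on direction.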
Further, the page includes a plurality of sub-regions; determining the item elements in the first element set to be added to the second element set according to the layout state of the page and the gesture information further comprises:
determining the sub-region corresponding to the gesture information according to the gesture relative position information in the gesture information;
and determining, according to the layout state of that sub-region and the gesture information, the item elements in the first element set within the sub-region to be added to the second element set.
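The sub-region lookup can be sketched as a simple point-in-rectangle test. The region names, rectangle representation and coordinates below are illustrative assumptions:

```python
def find_subregion(subregions, gesture_pos):
    """Map a gesture's relative position to the page sub-region that
    contains it. 'subregions' maps region names to (x0, y0, x1, y1)
    rectangles; returns None if no sub-region contains the position."""
    gx, gy = gesture_pos
    for name, (x0, y0, x1, y1) in subregions.items():
        if x0 <= gx < x1 and y0 <= gy < y1:
            return name
    return None
```

For example, splitting a page into "left" and "right" halves lets each seated user's gestures be resolved against only the items laid out in front of them.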
Further, detecting gesture information of the user in the business place further comprises:
recognizing frame images in a video of the business place to obtain the gesture information; or
detecting a sliding contact event of the user on a touch screen and deriving the gesture information from the sliding contact event.
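The touch-screen branch can be sketched as deriving a trajectory and a dominant direction from a sequence of sliding-contact events. The (timestamp, x, y) event shape and the four-way direction labels are assumptions, not specified by the patent:

```python
def gesture_from_touch(events):
    """Derive gesture information from sliding-contact events, each a
    (timestamp, x, y) tuple. The trajectory is the ordered sequence of
    contact points; the direction is the dominant axis of net movement."""
    points = [(x, y) for _, x, y in events]
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    if abs(dx) >= abs(dy):
        direction = "right" if dx >= 0 else "left"
    else:
        direction = "down" if dy >= 0 else "up"
    return {"trajectory": points, "direction": direction,
            "position": points[-1]}
```

The camera branch would produce the same gesture-information structure, only with positions recovered from recognized video frames instead of contact events.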
According to another aspect of the present invention, there is provided a gesture recognition feedback apparatus, the apparatus including:
a module for detecting gesture information of a user in a business place;
a module for obtaining a layout state of a current page; the page comprises a first element set and a second element set;
a module for determining, according to the layout state of the page and the gesture information, the item elements in the first element set to be added to the second element set;
and a module for adding an item element to the second element set according to the gesture information.
Further, the module for determining the item elements in the first element set to be added to the second element set according to the layout state of the page and the gesture information is further adapted to:
determine the position information of each item element in the first element set on the page according to the layout state of the page;
and determine the item elements in the first element set to be added to the second element set according to the gesture relative position information in the gesture information.
Further, the module for adding an item element to the second element set according to the gesture information is further adapted to:
add the item elements in the first element set to the second element set according to the gesture trajectory in the gesture information.
Further, the apparatus further comprises: a module for determining, according to the gesture trajectory and/or gesture direction in the gesture information, whether the gesture information satisfies a preset adding condition;
and the module for adding an item element to the second element set according to the gesture information is further adapted to: add the item elements to the second element set according to the gesture information if the preset adding condition is satisfied.
Further, the preset adding condition includes: the gesture movement distance falls within a preset distance range and/or the gesture direction falls within a preset direction range.
Further, the page includes a plurality of sub-regions; the module for determining the item elements in the first element set to be added to the second element set according to the layout state of the page and the gesture information is further adapted to:
determine the sub-region corresponding to the gesture information according to the gesture relative position information in the gesture information;
and determine, according to the layout state of that sub-region and the gesture information, the item elements in the first element set within the sub-region to be added to the second element set.
Further, the module for detecting gesture information of a user in a business place is further adapted to:
recognize frame images in a video of the business place to obtain the gesture information; or
detect a sliding contact event of the user on a touch screen and derive the gesture information from the sliding contact event.
According to yet another aspect of the present invention, there is provided a computing device comprising a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the gesture recognition feedback method.
According to still another aspect of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, where the executable instruction causes a processor to perform an operation corresponding to the gesture recognition feedback method.
According to the technical scheme provided by the invention, the item elements to be added can be accurately determined from the layout state of the page and the gesture information, so that a user can conveniently move item elements from one set to another by gesture. This enriches the ways in which item elements can be added, effectively increases interaction with the user, better meets users' interaction needs, and improves user experience and engagement.
The foregoing is only an overview of the technical solutions of the present invention. To make the technical means of the invention more clearly understood, and to make the above and other objects, features and advantages of the invention more apparent, specific embodiments of the invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 shows a flow diagram of a gesture recognition feedback method according to one embodiment of the invention;
FIG. 2a shows a schematic flow diagram of a gesture recognition feedback method according to another embodiment of the invention;
FIG. 2b illustrates a schematic diagram of operations performed on an intelligent dining table using gestures;
FIG. 3a shows a schematic flow diagram of a gesture recognition feedback method according to yet another embodiment of the invention;
FIG. 3b illustrates a schematic diagram of an ordering operation performed on an intelligent dining table using gestures;
FIG. 4 is a block diagram of a gesture recognition feedback device according to an embodiment of the present invention;
FIG. 5 illustrates a schematic structural diagram of a computing device according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a schematic flow diagram of a gesture recognition feedback method according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
step S100, when the page is detected to be in the triggerable state, the gesture detection system is informed to start gesture self-circulation detection.
The method is suitable for electronic devices with a display screen, such as intelligent dining tables, song-ordering machines, mobile phones, computers, televisions and projectors. An intelligent dining table is a dining table equipped with a touch display screen and an operating system, on whose touch display screen a user can browse dishes and dish promotions, select dishes and place orders by touch; a song-ordering machine is an electronic device running a song-ordering system, through which a user can queue karaoke songs for playback. With this method, a user can operate a page of the electronic device through gestures. In practice, not every page is suited to gesture operation. To determine quickly and effectively whether a page supports gesture operation, a corresponding state attribute can be set for each page in advance: pages that support gesture operation are given a triggerable state attribute, and pages that do not are given a non-triggerable state attribute. That is, the triggerable state denotes support for gesture operation, and the non-triggerable state denotes the lack of it.
To feed back the user's gestures in time, the page system detects in real time whether the current page is in the triggerable state, i.e. whether the page's state attribute is set to triggerable. When the page is detected to be in the triggerable state, the gesture detection system is notified to start gesture self-loop detection. Those skilled in the art may configure the gesture detection system according to actual needs; it is not limited here. For example, the gesture detection system may include a camera device and/or a touch detection device, where the camera device may be a camera and the touch detection device may be a touch screen. After receiving the notification, the gesture detection system automatically and cyclically detects whether gesture information is present.
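The triggerable-state check and notification described above might be sketched as follows; the class name, method names and string state values are assumptions introduced for illustration:

```python
class PageSystem:
    """Hypothetical sketch of the page system's state check."""

    def __init__(self):
        self.page_states = {}   # page id -> "triggerable" / "non-triggerable"
        self.notifications = []  # messages sent to the gesture detection system

    def set_state(self, page_id, state):
        self.page_states[page_id] = state

    def check_and_notify(self, page_id):
        # Only pages marked triggerable cause the gesture detection
        # system to start its self-loop detection.
        if self.page_states.get(page_id) == "triggerable":
            self.notifications.append(("start_gesture_loop", page_id))
            return True
        return False
```

In a real system the notification would go over the preset communication protocol rather than an in-memory list.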
Step S101, determining response actions to be executed by the page according to the layout state of the page and the gesture information detected by the gesture detection system.
After the gesture detection system detects gesture information, it sends the detected gesture information to the page system, and the page system then determines the response action the page should execute from the current layout state of the page and the detected gesture information. For different page layout states with the same gesture information, the determined response actions may differ; for different page layout states with different gesture information, the determined response actions may also be the same.
The layout state of a page refers to the distribution of page elements, such as text and images, on the page, and specifically includes information such as each element's size and position. Gesture information may include the gesture's relative position, trajectory and direction; in particular, the relative position relationship between the gesture and the page can be derived from the gesture relative position information.
For example, when the page is laid out as a dish list and the detected gesture is one hand moving leftward, the determined response action is to scroll the dish list leftward; when the page is laid out as a dish list and the detected gesture is one hand moving forward, the determined response action is to add the dish corresponding to the gesture to the shopping cart; when the page is laid out as an MV playing page and the detected gesture is two hands clapping, the determined response action is to play a preset applause audio clip; and when the page is laid out as an MV playing page and the detected gesture is a single finger pointing downward, the determined response action is to turn down the playback volume.
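These example mappings can be represented as a lookup table keyed by (layout state, gesture). The key strings below are illustrative assumptions; the pairings follow the examples just given:

```python
# Response lookup keyed by (layout state, gesture label); key strings
# are assumptions for illustration.
RESPONSES = {
    ("dish_list", "one_hand_left"): "scroll_dish_list_left",
    ("dish_list", "one_hand_forward"): "add_dish_to_cart",
    ("mv_playing", "two_hands_clap"): "play_applause_audio",
    ("mv_playing", "one_finger_down"): "turn_down_volume",
}

def response_for(layout_state, gesture):
    """Return the response action for a layout-state/gesture pair, or
    None if the combination has no preset response."""
    return RESPONSES.get((layout_state, gesture))
```

Unknown combinations return None, which a caller could treat as "no feedback required".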
And step S102, triggering the page to execute a response action.
After the response action to be executed by the page is determined, the page is triggered to execute it, providing real-time feedback on the gesture information so that the user can conveniently operate the page through gestures. If the response action determined in step S101 is to scroll the dish list leftward, then in step S102 the page is triggered to scroll the dish list leftward; if the determined response action is to turn down the playback volume, then in step S102 the page is triggered to turn down the playback volume.
The gesture recognition feedback method provided by this embodiment detects gesture information with a gesture detection system and, when gesture information is detected, accurately determines the response action the page should execute from the page's current layout state and the detected gesture information, thereby providing real-time feedback on the gesture information.
Fig. 2a is a schematic flow chart of a gesture recognition feedback method according to another embodiment of the present invention, and as shown in fig. 2a, the method includes the following steps:
step S200, when the page is detected to be in the triggerable state, the gesture detection system is informed to start gesture self-circulation detection.
The page system detects in real time whether the current page is in the triggerable state so as to feed back the user's gestures in time; when the page is detected to be in the triggerable state, the gesture detection system is notified to start gesture self-loop detection. Specifically, a notification to start gesture self-loop detection can be sent to the gesture detection system through a preset communication protocol, and after receiving it the gesture detection system automatically and cyclically detects whether gesture information is present. The preset communication protocol can be chosen by those skilled in the art according to actual needs and is not limited here.
The gesture detection system may include a camera device and/or a touch detection device. When it includes a camera device, the camera device records a video of the current business place, and frame images of the video can then be recognized with an existing image recognition method to obtain gesture information. Taking a dining place as an example: the place is furnished with an intelligent dining table at which the user sits, and camera devices can be arranged around the table to record video of its surroundings; frame images of the video are then recognized and processed to obtain gesture information of the user seated at the table. Taking a KTV venue as an example: the camera device may be installed in a KTV suite to record video of the suite, and frame images of the video are recognized and processed to obtain gesture information of the users in the suite.
When the gesture detection system includes a touch detection device, taking a touch screen as an example, the touch screen detects sliding contact events of the user's fingers and/or palm on the screen and derives gesture information from those events. In a KTV venue, for instance, the touch screen may be mounted on the tabletop of a tea table or on a wall of the KTV suite to detect sliding contact events of users' fingers and/or palms and obtain gesture information accordingly.
Step S201, receiving detected gesture information sent by the gesture detection system through a preset communication protocol.
After the gesture detection system detects gesture information, the gesture detection system sends the detected gesture information to the page system through a preset communication protocol, and the page system receives the detected gesture information sent by the gesture detection system.
Step S202, inquiring a response action corresponding to the layout state and the gesture information of the page in a preset response information base, and determining the inquired response action as a response action to be executed by the page.
In order to facilitate the determination of the response action, a response information base may be preset, wherein the response information base records the preset layout state and the corresponding relationship between the preset gesture information and the preset response action. After receiving gesture information sent by a gesture detection system, inquiring a response action corresponding to the layout state and the gesture information of the current page in a response information base, and determining the inquired response action as a response action to be executed by the page.
Specifically, the layout state of the page is matched against the preset layout states in the response information base to obtain a matched layout state. Since the matched layout state in the response information base may correspond to a plurality of pieces of preset gesture information, in order to determine the response action accurately, the gesture track and gesture direction in the gesture information are then respectively matched against the gesture track and gesture direction in the preset gesture information corresponding to the matched layout state, so as to obtain the corresponding response action.
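The two-level lookup described above — first the layout state, then the gesture track and direction — can be sketched as a nested table (all layout-state, gesture, and action names below are illustrative assumptions, not values fixed by the patent):

```python
# A minimal response information base, keyed first by layout state and
# then by (gesture track, gesture direction). The two-level dict layout
# and every name in it are illustrative assumptions.
RESPONSE_INFO_BASE = {
    "dish_list": {
        ("one_hand", "left"):     "scroll_dish_list_left",
        ("one_hand", "forward"):  "add_dish_to_cart",
        ("two_hands", "forward"): "enter_cart_page",
    },
}

def query_response_action(layout_state, gesture_track, gesture_direction):
    """Match the layout state first, then the track and direction,
    returning the preset response action or None when nothing matches."""
    gestures = RESPONSE_INFO_BASE.get(layout_state)
    if gestures is None:
        return None
    return gestures.get((gesture_track, gesture_direction))
```

The `None` return corresponds to a gesture the current page does not respond to, in which case the page simply performs no action.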
For example, when a user orders dishes at an intelligent dining table and the gesture detection system detects gesture information, the page system matches the layout state of the current page against the preset layout states in the response information base. Assume the matched layout state is that of a dish list page. According to the response information base, the dish list page layout state corresponds to three pieces of preset gesture information — one hand moving leftward, one hand moving forward, and two hands moving forward — whose preset response actions are, respectively, scrolling the dish list to the left, adding the dish corresponding to the gesture information to the shopping cart, and entering the shopping cart page. If the gesture track and gesture direction in the detected gesture information match those of the one-hand-forward gesture, the obtained response action is adding the corresponding dish in the dish list to the shopping cart.
Step S203, the trigger page executes the response action.
After the response action to be executed by the page is determined, the page is triggered to execute the response action, so that the real-time feedback of gesture information is realized, and a user can conveniently operate the page through gestures.
Step S204, when the page is detected to be in the non-triggerable state, the gesture detection system is notified to close gesture self-loop detection.
After the response action is executed, a page that does not support gesture operation may be entered, in which case the gesture detection system does not need to keep working. To avoid unnecessary work by the gesture detection system, after the response action is executed, the page system detects whether the current page is in the non-triggerable state; if so, the gesture detection system can be notified through the preset communication protocol to close gesture self-loop detection.
In an actual application scenario, when a display screen of the electronic device is large in size, there may be a situation that a plurality of users need to perform gesture operations on a page at the same time. Specifically, after the gesture detection system detects gesture information, a sub-region corresponding to the gesture information can be determined according to gesture relative position information in the gesture information detected by the gesture detection system, then a response action to be executed by the sub-region is determined according to the layout state and the gesture information of the sub-region corresponding to the gesture information in the page, and then the sub-region is triggered to execute the response action, so that a plurality of users can conveniently and simultaneously perform gesture operation on the page.
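The mapping from gesture relative position to sub-region might be sketched as follows, assuming horizontal bands like the two-user dining-table example (the band split and the bottom-up coordinate convention are assumptions; the patent does not prescribe how sub-regions are laid out):

```python
def subregion_for_position(y, screen_height, num_subregions=2):
    """Map the vertical coordinate of a gesture to a sub-region id.

    Mirrors the two-user dining-table example: sub-region 1 is the lower
    half of the screen and sub-region 2 the upper half (assuming y grows
    upward; the split rule is an illustrative assumption).
    """
    band = screen_height / num_subregions
    index = int(y // band)                      # 0-based band from the bottom
    return min(index, num_subregions - 1) + 1   # clamp edge, 1-based id
```

Each detected gesture is then dispatched to the response-action lookup of its own sub-region, so gestures from different users do not interfere with one another.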
For example, when a user orders dishes on the smart table, the touch display screen of the smart table is large (for example, 43 inches or more), and several users seated beside the smart table can order dishes on it at the same time. As shown in fig. 2b, two users (referred to as user 1 and user 2) are seated opposite each other on two sides of the smart dining table. In this case, the page includes two sub-areas, sub-area 1 and sub-area 2: sub-area 1 is the lower half of the touch display screen, sub-area 2 is the upper half, dish lists are displayed in both sub-areas, and the two users can operate simultaneously. Specifically, if user 1 wants to enter the shopping cart page and user 2 wants to scroll the dish list displayed in sub-area 2 to the left, user 1 may make a two-hands-forward gesture above sub-area 1 and user 2 may make a one-hand-left gesture above sub-area 2. The response action determined for sub-area 1 is then entering the shopping cart page, and the response action determined for sub-area 2 is scrolling the dish list to the left. Gesture information is thus fed back in the respective sub-areas without mutual interference, so that multiple users can conveniently perform gesture operations on the page at the same time.
The gesture recognition feedback method provided by this embodiment feeds back gesture information in a plurality of sub-regions separately, so that multiple users can conveniently perform gesture operations on a page at the same time; this enriches page operation modes, effectively increases interaction with users, and improves user experience. According to the response information base, the response action corresponding to the layout state of the page and the gesture information can be determined quickly and accurately. And when the page is detected to be in the non-triggerable state, the gesture detection system is promptly notified to close gesture self-loop detection, effectively preventing the gesture detection system from performing unnecessary work and saving energy.
Fig. 3a is a schematic flow chart of a gesture recognition feedback method according to another embodiment of the present invention, and as shown in fig. 3a, the method includes the following steps:
step S300, when the page is detected to be in the triggerable state, the gesture detection system is informed to start gesture self-circulation detection.
When the page is detected to be in the triggerable state, the page supports gesture operation, and the gesture detection system is then notified to start gesture self-loop detection. The gesture detection system may include a camera device and/or a touch detection device, etc. Taking a gesture detection system that includes a camera device and is applied to a dining place as an example: the dining place is provided with an intelligent dining table, a user sits beside the intelligent dining table, and the camera device can be arranged around the intelligent dining table to shoot video of the table's surroundings; frame images in the video are then recognized and processed, so as to obtain gesture information of the user seated beside the intelligent dining table.
The page can comprise a first element set and a second element set, the first element set comprises at least one item element, and the item element can be an item provided by a shopping platform or a shop, such as a dish provided by a dining shop, a hairdressing item provided by a hairdressing shop, a nail item provided by a nail shop, a commodity provided by a business company or an amusement item provided by an amusement game shop. The user can add the item element in the first element set to the second element set through gesture operation. For example, when a user wants to add an item element in a first element set displayed on a page to a second element set, the user may make a gesture to move the item element in the first element set to a direction in which the second element set is located.
Step S301, receiving detected gesture information sent by the gesture detection system through a preset communication protocol.
Step S302, according to the layout state of the page, a first element set and a second element set in the page are determined.
In order to accurately add the item elements in the first element set to the second element set, after gesture information detected by the gesture detection system is received, the first element set and the second element set in the page need to be determined according to the layout state of the page. Specifically, the position information of the first element set and the second element set in the page, the item elements included in the first element set and the second element set, the position information of the item elements in the page, and the like can be determined. Taking the page as the page displayed on the smart table as an example, the first element set in the determined page may be a dish list, and the second element set may be a shopping cart.
Step S303, judging whether the gesture information corresponds to a gesture event for adding the article elements in the first element set to the second element set; if yes, go to step S304; if not, the method ends.
Specifically, the corresponding item element in the first element set may be determined according to the gesture relative position information in the gesture information, and then whether the gesture information meets a preset adding condition is judged according to the gesture track and/or gesture direction in the gesture information. For example, the preset adding condition may include that the gesture moving distance falls within a preset distance range and/or the gesture direction falls within a preset direction range. If the gesture information meets the preset adding condition, it is determined that the gesture information corresponds to a gesture event for adding the item element in the first element set to the second element set, and step S304 is executed; if the gesture information does not meet the preset adding condition, it is determined that the gesture information does not correspond to such a gesture event, and the method ends.
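The preset adding condition check might be sketched as follows (the concrete distance and direction ranges are example values chosen purely for illustration; the embodiment leaves them to the implementer):

```python
def meets_adding_condition(distance, direction_deg,
                           distance_range=(30.0, 500.0),
                           direction_range=(45.0, 135.0)):
    """Check the preset adding condition: the gesture moving distance
    falls within a preset distance range AND the gesture direction falls
    within a preset direction range.

    The default ranges (30-500 px, 45-135 degrees, i.e. roughly "toward
    the cart") are illustrative assumptions, not values from the patent.
    """
    lo_d, hi_d = distance_range
    lo_a, hi_a = direction_range
    return lo_d <= distance <= hi_d and lo_a <= direction_deg <= hi_a
```

The patent allows either or both sub-conditions ("and/or"); the sketch shows the conjunctive variant, and dropping one comparison yields the single-condition variants.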
Since the gesture detection system has started gesture self-loop detection, it may continue to detect gesture information, and steps S301 to S303 are performed again each time new gesture information is detected.
Step S304, determining the response action to be executed by the page as adding the item element in the second element set.
And under the condition that the gesture information is judged to correspond to the gesture event for adding the item element in the first element set to the second element set, determining the response action to be executed by the page as adding the item element in the second element set. And the object element is the object element corresponding to the gesture information.
Step S305, adding the item element in the second element set by the trigger page.
The item element may still be included in the first element set after it has been added to the second element set. Specifically, the page program can add the item element in the first element set to the second element set following the gesture track in the gesture information, giving the user an animation of the added item element moving across the page, which improves user experience and adds interest.
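Producing the moving-animation keyframes from the gesture track might be sketched as follows (linear resampling over the recorded track points is an assumption; the embodiment only states that the element is added according to the gesture track):

```python
def animation_keyframes(track, steps=5):
    """Sample positions along the gesture track so the added item element
    can be animated from the dish list toward the shopping cart.

    `track` is a list of (x, y) points; the function returns steps + 1
    evenly spaced keyframe positions, linearly interpolated between
    consecutive track points.
    """
    if len(track) < 2:
        return list(track)
    frames = []
    for i in range(steps + 1):
        t = i / steps * (len(track) - 1)     # fractional index into track
        lo = int(t)
        hi = min(lo + 1, len(track) - 1)
        f = t - lo                           # interpolation weight
        x = track[lo][0] + (track[hi][0] - track[lo][0]) * f
        y = track[lo][1] + (track[hi][1] - track[lo][1]) * f
        frames.append((x, y))
    return frames
```

The page program would render the item element at each keyframe in turn to produce the moving effect.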
In practical application scenarios, multiple users may need to perform gesture operations on a page at the same time. In this case, the page may include a plurality of sub-regions. After the gesture detection system detects gesture information, the sub-region corresponding to the gesture information is determined according to the gesture relative position information in the gesture information; then, according to the layout state of that sub-region in the page, the first element set and the second element set in the sub-region are determined, and it is judged whether the gesture information corresponds to a gesture event for adding an item element in the sub-region's first element set to the second element set. If so, the response action to be executed by the sub-region is determined as adding the item element in the second element set, and the sub-region is then triggered to add the item element in the second element set.
For example, when ordering on the smart table with gestures, as shown in fig. 3b, two users (referred to as user 1 and user 2) are seated opposite each other on two sides of the smart table. In this case, the page includes two sub-areas, sub-area 1 and sub-area 2: sub-area 1 is the lower half of the touch display screen, sub-area 2 is the upper half, a shopping cart is displayed in the middle of the screen, dish lists are displayed in both sub-areas, and the two users can order with gestures at the same time. Specifically, if user 1 wants to add dish 1 in the dish list displayed in sub-area 1 to the shopping cart and user 2 wants to add dish 2 in the dish list displayed in sub-area 2 to the shopping cart, user 1 may make a gesture above dish 1 in sub-area 1 moving it toward the shopping cart, and user 2 may make a gesture above dish 2 in sub-area 2 moving it toward the shopping cart. The response action determined for sub-area 1 is then adding dish 1 to the shopping cart, and the response action determined for sub-area 2 is adding dish 2 to the shopping cart. Gesture information is thus fed back in the respective sub-areas without mutual interference, so that multiple users can conveniently perform ordering operations at the same time.
The gesture recognition feedback method provided by the embodiment realizes feedback of gesture information in a plurality of sub-regions respectively, so that a plurality of users can conveniently and simultaneously perform gesture operation on a page; and the user can conveniently add the article elements from one set to other sets in a gesture operation mode, so that the article element adding mode is enriched, the interaction with the user is effectively increased, the interaction requirement of the user is well met, the user experience is favorably improved, and the interestingness is increased.
Fig. 4 is a block diagram illustrating a structure of a gesture recognition feedback apparatus according to an embodiment of the present invention, and as shown in fig. 4, the gesture recognition feedback apparatus 410 includes: a notification module 401, an action determination module 402 and a trigger module 403.
The notification module 401 is adapted to: when the page is detected to be in a triggerable state, the gesture detection system 420 is notified to initiate gesture self-loop detection.
The action determining module 402 is adapted to: according to the layout state of the page and the gesture information detected by the gesture detection system 420, the response action to be executed by the page is determined.
Optionally, the action determining module 402 is further adapted to: inquiring response actions corresponding to the layout state and the gesture information of the page in a preset response information base; the response information base records a corresponding relation between a preset layout state and preset gesture information and a preset response action; and determining the inquired response action as the response action to be executed by the page.
Optionally, the action determining module 402 is further adapted to: matching the layout state of the page with a preset layout state in a response information base to obtain a matched layout state; and respectively matching the gesture track and the gesture direction in the gesture information with the gesture track and the gesture direction in the preset gesture information corresponding to the matched layout state in the response information base to obtain the corresponding response action.
Optionally, the action determining module 402 is further adapted to: determining a first element set and a second element set in a page according to the layout state of the page; determining whether the gesture information corresponds to a gesture event for adding an item element in the first set of elements to the second set of elements; and if so, determining the response action to be executed by the page as adding the item element in the second element set.
The triggering module 403 is adapted to: the trigger page performs a response action.
For example, in the event that the action determination module 402 determines that the responsive action to be performed by the page is to add an item element in the second set of elements, the triggering module 403 triggers the page to add an item element in the second set of elements.
Optionally, the apparatus may further comprise: the receiving module 404 is adapted to receive the detected gesture information sent by the gesture detection system 420 through the preset communication protocol.
Optionally, the notification module 401 is further adapted to: when the page is detected to be in the non-triggerable state, the gesture detection system 420 is notified to turn off gesture self-loop detection.
In the case where the page includes a plurality of sub-regions, the apparatus may further include: the region determining module 405 is adapted to determine a sub-region corresponding to the gesture information according to the gesture relative position information in the gesture information detected by the gesture detecting system 420. In this case, the action determining module 402 is further adapted to: determining response actions to be executed by the sub-regions according to the layout state and the gesture information of the sub-regions corresponding to the gesture information in the page; the triggering module 403 is further adapted to: triggering the sub-region to perform the responsive action.
The gesture recognition feedback device provided by the embodiment can detect gesture information by utilizing a gesture detection system, and can accurately determine the response action to be executed by a page according to the current layout state of the page and the detected gesture information under the condition of detecting the gesture information, so that the real-time feedback of the gesture information is realized.
The invention also provides a nonvolatile computer storage medium, wherein the computer storage medium stores at least one executable instruction, and the executable instruction can execute the gesture recognition feedback method in any method embodiment.
Fig. 5 is a schematic structural diagram of a computing device according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the computing device.
As shown in fig. 5, the computing device may include: a processor (processor)502, a Communications Interface 504, a memory 506, and a communication bus 508.
Wherein:
the processor 502, communication interface 504, and memory 506 communicate with one another via a communication bus 508.
A communication interface 504 for communicating with network elements of other devices, such as clients or other servers.
The processor 502 is configured to execute the program 510, and may specifically perform the relevant steps in the above-described gesture recognition feedback method embodiment.
In particular, program 510 may include program code that includes computer operating instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention. The computing device includes one or more processors, which may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
And a memory 506 for storing a program 510. The memory 506 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 510 may be specifically configured to enable the processor 502 to execute the gesture recognition feedback method in any of the above-described method embodiments. For specific implementation of each step in the program 510, reference may be made to corresponding steps and corresponding descriptions in units in the gesture recognition feedback embodiments described above, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose preferred embodiments of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in accordance with embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
Claims (16)
1. A gesture recognition feedback method, the method comprising:
detecting gesture information of a user in a business place;
acquiring the layout state of the current page; the page comprises a first element set and a second element set;
determining the item elements to be added to the second element set in the first element set according to the layout state of the page and the gesture information;
adding the item element in the second element set according to the gesture information.
2. The method of claim 1, wherein the determining, according to the layout state of the page and the gesture information, an item element of the first set of elements to be added to the second set of elements further comprises:
determining the position information of each item element in the first element set in the page according to the layout state of the page;
and determining the item elements to be added to the second element set in the first element set according to the gesture relative position information in the gesture information.
3. The method of claim 1, wherein the adding the item element in the second set of elements according to the gesture information further comprises:
and adding the item elements in the first element set into the second element set according to the gesture tracks in the gesture information.
4. The method of claim 1, wherein prior to said adding the item element in the second set of elements according to the gesture information, the method further comprises: judging whether the gesture information meets a preset adding condition or not according to a gesture track and/or a gesture direction in the gesture information;
the adding the item elements in the second element set according to the gesture information specifically includes: and if the preset adding condition is met, adding the article elements in the second element set according to the gesture information.
5. The method of claim 4, wherein the preset addition condition comprises: the gesture moving distance accords with a preset distance range and/or the gesture direction accords with a preset direction range.
6. The method of any of claims 1-5, wherein the page includes a plurality of sub-regions; the determining, according to the layout state of the page and the gesture information, the item elements to be added to the second element set in the first element set further includes:
determining a sub-region corresponding to the gesture information according to the gesture relative position information in the gesture information;
and determining the item elements to be added to the second element set in the first element set in the sub-region according to the layout state of the sub-region corresponding to the gesture information in the page and the gesture information.
7. The method of any of claims 1-6, wherein the detecting gesture information of the user in the business venue further comprises:
identifying frame images in a video of the business place to obtain gesture information; or,
and detecting a sliding contact event of a user on the touch screen, and obtaining gesture information according to the sliding contact event.
8. A gesture recognition feedback device, the device comprising:
means for detecting gesture information of a user in a business venue;
a module for obtaining a layout state of a current page; the page comprises a first element set and a second element set;
means for determining item elements to be added to the second element set in the first element set according to the layout state of the page and the gesture information;
means for adding the item element in the second set of elements according to the gesture information.
9. The apparatus of claim 8, wherein the module for determining, according to the layout state of the page and the gesture information, the item elements in the first element set to be added to the second element set is further adapted to:
determine the position information of each item element in the first element set on the page according to the layout state of the page; and
determine the item elements in the first element set to be added to the second element set according to the gesture relative position information in the gesture information.
10. The apparatus of claim 8, wherein the module for adding the item elements to the second element set according to the gesture information is further adapted to:
add the item elements in the first element set to the second element set according to the gesture track in the gesture information.
11. The apparatus of claim 8, further comprising: a module for judging, according to the gesture track and/or the gesture direction in the gesture information, whether the gesture information meets a preset adding condition;
wherein the module for adding the item elements to the second element set according to the gesture information is further adapted to: if the preset adding condition is met, add the item elements to the second element set according to the gesture information.
12. The apparatus of claim 11, wherein the preset adding condition comprises: the gesture movement distance falls within a preset distance range and/or the gesture direction falls within a preset direction range.
13. The apparatus of any of claims 8-12, wherein the page comprises a plurality of sub-regions, and the module for determining, according to the layout state of the page and the gesture information, the item elements in the first element set to be added to the second element set is further adapted to:
determine a sub-region corresponding to the gesture information according to the gesture relative position information in the gesture information; and
determine, according to the layout state of the sub-region corresponding to the gesture information in the page and the gesture information, the item elements in the first element set within the sub-region to be added to the second element set.
14. The apparatus of any of claims 8-13, wherein the module for detecting gesture information of a user in a business venue is further adapted to:
identify frame images in a video of the business venue to obtain the gesture information; or
detect a slide contact event of the user on a touch screen, and obtain the gesture information according to the slide contact event.
15. A computing device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another via the communication bus; and
the memory is configured to store at least one executable instruction which, when executed, causes the processor to perform operations corresponding to the gesture recognition feedback method of any one of claims 1-7.
16. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the gesture recognition feedback method of any one of claims 1-7.
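Read together, the claims describe a pipeline: detect a gesture, obtain the page layout, determine which item element of the first set the gesture targets, and add it to the second set when a condition holds. A self-contained end-to-end sketch, under the assumption that "adding to the second element set" models moving an item from a displayed list into something like a cart; the `Page` class, bounds format, and 40-pixel threshold are all illustrative, not from the patent:

```python
import math

class Page:
    """Illustrative page model: a layout mapping each item element of the
    first set to its (x0, y0, x1, y1) bounds, plus the second element set."""
    def __init__(self, layout):
        self.layout = layout
        self.second_set = []

def feedback_on_gesture(page, track, min_distance=40.0):
    """Locate the item element under the gesture's start point and, if the
    slide is long enough, add it to the second element set."""
    (sx, sy), (ex, ey) = track[0], track[-1]
    if math.hypot(ex - sx, ey - sy) < min_distance:
        return page.second_set                     # adding condition not met
    for item, (x0, y0, x1, y1) in page.layout.items():
        if x0 <= sx <= x1 and y0 <= sy <= y1:      # gesture starts on this item
            page.second_set.append(item)
            break
    return page.second_set
```

A too-short slide leaves the second set unchanged, which matches the claimed behavior of only adding when the preset condition is met.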
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110224675.4A CN112965654A (en) | 2018-02-27 | 2018-02-27 | Gesture recognition feedback method and device |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810161137.3A CN108446072B (en) | 2018-02-27 | 2018-02-27 | Gesture recognition feedback method and device |
| CN202110224675.4A CN112965654A (en) | 2018-02-27 | 2018-02-27 | Gesture recognition feedback method and device |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810161137.3A Division CN108446072B (en) | 2018-02-27 | 2018-02-27 | Gesture recognition feedback method and device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN112965654A true CN112965654A (en) | 2021-06-15 |
Family
ID=63192992
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110224675.4A Pending CN112965654A (en) | 2018-02-27 | 2018-02-27 | Gesture recognition feedback method and device |
| CN201810161137.3A Active CN108446072B (en) | 2018-02-27 | 2018-02-27 | Gesture recognition feedback method and device |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810161137.3A Active CN108446072B (en) | 2018-02-27 | 2018-02-27 | Gesture recognition feedback method and device |
Country Status (2)
| Country | Link |
|---|---|
| CN (2) | CN112965654A (en) |
| WO (1) | WO2019165936A1 (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112965654A (en) * | 2018-02-27 | 2021-06-15 | 口碑(上海)信息技术有限公司 | Gesture recognition feedback method and device |
Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130097566A1 (en) * | 2011-10-17 | 2013-04-18 | Carl Fredrik Alexander BERGLUND | System and method for displaying items on electronic devices |
| CN104083869A (en) * | 2014-07-11 | 2014-10-08 | 京东方科技集团股份有限公司 | Multiplayer game machine and display system |
| CN104216646A (en) * | 2013-05-30 | 2014-12-17 | 华为软件技术有限公司 | Method and device for creating application program based on gesture |
| CN104536659A (en) * | 2014-12-15 | 2015-04-22 | 小米科技有限责任公司 | Target object information processing method and device |
| CN105511768A (en) * | 2015-12-09 | 2016-04-20 | 广东欧珀移动通信有限公司 | Control method, control device and electronic device |
| CN105592366A (en) * | 2016-03-01 | 2016-05-18 | 钟林 | Method and device for operating character input for intelligent television by means of positional gestures |
| CN107493495A (en) * | 2017-08-14 | 2017-12-19 | 深圳市国华识别科技开发有限公司 | Interaction locations determine method, system, storage medium and intelligent terminal |
| CN107563286A (en) * | 2017-07-28 | 2018-01-09 | 南京邮电大学 | A kind of dynamic gesture identification method based on Kinect depth information |
| CN108279845A (en) * | 2018-01-30 | 2018-07-13 | 口碑(上海)信息技术有限公司 | Article element slides adding method and device |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112965654A (en) * | 2018-02-27 | 2021-06-15 | 口碑(上海)信息技术有限公司 | Gesture recognition feedback method and device |
- 2018
  - 2018-02-27 CN CN202110224675.4A patent/CN112965654A/en active Pending
  - 2018-02-27 CN CN201810161137.3A patent/CN108446072B/en active Active
- 2019
  - 2019-02-22 WO PCT/CN2019/075835 patent/WO2019165936A1/en not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| CN108446072A (en) | 2018-08-24 |
| CN108446072B (en) | 2021-01-05 |
| WO2019165936A1 (en) | 2019-09-06 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP3221817B1 (en) | Screenshot based indication of supplemental information | |
| CN110110203B (en) | Resource information pushing method, server, resource information display method and terminal | |
| TWI744368B (en) | Play processing method, device and equipment | |
| KR101885775B1 (en) | Method for capturing content and mobile terminal thereof | |
| KR102008495B1 (en) | Method for sharing content and mobile terminal thereof | |
| KR101894395B1 (en) | Method for providing capture data and mobile terminal thereof | |
| KR101619559B1 (en) | Object detection and user settings | |
| CN115022653A (en) | Information display method and device, electronic equipment and storage medium | |
| CN113518264B (en) | Interactive method, device, terminal and storage medium | |
| KR20130097488A (en) | Method for providing information and mobile terminal thereof | |
| US20250335977A1 (en) | Method and device for controlling live content streaming service | |
| CN109154943A (en) | Server-based conversion of autoplay content to click-to-play content | |
| CN105488145B (en) | Display methods, device and the terminal of web page contents | |
| US20150347461A1 (en) | Display apparatus and method of providing information thereof | |
| CN107885823B (en) | Audio information playing method and device, storage medium and electronic equipment | |
| WO2016155446A1 (en) | Information display method, channel management platform, and terminal | |
| US20150019976A1 (en) | Portable terminal and method for providing information using the same | |
| CN117009687A (en) | Information display methods, devices, electronic equipment and storage media | |
| CN115086774B (en) | Resource display method, device, electronic equipment and storage medium | |
| CN108446072B (en) | Gesture recognition feedback method and device | |
| CN110213307B (en) | Multimedia data pushing method and device, storage medium and equipment | |
| CN114936000A (en) | Vehicle-mounted machine interaction method, system, medium and equipment based on picture framework | |
| CN107291358A (en) | Content display control method, electronic equipment, and videoconference client | |
| CN115379113A (en) | Shooting processing method, device, equipment and storage medium | |
| CN114090896B (en) | Information display method, device and electronic equipment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2021-06-15