WO2021134160A1 - Method for driving a display, tracking monitor and storage medium - Google Patents
Method for driving a display, tracking monitor and storage medium
- Publication number
- WO2021134160A1 (PCT/CN2019/129778)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- eye
- image
- display
- user
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/0093—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
Definitions
- the present disclosure generally relates to a method for driving a display, a tracking monitor and a storage medium.
- the surface of the medical electronic device may contaminate the hand of the medical professional through physical contact.
- the medical professional needs to clean his or her hands frequently, which wastes a great deal of the medical professional's time and effort.
- a method for driving a display towards a user, including: identifying a face of the user in an image captured by an image capturing device; if the face of the user is identified, determining whether an eye on the face focuses on a predefined interesting area, wherein the predefined interesting area includes at least an area where the display is located; and if the eye on the face focuses on the predefined interesting area, controlling to drive the display according to a position of the face. Since the method drives the display only when the eye on the face focuses on the predefined interesting area, unnecessary motor-driven movement of the display is reduced. Here, identifying a face of the user may simply mean detecting the existence of a face, without further comparison.
- determining whether the eye on the face focuses on the predefined interesting area includes: extracting contour data of the eye on the face from the image; and analyzing the contour data of the eye on the face to determine whether the eye focuses on the predefined interesting area.
- extracting the contour data of the eye on the face from the image includes: extracting a plurality of feature points from the eye on the face detected in the image; determining whether the number of the plurality of feature points is greater than a preset quantity threshold; and if the number of the plurality of feature points is greater than the preset quantity threshold, determining that the contour data of the eye on the face is extracted from the image.
- the contour data of the eye on the face comprise at least three points selected from a feature point group, or at least two lengths, wherein each of the lengths is a distance between any two points selected from the feature point group, and the feature point group consists of: both end points of the eye along the longitudinal direction of the eye on the face, that is, a first point farthest away from the center of the eye contour which is located at one end of the eye on the face, and a second point farthest away from the center of the eye contour which is located at the other end of the eye on the face; and at least one set of points selected from a group consisting of vertexes of an inscribed polygon of an iris, vertexes of a circumscribed polygon of the iris, vertexes of an inscribed polygon of a pupil, vertexes of a circumscribed polygon of the pupil, points on edges of the iris, and points on edges of the pupil.
- analyzing the contour data of the eye on the face to determine whether the eye focuses on the predefined interesting area includes: calculating a ratio of an eye contour based on the contour data of the eye on the face; determining whether the ratio is greater than a preset ratio threshold; and if the ratio is greater than the preset ratio threshold, determining that the eye focuses on the predefined interesting area.
- the ratio is obtained according to equation (1):

  r = (‖p2 − p6‖ + ‖p3 − p5‖) / (2‖p1 − p4‖)  (1)

  where r represents the ratio; p1, p2, p3, p4, p5 and p6 are feature points of the eye contour extracted from the image; ‖·‖ represents the norm operation; ‖p1 − p4‖ represents a width of the eye contour of the eye on the face; ‖p2 − p6‖ represents a first height of the eye contour of the eye on the face; and ‖p3 − p5‖ represents a second height of the eye contour of the eye on the face.
- the ratio is obtained according to equation (2):

  r = λ · (‖p2 − p6‖ + ‖p3 − p5‖) / (2‖p1 − p4‖)  (2)

  where r represents the ratio; λ represents a correction factor; p1, p2, p3, p4, p5 and p6 are feature points of the eye contour extracted from the image; ‖·‖ represents the norm operation; ‖p1 − p4‖ represents a width of the eye contour of the eye on the face; ‖p2 − p6‖ represents a first height of the eye contour of the eye on the face; ‖p3 − p5‖ represents a second height of the eye contour of the eye on the face; d2 represents a distance between the image capturing device and the eye of the user; and s represents a distance between the image capturing device and the center of the display, the correction factor λ being determined from d2 and s.
- identifying the face of the user in the image captured by the image capturing device includes: detecting a candidate face from the image; comparing the candidate face with at least one pre-stored face data; and if the candidate face matches any one of the pre-stored face data, determining that the candidate face is identified.
- detecting a candidate face from the image includes: if a plurality of faces are detected from the image, determining a first face with a largest area among the plurality of faces as the candidate face.
- identifying the face of the user in the image captured by the image capturing device further includes: if the candidate face fails to match the at least one pre-stored face data, sending a notice; and upon receiving a user instruction of permitting access of the candidate face in response to the notice, storing data of the candidate face and determining that the candidate face is identified.
- determining whether the eye on the identified face of the user continues to focus on the predefined interesting area includes: identifying the face of the user in a plurality of images subsequent to the image from which the face of the user is identified; extracting contour data of the eye on the face of the user from each of the plurality of images; calculating ratios of an eye contour based on all the contour data obtained from the plurality of images; and if differences among ratios of the eye contour for the plurality of images are smaller than a preset tolerance threshold, determining that the eye on the face continues to focus on the predefined interesting area.
- controlling to drive the display according to the position of the face of the user includes: determining whether the position of the face of the user falls within a preset region on the image, wherein the preset region on the image is relevant to positions of the image capturing device, the user and the display; if the position of the face falls within the preset region on the image, keeping the display at a given position; and if the position of the face falls beyond the preset region on the image, controlling to adjust the display so that the face falls back into the preset region.
- the preset region on the image is relevant to a correction angle obtained according to equation (4):

  θ = θ′ − arcsin(s / d2)  (4)

  where θ represents the correction angle; θ′ represents an angle between an optical axis of the image capturing device and the horizontal plane; d2 represents a distance between the image capturing device and the eye of the user; and s represents a distance between the image capturing device and the center of the display.
- a storage medium storing computer instructions, wherein once the computer instructions are executed, the above method is performed.
- a tracking monitor including: a display; a storage medium storing computer instructions; and a processor; wherein once the computer instructions are executed, the processor controls to drive the display according to the above method. Since the display of the tracking monitor is driven only when the eye on the face focuses on the predefined interesting area, unnecessary movement of the display is reduced.
- said at least one motor includes: a first motor, configured to adjust the display to move in a horizontal direction.
- the medical staff may operate the medical electronic device while standing in most clinical environments.
- the height of the medical staff is relatively constant, in contrast to the position of the medical electronic device in the medical facility. Therefore, adjustment of the display in the horizontal direction may avoid hand-adjusting of the monitor at a minor cost increase.
- the display is a touch screen.
- the image capturing device is disposed on the display.
- a method for driving a display towards a user, including: detecting a candidate face from an image; if the candidate face fails to match at least one pre-stored face data, sending a notice; upon receiving a user instruction permitting access of the candidate face in response to the notice, storing data of the candidate face and determining that the candidate face is identified; and controlling to drive the display according to a position of the detected face.
- the tracking operation may not be triggered until it is determined that the user is focusing on the display, which reduces interference and avoids misoperation in practical application. Therefore, the display can follow a particular user without interferences. In other words, the user can control the display to follow him through eyesight without touching it. In this way, convenience of operation can be realized.
- FIG. 1 schematically illustrates a flow chart of a method for driving a display towards a user according to an embodiment of the present disclosure
- FIG. 2 schematically illustrates a front view of a structure of a tracking monitor according to an embodiment of the present disclosure
- FIG. 3 schematically illustrates a rear view of the structure of the tracking monitor according to an embodiment of the present disclosure
- FIG. 4 schematically illustrates a flow chart of a method for identifying a face of a user in an image captured by an image capturing device according to an embodiment of the present disclosure
- FIG. 5 schematically illustrates a flow chart of a method for determining whether an eye on the face focuses on a predefined interesting area according to an embodiment of the present disclosure
- FIG. 6 schematically illustrates an eye contour of the eye on the face according to an embodiment of the present disclosure
- FIG. 7 schematically illustrates a parameter correction process according to an embodiment of the present disclosure.
- FIG. 1 schematically illustrates a flow chart of a method for driving a display towards a user according to an embodiment of the present disclosure.
- the method for driving a display toward a user includes S101, S103, S105, and S107.
- an image is captured by an image capturing device.
- the image shows a scenario in front of the display.
- the scenario in front of the display represents an area where people may stand. Based on analysis of the image, it may be determined that there is someone standing in front of the display.
- the image capturing device may be integrated on the display.
- the display may be a touch screen.
- FIG. 2 schematically illustrates a front view of a structure of a tracking monitor according to an embodiment of the present disclosure
- FIG. 3 schematically illustrates a rear view of the structure of the tracking monitor according to an embodiment of the present disclosure.
- a tracking monitor 1 may include a display 11.
- the tracking monitor 1 is equipped with an image capturing device 12.
- the image capturing device 12 may be a camera and is disposed on top of the display 11.
- the image capturing device 12 and the display 11 may be integrated together.
- the image capturing device 12 may be integrated on left side, right side or on the bottom of the display 11.
- the image capturing device may be an independent device, which may or may not be disposed on the display.
- the image capturing device may be a surveillance camera which is set up in a room where the display is located.
- the tracking monitor may further include an infrared sensor, which is configured to detect whether there is someone standing in front of the display.
- the image capturing device 12 may be started to capture an image.
- the image captured by the image capturing device 12 is transmitted to a processor (not shown) to identify a user.
- the tracking monitor 1 further includes the processor, which is coupled with the image capturing device 12.
- the processor may also be integrated in the tracking monitor 1.
- the image capturing device may be integrated in a mobile phone, which is disposed on or near the display.
- the processor may connect to the mobile phone to receive the image captured by the mobile phone via wire, BLUETOOTH, Wi-Fi, etc.
- a face recognition process may be performed to detect at least one candidate face from the image.
- the face recognition process may be executed automatically, or it may be executed in response to a request of the user.
- FIG. 4 schematically illustrates a flow chart of a method for identifying a face of a user in an image captured by an image capturing device according to an embodiment of the present disclosure.
- the method for identifying the face of the user in the image captured by the image capturing device includes S301, S303, S305 and S307.
- a candidate face is detected from the image.
- a first face with a largest area among the plurality of faces is determined as the candidate face.
- the user represents a nurse and/or a doctor.
- a closest face to the display among the plurality of faces may be determined as the candidate face. For example, for each of the plurality of faces, a distance between the image capturing device and the face may be calculated. As a result, the face with the smallest distance is determined as the candidate face.
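As an illustrative sketch of this candidate-face selection (in Python, assuming OpenCV's Haar-cascade detector; the disclosure does not prescribe any particular detection algorithm, so the detector and its parameter values here are placeholders):

```python
import cv2

# Haar-cascade face detector shipped with OpenCV; a stand-in for
# whatever detector an implementation actually uses.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def select_candidate_face(image):
    """Return the bounding box (x, y, w, h) of the candidate face,
    or None when no face is detected. When several faces are found,
    the face with the largest area is taken as the candidate face,
    following the embodiment above."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda box: box[2] * box[3])
```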
- the detected face is compared with at least one pre-stored face data.
- an image capturing process and a face recognition process may be performed in advance to collect pre-stored face data of a plurality of authorized users, and the collected pre-stored face data may be stored in a database.
- the at least one pre-stored face data may be loaded from the database to a memory configured in the processor.
- the pre-stored face data may be image data, or digital data such as characteristic values extracted from the authorized face.
- the characteristic values of the candidate face are compared with those of the at least one pre-stored face data.
- the characteristic values of the plurality of authorized faces may be stored in the database in advance, so that the storage space of the processor may be reduced.
- both image data of authorized faces and characteristic values of authorized faces are stored in the database, so that the accuracy of comparison results may be improved.
- if the candidate face matches any one of the at least one pre-stored face data, the flow goes to S305, where it is determined that the candidate face is identified.
- if the similarity between the candidate face and any one of the at least one pre-stored face data is larger than a preset similarity threshold, it is determined that the candidate face is identified, which means the person having the candidate face is authorized.
- the preset similarity threshold may be 80 percent. In practice, the specific value of the preset similarity threshold may be set as needed.
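A minimal sketch of this comparison step, assuming each face (candidate and pre-stored) has already been reduced to a fixed-length feature vector by some encoder that the disclosure leaves unspecified; the cosine-similarity measure is likewise an illustrative choice:

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.80  # the 80-percent example value given above

def is_identified(candidate_vec, prestored_vecs):
    """Return True when the candidate face matches any one of the
    pre-stored face data, i.e. the similarity exceeds the preset
    similarity threshold (S305); False otherwise."""
    for vec in prestored_vecs:
        sim = float(np.dot(candidate_vec, vec) /
                    (np.linalg.norm(candidate_vec) * np.linalg.norm(vec)))
        if sim > SIMILARITY_THRESHOLD:
            return True
    return False
```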
- if the candidate face matches none of the at least one pre-stored face data, a notice may be sent by the processor.
- the notice may be displayed on the display 11 to ask whether to grant access to the detected person.
- the notice may include the detected face and/or a display ID. If access is granted, the candidate face is identified as an authorized user.
- the tracking monitor may be equipped with an LED, which may blink when the candidate face matches none of the at least one pre-stored face data.
- upon receiving a user instruction permitting access of the candidate face in response to the notice, data of the candidate face are stored, and it is determined that the candidate face is identified.
- the user instruction may include a verification of authorization, such as a password, a verification code, etc.
- the data of the candidate face may be updated to the database.
- the data of the candidate face may include image data and/or digital data of the candidate face.
- the data of the candidate face may be stored into the database without comparing with the pre-stored face data.
- an attention detection process is performed to determine whether the identified user is watching the display. If it is determined that the user is watching the display, driving of the display may be initiated according to the location of the user, especially according to the position of the user's face.
- the attention detection process is performed by analyzing the eyesight on the face. Referring to FIG. 1, in S105, if the face of the user is identified, it is determined whether an eye on the face focuses on a predefined interesting area.
- the predefined interesting area includes at least an area where the display is located.
- the predefined interesting area may be larger than the display, for example, the display is located in the middle of the predefined interesting area.
- the predefined interesting area may be defined by a location of the image capturing device.
- the predefined interesting area may be where the image capturing device 12 is located.
- the predefined interesting area may be defined by the display and the image capturing device.
- the predefined interesting area may be an area where the tracking monitor 1 is located. If the eye on the face focuses on any part of the tracking monitor 1, it may be determined that the user is watching the display 11.
- FIG. 5 schematically illustrates a flow chart of a method for determining whether an eye on the face focuses on a predefined interesting area according to an embodiment of the present disclosure.
- FIG. 6 schematically illustrates an eye contour of the eye on the face according to an embodiment of the present disclosure.
- the method for determining whether the eye on the face focuses on the predefined interesting area includes S401 and S403.
- contour data of the eye on the face are extracted from the image.
- a feature point extraction process may be performed by the processor to extract the contour data of the eye on the identified face.
- extracting contour data of the eye on the face from the image may include: extracting a plurality of feature points from the eye on the face detected in the image; determining whether the number of the plurality of feature points is greater than a preset quantity threshold; and if the number of the plurality of feature points is greater than the preset quantity threshold, determining that the contour data of the eye on the face are extracted from the image.
- a plurality of feature points may be extracted, such as feature points p1, p2, p3, p4, p5 and p6 as shown in FIG. 6, wherein a connection among the plurality of feature points fits an eye contour of an eye 2.
- the quantity of the plurality of feature points is variable: the more feature points are extracted, the more accurately the eye contour may be fitted.
- the preset quantity threshold represents a minimum value needed to fit an eye contour of the eye on the face.
- the specific value of the preset quantity threshold may be set as needed.
- the preset quantity threshold is three, and one of the three feature points is not collinear with the other two.
- at least three feature points may be selected from the feature point group, provided that not all of the selected feature points are collinear.
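A small sketch of that non-collinearity condition (pure Python; points as (x, y) tuples):

```python
def non_collinear(p_a, p_b, p_c, eps=1e-6):
    """True when the three feature points do not lie on one line,
    i.e. they form the minimum configuration from which an eye
    contour can be fitted. A 2-D cross product near zero means the
    points are (almost) collinear."""
    cross = ((p_b[0] - p_a[0]) * (p_c[1] - p_a[1])
             - (p_b[1] - p_a[1]) * (p_c[0] - p_a[0]))
    return abs(cross) > eps
```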
- the feature point group may consist of: both end points of the eye along the longitudinal direction of the eye on the face.
- the longitudinal direction is parallel with a line between the canthi of the eye on the face.
- p1 and p4 represent both end points of the eye along the longitudinal direction of the eye 2 on the face (not shown in FIG. 6).
- p1 may represent a first point farthest away from the center of the eye contour which is located at an end of the eye 2 on the face
- p4 may represent a second point farthest away from the center of the eye contour which is located at another end of the eye 2 on the face.
- the feature point group may consist of: a group consisting of vertexes of an inscribed polygon of an iris of the eye on the face.
- the feature point group may consist of: a group consisting of vertexes of an inscribed polygon of a pupil of the eye on the face. For example, referring to FIG. 6, a quadrangle may be detected from the image and four vertexes, such as p2, p3, p5 and p6 may be extracted as the feature points.
- the feature point group may consist of: a group consisting of vertexes of a circumscribed polygon of the iris of the eye on the face.
- the feature point group may consist of: a group consisting of vertexes of a circumscribed polygon of the pupil of the eye on the face.
- the feature point group may consist of: a group consisting of points on edges of the iris of the eye on the face.
- the feature point group may consist of: a group consisting of points on edges of the pupil of the eye on the face.
- the feature point group may consist of all of the feature points described above, so that the contour data of the eye on the face is more accurate.
- the contour data of the eye on the face may include at least two lengths, each of which is a distance between any two points selected from the feature point group described above.
- the contour data of the eye 2 on the face may be calculated based on p1, p2, p3, p4, p5 and p6 extracted from the image.
- a width of the eye 2 may be calculated based on p1 and p4
- a first height of the eye 2 may be calculated based on p2 and p6
- a second height of the eye 2 may be calculated based on p3 and p5.
- the contour data of the eye on the face are analyzed to determine whether the eye focuses on the predefined interesting area. Based on the analysis result, it is possible to determine whether the user's eye is currently closed. As a result, if the user's eye is open in the image, it may be determined that the user is watching the display. Otherwise, it may be determined that the user is not watching the display.
- analyzing the contour data of the eye on the face to determine whether the eye focuses on the predefined interesting area includes: calculating a ratio of an eye contour based on the contour data of the eye on the face; determining whether the ratio is greater than a preset ratio threshold; and if the ratio is greater than the preset ratio threshold, determining that the eye focuses on the predefined interesting area.
- the ratio may be obtained according to equation (2):

  r = λ · (‖p2 − p6‖ + ‖p3 − p5‖) / (2‖p1 − p4‖)  (2)

  where r represents the ratio; λ represents a correction factor; p1, p2, p3, p4, p5 and p6 are feature points of the eye contour extracted from the image according to the embodiments described above; ‖·‖ represents the norm operation; ‖p1 − p4‖ represents the width of the eye contour of the eye 2 on the face; ‖p2 − p6‖ represents the first height of the eye contour of the eye 2 on the face; and ‖p3 − p5‖ represents the second height of the eye contour of the eye 2 on the face.
- a distortion of the image captured by the image capturing device may exist, which may result in a miscalculation of the ratio.
- the correction factor λ is applied to eliminate the effects of the image distortion.
- the ratio may also be obtained according to equation (1), i.e. with the correction factor omitted:

  r = (‖p2 − p6‖ + ‖p3 − p5‖) / (2‖p1 − p4‖)  (1)

- λ > 0, and the specific value of the correction factor λ may be set according to the degree of distortion of the image.
- the degree of distortion of the image may be obtained based on the visual angle of the image capturing device and/or the location of the image capturing device with respect to the display.
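The ratio computation, as a sketch of the equations reconstructed above (Python; the factor 2 in the denominator follows the common eye-aspect-ratio convention, and the form of the correction factor mirrors equation (3) as reconstructed below — both are assumptions rather than the verbatim published formulas):

```python
import math

def eye_ratio(p1, p2, p3, p4, p5, p6, lam=1.0):
    """Eye-contour ratio r. Points are (x, y) tuples ordered as in
    FIG. 6; lam is the correction factor (lam = 1 gives equation (1),
    lam > 1 compensates the high-angle foreshortening of equation (2))."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    width = dist(p1, p4)   # ||p1 - p4||, width of the eye contour
    h1 = dist(p2, p6)      # ||p2 - p6||, first height
    h2 = dist(p3, p5)      # ||p3 - p5||, second height
    return lam * (h1 + h2) / (2.0 * width)

def correction_factor(d2, s):
    """Correction factor per the reconstruction of equation (3): the
    heights are foreshortened by sqrt(1 - (s/d2)^2) in a high-angle
    shot, so they are scaled back up. Requires d2 > s."""
    return d2 / math.sqrt(d2 * d2 - s * s)
```

The eye is then treated as focusing on the predefined interesting area when the returned ratio exceeds the preset ratio threshold.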
- FIG. 7 schematically illustrates a parameter correction process according to an embodiment of the present disclosure.
- the image capturing device 12 is disposed on top of the display 11, a user 3 is standing in front of the display 11 and an image of the face of the user 3 is captured by the image capturing device 12.
- since the image capturing device 12 is disposed on top of the display 11, the image of the face of the user 3 is a high-angle one, which may lead to a deviation between a theoretical height of the eye on the face calculated from the image and an actual height of the eye on the face as actually measured. For example, in the high-angle shot shown in FIG. 7, the theoretical height is smaller than the actual height, so the calculated ratio needs to be corrected.
- the correction factor λ may be obtained according to equation (3):

  λ = d2 / √(d2² − s²)  (3)

  where d2 represents a distance between the image capturing device 12 and the eye of the user 3, and s represents a distance between the image capturing device 12 and the center of the display 11 in the direction of gravity.
- d2 may be measured directly as long as the image capturing device 12 has a ranging function.
- d2 may be calculated based on a size of the face of the user 3 detected in the image captured by the image capturing device 12 and a plurality of standard data. For example, a plurality of standard sizes of a face and a plurality of corresponding distances between the image capturing device 12 and the eye of the user 3 are pre-stored as the standard data. Based on the size of the face of the user 3 detected in the current image, a corresponding standard distance may be looked up among the pre-stored data and determined as the distance d2.
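A sketch of that lookup, with entirely hypothetical calibration values (the disclosure only says such standard data are pre-stored, not what they are):

```python
# Hypothetical (face height in pixels, distance in metres) pairs.
STANDARD_DATA = [(260.0, 0.4), (180.0, 0.6), (120.0, 0.9), (80.0, 1.4)]

def estimate_d2(face_height_px):
    """Estimate the camera-to-eye distance d2 by choosing the
    pre-stored entry whose standard face size is closest to the
    size detected in the current image."""
    size, dist = min(STANDARD_DATA,
                     key=lambda pair: abs(pair[0] - face_height_px))
    return dist
```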
- the preset ratio threshold is zero. For example, if the calculated ratio is greater than 0, at least one of the heights of the eye contour is greater than zero, so it may be determined that the user's eye is open in the image. Otherwise, since the first height and the second height are both zero, it may be determined that the user is closing his/her eye when the image is captured.
- the preset ratio threshold may be slightly greater than zero to optimize accuracy. As a result, the eye is determined to be focusing on the display only when it is open wide enough.
- the method may further include: determining whether the eye on the identified face of the user continues to focus on the predefined interesting area.
- a plurality of images subsequent to the image from which the face of the user is identified are captured in a preset time range.
- the ratio of the eye contour is calculated based on the contour data of the eye on the identified face obtained from the image.
- a continuous state of the eye on the face in the preset time range can be obtained, which makes it possible to determine whether the user is continuously watching the display in the preset time range.
- the eye on the face continues to focus on the display if differences among ratios of the eye contour for the plurality of images are smaller than a preset tolerance threshold.
- the preset time range is five seconds. In practice, the specific value of the preset time range may be set as needed.
- a range of the preset tolerance threshold is 0 to 0.25.
- the specific value of the preset tolerance threshold may be set as needed.
- the plurality of images subsequent to the image from which the face of the user is identified may be captured in a preset image amount.
- the preset image amount is 10, wherein the images are captured at equal intervals.
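A sketch of the continuity test over the ratios computed from the subsequent images (the max-minus-min bound implies that every pairwise difference stays below the preset tolerance threshold):

```python
PRESET_TOLERANCE = 0.25  # upper end of the 0-to-0.25 range given above

def continues_to_focus(ratios):
    """True when the eye is deemed to keep focusing on the predefined
    interesting area across the plurality of images, i.e. differences
    among the per-image eye-contour ratios stay within tolerance."""
    return len(ratios) > 0 and (max(ratios) - min(ratios)) < PRESET_TOLERANCE
```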
- the processor may obtain a new image from the image capturing device and perform the method described above to determine whether to drive the display according to the location of the user.
- the processor may start to identify a new face in the subsequent image. Furthermore, the processor may perform the method described above to identify a new face of a new user and determine whether to control driving of the display according to the position of the new face.
- the tracking monitor 1 may further include: at least one motor coupled with the processor, wherein the at least one motor is configured to adjust angles of the display 11 in at least one dimension in response to a driving signal received from the processor.
- the tracking monitor 1 includes at least three rotational axes, wherein two of the at least three rotational axes, such as a first rotational axis 13 and a second rotational axis 14 as shown in FIG. 3, are configured to drive the display to rotate in a horizontal direction, and one of the at least three rotational axes, such as a third rotational axis 15, is configured to drive the display to rotate in a vertical direction.
- a first motor 131 is configured to adjust the display 11 to move in the horizontal direction around the first rotational axis 13.
- controlling to drive the display according to the position of the face may include: driving the display to follow a movement of the face of the user.
- a driving signal is sent from the processor to the first motor 131, which adjusts the display 11 to turn right in the horizontal direction, so that the position of the face of the user comes back to the center region of the display 11.
- the display may be driven by the at least one motor to follow the movement of the user's face, wherein the reference object is an inanimate object extracted from the image.
- controlling to drive the display according to the position of the face may include: determining whether the position of the face of the user falls within a preset region on the image, wherein the preset region on the image is relevant to positions of the image capturing device, the user and the display. If the position of the face falls within the preset region of the image, the display is kept at a given position.
- the processor may control the at least one motor to adjust the display so that the face falls back into the preset region.
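A sketch of this keep-or-adjust decision in the horizontal dimension (the 20-percent margin defining the preset region and the command names are illustrative only; how each command maps to the first motor 131 depends on the mounting):

```python
def track_face(face_center_x, image_width, margin=0.2):
    """Decide how to drive the display from the face position in the
    image: hold while the face stays inside the preset central region,
    otherwise pan so that the face falls back into it."""
    left = image_width * margin
    right = image_width * (1.0 - margin)
    if left <= face_center_x <= right:
        return "HOLD"  # face within the preset region: stay put
    return "PAN_LEFT" if face_center_x < left else "PAN_RIGHT"
```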
- a visual angle deviation may exist between the position of the face of the user 3 with respect to the display 11 and the position of the face of the user 3 in the image captured by the image capturing device 12.
- the position of the face of the user 3 in the image may be located slightly below the center region of the image captured by the image capturing device 12, so that the correction angle is necessary to eliminate the effects of the visual angle deviation.
- the correction angle is obtained according to equation (4):

  θ = θ′ − arcsin(s / d2)  (4)

  where θ represents the correction angle; θ′ represents an angle between an optical axis of the image capturing device 12 and a direction of the image capturing device 12 towards the eye of the user 3; d2 represents a distance between the image capturing device 12 and the eye of the user 3; and s represents a distance between the image capturing device 12 and the center of the display 11.
- the tracking operation may not be triggered until it is determined that the user is focusing on the display, which reduces interference and avoids misoperation in practical application. Therefore, the display can follow a particular user without interferences. In other words, the user can control the display to follow him through eyesight without touching it. In this way, convenience of operation can be realized.
- a storage medium having computer instructions stored therein is also provided according to the present disclosure, wherein once the computer instructions are executed, the method illustrated in FIG. 1 to FIG. 7 is performed.
- a tracking monitor 1 is also provided according to the present disclosure, which includes: a display 11; a storage medium (not shown in FIG. 2 and FIG. 3) storing computer instructions; and a processor (not shown in FIG. 2 and FIG. 3).
- the processor controls to drive the display 11 according to the method illustrated in FIG. 1 to FIG. 7.
Abstract
A method for driving a display (11), a tracking monitor (1) and a storage medium are provided. The method for driving a display (11) towards a user includes: identifying a face of the user in an image captured by an image capturing device (S103); if the face of the user is identified, determining whether an eye on the face focuses on a predefined interesting area (S104), wherein the predefined interesting area includes at least an area where the display (11) is located; and if the eye on the face focuses on the predefined interesting area, controlling to drive the display (11) according to a position of the face. The tracking operation may not be triggered until it is determined that the user is focusing on the display (11), which reduces interference and avoids misoperation in practical application.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/129778 WO2021134160A1 (fr) | 2019-12-30 | 2019-12-30 | Method for driving a display, tracking monitor and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/129778 WO2021134160A1 (fr) | 2019-12-30 | 2019-12-30 | Method for driving a display, tracking monitor and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021134160A1 (fr) | 2021-07-08 |
Family
ID=76685797
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/129778 WO2021134160A1 (fr) | 2019-12-30 | 2019-12-30 | Procédé de pilotage d'un écran, dispositif de surveillance de suivi et support de stockage |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2021134160A1 (fr) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060071135A1 (en) * | 2002-12-06 | 2006-04-06 | Koninklijke Philips Electronics, N.V. | Apparatus and method for automated positioning of a device |
US20060281969A1 (en) * | 2005-06-02 | 2006-12-14 | Vimicro Corporation | System and method for operation without touch by operators |
US20080253519A1 (en) * | 2005-02-18 | 2008-10-16 | Koninklijke Philips Electronics, N.V. | Automatic Control of a Medical Device |
US20100202696A1 (en) * | 2009-02-06 | 2010-08-12 | Seiko Epson Corporation | Image processing apparatus for detecting coordinate position of characteristic portion of face |
CN103369214A (zh) * | 2012-03-30 | 2013-10-23 | Image capturing method and image capturing device |
CN103529853A (zh) * | 2012-07-03 | 2014-01-22 | Display viewing-angle adjusting device and adjusting method thereof |
CN103543828A (zh) * | 2013-10-18 | 2014-01-29 | Display system and method for automatically adjusting display screen position |
US20150309569A1 (en) * | 2014-04-23 | 2015-10-29 | Google Inc. | User interface control using gaze tracking |
US20170068315A1 (en) * | 2015-09-07 | 2017-03-09 | Samsung Electronics Co., Ltd. | Method and apparatus for eye tracking |
CN106709303A (zh) * | 2016-11-18 | 2017-05-24 | Display method and device, and intelligent terminal |
US20180074581A1 (en) * | 2015-03-23 | 2018-03-15 | Haim Melman | Eye Tracking System |
WO2019080358A1 (fr) * | 2017-10-28 | 2019-05-02 | Surgical navigation robot using 3D images and control method therefor |
- 2019-12-30: PCT/CN2019/129778 filed, published as WO2021134160A1 (fr) — active, Application Filing
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3129849B1 | Systems and methods for eye-tracking calibration | |
CN107949863B | Authentication device and authentication method using biological information | |
WO2015172514A1 | Image acquisition device and method | |
KR101341212B1 | Biometric authentication device, biometric authentication system, and biometric authentication method | |
US11006864B2 | Face detection device, face detection system, and face detection method | |
US20110142297A1 | Camera Angle Compensation in Iris Identification | |
JP2010086336A | Image control device, image control program, and image control method | |
US10148940B2 | Device for use in identifying or authenticating a subject | |
JP5001930B2 | Motion recognition device and method | |
TWI533234B | Control method based on eye motion and device applying the same | |
EP2953057A1 | Terminal and iris recognition method | |
WO2017189267A1 | Multi-modality biometric identification | |
CN113557519A | Information processing device, information processing system, information processing method, and recording medium | |
JP5751019B2 | Biological information processing device, biological information processing method, and biological information processing program | |
KR20150080781A | Access management device and method using face recognition | |
US10176375B2 | High speed pupil detection system and method | |
CN107223255B | Image preview method and device based on iris recognition | |
JP5254897B2 | Hand image recognition device | |
JP5796523B2 | Biological information acquisition device, biological information acquisition method, and biological information acquisition control program | |
KR102151474B1 | Contactless fingerprint authentication method using a smart terminal | |
WO2021134160A1 | Method for driving a display, tracking monitor and storage medium | |
KR101919138B1 | Method and apparatus for remote multi-biometric recognition | |
CN213844158U | Biometric feature collection and recognition system and terminal device | |
JP2009205203A | Iris authentication device | |
KR20020065248A | Image normalization method for non-contact iris recognition | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19958523; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 19958523; Country of ref document: EP; Kind code of ref document: A1 |