[ Invention ]
To solve the problem of myopia caused by a user's bad eye-use habits when using mobile equipment, the invention provides a myopia prevention method and a mobile device.
The technical scheme is as follows: when the screen is lit, a front camera tracks the user's face image and calibrates the coordinates of the feature points on the face image; whether the user conforms to a correct viewing posture is determined according to those coordinates; if not, the duration for which the user maintains the incorrect viewing posture is determined; and when the duration is greater than a first threshold, a first prompt message is sent to the user, prompting the user to maintain a correct eye-use posture.
Preferably, the first prompt message is used for prompting that the user's viewing distance is unhealthy, or that the user's viewing angle is unhealthy, or that the user's viewing posture is unhealthy.
Preferably, when the first prompt message prompts that the user's viewing distance is unhealthy, determining whether the user conforms to the correct viewing posture according to the coordinates of the feature points on the face image includes: determining the comprehensive imaging size of the user's feature part according to the coordinates of the feature points; calculating the viewing distance according to the comprehensive imaging size of the feature part and preset standard data; and, when the viewing distance is greater than the eye-use safety distance, determining that the user does not conform to the correct viewing posture.
Preferably, the feature part is a pupil and the comprehensive imaging size of the feature part is the pupil distance, and determining the comprehensive imaging size of the feature part of the user includes calculating the pupil distance L between the two eyes of the user by the following formula: L = √((x2 − x1)² + (y2 − y1)²), where the coordinates of the left pupil are (x1, y1) and the coordinates of the right pupil are (x2, y2).
Calculating the viewing distance includes: calculating a lateral viewing angle and a longitudinal viewing angle according to the deviation of the left pupil and the right pupil from the picture origin, based on the formulas α = |(x2 + x1)| and β = |(y2 + y1)|, where α is related to the lateral viewing angle, β is related to the longitudinal viewing angle, and the picture center point is the origin of a coordinate system calibrated in advance; when α is smaller than a second threshold and β is smaller than a third threshold, calculating a viewing angle from the lateral viewing angle and the longitudinal viewing angle; and calculating the viewing distance from the viewing angle and the pupil distance of the user's two eyes by the formula M = L / cos δ, where δ represents the viewing angle and L represents the pupil distance of the user's two eyes.
Preferably, when α is greater than the second threshold or β is greater than the third threshold, a first prompt message prompting that the user's viewing angle is unhealthy is sent to the user.
Preferably, when the first prompt message prompts that the user's viewing posture is unhealthy, determining whether the user conforms to the correct viewing posture includes: connecting the pupil center points in the face image to the philtrum and extending the line in both directions to divide the face image; determining the left face area and the right face area from the sums of skin-color pixels counted in the divided face image; calculating the strabismus degree according to the formula d = (L − R) / (L + R), where d represents the strabismus degree, L represents the left face area and R represents the right face area; and, when the strabismus degree is greater than a fourth threshold, determining that the user does not conform to the correct viewing posture.
Preferably, when the first prompt message prompts that the user's viewing posture is unhealthy, determining whether the user conforms to the correct viewing posture includes: determining that the user is viewing the mobile device according to the user's face image; deducing the face spatial pose from the relative spatial position of the mobile device and the face together with the horizontal attitude data of a nine-axis gyroscope in the mobile device; and, if the horizontal attitude data is within a preset interval and the face spatial pose is within a preset interval, so that the user is viewing while lying on the side or lying supine, determining that the user does not conform to the correct viewing posture.
Preferably, the first prompt message includes a message displayed over the screen of the mobile device, a message covering the screen of the mobile device, or information sent to a preset telephone number.
The invention further provides a mobile device for preventing myopia, which includes a camera, a processor and a display module. The camera is used for tracking the user's face image when the screen is lit and calibrating the coordinates of the feature points on the face image; the processor is used for determining whether the user conforms to the correct viewing posture according to the coordinates of the feature points on the face image and, if not, determining the duration for which the user maintains the incorrect viewing posture; and the display module is used for sending a first prompt message to the user when the duration is greater than a first threshold, the first prompt message prompting the user to maintain the correct viewing posture.
The embodiment of the application provides a myopia prevention method, which specifically includes: tracking the user's face image through a front camera when the screen is lit; calibrating the coordinates of the feature points on the face image; determining whether the user conforms to a correct viewing posture according to those coordinates; if not, determining the duration for which the user maintains the incorrect viewing posture; and, when the duration is greater than a first threshold, sending a first prompt message to the user, the first prompt message prompting the user to maintain a correct eye-use posture. When the user is determined to be using the mobile device incorrectly, the user is thus reminded to maintain a correct eye-use posture so as to prevent myopia.
[ Detailed description ]
The technical solutions in the embodiments of the present invention are described below clearly and completely; obviously, the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
To solve the problem of myopia caused by a user's bad eye-use habits when using mobile equipment, the invention provides a myopia prevention method and a mobile device, which prompt the user to correct bad eye-use habits while using the mobile device so as to prevent myopia.
Referring to fig. 1, a flowchart of a myopia prevention method provided by the present invention specifically includes the following steps:
101. When the screen is lit, tracking the user's face image through a front camera, and calibrating the coordinates of the feature points on the face image;
When the user is using the mobile device and its screen is lit, the user's face image is tracked through the front camera of the mobile device. After the front camera captures the face image, the coordinates of the feature points are calibrated on the face image. The feature points in the face image may be pupils, eye-corner feature points, eye-center feature points, and the like, and are not particularly limited here.
In addition, there are various ways of calibrating the coordinates of the feature points on the face image. For example, the face image may be preprocessed, corner points in the preprocessed image extracted, the corner points filtered and merged to obtain connected regions of corner points, and the centroid of each connected region extracted. Specifically, the brightness difference between the current pixel and its surrounding pixels may be calculated according to a predefined 3 × 3 template, and a pixel whose brightness difference is greater than or equal to a first threshold is extracted as a corner point, where the 3 × 3 template is centered on the current pixel and consists of the pixels to its left, right, top, bottom, upper left, upper right, lower left and lower right. The extracted centroids are then matched against a face template, the matching probability of the centroids and the face template is calculated, and a region formed by centroids whose matching probability is greater than or equal to a preset value is located as a candidate face region. The face template may be a rectangular template containing at least three points, for example each point represented by (P, w, h), where P is the two-dimensional coordinate of the point, w is the maximum lateral range allowed to the left and right of the point, and h is the maximum longitudinal range allowed above and below the point.
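The 3 × 3-template corner extraction described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the text does not say how the eight neighbour differences are combined, so summing their absolute values and the threshold value of 40 are assumptions.

```python
import numpy as np

def extract_corners(gray, diff_threshold=40):
    """Sketch of the 3x3-template corner test: a pixel is kept as a corner
    candidate when the summed absolute brightness difference between it and
    its 8 neighbours (left, right, up, down and the four diagonals) reaches
    the threshold. Both the aggregation rule and the threshold are assumed."""
    h, w = gray.shape
    g = gray.astype(np.int32)  # avoid uint8 wrap-around when subtracting
    corners = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = g[y, x]
            window = g[y - 1:y + 2, x - 1:x + 2].ravel()
            diff = np.abs(window - centre).sum()  # centre contributes 0
            if diff >= diff_threshold:
                corners.append((x, y))
    return corners
```

Filtering, merging into connected regions, and centroid extraction would then run on the returned candidate list.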
Alternatively, the coordinates of the feature points on the face image can be calibrated through AI face recognition, for example by labeling 5 points. The most critical points of the face are these 5: the left and right mouth corners, the centers of the two eyes, and the nose; they belong to the key points of the face, and the pose of the face can be calculated from them. There are also earlier schemes labeling 4 or 6 points. Key points of the eyes, nose, mouth and chin are labeled in FRGC-V2 (Face Recognition Grand Challenge version 2.0), published in 2005. The Caltech 10000 Web Faces dataset, published in 2007, labels 4 key points on the eyes, nose and mouth. The 2013 AFW dataset labels 6 key points on the eyes, nose and lips, of which 3 are on the lips. The MTFL/MAFL dataset, published in 2014, labels 5 key points: the eyes, the nose and the 2 mouth corners. The 68-point labeling is the most common scheme nowadays; it was proposed as early as 1999 with the XM2VTSDB dataset, is also adopted by the 300W, XM2VTS and other datasets, and is used by the Dlib algorithm.
There are also several different versions of 68-key-point labeling; here we introduce the most common version, used in Dlib, which divides the face key points into 51 internal key points covering the eyebrows, eyes, nose and mouth, and 17 contour key points. In the 68-point Dlib labeling, each eyebrow carries 5 key points sampled uniformly from the left boundary to the right boundary, 5 × 2 = 10 in total. Each eye carries 6 key points, covering the left and right boundaries and uniform samples on the upper and lower eyelids, 6 × 2 = 12 in total. The lips carry 20 key points, divided into upper and lower lips besides the mouth corners: the outer boundaries of the upper and lower lips are each sampled uniformly at 5 points, and the inner boundaries at 3 points each. The nose labeling adds 4 key points on the nose bridge and 5 uniformly sampled key points on the nose tip, 9 in total. The facial contour is sampled uniformly at 17 key points. If the forehead is included, more points, such as 81 key points, can be obtained. The technology of calibrating the feature points on a face image is therefore prior art, and a detailed description is omitted here.
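Given a 68-point landmark list in the standard Dlib indexing, usable eye-center feature points (one possible choice of the pupil-equivalent points mentioned above) can be derived by averaging each eye's 6 contour points. In that standard indexing the left-eye contour occupies indices 36-41 and the right-eye contour indices 42-47; this sketch assumes the landmarks are given as (x, y) tuples.

```python
def eye_centres(landmarks):
    """Average the 6 contour points of each eye in the Dlib 68-point scheme
    (left eye: indices 36-41, right eye: indices 42-47) to obtain eye-centre
    feature points usable in place of pupil coordinates."""
    def mean(pts):
        xs, ys = zip(*pts)
        return (sum(xs) / len(xs), sum(ys) / len(ys))
    return mean(landmarks[36:42]), mean(landmarks[42:48])
```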
Therefore, the method for obtaining the coordinates of each feature point on the face image is not particularly limited.
102. Determining whether the user conforms to the correct viewing posture according to the coordinates of the feature points on the face image;
103. If not, determining the duration for which the user maintains the incorrect viewing posture;
After the coordinates of the feature points on the face image are determined, whether the user conforms to the correct viewing posture is determined according to those coordinates, where the correct viewing posture may include a correct viewing distance, a correct viewing angle and a correct viewing posture. Referring to fig. 2, the flow for determining whether the user conforms to the correct viewing posture provided by the present invention includes:
1021. Determining the comprehensive imaging size of the user's feature part according to the coordinates of the feature points;
In practical applications, one or more groups of feature points in the face image may be the pupils, or feature points equivalent to the pupils, such as eye-corner feature points, eye-center feature points, and the like. In this application, the feature part is taken to be the pupil, so the comprehensive imaging size of the feature part is the pupil distance.
Specifically, the pupil distance L of the two eyes of the user can be calculated by the following formula:
L = √((x2 − x1)² + (y2 − y1)²)
where the coordinates of the left pupil are (x1, y1) and the coordinates of the right pupil are (x2, y2).
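This computation, reading the pupil-distance formula as the Euclidean distance between the two calibrated pupil coordinates (the formula itself is not printed in the text, so that reading is an assumption), can be sketched as:

```python
import math

def pupil_distance(left, right):
    """Imaged pupil distance L between the calibrated left-pupil
    coordinates (x1, y1) and right-pupil coordinates (x2, y2), in pixels,
    assumed to be the Euclidean distance between the two points."""
    (x1, y1), (x2, y2) = left, right
    return math.hypot(x2 - x1, y2 - y1)
```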
1022. Calculating the viewing distance according to the comprehensive imaging size of the feature part and preset standard data;
After the comprehensive imaging size of the user's feature part, i.e. the pupil distance of the two eyes, is determined, a lateral viewing angle and a longitudinal viewing angle are calculated from the deviation of the left pupil and the right pupil from the picture origin. Specifically, they are calculated by the following formulas:
α=|(x2+x1)|;
β=|(y2+y1)|;
wherein α is related to the lateral viewing angle and β is related to the longitudinal viewing angle.
The picture center point is the origin of a coordinate system calibrated in advance.
When α is smaller than the second threshold and β is smaller than the third threshold, a viewing angle is calculated from the lateral viewing angle and the longitudinal viewing angle, and the viewing distance is then calculated from the viewing angle and the pupil distance of the user's two eyes.
The viewing distance is calculated by the following formula: M = L / cos δ, where δ represents the viewing angle and L represents the pupil distance of the two eyes of the user.
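The deviation formulas and the distance formula of step 1022 can be sketched directly. How δ is derived from the lateral and longitudinal viewing angles is not spelled out in the text, so δ is left as an input here rather than guessed.

```python
import math

def deviation(left, right):
    """alpha and beta: deviations of the pupil pair from the pre-calibrated
    picture-centre origin, per the formulas alpha = |x2 + x1| and
    beta = |y2 + y1|."""
    (x1, y1), (x2, y2) = left, right
    return abs(x2 + x1), abs(y2 + y1)

def viewing_distance(L, delta):
    """M = L / cos(delta), where L is the imaged pupil distance and delta is
    the viewing angle (its derivation from the lateral/longitudinal viewing
    angles is not given in the text)."""
    return L / math.cos(delta)
```

In use, α and β would first be checked against the second and third thresholds; only when both are small enough is the distance formula applied.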
1023. When the viewing distance is greater than the eye-use safety distance, determining that the user does not conform to the correct viewing posture.
After the viewing distance is obtained, it is compared with the eye-use safety distance; when the viewing distance is greater than the eye-use safety distance, it is determined that the user does not conform to the correct viewing posture.
Optionally, in the embodiment of the present application, determining whether the user conforms to the correct viewing posture may further be performed in the following ways:
1. When α is greater than the second threshold or β is greater than the third threshold, a first prompt message prompting that the user's viewing angle is unhealthy is sent to the user.
2. Connecting the pupil center points in the face image to the philtrum and extending the line in both directions to divide the face image; determining the left face area and the right face area from the sums of skin-color pixels counted in the divided face image; calculating the strabismus degree according to the formula d = (L − R) / (L + R), where d represents the strabismus degree, L represents the left face area and R represents the right face area; and, when the strabismus degree is greater than the fourth threshold, determining that the user does not conform to the correct viewing posture.
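The strabismus-degree formula is a normalised imbalance between the two face halves and can be sketched as follows. The text compares d itself against the fourth threshold; comparing the magnitude |d| (so that a squint toward either side triggers) is an assumption added here.

```python
def strabismus_degree(left_area, right_area):
    """d = (L - R) / (L + R): normalised imbalance between the skin-colour
    pixel counts of the left and right face halves obtained by splitting the
    image along the pupil-philtrum line."""
    return (left_area - right_area) / (left_area + right_area)

def viewing_posture_incorrect(left_area, right_area, fourth_threshold):
    # Assumption: use |d| so a squint toward either side exceeds the
    # threshold; the text as written compares d directly.
    return abs(strabismus_degree(left_area, right_area)) > fourth_threshold
```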
In addition, the gyroscope in a smartphone is a sensor mainly used for detecting the attitude of the phone: many motion-sensing games depend on it, it can provide image stabilization when photographing, and it sometimes helps smartphone navigation achieve better positioning. The earliest gyroscopes were mechanical and bulky, really containing a top spinning at high speed; such mechanical devices demand very high machining precision and are sensitive to vibration, so navigation systems based on mechanical gyroscopes never achieved very high accuracy. In today's smartphones, the gyroscope sensor has shrunk to a small chip and can be regarded as an upgrade of the acceleration sensor: an acceleration sensor can monitor and sense linear motion along a given axis, while a gyroscope can sense both linear and rotational motion in 3D space. It can therefore recognize direction, determine attitude, and calculate angular velocity.
In view of the above, the embodiment of the application can also judge whether the user conforms to the correct viewing posture through the gyroscope. Specifically, it is first determined that the user is viewing the mobile device according to the user's face image; the face spatial pose is then deduced from the relative spatial position of the mobile device and the face together with the horizontal attitude data of the nine-axis gyroscope in the mobile device; and if the horizontal attitude data is within a preset interval and the face spatial pose is within a preset interval, so that the user is viewing while lying on the side or lying supine, it is determined that the user does not conform to the correct viewing posture.
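The interval check at the end of this flow can be sketched as below. The patent only states that preset intervals are used; the particular angle values (in degrees) and the single-angle representation of the horizontal attitude and face pose are assumptions for illustration.

```python
def is_lying_posture(horizontal_attitude, face_pose,
                     attitude_range=(-30.0, 30.0), pose_range=(60.0, 120.0)):
    """Judge the user to be viewing while lying on the side or supine when
    the nine-axis gyroscope's horizontal-attitude reading and the deduced
    face spatial pose both fall inside preset intervals. All interval
    values (degrees) are assumed; the patent gives no concrete numbers."""
    lo_a, hi_a = attitude_range
    lo_p, hi_p = pose_range
    return lo_a <= horizontal_attitude <= hi_a and lo_p <= face_pose <= hi_p
```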
In summary, the embodiments of the present application provide various methods for determining whether the user conforms to the correct viewing posture, and several of these ways may be integrated together in the mobile device to determine more accurately that the user's viewing posture is incorrect and prompt the user toward healthy eye use to prevent myopia.
104. When the duration is greater than the first threshold, sending a first prompt message to the user, the first prompt message prompting the user to maintain the correct eye-use posture.
When the duration is greater than the first threshold, a first prompt message is sent to the user to remind the user to maintain a correct eye-use posture. Specifically, the first prompt message may be displayed over the screen of the mobile device; or it may cover the screen of the mobile device to prompt the user that the distance to the screen is smaller than the safe eye-use distance; or the screen may be turned off and the first prompt message broadcast by voice; or the first prompt message may be sent as a notification to a preset telephone number. For example, when the viewing distance is too close, the first prompt message is an unhealthy-viewing-distance warning signal; when the viewing angle exceeds the second threshold, it is an unhealthy-viewing-angle warning signal; and when the strabismus degree exceeds the fourth threshold, the user is determined not to conform to the correct viewing posture and the first prompt message is an unhealthy-viewing-posture warning signal.
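The duration tracking of step 104 can be sketched as a small monitor that is fed one posture judgment per frame. The threshold value, the prompt callback and the injectable clock are illustrative assumptions, not part of the patent.

```python
import time

class PostureMonitor:
    """Accumulate how long an incorrect viewing posture persists and emit
    the first prompt message once the duration exceeds the first threshold.
    Threshold, prompt callback and clock are assumed for illustration."""

    def __init__(self, first_threshold_s=10.0, prompt=print,
                 clock=time.monotonic):
        self.first_threshold_s = first_threshold_s
        self.prompt = prompt
        self.clock = clock
        self.incorrect_since = None  # start time of the current bad posture

    def update(self, posture_correct):
        if posture_correct:
            self.incorrect_since = None  # reset the timer
            return
        now = self.clock()
        if self.incorrect_since is None:
            self.incorrect_since = now
        elif now - self.incorrect_since > self.first_threshold_s:
            self.prompt("Please keep a correct eye-use posture.")
```

An injectable clock makes the duration logic testable without real waiting; in a device, `update` would be called from the camera-frame pipeline.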
In addition, the embodiment of the application further provides a mobile device for preventing myopia, which includes a camera, a processor and a display module. The camera is used for tracking the user's face image when the screen is lit and calibrating the coordinates of the feature points on the face image; the processor is used for determining whether the user conforms to the correct viewing posture according to the coordinates of the feature points on the face image and, if not, determining the duration for which the user maintains the incorrect viewing posture; and the display module is used for sending a first prompt message to the user when the duration is greater than a first threshold, the first prompt message prompting the user to maintain the correct viewing posture.
The foregoing description covers only one or several embodiments of the present invention and is not intended to limit its scope; all equivalent structures or equivalent processes made using the description and drawings of the present invention, or applied directly or indirectly in other related technical fields, are likewise included within the scope of the invention.