CN115984973B - Human body abnormal behavior monitoring method for peeping-preventing screen - Google Patents
Abstract
The invention relates to the technical field of image data processing, and in particular to a human body abnormal behavior monitoring method for an anti-peeping screen. In anti-peeping abnormal behavior monitoring of a screen, the method not only judges whether a person is suspicious through face recognition, but also analyzes the suspicious person's subsequent movement behavior to judge whether peeping is intentional. The gray differences, distance features, and corner numbers of the edge line pixel points are calculated to screen out feature pixel points; calculating motion only for the corner points and feature pixel points reduces the calculation amount and calculation time; a warning coefficient is determined according to the overall motion amount obtained from the corner points and feature pixel points; and whether peeping behavior exists is judged by monitoring the warning coefficient, so that the anti-peeping monitoring result is more accurate and timely.
Description
Technical Field
The invention relates to the technical field of image data processing, in particular to a human body abnormal behavior monitoring method for an anti-peeping screen.
Background
With the rapid development of the internet, the speed at which information is transmitted and diffused has increased; at the same time, people's awareness of self-protection and of their rights has continuously strengthened, so personal privacy protection matters more and more. Mobile terminals such as smart phones and smart televisions are used in public places more and more often, so opportunities for privacy leakage also increase; for example, when an office uses a smart television for a meeting, passers-by may peep at the screen, causing important information to leak. Current anti-peeping methods on mobile terminals use a camera to detect the number of human faces or human eyes, judge whether a person without viewing rights is present, and, if such a person is found, pop up a prompt box or directly lock the screen to prevent peeping.
The inventors have found in practice that the above prior art has the following drawback: the existing anti-peeping method uses a computer vision algorithm and directly closes the screen after an unauthorized suspicious person is identified in a captured image. In an actual scene, however, the suspicious person may have no peeping behavior at all. Because the prior art does not analyze the suspicious person's subsequent actions, directly closing the screen wastes hardware control resources and seriously degrades the user experience.
Disclosure of Invention
In order to solve the technical problems of the prior art, in which a computer vision algorithm identifies an unauthorized suspicious person in a captured image and directly closes the screen without analyzing the person's subsequent actions, thereby wasting hardware control resources and seriously degrading the user experience, the invention provides a human body abnormal behavior monitoring method for an anti-peeping screen. The adopted technical scheme is as follows:
acquiring real-time images of a plurality of continuous frames of the peep-proof object camera, identifying personnel in the real-time images, and judging whether the personnel are suspicious;
taking a continuous multi-frame real-time image with suspicious personnel as an image to be analyzed, and downsampling the image to be analyzed to obtain a downsampled image;
obtaining edge lines and corner points in the downsampled image; calculating the gray difference between each edge line pixel point on the same edge line in the downsampled image and other edge line pixel points in a first preset neighborhood range; calculating the distance characteristics between each edge line pixel point on the same edge line and the corner point in the second preset neighborhood range; calculating the number of corner points of the edge where each edge line pixel point is located; screening the edge line pixel points according to the gray level difference, the distance characteristics and the corner number to obtain characteristic pixel points;
acquiring the overall motion amount of the suspicious person according to the corner points and feature pixel points in two adjacent frames of downsampled images, and calculating the variation amount of the overall motion amount; obtaining a warning coefficient through the accumulated value and variation amount of the overall motion amount of the consecutive multi-frame downsampled images, and monitoring whether the suspicious person exhibits peeping behavior through the value of the warning coefficient.
Further, the step of obtaining the gray scale difference of the edge line pixel point includes:
and calculating the average value of the absolute values of the difference values of the gray values of each edge line pixel point on the same edge line in the downsampled image and other edge line pixel points in the first preset neighborhood range, and obtaining the gray difference of the edge line pixel points.
Further, the step of obtaining the distance characteristic of the edge line pixel point includes:
and calculating the average value of Euclidean distances between each edge line pixel point on the same edge line in the downsampled image and the corner point in the second preset neighborhood range, and obtaining the distance characteristic of the edge line pixel points.
Further, the step of obtaining the feature pixel point includes:
calculating the product of the gray difference, distance feature, and corner number of an edge line pixel point to obtain the likelihood that the edge line pixel point is a feature pixel point; presetting a likelihood threshold, and screening the edge line pixel points exceeding the likelihood threshold as feature pixel points.
Further, the step of obtaining the amount of the whole motion includes:
according to the corner points and feature pixel points obtained in the downsampled images, obtaining the motion vectors of the corner points and feature pixel points by a three-step search method, and calculating the average magnitude of all non-zero motion vectors between two adjacent downsampled images to obtain the overall motion amount.
Further, the step of acquiring the amount of change in the amount of overall motion includes:
calculating the difference between the overall motion amounts obtained for the second frame and the last frame of the consecutive multi-frame downsampled images to obtain the variation amount of the overall motion amount.
Further, the step of obtaining the warning coefficient includes:
calculating the ratio of the accumulated value of the overall motion amount to the variation amount of the overall motion amount, and subtracting this ratio to obtain the warning coefficient.
Further, the step of monitoring the peep prevention through the value of the warning coefficient comprises the following steps:
when the obtained warning coefficient exceeds the warning coefficient threshold, the suspicious person is considered to be intentionally peeping; at this point a warning is issued and the screen is closed.
Further, the method for identifying the person in the real-time image and judging whether the person is a suspicious person comprises the following steps:
and identifying the personnel appearing in the real-time image according to the convolutional neural network, identifying the personnel appearing through a face identification algorithm, and judging whether the personnel appearing are suspicious.
The invention has the following beneficial effects. First, consecutive multi-frame real-time images in which a suspicious person appears are taken as the images to be analyzed; the aim is to judge from the suspicious person's subsequent movement whether peeping is intentional, rather than directly closing the screen based on the face recognition result alone. Because the application scene has high timeliness requirements on the monitoring result, and more image pixels increase the calculation amount and calculation time, the image to be analyzed is downsampled to reduce the computation of subsequent algorithms. Further, the gray difference of an edge line pixel point is calculated to express how prominent that pixel point is within the edge lines of the downsampled image, and its distance feature is calculated to express how far it lies from the corner points; the gray difference, distance feature, and corner number are combined to judge whether an edge line pixel point can serve as a feature pixel point, so that feature pixel points are extracted effectively and the reference value of the subsequent motion information is increased.
Using the corner points and feature pixel points as the basis for acquiring motion information reduces the calculation amount and time needed to compute the suspicious person's overall motion amount and improves the timeliness of the monitoring result. The subsequent movement behavior of the suspicious person can be clearly analyzed through the variation amount and accumulated value of the overall motion amount, and whether the person is intentionally peeping can be judged directly through the warning coefficient obtained from them. Therefore, the invention not only judges whether peeping may exist according to face recognition, but also judges whether intentional peeping exists according to the suspicious person's subsequent movement, reducing the calculation amount and time of that analysis and improving the timeliness of the anti-peeping monitoring result.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for monitoring abnormal behavior of a human body for a peeping-proof screen according to an embodiment of the present invention.
Detailed Description
In order to further explain the technical means and effects adopted by the invention to achieve the preset aim, the following is a detailed description of specific implementation, structure, characteristics and effects of a human body abnormal behavior monitoring method for a peep-proof screen according to the invention, which are presented in conjunction with the accompanying drawings and the preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the human body abnormal behavior monitoring method for the peep-proof screen provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for monitoring abnormal behavior of a human body for a peeping-preventing screen according to an embodiment of the invention is shown, and the method includes the following steps.
Step S1, acquiring real-time images of a plurality of continuous frames of the peep-proof object camera, identifying personnel in the real-time images, and judging whether the personnel are suspicious.
In the embodiment of the invention, the anti-peeping object is a smart large-screen television used for meetings and office work, and the camera is mounted above it so that it can capture all angle ranges from which the screen can be seen. It should be noted that the anti-peeping object may be any mobile terminal with a display function, and the installation position of the camera may be determined according to the scene, provided that all angle ranges from which the screen can be seen are covered. A person with viewing authority is a person permitted to use and view the screen; an unauthorized suspicious person is one who is not.
In the embodiment of the invention, the camera captures one frame of real-time image per second; the capture frequency can be determined by the implementer. In order to obtain a clearer image and reduce the calculation amount for subsequent analysis, the captured image is grayed and denoised. It should be noted that the weighted-average graying method and the Gaussian filtering used in preprocessing are technical means well known to those skilled in the art, and the specific steps are not repeated.
The specific steps of identifying the personnel in the real-time image and judging whether the personnel are suspicious comprise the following steps:
Person recognition is performed on the preprocessed real-time image through a convolutional neural network; when a person appears in the real-time image, a face recognition algorithm is applied to judge whether the person is suspicious. In the embodiment of the invention, face recognition uses the LBPH algorithm, and its training process is as follows: the face information of authorized personnel is used as the training object and stored as template images; the LBPH algorithm compares the captured real-time image with the template images and judges, according to the similarity of the face data, whether a suspicious person is present. If the identified person is authorized, no subsequent analysis is performed and the camera simply continues monitoring; if the identified person is an unauthorized suspicious person, the person's subsequent actions must be tracked and analyzed to judge whether there is intentional peeping of the screen information. It should be noted that the specific structure and training method of the convolutional neural network and of the face recognition algorithm are technical means well known to those skilled in the art, and the specific steps are not repeated.
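The patent gives no code for the LBPH comparison. As an illustrative sketch only, the Local Binary Pattern histogram underlying LBPH can be computed and compared as follows; the function names and the histogram-intersection similarity are assumptions, not the embodiment's exact measure:

```python
# Minimal sketch of the LBP feature underlying the LBPH face matcher.
# All names and the similarity measure are illustrative assumptions.

def lbp_codes(img):
    """8-neighbour Local Binary Pattern code for each interior pixel
    of a grayscale image given as a list of lists."""
    h, w = len(img), len(img[0])
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            codes.append(code)
    return codes

def lbp_histogram(img):
    """Normalized 256-bin histogram of the LBP codes."""
    hist = [0] * 256
    for c in lbp_codes(img):
        hist[c] += 1
    total = sum(hist) or 1
    return [v / total for v in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

In use, a template histogram from an authorized face patch would be compared against the histogram of a detected face region; a low similarity suggests an unauthorized (suspicious) person. A production system would instead use a trained LBPH recognizer.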
And S2, taking a continuous multi-frame real-time image with suspicious personnel as an image to be analyzed, and downsampling the image to be analyzed to obtain a downsampled image.
When a suspicious person passes by the screen, their gaze may sweep over it inadvertently; at that moment the camera captures the face image and the person is judged to be an unauthorized suspicious person. In order to judge whether peeping is intentional, the suspicious person's subsequent movement behavior must be analyzed, so the real-time images in which the suspicious person appears are taken as the images to be analyzed. It should be noted that, since the real-time images are consecutive multi-frame images, the images to be analyzed screened out by identifying suspicious persons are also consecutive multi-frame images.
Because the anti-peeping scene applied by the invention has higher timeliness requirements on the monitoring result, the calculation and analysis time of the image to be analyzed is shortened as much as possible, and in order to reduce the calculated amount in the image data processing and improve the timeliness of the monitoring result, the image to be analyzed needs to be subjected to downsampling treatment to obtain a downsampled image before the image to be analyzed is calculated and analyzed. The specific steps for obtaining the downsampled image include:
In the embodiment of the invention, a superpixel segmentation algorithm is used to block the image to be analyzed, with a preset block size of 10 x 10, to obtain the downsampled image. It should be noted that the implementer may determine the block size; superpixel segmentation is a technical means well known to those skilled in the art, and the specific steps are not repeated.
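As a hedged sketch of this step, plain 10 x 10 block averaging approximates the pre-block downsampling described above; the embodiment uses superpixel segmentation, which this simplified sketch does not reproduce:

```python
# Simplified downsampling by block averaging. The patent's embodiment
# uses superpixel pre-segmentation; fixed tiles are an assumption here.

def downsample(img, block=10):
    """Average each block x block tile of a grayscale image
    (list of lists); trailing rows/columns that do not fill a
    whole tile are dropped."""
    h, w = len(img), len(img[0])
    out = []
    for by in range(0, h - h % block, block):
        row = []
        for bx in range(0, w - w % block, block):
            s = sum(img[y][x]
                    for y in range(by, by + block)
                    for x in range(bx, bx + block))
            row.append(s / (block * block))
        out.append(row)
    return out
```

Each output pixel summarizes one tile, so a 640 x 480 frame shrinks to 64 x 48, cutting the work of every subsequent per-pixel step by two orders of magnitude.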
Step S3, obtaining edge lines and corner points in the downsampled image; calculating the gray difference between each edge line pixel point on the same edge line in the downsampled image and other edge line pixel points in a first preset neighborhood range; calculating the distance characteristics between each edge line pixel point on the same edge line and the corner point in the second preset neighborhood range; calculating the number of corner points of the edge where each edge line pixel point is located; and screening the edge line pixel points through gray level differences, distance characteristics and corner numbers to obtain characteristic pixel points.
The purpose of acquiring the edge lines and corner points in the downsampled image is as follows. If the motion vectors of all pixel points in the downsampled image were calculated when analyzing the suspicious person's movement behavior, the large number of pixel points would require a long calculation time and certain hardware resources, which neither improves the timeliness of the monitoring result nor keeps the cost low; moreover, meaningless or weak pixel points participating in the motion analysis would reduce the accuracy of the motion information. Therefore, to guarantee the accuracy of the monitoring result while improving timeliness, feature pixel points and corner points are screened out and only their motion vectors are calculated, which reduces the number of pixel points involved. The feature pixel points that can express the suspicious person's movement behavior mostly lie on the edge lines of the downsampled image, and the corner points are characteristic pixel points of the image that can likewise express the movement behavior. The specific steps of acquiring the edge lines and corner points in the downsampled image are:
the edge detection algorithm is used for acquiring the edge information in the downsampled image, and in the embodiment of the invention, the Canny operator is used for carrying out edge detection on the downsampled image to acquire the edge line in the downsampled image. And (3) using corner detection to acquire the corners in the downsampled image, and acquiring the corners in the downsampled image through SUSAN corner detection. It should be noted that, both Canny operator edge detection and SUSAN corner detection are technical means well known to those skilled in the art, and specific steps are not repeated.
After the edge lines and corner points are acquired, if only the motion vectors of the corner points were calculated, the number of corner points would be insufficient to accurately characterize the suspicious person's movement behavior. Therefore, as many pixel points as possible on the edge lines of the downsampled image must also be obtained as feature pixel points. The specific steps of screening the edge line pixel points by calculating the gray difference, distance feature, and corner number to obtain the feature pixel points are:
(1) The acquiring formula of the gray difference between each edge line pixel point on the same edge line in the downsampled image and other edge line pixel points in the first preset neighborhood range specifically comprises the following steps:
G_i = (1/n) * Σ_{j=1}^{n} |g_i - g_j|

where G_i represents the gray difference of the i-th edge line pixel point in the downsampled image, g_i represents the gray value of the i-th edge line pixel point, g_j represents the gray value of the j-th pixel point in the first preset neighborhood range, and n is the number of pixel points remaining in the first preset neighborhood range after removing the edge line pixel point to be calculated. The formula gives the average of the absolute gray value differences between the edge line pixel point to be calculated and the other pixel points in the first preset neighborhood range. It should be noted that the edge line pixel points are the pixel points on the edge lines of the downsampled image.
In the embodiment of the present invention, the first preset neighborhood range refers to an even number of adjacent pixel points on the same edge line as the edge line pixel point to be calculated; for example, the 8 pixel points on each side of the pixel point to be calculated form its first preset neighborhood range. It should be noted that the implementer may set the even number of pixel points forming the first preset neighborhood range. If the edge line pixel point to be calculated lies at the end of the edge line and the number of pixel points in the first preset neighborhood range is insufficient, that edge line pixel point is discarded and its gray difference is not calculated.
When the gray level difference degree between the edge line pixel point and the pixel point in the first preset neighborhood range is larger, the edge line pixel point is more obvious in the edge line, and the possibility that the edge line pixel point is used as a characteristic pixel point is also higher.
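The gray-difference term above can be sketched directly; assuming, as in the embodiment, a neighborhood of 8 pixels on each side along the same edge line (the function name and list-based interface are illustrative):

```python
# Sketch of the gray difference G_i: mean absolute gray difference
# between an edge-line pixel and the other pixels of its first preset
# neighbourhood along the same edge line.

def gray_difference(edge_grays, i, half=8):
    """edge_grays: gray values ordered along one edge line.
    Returns None for pixels too close to the edge-line ends,
    which the method discards."""
    lo, hi = i - half, i + half + 1
    if lo < 0 or hi > len(edge_grays):
        return None
    neigh = edge_grays[lo:i] + edge_grays[i + 1:hi]
    return sum(abs(edge_grays[i] - g) for g in neigh) / len(neigh)
```

A pixel whose gray value stands out from its neighbours along the edge line gets a large G_i, marking it as prominent within the edge line.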
(2) Calculating the distance characteristic between each edge line pixel point on the same edge line in the downsampled image and the corner point in the second preset neighborhood range, wherein the distance characteristic acquiring formula specifically comprises the following steps:
D_i = (1/m) * Σ_{k=1}^{m} d_{ik}

where D_i is the distance feature of the i-th edge line pixel point in the downsampled image, d_{ik} represents the Euclidean distance between the i-th edge line pixel point and the k-th corner point in the second preset neighborhood range, and m represents the number of corner points in the second preset neighborhood range. The formula gives the average Euclidean distance between the edge line pixel point to be calculated and the corner points in the second preset neighborhood range on the same edge line.
In the embodiment of the present invention, the second preset neighborhood range refers to the corner points on the edge line where the edge line pixel point to be calculated is located that are nearest to that pixel point; in the embodiment the value of m is 10, i.e., the 10 corner points closest to the edge line pixel point to be calculated are found on its edge line. It should be noted that the implementer may set the second preset neighborhood range, and if the edge line of the pixel point to be calculated has fewer than 10 corner points, the actual number of corner points is used.
The purpose of calculating the distance feature of the edge line pixel points in the downsampled image is as follows. When determining the suspicious person's movement behavior, as many feature pixel points as possible are needed so that their motion vectors can be calculated. The corner points of the downsampled image are already determined feature points; since a corner point is defined as an extreme point of the image, i.e., a point prominent in some attribute, edge line pixel points close to a corner point on the same edge line need not additionally serve as feature pixel points, which would only increase the calculation amount of the subsequent motion vectors. Instead, edge line pixel points far from the corner points on the same edge line are taken as feature pixel points. Accordingly, when the distance feature is larger, meaning the pixel point is farther from the corner points, the edge line pixel point is more likely to become a feature pixel point; when the distance feature is smaller, meaning the pixel point is closer to a corner point, the probability is also smaller.
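A minimal sketch of the distance feature, assuming corner points are given as (y, x) coordinates on the same edge line (the m-nearest selection follows the embodiment's value of 10):

```python
# Sketch of the distance feature D_i: average Euclidean distance from
# an edge-line pixel to the m nearest corner points on its edge line.

import math

def distance_feature(pixel, corners, m=10):
    """pixel: (y, x); corners: list of (y, x) on the same edge line.
    Uses the actual corner count when fewer than m exist."""
    dists = sorted(math.dist(pixel, c) for c in corners)
    nearest = dists[:m]
    return sum(nearest) / len(nearest) if nearest else 0.0
```

Edge-line pixels far from all nearby corners get a large D_i and are therefore favoured as feature pixel points, complementing the corners rather than duplicating them.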
(3) In the downsampled image, the acquired edge information includes not only the human contour edges but also the clothing information of the human body. Since clothing information is mostly linear texture, many straight edges are obtained, and such edges may contain no corner points; edge lines without corner points can therefore be ignored, because feature pixel points can still be obtained from the other edges. In the embodiment of the invention, to facilitate the subsequent index operation, the counted number of corner points on the edge line where each edge line pixel point is located is normalized; the specific expression is:
N_i = 1 - e^{-c_i}

where N_i represents the normalized corner number for the i-th edge line pixel point, e is the natural constant serving as the base of the exponential function, and c_i represents the number of corner points on the edge line where the i-th edge line pixel point is located. When the number of corner points on the edge line is larger, N_i is closer to 1; when the number is smaller, N_i is closer to 0. The purpose of the formula is to normalize the corner number of the edge line where the pixel point is located to [0, 1].
When the number of corner points of the edge line where the edge line pixel points are located is larger, the possibility that the edge line pixel points on the edge line become characteristic pixel points is higher; when the number of corner points of the edge line where the pixel points of the edge line are located is 0, the edge line is ignored, and the pixel points of the edge line on the edge line are not analyzed.
(4) The specific steps of screening the edge line pixel points through the gray difference, distance feature, and corner number to obtain the feature pixel points are as follows. From the calculation of the gray difference, the greater the gray difference between an edge line pixel point and the pixel points in its first preset neighborhood range, the more prominent it is within the edge line, and the greater the likelihood that it serves as a feature pixel point. From the calculation of the distance feature, the greater the distance feature, meaning the farther from the corner points, the greater the likelihood that the edge line pixel point becomes a feature pixel point. From the corner count of the edge line where the pixel point is located, the more corner points on that edge line, the greater the likelihood that its edge line pixel points become feature pixel points.
Therefore, whether an edge line pixel point becomes a feature pixel point can be decided from its gray level difference, distance feature and corner number; the gray level difference and the distance feature are range-normalized so that their values lie in [0,1]. The possibility that an edge line pixel point becomes a feature pixel point is calculated as follows: multiply the gray level difference, the distance feature and the corner-count term of the same edge line pixel point, all in the range [0,1], to obtain a likelihood value; the closer the product is to 1, the higher the likelihood, and the closer it is to 0, the lower the likelihood. A likelihood threshold is preset, and an edge line pixel point whose likelihood exceeds the threshold is screened out as a feature pixel point. In the embodiment of the present invention the preset likelihood threshold is 0.8; in different application scenarios the threshold may be set according to the specific implementation.
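The screening step above can be sketched in a few lines. This is only an illustration, not the patent's implementation: the function and parameter names are assumptions, and the inputs `gray_diff` and `dist_feat` are taken to be already range-normalized to [0,1] as the text describes.

```python
import numpy as np

def feature_pixel_mask(gray_diff, dist_feat, corner_count, threshold=0.8):
    """Screen edge line pixel points into feature pixel points.

    gray_diff and dist_feat are per-pixel values already normalized to
    [0, 1]; corner_count is the number of corner points on the edge line
    each pixel belongs to. All names are illustrative.
    """
    gray_diff = np.asarray(gray_diff, dtype=float)
    dist_feat = np.asarray(dist_feat, dtype=float)
    # Corner-count term mapped into [0, 1]: more corners -> closer to 1,
    # zero corners -> 0, so pixels on corner-free edge lines are ignored.
    corner_term = 1.0 - np.exp(-np.asarray(corner_count, dtype=float))
    likelihood = gray_diff * dist_feat * corner_term
    return likelihood > threshold
```

A pixel with a strong gray level difference (0.95), a large distance feature (0.95) and many corners on its edge line passes the 0.8 threshold, while a pixel on an edge line with zero corners is always rejected.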
Step S4, obtaining the overall motion amount of the suspicious person according to the corner points and feature pixel points in two adjacent frames of downsampled images, and calculating the variation of the overall motion amount; obtaining a warning coefficient from the accumulated value and the variation of the overall motion amount over the consecutive multi-frame downsampled images, and monitoring whether the suspicious person is peeping through the value of the warning coefficient.
After the corner points and feature pixel points of the consecutive downsampled frames have been screened, the motion vectors of the corner points and feature pixel points are obtained by the three-step search method. Searching and matching only the corner points and feature pixel points greatly reduces the calculation amount and time compared with searching and matching all pixel points. A motion vector describes the change in position of the same corner point or feature pixel point between two adjacent frames. It should be noted that the three-step search method is a technical means well known to those skilled in the art, and its specific steps are not described herein.
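For reference, the classic three-step search the text relies on can be sketched as follows. This is a generic textbook version, not code from the patent; block size, initial step and the SAD cost are assumed illustrative choices.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return float(np.abs(a.astype(int) - b.astype(int)).sum())

def three_step_search(prev, curr, y, x, block=8, step=4):
    """Classic three-step search: estimate where the block whose top-left
    corner is (y, x) in `prev` moved to in `curr`.  The block at (y, x)
    is assumed to lie fully inside the frame.  Returns (dy, dx)."""
    h, w = prev.shape
    ref = prev[y:y + block, x:x + block]
    by, bx = y, x                       # best position found so far
    while step >= 1:
        best = sad(ref, curr[by:by + block, bx:bx + block])
        cand = (by, bx)
        # Examine the 8 neighbours at the current step size.
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                ny, nx = by + dy, bx + dx
                if 0 <= ny <= h - block and 0 <= nx <= w - block:
                    cost = sad(ref, curr[ny:ny + block, nx:nx + block])
                    if cost < best:
                        best, cand = cost, (ny, nx)
        by, bx = cand
        step //= 2                      # halve the step each round
    return by - y, bx - x
```

Each round evaluates at most 9 candidate positions, so a displacement within the search range is found in roughly log2(step) rounds instead of an exhaustive scan, which is the saving the description refers to.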
The mean magnitude of the motion vectors of all corner points and feature pixel points whose motion vectors are non-zero is then calculated between the two adjacent downsampled frames, giving the overall motion amount of the two adjacent frames.
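The overall motion amount described above reduces to a short computation; a minimal sketch, with the function name assumed:

```python
import numpy as np

def overall_motion(vectors):
    """Overall motion amount between two adjacent downsampled frames:
    the mean magnitude of the non-zero motion vectors of the tracked
    corner points and feature pixel points (zero vectors are excluded,
    as in the description)."""
    mags = np.linalg.norm(np.asarray(vectors, dtype=float), axis=1)
    mags = mags[mags > 0]
    return float(mags.mean()) if mags.size else 0.0
```

Excluding zero vectors means stationary background features do not dilute the motion estimate of the person in front of the screen.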
The specific steps for judging whether the suspicious person is peeping include:
The formula for the variation of the overall motion amount is:

ΔS = S_1 - S_(F-1)

where ΔS denotes the variation of the overall motion amount, S_1 denotes the overall motion amount obtained from the first and second adjacent downsampled frames after the suspicious person appears, and S_(F-1) denotes the overall motion amount obtained from the last frame and the penultimate frame of the F consecutively acquired downsampled frames. In the embodiment of the invention F = 10 downsampled frames are acquired continuously, so S_9 is obtained from the ninth and tenth frames. It should be noted that the practitioner may determine the number of consecutive frames during implementation.
The result ΔS may or may not be greater than zero. When ΔS is near zero or less than zero, the final overall motion amount has not decreased, or is even greater than the initial overall motion amount, indicating that the suspicious person in the downsampled images is moving at a uniform or accelerating speed; in this case the person is considered more likely to be passing by and less likely to be peeping. When ΔS is far greater than zero, the suspicious person is slowing down, and the likelihood of peeping is considered high.
The formula for the accumulated value of the overall motion amount is:

S_sum = Σ_(f=2..F) S_(f-1)

where S_sum denotes the accumulated value of the overall motion amount in the consecutive frames after the suspicious person appears, F denotes the set number of consecutive downsampled frames (F = 10 in the embodiment of the invention), f indexes any downsampled frame other than the first, and S_(f-1) denotes the overall motion amount calculated from the (f-1)-th and f-th frames. In the embodiment of the invention this means that after a suspicious person is found, an overall motion amount is computed from each pair of adjacent downsampled frames; 10 downsampled frames thus yield 9 overall motion amount values, and the sum of these 9 values is the accumulated value of the overall motion amount.
When S_sum is larger, the suspicious person in front of the screen is moving faster as a whole, the motion amount is larger, the person is more likely to be merely passing by, and the likelihood of peeping is lower. When S_sum is smaller, the suspicious person is moving more slowly, the motion amount is smaller, the time spent watching the screen is longer, and the likelihood of peeping is greater.
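Given the F-1 overall motion amounts of the F consecutive frames, the accumulated value and the variation described above are straightforward to compute; a minimal sketch with assumed names:

```python
def motion_statistics(overall_motions):
    """Given the F-1 overall motion amounts S_1..S_(F-1) computed from F
    consecutive downsampled frames (F = 10 in the embodiment, so 9
    values), return (accumulated value, variation)."""
    s_sum = sum(overall_motions)                    # accumulated value
    delta = overall_motions[0] - overall_motions[-1]  # S_1 - S_(F-1)
    return s_sum, delta
```

A decelerating person produces a decreasing sequence, hence a positive variation; a passer-by moving at constant speed produces a variation near zero.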
The formula for the warning coefficient, consistent with claim 5, is:

R = 1 - S'_sum / ΔS'

where R is the warning coefficient for monitoring whether the suspicious person is peeping, S'_sum is the normalized accumulated value of the overall motion amount, and ΔS' is the normalized variation of the overall motion amount. When the suspicious person in front of the screen moves slowly and the overall motion decays strongly, R needs to be larger: the slower the movement of the suspicious person, the smaller the accumulated value of the overall motion amount, and the greater the likelihood that the suspicious person in front of the screen is peeping.
Whether the suspicious person exhibits peeping behavior is monitored through the change of the warning coefficient: a warning coefficient threshold is preset, and when the warning coefficient exceeds the threshold, it is judged that the suspicious person is peeping, a peeping warning is issued, and the screen is closed. In the embodiment of the invention the preset warning coefficient threshold is 0.7; in different application scenarios the threshold may be set according to the specific implementation.
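The final decision step can be sketched as follows. The "one minus the ratio" form is a reconstruction from claim 5 (the original formula image is not preserved), and the function names are assumptions:

```python
def warning_coefficient(s_sum_norm, delta_norm):
    """Warning coefficient: one minus the ratio of the normalized
    accumulated overall motion to the normalized variation (per claim 5).
    Small accumulated motion combined with strong decay drives R
    toward 1."""
    return 1.0 - s_sum_norm / delta_norm

def is_peeping(s_sum_norm, delta_norm, threshold=0.7):
    """Alarm decision: peeping is flagged when R exceeds the preset
    threshold (0.7 in the embodiment)."""
    return warning_coefficient(s_sum_norm, delta_norm) > threshold
```

A slow, decelerating person (small S'_sum, large ΔS') yields R near 1 and triggers the alarm; a fast passer-by yields a small or negative R and does not.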
In summary, in monitoring abnormal peeping behavior in front of a screen, the embodiment of the invention not only judges whether a person is suspicious through face recognition, but also analyzes the subsequent motion behavior of the suspicious person to judge whether the person is peeping intentionally. By calculating the gray level difference, distance feature and corner number of the edge line pixel points, only the corner points and feature pixel points need to be matched, which reduces the calculation amount and calculation time; the warning coefficient is then determined from the overall motion amount obtained from the corner points and feature pixel points, and peeping behavior is judged by monitoring the warning coefficient, so that the peeping prevention monitoring result is more accurate and timely.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner; for identical or similar parts between embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others.
Claims (7)
1. A method for monitoring abnormal human body behaviors of a peeping-proof screen, which is characterized by comprising the following steps:
acquiring real-time images of a plurality of continuous frames of the peep-proof object camera, identifying personnel in the real-time images, and judging whether the personnel are suspicious;
taking a continuous multi-frame real-time image with suspicious personnel as an image to be analyzed, and downsampling the image to be analyzed to obtain a downsampled image;
obtaining edge lines and corner points in the downsampled image; calculating the gray difference between each edge line pixel point on the same edge line in the downsampled image and other edge line pixel points in a first preset neighborhood range; calculating the distance characteristics between each edge line pixel point on the same edge line and the corner point in the second preset neighborhood range; calculating the number of corner points of the edge where each edge line pixel point is located; screening the edge line pixel points according to the gray level difference, the distance characteristics and the corner number to obtain characteristic pixel points;
acquiring the overall motion quantity of suspicious personnel according to corner points and characteristic pixel points in two adjacent frames of downsampled images, and calculating the variation quantity of the overall motion quantity; obtaining a warning coefficient through accumulated values and variation of the overall motion quantity of the continuous multi-frame downsampled images, and monitoring whether suspicious personnel are peeped or not through the numerical value of the warning coefficient;
the step of obtaining the whole exercise amount comprises the following steps:
according to the corner points and feature pixel points obtained by calculation in the downsampled images, obtaining the motion vectors of the corner points and feature pixel points by a three-step search method, and calculating the mean magnitude of all non-zero motion vectors between two adjacent downsampled frames to obtain the overall motion amount;
the step of obtaining the amount of change of the overall motion includes:
and calculating the difference value of the obtained overall motion quantity of the second frame and the last frame of images in the downsampled images of the continuous multiframes to obtain the variation quantity of the overall motion quantity.
2. The method for monitoring abnormal behavior of a human body on a peep-proof screen according to claim 1, wherein the step of obtaining the gray scale difference of the edge line pixels comprises:
and calculating the average value of the absolute values of the difference values of the gray values of each edge line pixel point on the same edge line in the downsampled image and other edge line pixel points in the first preset neighborhood range, and obtaining the gray difference of the edge line pixel points.
3. The method for monitoring abnormal behavior of a human body on a peep-proof screen according to claim 1, wherein the step of obtaining the distance characteristic of the edge line pixel point comprises:
and calculating the average value of Euclidean distances between each edge line pixel point on the same edge line in the downsampled image and the corner point in the second preset neighborhood range, and obtaining the distance characteristic of the edge line pixel points.
4. The method for monitoring abnormal behavior of a human body on a peep-proof screen according to claim 1, wherein the step of obtaining the feature pixel point comprises:
calculating the product of gray level difference, distance characteristics and corner number of the edge line pixel points to obtain the possibility that the edge line pixel points are characteristic pixel points; presetting a possibility threshold, and screening edge line pixel points exceeding the possibility threshold as characteristic pixel points.
5. The method for monitoring abnormal behavior of a human body on a peep-proof screen according to claim 1, wherein the step of obtaining the warning coefficient comprises:
and calculating the ratio of the accumulated value of the whole exercise amount to the variation of the whole exercise amount, and subtracting the ratio of the accumulated value to the variation to obtain a warning coefficient.
6. The method for monitoring abnormal behavior of a human body on a peep-proof screen according to claim 5, wherein the step of monitoring peep-proof by a value of a warning coefficient comprises:
when the obtained warning coefficient exceeds the warning coefficient threshold, the suspicious person is considered to be peeping intentionally; at this time, a warning is issued and the screen is closed.
7. The method for monitoring abnormal human body behaviors of a peep-proof screen according to claim 1, wherein the method for identifying the person in the real-time image and judging whether the person is a suspicious person comprises the following steps:
and identifying the personnel appearing in the real-time image according to the convolutional neural network, identifying the personnel appearing through a face identification algorithm, and judging whether the personnel appearing are suspicious.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310272365.9A CN115984973B (en) | 2023-03-21 | 2023-03-21 | Human body abnormal behavior monitoring method for peeping-preventing screen |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115984973A CN115984973A (en) | 2023-04-18 |
CN115984973B true CN115984973B (en) | 2023-06-27 |