CN113628251B - Smart hotel terminal monitoring method - Google Patents
Smart hotel terminal monitoring method
- Publication number
- CN113628251B (grant of application CN202111180117.9A)
- Authority
- CN
- China
- Prior art keywords
- target object
- moving target
- moving
- pixel point
- monitoring area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Closed-Circuit Television Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a smart hotel terminal monitoring method. Video of a monitoring area is acquired at different angles by a plurality of cameras arranged at different terminal positions in the same monitoring area; moving target object detection is performed on the acquired video images, and the foreground pixel points of the moving target object are extracted to obtain a boundary curve of the moving target object. A moving target object template is established from the boundary curve and updated according to the cameras at different viewing angles and different terminal positions; the moving target object is matched and tracked, a fusion track MI of the moving target object is generated, the moving direction is predicted, and a plurality of cameras of adjacent areas in the moving direction are allocated to prepare for tracking. Finally, the behavior of the moving target object is judged according to its moving route in each monitoring area; if the behavior is abnormal, a video picture of the abnormal behavior is saved as evidence and an alarm is simultaneously sent to an administrator in the background.
Description
Technical Field
The invention relates to the field of business management, in particular to a smart hotel terminal monitoring method.
Background
As a comprehensive system with strong preventive capability, the smart hotel terminal monitoring system has always occupied an important position in the field of smart hotel monitoring. Social development and technological progress keep widening the range of applications for terminal surveillance video, and the continuous cross-fertilization of terminal video monitoring networks with other disciplines keeps giving video monitoring new meaning.
The functions of a video monitoring system depend to a great extent on three technologies: communication, embedded systems, and image processing; in recent years, with the continuous development of these three technologies, the functions of video monitoring systems have diversified. As more and more industries pursue cost-conscious, refined management, a video monitoring system must satisfy functional and technical requirements while treating economy, practicability, and stability as important measures of its quality. In the prior art, smart hotel terminal monitoring systems still have the following problems: the front-end processor of the video monitoring system has poor real-time performance and excessive power consumption; in video information processing, illumination changes alter the background during extraction and segmentation of the moving target, so the disturbed background is misclassified as the moving target; during tracking, if the moving target moves only a slight distance, traditional methods easily lose the target; and feature extraction for behavior analysis of the moving target is weak, which is a bottleneck of behavior recognition.
For example, patent document CN101179707A proposes a multi-view cooperative target tracking and measurement method for wireless network video images. It performs target measurement in a single wireless video monitoring node by a dynamic background construction method to obtain the minimum rectangular boundary containing only the target, and, through cooperation among nodes, adopts a progressive distributed data fusion method to fuse the measurement results of all wireless video monitoring nodes, thereby realizing cooperative positioning measurement of the moving target. However, that scheme adopts a multi-parameter evaluation method based on energy entropy and Mahalanobis distance (covering energy consumption, residual energy, information effectiveness, node characteristics, information feedback, and the like) in the cooperation process; it satisfies function and technology but ignores economy, practicability, and stability in application.
For another example, patent document US2009324010A1 proposes a neural-network-controlled automatic tracking and recognition system and method, which includes a fixed-field acquisition module, a full-function variable-field acquisition module, a video image recognition algorithm module, a neural network control module, a suspicious target tracking module, a database comparison and alarm judgment module, a monitoring feature recording and rule setting module, a light monitoring module, a backlight module, an alarm output/display/storage module, and a safety monitoring sensor. However, that scheme relies on neural network control, is suited to coarse tracking and recognition of large-scale pedestrian and vehicle flows, and its feature extraction for behavior analysis of moving targets is weak.
Disclosure of Invention
In order to solve the technical problem, the invention provides a smart hotel terminal monitoring method, which comprises the following steps:
step one, video of the monitoring area is collected by a plurality of cameras with different viewing angles arranged at different terminal positions in the same monitoring area, moving target object detection is performed on the collected video images, and the foreground pixel points of the moving target object are extracted;
step two, all foreground pixel points of the moving target object are combined to obtain a boundary curve of the moving target object;
step three, a moving target object template is established from the boundary curve and updated according to the cameras with different viewing angles at different terminal positions, the moving target object is matched and tracked, and information fusion is performed on the video image information collected by the plurality of cameras to generate a fusion track MI of the moving target object;
step four, predicting the moving direction according to the fusion track MI of the moving target object, and allocating a plurality of cameras of adjacent areas in the moving direction to perform tracking preparation;
and step five, finally judging the behavior of the moving target object according to its moving route in each monitoring area, comparing the behavior with the defined behavior patterns of the behavior definition library to determine whether it is normal behavior, and, if the behavior is abnormal, saving a video picture of the abnormal behavior of the moving target object as evidence while simultaneously sending an alarm to an administrator in the background.
Further, in the first step, a moving target object detection algorithm based on mixed-Gaussian foreground modeling is adopted: the difference between the pixel value I_t of the current pixel point and the mean μ_{i,t-1} of each background Gaussian distribution is taken, and the absolute value of the difference is compared with D times the distribution standard deviation σ_{i,t-1}; the judgment formula for a foreground pixel point is:
|I_t − μ_{i,t-1}| > D·σ_{i,t-1} (1);
wherein t represents the current frame, t-1 represents the previous frame, and i represents the current pixel point;
if the absolute value is larger than D times of the distribution standard deviation, the pixel point is a foreground pixel point of the moving target object, otherwise, the pixel point is a background pixel point.
Further, for Gaussian distributions where color is present, the foreground pixel points are determined according to the following formulas:
In formulas (2) and (3), the comparison bounds are thresholds; if the pixel value I_t of the current pixel point satisfies either formula (2) or formula (3), the current pixel point is judged to be a motion foreground pixel point.
Further, in the second step, the foreground pixel points of the moving target object extracted in the first step are combined to obtain the outline of the moving target object, and the outline is subjected to polygon fitting to obtain a boundary curve.
Further, the polygon fitting specifically includes:
assigning a weight to each pixel point P(i) on the contour, the weight being the chord height C(P(i)) of the pixel point P(i); the pixel points P(i) whose chord height is greater than a threshold T_C are retained, and the resulting point set is P = {P_1, P_2, …, P_m}, where m is the number of pixel points after polygon fitting.
Further, in step three, the process of updating the moving target object template is as follows:
in the same monitoring area, at time k-1, the moving target object template of the current camera is established, and the state vector of the moving target object is set as X_{k-1}; at time k, the moving target object moves to the next camera, and its state vector is X_k; the motion state of the moving target object is then calculated according to the following formula:
X_k = A·X_{k-1} + B·U_{k-1} + W_{k-1} (4);
where A is the state transition matrix, B is the control matrix, and U_{k-1} and W_{k-1} are the change in distance and the change in angle, respectively, between the next camera and the current camera; the moving target object template is updated according to formula (4).
Further, in the third step, after the moving target object leaves the monitoring area, the image coordinates in the monitoring pictures captured by the multi-position cameras in the monitoring area it has just left are converted into three-dimensional coordinates in the world coordinate system, obtaining the fusion track MI of the moving target object in that monitoring area.
Further, in the fourth step, the fusion track MI is divided into a plurality of short line segments, the moving direction of the moving target object on each short line segment is calculated, and Dx and Dy are set as the differences between the x coordinates and the y coordinates of the two end points of each short line segment; the calculation formula of the moving direction orientation(x, y) of the moving target object is:
orientation(x,y)=arctan(Dy(x,y)/Dx(x,y)) (5);
the orientation(x, y) is the moving direction of the moving target object on a short line segment; the moving direction on each short line segment is predicted through orientation(x, y), and the moving directions on all short line segments are combined to form the overall moving route of the moving target object along the fusion track MI, whereby a plurality of cameras of adjacent monitoring areas in the moving direction are linked to continue monitoring the moving route of the moving target object in multiple dimensions.
Further, the gradient correlation at the boundary of the monitoring-area image is calculated, and the moving direction of the moving target object is accurately predicted by limiting the gradient amplitude and filtering outliers;
where C(u) denotes the gradient correlation, G_1(u) and G_2(u) denote the gradient functions of the gray levels of two adjacent image blocks at the image boundary, u denotes the gray level of an image block, and * denotes the complex conjugate.
Further, in the fifth step, if the behavior is judged by a human operator to be abnormal and is not defined, the abnormal behavior is saved as a sample in the behavior definition library.
Beneficial effects:
according to the intelligent hotel terminal monitoring method, the monitoring area is subjected to video acquisition through the plurality of cameras with different visual angles arranged at different terminal positions in the monitoring area, the acquired video image is subjected to moving target object detection, and finally the behavior of the moving target object is judged according to the moving route of the moving target object in each monitoring area, so that the intelligent hotel terminal system can timely, accurately and quickly identify abnormal conditions at the terminal positions, and the security of hotel management is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
Fig. 1 is a schematic flow chart of the smart hotel terminal monitoring method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, which is a schematic flow chart of the intelligent hotel terminal monitoring method of the present invention, a plurality of cameras with different viewing angles are arranged at different terminal positions in different monitoring sections of a hotel, video acquisition is performed on the monitoring areas through the plurality of cameras arranged at the terminals, and a processor performs detection on the acquired video file to find out the change characteristics of each frame of the video. The specific monitoring method comprises the following steps:
Firstly, a plurality of cameras with different viewing angles arranged at terminal positions in the same monitoring area acquire video of the monitoring area from different angles, and a background server detects the acquired video files in real time to find the change characteristics of each frame of the video.
Specifically, a video file acquired by a camera at the terminal position of the moving target object is obtained, and a static background picture in the video file is extracted for modeling, so that the background model can serve as the theoretical standard against which real-time video images are compared. During comparison, a detection algorithm computes the difference between two frames (namely, the frame that captures the moving target object and the frame at the previous moment, before the moving target object entered the acquisition area) and applies thresholding, so that the pixel points of the image-change region (namely, the contour of the moving target object and the area inside the contour) are rapidly captured; the discrete pixel points describing the edge and interior of the change region are then combined and marked.
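A minimal sketch of this change-detection step, assuming NumPy/OpenCV; the frame names (`background`, `frame`) and the threshold value are illustrative, not from the patent:

```python
import cv2
import numpy as np

def detect_change(background: np.ndarray, frame: np.ndarray, thresh: int = 30) -> np.ndarray:
    """Difference the current frame against the background model and
    threshold the result to capture the pixels of the image-change region."""
    # Work on gray-scale copies so the difference is a single channel.
    bg_gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    fr_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(fr_gray, bg_gray)
    # Pixels whose change exceeds `thresh` are marked as changed.
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask  # binary mask of the contour and interior of the moving object
```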
In a preferred embodiment, the method adopts a moving target object detection algorithm based on mixed-Gaussian foreground modeling, using the parameter-learning mechanism of the Gaussian mixture: Gaussian functions with larger weights describe high-frequency background pixel values, and Gaussian functions with smaller weights describe foreground pixel values. Generally, modeling an image with a mixed-Gaussian foreground requires at least three Gaussian functions: among the Gaussian distributions of background and foreground, the background of a pixel point is described by at least two Gaussian functions and the foreground by at least one. This preferred embodiment selects a foreground model mixing 6 Gaussian functions, so that the moving target object can be distinguished more accurately.
The difference between the current pixel value I_t and the mean μ_{i,t-1} of each background Gaussian distribution is taken, and the absolute value of the difference is compared with D times the distribution standard deviation σ_{i,t-1}; the foreground pixel point is judged as follows:
|I_t − μ_{i,t-1}| > D·σ_{i,t-1} (1);
wherein t represents the current frame, t-1 represents the previous frame, and i represents the current pixel point.
If the absolute value is larger than D times the distribution standard deviation, the pixel point is a foreground pixel point of the moving target object; otherwise, it is a background pixel point. As long as the pixel value I_t matches any one of the background Gaussian distributions, I_t corresponds to a background pixel point. For the parameter D, this embodiment takes the value 3.
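A minimal sketch of the formula-(1) test; the per-pixel arrays `means` and `stds` (one entry per background Gaussian) are illustrative names, not from the patent:

```python
import numpy as np

def is_foreground(I_t: float, means: np.ndarray, stds: np.ndarray, D: float = 3.0) -> bool:
    """Formula (1): a pixel is foreground if |I_t - mu| > D * sigma for
    every background Gaussian; matching any one marks it as background."""
    for mu, sigma in zip(means, stds):
        if abs(I_t - mu) <= D * sigma:   # matches this background Gaussian
            return False                  # -> background pixel point
    return True                           # matched none -> foreground pixel point

# An off-the-shelf equivalent (assumption: OpenCV's MOG2 subtractor):
# mog = cv2.createBackgroundSubtractorMOG2(varThreshold=3.0 ** 2)
# mog.setNMixtures(6)   # 6 mixed Gaussians, as in this embodiment
# fg_mask = mog.apply(frame)
```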
In a preferred embodiment, for a Gaussian distribution where color is present, if the standard deviation of the Gaussian distribution is too large or too small, some pixel points will be missed if the judgment continues according to formula (1). To extract the foreground more completely, this embodiment makes the determination according to the following formulas:
Formulas (2) and (3) are combined by an OR operation: if the pixel value I_t satisfies either formula (2) or formula (3), the pixel point corresponding to I_t is determined to be a motion foreground pixel point.
Secondly, all the motion foreground pixel points marked in the previous step are combined to obtain the approximate contour of the moving target object, and polygon fitting is applied to the contour curve to obtain the accurate contour curve of the moving target object, namely the boundary curve.
Specifically, after digitization, the approximate contour C of the moving target object is represented as a sequence of points in the plane: C = {P(i) = (x_i, y_i), i = 1, 2, …, n}, where n is the number of pixel points on the image boundary curve. Assuming P(i-1), P(i), and P(i+1) are three adjacent pixel points, the chord height of a pixel point P(i) is defined as the vertical distance C(P(i)) from P(i) to the line segment connecting P(i-1) and P(i+1), taking that connecting line as the base; the initial chord-height threshold is set to T_C.
A weight is assigned to each pixel point P(i) on the approximate contour; the weight is the chord height C(P(i)) defined above, and the contribution of the pixel point to the shape of the boundary curve is analyzed on the basis of the chord height. If the contribution is small, the pixel point is deleted, and the pixel points with larger influence on the shape of the boundary curve are retained, until the fitting requirement of the threshold T_C is finally reached. If the minimum weight is larger than the threshold, the algorithm ends; otherwise the pixel point with the minimum weight is deleted and the steps are repeated. The set of pixel points remaining on the boundary curve is P = {P_1, P_2, …, P_m}, where m is the number of pixel points after polygon fitting.
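A sketch of this chord-height pruning, assuming the closed contour is given as an (n, 2) array; the helper names are illustrative:

```python
import numpy as np

def chord_height(p_prev: np.ndarray, p: np.ndarray, p_next: np.ndarray) -> float:
    """Chord height: vertical distance from P(i) to the chord P(i-1)P(i+1)."""
    base = p_next - p_prev
    length = float(np.hypot(base[0], base[1]))
    if length == 0.0:
        return 0.0
    d = p - p_prev
    return abs(base[0] * d[1] - base[1] * d[0]) / length  # 2-D cross product

def fit_polygon(contour: np.ndarray, t_c: float) -> np.ndarray:
    """Iteratively delete the contour point with the smallest chord height
    (weight) until every remaining point's weight exceeds the threshold T_C."""
    pts = [p.astype(float) for p in contour]
    while len(pts) > 3:
        heights = [chord_height(pts[i - 1], pts[i], pts[(i + 1) % len(pts)])
                   for i in range(len(pts))]
        i_min = int(np.argmin(heights))
        if heights[i_min] > t_c:      # all weights above threshold: finished
            break
        del pts[i_min]                # least-contributing point is removed
    return np.array(pts)              # P = {P_1, ..., P_m}
```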
Thirdly, the boundary curve fitted in step two is taken as the moving target object template. After the template is established, it is continuously updated in real time according to the different viewing angles of the plurality of cameras, the moving target object is matched and tracked in real time, and information fusion is performed on the image frame information acquired by the plurality of cameras, so as to estimate the motion track of the moving target object.
Specifically, in the same monitoring area, at time k-1, the moving target object template of the current camera is established, and the state vector of the moving target object is set as X_{k-1}; at time k, the moving target object moves to the next camera, and its state vector is X_k; the motion state of the moving target object is then calculated according to the following formula:
X_k = A·X_{k-1} + B·U_{k-1} + W_{k-1} (4);
where A is the state transition matrix, B is the control matrix, and U_{k-1} and W_{k-1} are the change in distance and the change in angle, respectively, between the next camera and the current camera. The moving target object template is updated according to formula (4).
An X-Y coordinate system is established by taking the picture frame of the camera that acquires the moving target object at time k-1 or time k as a two-dimensional plane. The state vector X_{k-1} of the moving target object at time k-1 is a 4-dimensional vector X_{k-1} = (S_{x,k-1}, S_{y,k-1}, V_{x,k-1}, V_{y,k-1})^T, where S_{x,k-1} and S_{y,k-1} are the X-axis and Y-axis positions of the moving target object in the previous monitoring-area picture at time k-1, and V_{x,k-1} and V_{y,k-1} are the component velocities of the moving target object in the X-axis and Y-axis directions at time k-1.
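A sketch of the formula-(4) update over that 4-dimensional state; the constant-velocity form of A and the identity B are illustrative assumptions (the patent fixes only the roles of U_{k-1} and W_{k-1} as the inter-camera distance and angle changes, supplied externally):

```python
import numpy as np

def propagate_state(x_prev: np.ndarray, u_prev: np.ndarray, w_prev: np.ndarray,
                    dt: float = 1.0) -> np.ndarray:
    """Formula (4): X_k = A X_{k-1} + B U_{k-1} + W_{k-1}.
    State (S_x, S_y, V_x, V_y)^T; A assumed constant-velocity here."""
    A = np.array([[1, 0, dt, 0],     # S_x += V_x * dt
                  [0, 1, 0, dt],     # S_y += V_y * dt
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    B = np.eye(4)                    # control matrix (illustrative choice)
    return A @ x_prev + B @ u_prev + w_prev
```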
Updating the moving target object template helps to better match and track the moving target object when it is captured by cameras at different angles and positions.
When the moving target object leaves the current monitoring area, that is, once it has been captured in an adjacent monitoring area, the image information captured by the cameras at multiple positions in the monitoring area it has just left is fused, so as to estimate the motion track of the moving target object.
Specifically, the multiple camera positions in the same monitoring area form a camera network structure, and each camera can convert the image coordinates of a detected moving target object in its monitoring picture into three-dimensional coordinates in the world coordinate system according to its camera calibration information. Each camera position can therefore be regarded as a three-dimensional position sensor, and the fusion track MI of the target across the multiple cameras is generated on this basis.
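One common way to realize this image-to-world conversion is a calibrated ground-plane homography per camera; a sketch under that assumption (the homography H is hypothetical calibration output, and the simple averaging fusion rule is an assumption — the patent does not fix the fusion rule):

```python
import numpy as np

def image_to_world(pt_px: tuple, H: np.ndarray, z_plane: float = 0.0) -> np.ndarray:
    """Map an image point to world coordinates, assuming the target moves
    on a known plane (z = z_plane) and H maps pixels onto that plane."""
    u, v = pt_px
    x, y, w = H @ np.array([u, v, 1.0])
    return np.array([x / w, y / w, z_plane])   # 3-D point in the world frame

def fuse_tracks(tracks_world: list) -> np.ndarray:
    """Fuse per-camera world-frame tracks into one trajectory MI by
    sample-wise averaging (assumes time-aligned tracks of equal length)."""
    return np.mean(np.stack(tracks_world), axis=0)
```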
And finally, the moving direction is predicted according to the fusion track MI of the target, and a plurality of cameras of adjacent areas in the moving direction are allocated to prepare for tracking in advance.
Specifically, according to step three, the fused moving fusion track MI of the moving target object is formed in the monitoring area constituted by the camera network structure, so the movement information of the whole monitoring area can be obtained. Because very large gradient outliers are easily produced at the boundary of the monitoring area, the boundary gradient at the boundary of the monitoring-area image is calculated in order to accurately predict the moving direction, and the moving direction of the moving target object is predicted by limiting the gradient amplitude and filtering outliers.
In a preferred embodiment, the boundary gradient is calculated as follows:
Here, C(u) denotes the gradient correlation, G_1(u) and G_2(u) denote the gradient functions of the gray levels of two adjacent image blocks at the image boundary, u denotes the gray level of an image block, and * denotes the complex conjugate. The larger the gradient correlation value, the more similar the two image blocks at the boundary; a gradient threshold is set, and image blocks whose gradient correlation value is smaller than the gradient threshold are filtered out. An image block here refers to one of the small areas, called "image blocks", into which an image window at the edge of the image is divided.
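The exact form of formula (6) is not reproduced in the source; the sketch below shows only the idea — correlate the gray-level gradients of adjacent boundary blocks and drop low-correlation (outlier-gradient) blocks — under the assumption of a normalized real-valued correlation:

```python
import numpy as np

def gradient_correlation(block_a: np.ndarray, block_b: np.ndarray) -> float:
    """Correlate the gray-level gradients of two adjacent boundary blocks.
    (Normalized real-valued form assumed; the patent's formula (6) uses
    the complex conjugate of one of the two gradient functions.)"""
    ga = np.gradient(block_a.astype(float))[0].ravel()
    gb = np.gradient(block_b.astype(float))[0].ravel()
    denom = float(np.linalg.norm(ga) * np.linalg.norm(gb))
    return float(ga @ gb) / denom if denom else 0.0

def filter_boundary_blocks(blocks: list, corr_thresh: float) -> list:
    """Keep blocks whose gradient correlation with their neighbor meets
    the gradient threshold; abnormal-gradient blocks are filtered out."""
    return [a for a, b in zip(blocks, blocks[1:])
            if gradient_correlation(a, b) >= corr_thresh]
```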
Based on the fusion track MI after the image blocks with abnormal gradients have been filtered out, the fusion track MI is divided into a number of short line segments; the shorter and more numerous the segments, the larger the computation, but the more accurate the result. The moving direction of the moving target object on each short line segment is calculated separately: let Dx and Dy be the differences between the x coordinates and the y coordinates of the two end points of each short line segment; the calculation formula of the moving direction orientation(x, y) of the moving target object is:
orientation(x,y)=arctan(Dy(x,y)/Dx(x,y)) (5);
The orientation(x, y) is the moving direction of the moving target object on a short line segment, and the signs of Dx and Dy are taken into account in the calculation. The moving direction on each short line segment is predicted through orientation(x, y), and the moving directions of all short line segments are combined to form the overall moving route of the moving target object along the fusion track MI, whereby a plurality of cameras in adjacent monitoring areas in the moving direction are linked to continue monitoring the moving route of the moving target object in multiple dimensions.
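Since the signs of Dx and Dy must be considered, the two-argument arctangent realizes formula (5) directly; a minimal sketch over a fused track given as an (n, 2) array (the segment length is an illustrative parameter):

```python
import numpy as np

def segment_orientations(track: np.ndarray, seg_len: int = 5) -> np.ndarray:
    """Split the fusion track MI into short segments and evaluate formula (5),
    orientation = arctan(Dy/Dx), on each, using atan2 to keep the quadrant."""
    orientations = []
    for start in range(0, len(track) - seg_len, seg_len):
        p0, p1 = track[start], track[start + seg_len]
        dx, dy = p1[0] - p0[0], p1[1] - p0[1]   # Dx, Dy of the segment ends
        orientations.append(np.arctan2(dy, dx))  # signs of Dx, Dy respected
    return np.array(orientations)
```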
And step five, the behavior of the moving target object is finally judged according to its track in each monitoring area and compared with the defined behavior patterns of the behavior definition library to determine whether it is normal behavior. If the behavior is judged by a human operator to be abnormal and is not yet defined, the abnormal behavior is saved as a sample into the behavior definition library; the video picture of the abnormal behavior of the moving target object is saved as evidence, and an alarm is simultaneously sent to an administrator in the background.
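The patent leaves the matching rule to the behavior definition library; one plausible sketch compares a trajectory feature vector against stored samples by nearest-neighbor distance (all names and the distance choice are assumptions, not the patent's method):

```python
import numpy as np

def is_abnormal(track_feat: np.ndarray, library: list, dist_thresh: float) -> bool:
    """A behavior counts as normal if its feature vector is close to some
    defined pattern in the behavior definition library; otherwise abnormal."""
    if not library:
        return True
    nearest = min(np.linalg.norm(track_feat - pattern) for pattern in library)
    return nearest > dist_thresh

def handle_behavior(track_feat, library, dist_thresh, operator_says_abnormal: bool):
    """On abnormality: add the sample to the library (per step five), keep
    the video evidence, and raise the background alarm."""
    if is_abnormal(track_feat, library, dist_thresh) or operator_says_abnormal:
        library.append(track_feat)     # undefined abnormal behavior -> new sample
        return "save_evidence_and_alarm"
    return "normal"
```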
According to the smart hotel terminal monitoring method, video of the monitoring area is acquired by a plurality of cameras with different viewing angles arranged at different terminal positions in the monitoring area, moving target object detection is performed on the acquired video images, and the behavior of the moving target object is finally judged according to its moving route in each monitoring area, so that the smart hotel terminal system can identify abnormal conditions at the terminal positions in a timely, accurate, and rapid manner, improving the security of hotel management.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (6)
1. A smart hotel terminal monitoring method is characterized by comprising the following steps:
the method comprises the following steps that firstly, video collection is carried out on a monitoring area through a plurality of cameras with different visual angles, which are arranged at different terminal positions in the same monitoring area, moving target object detection is carried out on collected video images, and foreground pixel points of the moving target object are extracted;
combining all foreground pixel points of the moving target object to obtain a boundary curve of the moving target object;
combining the foreground pixel points of the moving target object extracted in the step one to obtain the outline of the moving target object, and performing polygon fitting on the outline to obtain a boundary curve; the polygon fitting specifically comprises:
assigning a weight to each pixel point P(i) on the contour, the weight being the chord height C(P(i)) of the pixel point P(i); the pixel points P(i) whose chord height is greater than a threshold T_C are retained, and the resulting point set is P = {P_1, P_2, …, P_m}, where m is the number of pixel points after polygon fitting;
establishing a moving target object template by using the boundary curve, updating the moving target object template according to cameras with different viewing angles and different terminal positions, matching and tracking the moving target object, and performing information fusion by using video image information acquired by a plurality of cameras to generate a fusion track MI of the moving target object;
step four, predicting the moving direction according to the fusion track MI of the moving target object, and allocating a plurality of cameras of adjacent areas in the moving direction to perform tracking preparation;
dividing the fusion track MI into a plurality of short line segments, respectively calculating the moving direction of the moving target object on each short line segment, setting Dx and Dy as the difference value of the x coordinate and the y coordinate of two end points of each short line segment, and calculating the moving direction orientation (x, y) of the moving target object according to the following formula:
orientation(x,y)=arctan(Dy(x,y)/Dx(x,y)) (5);
wherein orientation(x, y) is the moving direction of the moving target object on a short line segment; the moving direction on each short line segment is predicted through orientation(x, y), and the moving directions on all short line segments are combined to form the overall moving route of the moving target object along the fusion track MI, whereby a plurality of cameras in adjacent monitoring areas in the moving direction are linked to continue monitoring the moving route of the moving target object in multiple dimensions;
calculating the gradient correlation at the boundary of the monitoring-area image, and accurately predicting the moving direction of the moving target object by limiting the gradient amplitude and filtering outliers;
where C(u) denotes the gradient correlation, G_1(u) and G_2(u) denote the gradient functions of the gray levels of two adjacent image blocks at the image boundary, u denotes the gray level of an image block, and * denotes the complex conjugate;
and step five, finally judging the behavior of the moving target object according to its moving route in each monitoring area, comparing the behavior with the defined behavior patterns of the behavior definition library to determine whether it is normal behavior, and, if the behavior is abnormal, saving a video picture of the abnormal behavior of the moving target object as evidence while simultaneously sending an alarm to an administrator in the background.
2. The smart hotel terminal monitoring method as claimed in claim 1, wherein in step one, a moving target object detection algorithm based on mixed-Gaussian foreground modeling is adopted: the difference between the pixel value I_t of the current pixel point and the mean μ_{i,t-1} of each background Gaussian distribution is taken, and the absolute value of the difference is compared with D times the distribution standard deviation σ_{i,t-1}; the judgment formula for a foreground pixel point is:
|I_t − μ_{i,t-1}| > D·σ_{i,t-1} (1);
wherein t represents the current frame, t-1 represents the previous frame, and i represents the current pixel point;
if the absolute value is larger than D times of the distribution standard deviation, the pixel point is a foreground pixel point of the moving target object, otherwise, the pixel point is a background pixel point.
3. The smart hotel terminal monitoring method as claimed in claim 1, wherein, for Gaussian distributions where color is present, the foreground pixel points are determined according to the following formulas:
4. The smart hotel terminal monitoring method as claimed in claim 1, wherein in step three, the process of updating the moving target object template is as follows:
in the same monitoring area, at time k-1, the moving target object template of the current camera is established, and the state vector of the moving target object is set as X_{k-1}; at time k, the moving target object moves to the next camera, and its state vector is X_k; the motion state of the moving target object is then calculated according to the following formula:
X_k = A·X_{k-1} + B·U_{k-1} + W_{k-1} (4);
where A is the state transition matrix, B is the control matrix, and U_{k-1} and W_{k-1} are the change in distance and the change in angle, respectively, between the next camera and the current camera; the moving target object template is updated according to formula (4).
5. The smart hotel terminal monitoring method as claimed in claim 4, wherein in step three, after the moving target object leaves the monitoring area, the image coordinates in the monitoring pictures captured by the multi-position cameras in the monitoring area it has just left are converted into three-dimensional coordinates in the world coordinate system, so as to obtain the fusion track MI of the moving target object in that monitoring area.
6. The smart hotel terminal monitoring method as claimed in claim 1, wherein in step five, if the behavior is judged by a human operator to be abnormal and is not defined, the abnormal behavior is saved as a sample into the behavior definition library.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111180117.9A CN113628251B (en) | 2021-10-11 | 2021-10-11 | Smart hotel terminal monitoring method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111180117.9A CN113628251B (en) | 2021-10-11 | 2021-10-11 | Smart hotel terminal monitoring method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113628251A CN113628251A (en) | 2021-11-09 |
CN113628251B true CN113628251B (en) | 2022-02-01 |
Family
ID=78390886
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111180117.9A Active CN113628251B (en) | 2021-10-11 | 2021-10-11 | Smart hotel terminal monitoring method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113628251B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116887057B (en) * | 2023-09-06 | 2023-11-14 | 北京立同新元科技有限公司 | Intelligent video monitoring system |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101655357B (en) * | 2009-09-11 | 2011-05-04 | 南京大学 | Method for acquiring phase gradient correlated quality diagram for two-dimensional phase unwrapping |
CN103310427B (en) * | 2013-06-24 | 2015-11-18 | 中国科学院长春光学精密机械与物理研究所 | Image super-resolution and quality enhancement method |
CN104376564B (en) * | 2014-11-24 | 2018-04-24 | 西安工程大学 | Method based on anisotropic Gaussian directional derivative wave filter extraction image thick edge |
CN108038833B (en) * | 2017-12-28 | 2020-10-13 | 瑞芯微电子股份有限公司 | Image self-adaptive sharpening method for gradient correlation detection and storage medium |
CN108280838A (en) * | 2018-01-31 | 2018-07-13 | 桂林电子科技大学 | A kind of intermediate plate tooth form defect inspection method based on edge detection |
US11138768B2 (en) * | 2018-04-06 | 2021-10-05 | Medtronic Navigation, Inc. | System and method for artifact reduction in an image |
WO2020124147A1 (en) * | 2018-12-18 | 2020-06-25 | Genvis Pty Ltd | Video tracking system and data processing |
CN110797034A (en) * | 2019-09-23 | 2020-02-14 | 重庆特斯联智慧科技股份有限公司 | Automatic voice and video recognition intercom system for caring old people and patients |
TWI736083B (en) * | 2019-12-27 | 2021-08-11 | 財團法人工業技術研究院 | Method and system for motion prediction |
- 2021-10-11: application CN202111180117.9A filed in China; granted as patent CN113628251B (status: Active)
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101144716A (en) * | 2007-10-15 | 2008-03-19 | 清华大学 | A multi-view moving target detection, location and corresponding method |
CN102004922A (en) * | 2010-12-01 | 2011-04-06 | 南京大学 | High-resolution remote sensing image plane extraction method based on skeleton characteristic |
CN202736000U (en) * | 2012-03-01 | 2013-02-13 | 桂林电子科技大学 | Multipoint touch screen system device based on computer visual technique |
CN104244802A (en) * | 2012-04-23 | 2014-12-24 | 奥林巴斯株式会社 | Image processing device, image processing method, and image processing program |
CN103116875A (en) * | 2013-02-05 | 2013-05-22 | 浙江大学 | Adaptive bilateral filtering de-noising method for images |
CN104020751A (en) * | 2014-06-23 | 2014-09-03 | 河海大学常州校区 | Campus safety monitoring system and method based on Internet of Things |
CN104125433A (en) * | 2014-07-30 | 2014-10-29 | 西安冉科信息技术有限公司 | Moving object video surveillance method based on multi-PTZ (pan-tilt-zoom)-camera linkage structure |
CN105338248A (en) * | 2015-11-20 | 2016-02-17 | 成都因纳伟盛科技股份有限公司 | Intelligent multi-target active tracking monitoring method and system |
CN107273866A (en) * | 2017-06-26 | 2017-10-20 | 国家电网公司 | A kind of human body abnormal behaviour recognition methods based on monitoring system |
CN108846335A (en) * | 2018-05-31 | 2018-11-20 | 武汉市蓝领英才科技有限公司 | Wisdom building site district management and intrusion detection method, system based on video image |
CN110517293A (en) * | 2019-08-29 | 2019-11-29 | 京东方科技集团股份有限公司 | Method for tracking target, device, system and computer readable storage medium |
CN113326719A (en) * | 2020-02-28 | 2021-08-31 | 华为技术有限公司 | Method, equipment and system for target tracking |
CN111640104A (en) * | 2020-05-29 | 2020-09-08 | 研祥智慧物联科技有限公司 | Visual detection method for screw assembly |
CN111754540A (en) * | 2020-06-29 | 2020-10-09 | 中国水利水电科学研究院 | A real-time tracking method and system for monitoring the displacement trajectory of slope particles |
CN112001948A (en) * | 2020-07-30 | 2020-11-27 | 浙江大华技术股份有限公司 | Target tracking processing method and device |
CN113362374A (en) * | 2021-06-07 | 2021-09-07 | 浙江工业大学 | High-altitude parabolic detection method and system based on target tracking network |
Non-Patent Citations (2)
Title |
---|
Multi-UAV trajectory planning using gradient-based sequence minimal optimization; Qiaoyang Xia et al.; Robotics and Autonomous Systems; 2021-03-31; pp. 1-11 *
Anti-occlusion real-time tracking algorithm based on multi-layer deep convolutional features; Cui Zhoujuan et al.; Acta Optica Sinica; 2019-07-31; pp. 0715002-1 to 0715002-14 *
Also Published As
Publication number | Publication date |
---|---|
CN113628251A (en) | 2021-11-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112101433B (en) | Automatic lane-dividing vehicle counting method based on YOLO V4 and DeepSORT | |
CN111724439B (en) | Visual positioning method and device under dynamic scene | |
CN109190508B (en) | Multi-camera data fusion method based on space coordinate system | |
US9323991B2 (en) | Method and system for video-based vehicle tracking adaptable to traffic conditions | |
CN103069434B (en) | For the method and system of multi-mode video case index | |
CN110400352B (en) | Camera calibration with feature recognition | |
CN104966304B (en) | Multi-target detection tracking based on Kalman filtering and nonparametric background model | |
CN104951775B (en) | Railway highway level crossing signal region security intelligent identification Method based on video technique | |
CN109598794B (en) | Construction method of three-dimensional GIS dynamic model | |
JP4429298B2 (en) | Object number detection device and object number detection method | |
US9224211B2 (en) | Method and system for motion detection in an image | |
CN101470809B (en) | Moving object detection method based on expansion mixed gauss model | |
CN113469201A (en) | Image acquisition equipment offset detection method, image matching method, system and equipment | |
CN103971386A (en) | Method for foreground detection in dynamic background scenario | |
Bedruz et al. | Real-time vehicle detection and tracking using a mean-shift based blob analysis and tracking approach | |
CN104378582A (en) | Intelligent video analysis system and method based on PTZ video camera cruising | |
EP2549759A1 (en) | Method and system for facilitating color balance synchronization between a plurality of video cameras as well as method and system for obtaining object tracking between two or more video cameras | |
CN110636248B (en) | Target tracking method and device | |
CN117475355A (en) | Security early warning method and device based on monitoring video, equipment and storage medium | |
CN112381132A (en) | Target object tracking method and system based on fusion of multiple cameras | |
CN112528974A (en) | Distance measuring method and device, electronic equipment and readable storage medium | |
CN117593548A (en) | Visual SLAM method for removing dynamic feature points based on weighted attention mechanism | |
CN107045630B (en) | RGBD-based pedestrian detection and identity recognition method and system | |
CN113628251B (en) | Smart hotel terminal monitoring method | |
JP4918615B2 (en) | Object number detection device and object number detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |