CN115150591A - Regional video monitoring method based on intelligent algorithm - Google Patents
Regional video monitoring method based on intelligent algorithm
- Publication number
- CN115150591A (application CN202210893235.2A)
- Authority
- CN
- China
- Prior art keywords
- representing
- face
- function
- data
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a regional video monitoring method based on an intelligent algorithm. A spatial layout function is used to accurately position the monitoring cameras, broadcasting devices and wide-area security radar sensors installed in a region; data collected by the wide-area security radar sensors are processed with a positioning function formed by a multi-channel antenna array and digital beams; face data are acquired with the monitoring cameras and processed; and if a face is affected by the environment or occluded, the monitoring camera judges, through a personnel behavior algorithm model, whether the actions of persons in the region are illegal. The invention analyzes and calculates the data and can accurately judge whether non-workers have entered the region; the proposed algorithms and models are based on existing theory, and the acquired data are processed through a database so that they are stored accurately without confusion.
Description
Technical Field
The invention relates to the field of regional monitoring, in particular to a regional video monitoring method based on an intelligent algorithm.
Background
In some specific areas, warning signs are often posted to keep non-workers out. However, such signs rarely provide an effective deterrent, and people cannot supervise the area in real time. With the development of video monitoring technology, video monitoring combined with intelligent algorithms can now supervise an area 24 hours a day.
Patent publication No. CN107370947A discloses a regional monitoring method comprising the steps of: judging whether an alarm event occurs in an area; if so, scheduling a camera to move to the alarm position where the event occurs and adjusting the camera angle and focal length to aim at that position; and sending the camera footage. When an alarm event occurs, a camera is dispatched to the alarm position to capture footage of it, so clear and complete photographic evidence can be provided, and events that occur at a given position and last for some time are supported by sufficient footage. This helps deter personnel and thus effectively prevents related events from occurring.
Patent publication No. CN110109067A discloses a ground-based FMCW region monitoring radar data processing method comprising the following steps: step 1: processing the original clutter map with an adaptive background clutter-map cancellation algorithm to obtain a residual clutter map; step 2: if the signal amplitude of the residual clutter map exceeds the CFAR threshold, considering a target detected, and condensing the CFAR detection result to obtain a condensed (centroided) result; step 3: performing plot association and alpha-beta filtering on the condensed result, then reporting and displaying it. The method establishes a radar-detection background clutter map, subtracts it from the newly detected original clutter map to obtain a residual clutter map, judges whether a target appears by comparing the residual clutter signal amplitude with the CFAR threshold, and condenses the CFAR detection result, thereby effectively improving the anti-interference capability of the ground-based FMCW region monitoring radar.
However, existing area monitoring methods are generally based on video monitoring or radar positioning alone and are weak in data processing, which leads to inaccurate judgments when non-workers enter the area; this is therefore an urgent problem to be solved.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an intelligent algorithm-based regional video monitoring method.
The technical scheme adopted by the invention is that the method comprises the following steps:
step S1: accurately positioning the monitoring cameras, broadcasting devices and wide-area security radar sensors installed in the area by using a spatial layout function;
step S2: using a positioning function formed by a multi-channel antenna array and digital beams to accurately position and track ground or air targets around the clock from the data collected by the wide-area security radar sensors;
step S3: acquiring face data with the monitoring camera, which comprises the following steps:
step A1: establishing a face contour extraction function that uses dynamic feature extraction, the face being rotated during extraction so that the distances between face points can be extracted;
step A2: storing the extracted contour data into a basic contour database;
step A3: comparing the extracted data against the background face database with a comparison function, and, if the person is confirmed to be a non-worker, having the broadcasting device issue a drive-away warning;
step S4: if the face is affected by the environment or occluded, judging, by the monitoring camera through a personnel behavior algorithm model, whether the actions of persons in the area are illegal;
step S5: if a person performs no illegal action but stays or loiters in the area for a long time, and the wide-area security radar sensor detects that the person has been present in the area longer than a preset time threshold, the broadcasting device likewise issues a drive-away warning.
Further, the spatial layout function has an expression:
wherein A_m represents the spatial layout function, f_m the number of monitoring cameras, B_m the set of position coordinates at which the monitoring cameras are installed, d_m the number of broadcasting devices, E_m the set of position coordinates at which the broadcasting devices are installed, l_m the number of wide-area security radar sensors, S_m the set of position coordinates at which the wide-area security radar sensors are installed, m the size of all devices, and C_n the area of the space.
Further, the positioning function formed by the multi-channel antenna array and the digital beam has the expression:
wherein D(j_m, d_m) represents the positioning function, j_m the abscissa of the positioned object, F_n the direction of the abscissa of the positioned object, d_m the ordinate of the positioned object, η the positioning noise coefficient, u_m the Gaussian white noise during positioning, ρ the search range of the wide-area security radar, m the size of all devices, C_n the area of the space, G_n the number of targets positioned by the wide-area security radar, H_n the wide-area security radar reflection distance, δ_m the transmission speed of the digital beam signal, ξ_m the reflection intensity of the positioned object, j a constant related to the cleanliness of the air, and the last parameter the radar wavelength.
Further, the face contour extraction function has the expression:
where R(n, m) represents the face contour extraction function, θ the angle between face points, s_1(n, m) the radian of the face contour, u(n, m) the saturation of the facial complexion, s_2(n, m) the completeness of the face contour, and j(n, m) the proportions of the facial features;
the extracted contour data is stored in a basic contour database, and the expression is as follows:
where k(x, y) represents the data storage function, p_v the total amount of data stored, and T_v the data storage speed;
the expression of the comparison function is as follows:
where p represents the facial feature contrast, Q_{z,v}(x, y) the coefficient matrix of facial feature data types, U_{z,v}(x, y) the collected real-time facial features, Q_{c,v}(x, y) the scale matrix of features stored in the database, and U_{c,v}(x, y) the facial features already in the database.
Further, the expression of the personnel behavior algorithm model is as follows:
wherein l(i, h) represents the personnel behavior prediction function, the noise term represents the observation noise, e_b(i, h) the motion prediction coefficients of different parts of the human body, and ε(b) the amplitude of the personnel motion.
Further, the preset time threshold is expressed as:
wherein L represents the time threshold, f the number of integrations, D(u) the period distribution coefficient, L_h(u) the total length of stay within the area, L_v(u) the mean specified length of stay within the area, o(r) the range of stay lengths within the area, and i the value of the time period.
Beneficial effects:
the invention provides a regional video monitoring method based on an intelligent algorithm, which is characterized in that a monitoring camera and a wide-area security radar sensor which are installed in a region are used for simultaneously acquiring data of personnel, a database is used in a matching manner, a spatial layout function is used to ensure that the radar sensor and the monitoring camera are reasonably arranged, monitoring dead angles cannot be caused, the data are analyzed and calculated by using an algorithm with multiple model fusion, the entering of non-working personnel into the region can be accurately judged, the provided algorithm and model are based on the existing theory and basis, the thought is clear, the understanding is simple, the acquired data are accurately stored by processing the database, and confusion cannot be caused.
Drawings
FIG. 1 is a flow chart of the overall steps of the present invention;
FIG. 2 is a diagram of the face data processing of the present invention.
Detailed Description
It should be noted that, in the present application, the embodiments and the features of the embodiments may be combined with each other without conflict. The present application will be further described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, a regional video monitoring method based on an intelligent algorithm includes the steps of:
step S1: accurately positioning a monitoring camera, a broadcasting device and a wide area security radar sensor which are arranged in a region by utilizing a spatial layout function;
the model number utilized by the surveillance camera is DS-2CD7A447FWD-XZ (S) (/ JM) (/ PTZJ) (B)
This model of monitoring camera adopts a deep learning algorithm: using massive picture and video resources as a foundation, it extracts target features by machine to form a deep face-image model for learning, which greatly improves the detection rate of target faces;
Supports intelligent resource mode switching: full structuring (default), face snapshot, face comparison, road monitoring and Smart events;
Full-structured mode: a) human body capture: supports attribute recognition of movement direction, upper-garment color, lower-garment color, gender, glasses, backpack, carrying things, hat, mask, upper-garment type, lower-garment type, hairstyle, riding state, passenger-carrying state, ride type, and the like; b) face capture: supports attribute recognition of gender, age range, glasses, mask, expression, hat, and the like; c) non-motor-vehicle capture: supports attribute recognition of upper-garment color, lower-garment color, gender, glasses, age range, backpack, carrying things, hat, upper-garment type, lower-garment type, mask, hairstyle, non-motor-vehicle type, hat style, and the like; d) motor-vehicle capture: supports attribute recognition of license plate number, license plate type, body color, vehicle brand, and the like.
Face snapshot mode: a) supports detection, capture, scoring and screening of moving faces and outputs the optimal face; b) supports false-alarm removal and rapid face capture; c) supports two modes, rapid capture and optimal capture; d) detects up to 60 faces simultaneously; e) supports face de-duplication;
Face comparison mode: a) supports front-end face comparison; b) supports management of up to 10 face libraries and import of up to 15 faces; c) supports a total face-library storage space of up to 3 GB, with a single face not exceeding 300 KB; d) supports arming different face libraries at different times; e) supports alarm output on a successful blacklist comparison; f) supports face detection when the pupil distance exceeds 20 pixels; g) supports rapid face comparison with an optimal-comparison mode setting; h) detects up to 60 targets simultaneously;
Road monitoring mode: a) vehicle detection: supports license plate recognition and capture of license plate number, body color, vehicle type and vehicle brand; b) mixed-traffic detection: detects vehicles, pedestrians and non-motor vehicles travelling forward or in reverse, automatically recognizes vehicle license plates, and can capture pictures of vehicles without license plates.
Smart event mode: supports cross-line (border crossing) detection, area intrusion detection, area entry detection, area exit detection, loitering detection, people-gathering detection, fast-movement detection, parking detection, object-left-behind detection, object-removal detection, scene-change detection, sudden audio rise detection, sudden audio fall detection, audio presence detection and defocus (virtual focus) detection;
Smart video recording: supports resumable transfer after network disconnection to guarantee that no video is lost, and works with a Smart NVR SD card to realize intelligent retrieval, analysis and condensed playback of event video. Smart encoding: supports low bit rate, low latency, ROI enhanced encoding, SVC adaptive encoding and Smart265 encoding;
The device supports upper and lower channel lenses; the upper channel has a built-in motorized zoom lens, making operation convenient and the zooming process stable, while the lower channel's fixed-focus full-color lens meets monitoring requirements under low illumination;
A high-efficiency soft supplementary light is built into the device to avoid light pollution and ensure normal face capture at night;
the maximum resolution reaches 4 million pixels (4 MP), at which 30 fps real-time images can be output for smoother video; fog penetration, electronic image stabilization and 120 dB wide dynamic range are supported;
supports an open network video interface, ISAPI, GB/T 28181-2016, E-HOME 2.0/4.0 access, ISUP 5.0 and the view library interface;
supports five simultaneous code streams and up to 20 simultaneous stream requests;
supports three-level user permission management, authorized users and passwords, IP address filtering, and GB35114 secure encryption (supported by the /JM model);
supports a standard 256 GB MicroSD/MicroSDHC/MicroSDXC card and a 10M/100M/1000M adaptive network port;
Audio: 2 built-in microphones and 1 built-in loudspeaker as standard; the -S model supports 2 audio inputs and 1 output and can power a sound pickup;
Alarm (-S model): 3 inputs and 2 outputs (the alarm inputs accept switching signals; the alarm outputs support at most DC 12 V, 30 mA);
Power supply: DC 12 V +/-20%; PoE: 802.3at, Type 2, Class 4;
Protection grade: IP67;
The device supports motorized adjustment of a micro pan-tilt, with a horizontal adjustment range of -90° to 90° and a vertical adjustment range of -5° to 25°.
The wide-area security radar sensor adopts a Horn-X2 Pro long-range 3D laser radar, a long-range high-performance lidar that provides wide-field detection up to 300 m with a field of view of up to 90° x 30°; an angular resolution of 0.05° x 0.05° can be achieved, so small targets can be detected accurately at long range. In addition, the Horn-X2 Pro has sufficient performance redundancy and extremely high reliability, meets different requirements in fields such as rail transit, shipping, airport aviation, urban traffic and industrial detection, and supports parameter customization.
The spatial layout function is related to the number of monitoring cameras, the set of position coordinates at which the monitoring cameras are installed, the number of broadcasting devices, the set of position coordinates at which the broadcasting devices are installed, the number of wide-area security radar sensors, the set of position coordinates at which the wide-area security radar sensors are installed, the size of all devices, and the area of the space.
Step S2: using a positioning function formed by a multi-channel antenna array and digital beams to accurately position and track ground or air targets around the clock from the data collected by the wide-area security radar sensors.
The positioning function formed by the multi-channel antenna array and digital beams is related to the abscissa of the positioned object, the direction of that abscissa, the ordinate of the positioned object, the positioning noise coefficient, the Gaussian white noise during positioning, the search range of the wide-area security radar, the size of all devices, the area of the space, the number of targets positioned by the wide-area security radar, the wide-area security radar reflection distance, the transmission speed of the digital beam signal, the reflection intensity of the positioned object, the cleanliness of the air, and the radar wavelength.
Step S3: as shown in FIG. 2, processing the face data acquired by the monitoring camera comprises:
Step A1: establishing a face contour extraction function that uses dynamic feature extraction, the face being rotated during extraction so that the distances between face points can be extracted.
The face contour extraction function is related to the angles between face points, the radian of the face contour, the saturation of the facial complexion, the completeness of the face contour and the proportions of the facial features.
Step A2: storing the extracted contour data into a basic contour database.
The data storage function is related to the total amount of data stored and the data storage speed.
Step A3: comparing the extracted data against the background face database with a comparison function; if the person is confirmed to be a non-worker, the broadcasting device issues a drive-away warning.
The comparison function is related to the facial feature contrast, the coefficient matrix of facial feature data types, the collected real-time facial features, the scale matrix of features stored in the database, and the facial features already in the database.
Step S4: if the face is affected by the environment or occluded, the monitoring camera judges, through the personnel behavior algorithm model, whether the actions of persons in the area are illegal.
The personnel behavior algorithm model is related to the personnel behavior prediction function, the observation noise, the motion prediction coefficients of different parts of the human body and the amplitude of the personnel motion.
Step S5: if a person performs no illegal action but stays or wanders in the area for a long time, and the wide-area security radar sensor detects that the person has been present in the area longer than the preset time threshold, the broadcasting device likewise issues a drive-away warning.
The time threshold is related to the period distribution coefficient, the total stay length within the area, the mean stay length within the area, the range of stay lengths within the area, and the value of the time period.
The spatial layout function has the expression:
wherein A_m represents the spatial layout function, f_m the number of monitoring cameras, B_m the set of position coordinates at which the monitoring cameras are installed, d_m the number of broadcasting devices, E_m the set of position coordinates at which the broadcasting devices are installed, l_m the number of wide-area security radar sensors, S_m the set of position coordinates at which the wide-area security radar sensors are installed, m the size of all devices, and C_n the area of the space.
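As an illustration only (the patented expression for A_m is not reproduced above), the following minimal Python sketch shows how the listed quantities might be gathered and combined into a simple device-density score; the data structure, every name, and the scoring rule itself are assumptions, not the claimed formula.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class RegionLayout:
    cameras: List[Point]        # B_m: coordinates of installed monitoring cameras
    broadcasters: List[Point]   # E_m: coordinates of installed broadcasting devices
    radars: List[Point]         # S_m: coordinates of installed wide-area radar sensors
    device_size: float          # m: size of all devices
    area: float                 # C_n: area of the monitored space

def layout_score(layout: RegionLayout) -> float:
    """Hypothetical stand-in for A_m: device count weighted by size, per unit area."""
    f_m = len(layout.cameras)       # number of monitoring cameras
    d_m = len(layout.broadcasters)  # number of broadcasting devices
    l_m = len(layout.radars)        # number of wide-area radar sensors
    return (f_m + d_m + l_m) * layout.device_size / layout.area
```

A higher score under this stand-in simply indicates denser device coverage of the space; the actual layout rule of the invention may weigh the coordinate sets differently.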
The multi-channel antenna array and the digital beam forming positioning function have the expression:
wherein D(j_m, d_m) represents the positioning function, j_m the abscissa of the positioned object, F_n the direction of the abscissa of the positioned object, d_m the ordinate of the positioned object, η the positioning noise coefficient, u_m the Gaussian white noise during positioning, ρ the search range of the wide-area security radar, m the size of all devices, C_n the area of the space, G_n the number of targets positioned by the wide-area security radar, H_n the wide-area security radar reflection distance, δ_m the transmission speed of the digital beam signal, ξ_m the reflection intensity of the positioned object, j a constant related to the cleanliness of the air, and the last parameter the radar wavelength.
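The patented positioning expression D(j_m, d_m) is likewise not reproduced; as a hedged illustration of digital beamforming with a multi-channel antenna array, the sketch below implements a conventional delay-and-sum bearing estimate for a uniform linear array. This is a generic textbook technique, not the claimed function, and the array geometry and half-wavelength spacing are assumptions.

```python
import numpy as np

def steering_vector(n_elements: int, spacing_wavelengths: float, angle_rad: float) -> np.ndarray:
    """Phase progression across a uniform linear array for a plane wave arriving at angle_rad."""
    phase_step = 2.0 * np.pi * spacing_wavelengths * np.sin(angle_rad)
    return np.exp(1j * phase_step * np.arange(n_elements))

def estimate_bearing(snapshots: np.ndarray, spacing_wavelengths: float = 0.5) -> float:
    """snapshots: complex baseband array data of shape (n_elements, n_samples)."""
    n_elements = snapshots.shape[0]
    candidate_angles = np.linspace(-np.pi / 2, np.pi / 2, 361)
    beam_power = [
        np.mean(np.abs(steering_vector(n_elements, spacing_wavelengths, a).conj() @ snapshots) ** 2)
        for a in candidate_angles
    ]
    return float(candidate_angles[int(np.argmax(beam_power))])  # bearing with maximum output power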
The face contour extraction function has the expression:
where R(n, m) represents the face contour extraction function, θ the angle between face points, s_1(n, m) the radian of the face contour, u(n, m) the saturation of the facial complexion, s_2(n, m) the completeness of the face contour, and j(n, m) the proportions of the facial features;
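Again as an illustration only, the sketch below derives rough counterparts of the quantities named above (inter-point angles θ, contour arc s_1, skin saturation u, contour completeness s_2, facial proportions j) from 2D face landmarks. The landmark source, the proxies chosen and all names are assumptions; the patented R(n, m) is not reproduced here.

```python
import numpy as np

def contour_features(landmarks: np.ndarray, skin_hsv: np.ndarray) -> dict:
    """landmarks: (N, 2) ordered face-contour points; skin_hsv: (..., 3) HSV skin pixels."""
    diffs = np.diff(landmarks, axis=0)
    distances = np.linalg.norm(diffs, axis=1)            # point-to-point distances on the face
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])        # theta: angles between successive points
    arc_length = float(distances.sum())                  # s1: length of the contour
    saturation = float(skin_hsv[..., 1].mean())          # u: mean saturation of the complexion
    completeness = float((distances > 0).mean())         # s2: crude completeness proxy
    width = float(np.ptp(landmarks[:, 0]))
    height = float(np.ptp(landmarks[:, 1]))
    proportions = width / max(height, 1e-6)              # j: crude facial-proportion proxy
    return {"angles": angles, "arc_length": arc_length, "saturation": saturation,
            "completeness": completeness, "proportions": proportions}
```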
the extracted contour data is stored in a basic contour database, and the expression is as follows:
where k(x, y) represents the data storage function, p_v the total amount of data stored, and T_v the data storage speed;
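The storage expression k(x, y) is not reproduced either; the sketch below only shows one plausible way to persist extracted contour records in a basic contour database while reporting the total amount stored (an analogue of p_v) and the write rate (an analogue of T_v). The table name and schema are hypothetical.

```python
import json
import sqlite3
import time

def store_contour(db_path: str, person_id: str, contour: dict):
    """Insert one contour record; return (total rows stored, rows written per second)."""
    start = time.perf_counter()
    # Convert numpy arrays to lists so the payload is JSON-serializable.
    payload = json.dumps({k: (v.tolist() if hasattr(v, "tolist") else v) for k, v in contour.items()})
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("CREATE TABLE IF NOT EXISTS contours (person_id TEXT, payload TEXT)")
        conn.execute("INSERT INTO contours VALUES (?, ?)", (person_id, payload))
        conn.commit()
        total = conn.execute("SELECT COUNT(*) FROM contours").fetchone()[0]   # p_v analogue
    finally:
        conn.close()
    elapsed = time.perf_counter() - start
    return total, 1.0 / max(elapsed, 1e-9)                                    # T_v analogue
```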
the expression of the comparison function is as follows:
where p represents the facial feature contrast, Q_{z,v}(x, y) the coefficient matrix of facial feature data types, U_{z,v}(x, y) the collected real-time facial features, Q_{c,v}(x, y) the scale matrix of features stored in the database, and U_{c,v}(x, y) the facial features already in the database.
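For the comparison step, the patented contrast expression p is not reproduced; in the sketch below, cosine similarity between a live feature vector and the feature vectors already in the face database stands in for it. The threshold value and all names are assumptions.

```python
import numpy as np

def best_match(live: np.ndarray, face_db: dict, threshold: float = 0.6):
    """Return (identity, score); identity is None when no stored face is close enough."""
    best_id, best_score = None, -1.0
    for person_id, stored in face_db.items():
        denom = np.linalg.norm(live) * np.linalg.norm(stored) + 1e-9
        score = float(live @ stored / denom)              # cosine similarity as a contrast proxy
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```

When best_match returns (None, score), the person is treated as a non-worker in step A3 and the broadcasting device issues the drive-away warning.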
The personnel behavior algorithm model has the expression:
wherein l(i, h) represents the personnel behavior prediction function, the noise term represents the observation noise, e_b(i, h) the motion prediction coefficients of different parts of the human body, and ε(b) the amplitude of the personnel motion.
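The behavior model l(i, h) itself is not reproduced; as one hedged reading of its listed inputs, the sketch below weights per-body-part motion amplitudes ε(b) by prediction coefficients e_b(i, h), adds an observation-noise term, and compares the result against a rule threshold. The coefficients, threshold and noise level are all assumptions.

```python
import numpy as np

def behavior_score(amplitudes: dict, coefficients: dict, noise_std: float = 0.05) -> float:
    """Weighted sum of body-part motion amplitudes plus an observation-noise term."""
    rng = np.random.default_rng()
    weighted = sum(coefficients.get(part, 0.0) * amp for part, amp in amplitudes.items())
    return weighted + float(rng.normal(0.0, noise_std))

def is_violation(amplitudes: dict, coefficients: dict, threshold: float = 1.0) -> bool:
    """Step S4 decision: True when the predicted behavior score exceeds the rule threshold."""
    return behavior_score(amplitudes, coefficients) > threshold
```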
The expression of the preset time threshold is as follows:
wherein L represents the time threshold, f the number of integrations, D(u) the period distribution coefficient, L_h(u) the total length of stay within the area, L_v(u) the mean specified length of stay within the area, o(r) the range of stay lengths within the area, and i the value of the time period.
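Finally, the threshold expression L is not reproduced; the sketch below derives a dwell-time threshold from the listed ingredients under an assumed rule (mean stay plus a period-weighted margin) and shows, as a usage example, how the step S3 to S5 decisions of this embodiment might chain together using the hypothetical helpers sketched above.

```python
def dwell_threshold(period_coeff: float, mean_stay: float, stay_range: float) -> float:
    """Hypothetical stand-in for L: typical stay length plus a period-weighted margin."""
    return mean_stay + period_coeff * stay_range

def monitor_person(face_vec, face_db, amplitudes, coefficients,
                   dwell_seconds: float, threshold_seconds: float,
                   face_visible: bool) -> str:
    """Chained S3-S5 decision flow; returns the action for the broadcasting device."""
    if face_visible:
        identity, _ = best_match(face_vec, face_db)               # step A3
        if identity is None:
            return "drive-away warning: non-worker face"
    elif is_violation(amplitudes, coefficients):                  # step S4
        return "drive-away warning: illegal behavior"
    if dwell_seconds > threshold_seconds:                         # step S5
        return "drive-away warning: dwell time exceeds threshold"
    return "no action"
```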
The invention provides a regional video monitoring method based on an intelligent algorithm. The monitoring cameras and wide-area security radar sensors installed in the area acquire data on personnel simultaneously and work together with a database; a spatial layout function ensures that the radar sensors and monitoring cameras are arranged reasonably and leave no monitoring dead angles; the data are analyzed and calculated with an algorithm that fuses multiple models, so the entry of non-workers into the area can be judged accurately; and the proposed algorithms and models are based on existing theory, are clearly reasoned and simple to understand, while the acquired data are processed through the database so that they are stored accurately without confusion.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," "connected," and "fixed" are to be construed broadly; for example, a connection may be fixed, detachable or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or internal between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific situation.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that various equivalent changes, modifications, substitutions and alterations can be made herein without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims and their equivalents.
Claims (6)
1. A regional video monitoring method based on an intelligent algorithm is characterized by comprising the following steps:
step S1: accurately positioning the monitoring cameras, broadcasting devices and wide-area security radar sensors installed in the area by using a spatial layout function;
step S2: using a positioning function formed by a multi-channel antenna array and digital beams to accurately position and track ground or air targets around the clock from the data collected by the wide-area security radar sensors;
step S3: acquiring face data with the monitoring camera, which comprises the following steps:
step A1: establishing a face contour extraction function that uses dynamic feature extraction, the face being rotated during extraction so that the distances between face points can be extracted;
step A2: storing the extracted contour data into a basic contour database;
step A3: comparing the extracted data against the background face database with a comparison function, and, if the person is confirmed to be a non-worker, having the broadcasting device issue a drive-away warning;
step S4: if the face is affected by the environment or occluded, judging, by the monitoring camera through a personnel behavior algorithm model, whether the actions of persons in the area are illegal;
step S5: if a person performs no illegal action but stays or loiters in the area for a long time, and the wide-area security radar sensor detects that the person has been present in the area longer than a preset time threshold, the broadcasting device likewise issues a drive-away warning.
2. The method for monitoring regional video based on intelligent algorithm as claimed in claim 1, wherein the spatial layout function is expressed as:
wherein A_m represents the spatial layout function, f_m the number of monitoring cameras, B_m the set of position coordinates at which the monitoring cameras are installed, d_m the number of broadcasting devices, E_m the set of position coordinates at which the broadcasting devices are installed, l_m the number of wide-area security radar sensors, S_m the set of position coordinates at which the wide-area security radar sensors are installed, m the size of all devices, and C_n the area of the space.
3. The method for monitoring regional video based on intelligent algorithm as claimed in claim 1, wherein the positioning function formed by multi-channel antenna array and digital beam is expressed as:
wherein D(j_m, d_m) represents the positioning function, j_m the abscissa of the positioned object, F_n the direction of the abscissa of the positioned object, d_m the ordinate of the positioned object, η the positioning noise coefficient, u_m the Gaussian white noise during positioning, ρ the search range of the wide-area security radar, m the size of all devices, C_n the area of the space, G_n the number of targets positioned by the wide-area security radar, H_n the wide-area security radar reflection distance, δ_m the transmission speed of the digital beam signal, ξ_m the reflection intensity of the positioned object, j a constant related to the cleanliness of the air, and the last parameter the radar wavelength.
4. The method for monitoring regional video based on intelligent algorithm as claimed in claim 1, wherein said face contour extraction function has the expression:
where R(n, m) represents the face contour extraction function, θ the angle between face points, s_1(n, m) the radian of the face contour, u(n, m) the saturation of the facial complexion, s_2(n, m) the completeness of the face contour, and j(n, m) the proportions of the facial features;
the extracted contour data is stored in a basic contour database, and the expression is as follows:
where k(x, y) represents the data storage function, p_v the total amount of data stored, and T_v the data storage speed;
the expression of the comparison function is as follows:
where p represents the facial feature contrast, Q_{z,v}(x, y) the coefficient matrix of facial feature data types, U_{z,v}(x, y) the collected real-time facial features, Q_{c,v}(x, y) the scale matrix of features stored in the database, and U_{c,v}(x, y) the facial features already in the database.
5. The regional video monitoring method based on intelligent algorithm as claimed in claim 1, wherein the human behavior algorithm model has an expression:
6. The method for monitoring regional video based on intelligent algorithm as claimed in claim 1, wherein the preset time threshold is expressed as:
wherein L represents the time threshold, f the number of integrations, D(u) the period distribution coefficient, L_h(u) the total length of stay within the area, L_v(u) the mean specified length of stay within the area, o(r) the range of stay lengths within the area, and i the value of the time period.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210893235.2A CN115150591B (en) | 2022-07-27 | 2022-07-27 | Regional video monitoring method based on intelligent algorithm |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210893235.2A CN115150591B (en) | 2022-07-27 | 2022-07-27 | Regional video monitoring method based on intelligent algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115150591A (en) | 2022-10-04
CN115150591B CN115150591B (en) | 2024-08-23 |
Family
ID=83413893
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210893235.2A Active CN115150591B (en) | 2022-07-27 | 2022-07-27 | Regional video monitoring method based on intelligent algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115150591B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117856054A (en) * | 2024-01-06 | 2024-04-09 | 南通星宇电气有限公司 | Switch cabinet with alarm function and use method thereof |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015092478A1 (en) * | 2013-12-20 | 2015-06-25 | Agence Spatiale Européenne | Digital beam-forming network having a reduced complexity and array antenna comprising the same |
CN109819208A (en) * | 2019-01-02 | 2019-05-28 | 江苏警官学院 | A kind of dense population security monitoring management method based on artificial intelligence dynamic monitoring |
CN110176117A (en) * | 2019-06-17 | 2019-08-27 | 广东翔翼科技信息有限公司 | A kind of monitoring device and monitoring method of Behavior-based control identification technology |
US20200309900A1 (en) * | 2016-12-05 | 2020-10-01 | Echodyne Corp. | Antenna subsystem with analog beam-steering transmit array and sparse hybrid analog and digital beam-steering receive array |
CN113525340A (en) * | 2020-04-21 | 2021-10-22 | 乾碳国际公司 | ACE heavy truck oil-saving robot system |
CN114488134A (en) * | 2022-03-28 | 2022-05-13 | 北京卫星信息工程研究所 | Satellite-borne multi-channel GNSS-S radar video imaging system and ship track extraction method |
CN114707719A (en) * | 2022-03-29 | 2022-07-05 | 合肥金人科技有限公司 | Personnel behavior prediction method for dangerous area |
- 2022-07-27: CN CN202210893235.2A patent/CN115150591B/en (Active)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015092478A1 (en) * | 2013-12-20 | 2015-06-25 | Agence Spatiale Européenne | Digital beam-forming network having a reduced complexity and array antenna comprising the same |
US20200309900A1 (en) * | 2016-12-05 | 2020-10-01 | Echodyne Corp. | Antenna subsystem with analog beam-steering transmit array and sparse hybrid analog and digital beam-steering receive array |
CN109819208A (en) * | 2019-01-02 | 2019-05-28 | 江苏警官学院 | A kind of dense population security monitoring management method based on artificial intelligence dynamic monitoring |
CN110176117A (en) * | 2019-06-17 | 2019-08-27 | 广东翔翼科技信息有限公司 | A kind of monitoring device and monitoring method of Behavior-based control identification technology |
CN113525340A (en) * | 2020-04-21 | 2021-10-22 | 乾碳国际公司 | ACE heavy truck oil-saving robot system |
CN114488134A (en) * | 2022-03-28 | 2022-05-13 | 北京卫星信息工程研究所 | Satellite-borne multi-channel GNSS-S radar video imaging system and ship track extraction method |
CN114707719A (en) * | 2022-03-29 | 2022-07-05 | 合肥金人科技有限公司 | Personnel behavior prediction method for dangerous area |
Non-Patent Citations (1)
Title |
---|
Mou Yuzhou: "Research on Generalized Digital Beamforming of Array Antennas" (阵列天线广义数字波束成形的研究), Master's Theses Electronic Journal, Information Science and Technology Series, 15 July 2020 (2020-07-15) *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117856054A (en) * | 2024-01-06 | 2024-04-09 | 南通星宇电气有限公司 | Switch cabinet with alarm function and use method thereof |
Also Published As
Publication number | Publication date |
---|---|
CN115150591B (en) | 2024-08-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109064755B (en) | Path identification method based on four-dimensional real-scene traffic simulation road condition perception management system | |
KR102104088B1 (en) | Uwb-based location tracking and ai combined intelligent object tracking video monitoring system | |
CN108965825B (en) | Video linkage scheduling method based on holographic position map | |
US9721168B2 (en) | Directional object detection | |
CN106952477B (en) | Roadside parking management method based on multi-camera image joint processing | |
CN107705574A (en) | A kind of precisely full-automatic capturing system of quick road violation parking | |
CN108802758B (en) | Intelligent security monitoring device, method and system based on laser radar | |
CN104506804B (en) | Motor vehicle abnormal behaviour monitoring device and its method on a kind of through street | |
CN112449093A (en) | Three-dimensional panoramic video fusion monitoring platform | |
CN103795976A (en) | Full space-time three-dimensional visualization method | |
KR102122850B1 (en) | Solution for analysis road and recognition vehicle license plate employing deep-learning | |
CN113593250A (en) | Illegal parking detection system based on visual identification | |
CN107360394A (en) | More preset point dynamic and intelligent monitoring methods applied to frontier defense video monitoring system | |
CN106954049A (en) | The airport birds information acquisition method of panorama and precise image tracking system | |
Lalonde et al. | A system to automatically track humans and vehicles with a PTZ camera | |
CN112257683A (en) | Cross-mirror tracking method for vehicle running track monitoring | |
CN115150591A (en) | Regional video monitoring method based on intelligent algorithm | |
Chundi et al. | Intelligent video surveillance systems | |
CN201796450U (en) | Intelligent monitoring system with border-crossing identification function | |
CN115361499B (en) | Dual-machine cooperative border defense target recognition and tracking system and method | |
CN118262297A (en) | Automatic acquisition method, system and equipment for dominant offence of inland waterway ship based on AI visual identification | |
Rashid et al. | Traffic Violations Detection Review Based on Intelligent Surveillance Systems | |
Boult | Geo-spatial active visual surveillance on wireless networks | |
NL2036250B1 (en) | Edge devices for anonymized monitoring of moveable objects, method of communication using said edge devices | |
For et al. | A multi-camera collaboration framework for real-time vehicle detection and license plate recognition on highways |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||