CN118587415B - A video target detection and tracking method and system based on artificial intelligence
- Publication number
- CN118587415B (application CN202410639450.9A)
- Authority
- CN
- China
- Prior art keywords
- block
- value
- grayscale
- positioning
- target
- Prior art date
- 2024-05-22
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a video target detection and tracking method and system based on artificial intelligence, belonging to the technical field of video monitoring. The method comprises: step one, marking a video target, extracting a target image of the video target, and performing corresponding verification and correction; step two, performing gray processing on the target image to obtain a target gray image, and analyzing the target gray image to obtain the initial identification features of the video target; step three, identifying each pixel block in the initial identification features and determining a corresponding positioning block according to the block attribute of each pixel block, wherein the positioning block is a pixel block of the video target, or a combined block formed from several pixel blocks, used for locating the target; step four, acquiring a monitoring video in real time, performing gray processing on the monitoring video to obtain a corresponding monitoring gray image, and determining a corresponding positioning matching block in the monitoring gray image according to the positioning block; and step five, identifying the corresponding video target according to the positioning matching block and the initial identification features.
Description
Technical Field
The invention belongs to the technical field of video monitoring, and particularly relates to a video target detection and tracking method and system based on artificial intelligence.
Background
With the rapid development of information technology, video monitoring is widely used in many fields. Traditional video monitoring relies mainly on manual observation, which is not only inefficient but also prone to missing key information. In recent years, continuous progress in artificial intelligence, particularly in deep learning and computer vision, has made automatic video target detection and tracking possible. The invention therefore provides a video target detection and tracking method and system based on artificial intelligence, so that video target detection and tracking can be performed better and faster.
Disclosure of Invention
In order to solve the above problems, the invention provides a video target detection and tracking method and system based on artificial intelligence.
The aim of the invention can be achieved by the following technical scheme:
A video target detection and tracking method based on artificial intelligence comprises:
Step one, marking a video target, extracting a target image of the video target, and performing corresponding verification and correction;
Step two, performing gray processing on the target image to obtain a target gray image, and analyzing the target gray image to obtain the initial identification features of the video target;
Further, the method for analyzing the target gray image comprises the following steps:
identifying the gray value of each pixel in the target gray image; merging according to the gray values of adjacent pixels to obtain a plurality of pixel blocks and the block attribute corresponding to each pixel block; marking each pixel block and its corresponding block attribute in the target gray image; and dividing the target gray image to obtain the initial identification features corresponding to the video target.
Further, the method for merging according to the gray values of adjacent pixels comprises:
Step SA1, marking each pixel as a single sample; identifying the gray difference between the gray values of each pair of adjacent single samples; merging adjacent single samples whose gray difference is smaller than a threshold X1 to obtain a merged sample; and calculating the gray value of the merged sample according to the merging formula;
Step SA2, determining the samples to be merged of each merged sample; calculating the gray difference between the merged sample and each sample to be merged; merging the merged sample with any sample to be merged whose gray difference is smaller than the threshold X1 to obtain a new merged sample; and calculating the gray value of the new merged sample according to the merging formula;
Step SA3, looping step SA2 until no merged sample meets the merging requirement with its samples to be merged, and marking the corresponding merged samples as pixel blocks;
Step SA4, judging whether a mergeable block still exists, and returning to step SA3 when one exists; when none exists, identifying the part corresponding to each pixel block, determining the corresponding stable value according to the identified part, identifying the block shape, position relation and gray value of each pixel block, and integrating the obtained stable value, part, block shape, position relation and gray value into the block attribute of the corresponding pixel block.
Further, the merging formula is Hd = HB / L, wherein Hd is the gray value of the merged sample, HB is the sum of the gray values of the single samples in the merged sample, and L is the number of single samples in the merged sample.
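For instance, if three single samples with gray values 100, 102 and 104 are merged, then HB = 306 and L = 3, so the merged sample's gray value is Hd = 306 / 3 = 102.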
Further, the method for determining the stable value includes:
matching initial values corresponding to all the parts according to the identified parts;
Acquiring current environmental information, analyzing each part according to the acquired environmental information, and acquiring an adjustment coefficient corresponding to each part;
and calculating a corresponding stable value according to the formula WD=CZ×τ, wherein WD is the stable value, CZ is the initial value, and τ is the adjustment coefficient.
Step three, identifying each pixel block in the initial identification features, and determining a corresponding positioning block according to the block attribute of each pixel block, the positioning block being a pixel block of the video target, or a combined block formed from several pixel blocks, that is used to locate the target;
The method for determining the positioning block comprises the following steps:
identifying block attributes of each pixel block, wherein the block attributes comprise stable values, parts, block shapes, position relations and gray values;
determining the possible pixel block combination modes, singly or in combination, according to the position relations among the pixel blocks, marking each such combination as a first combination, and marking each pixel block in a first combination as a unit block;
identifying the combined shape data of each first combination;
identifying the stable value of each unit block, and selecting the lowest stable value among the unit blocks as the representative stable value of the first combination;
calculating a corresponding positioning evaluation value according to the formula QYU = (b1×WDB) × (b2×BA);
wherein QYU is the positioning evaluation value, b1 and b2 are proportionality coefficients with 0 < b1 ≤ 1 and 0 < b2 ≤ 1, WDB is the representative stable value, and BA is the first shape value;
and selecting the first combination with the largest positioning evaluation value as a positioning block.
Further, the calculation method of the first shape value comprises:
Determining a reference background according to the environmental background characteristics and the combined shape data, identifying a reference similarity value between the reference background and the combined shape data, and counting the reference probability of the reference background;
performing boundary assimilation evaluation on the combined shape data and the environmental background characteristics according to a preset assimilation evaluation model, the input data x of which is the combined shape data together with the environmental background characteristics;
calculating the corresponding first shape value BA from the reference similarity value SL and the reference probability gL.
Step four, acquiring a monitoring video in real time, performing gray processing on the monitoring video to obtain a corresponding monitoring gray image, and determining a corresponding positioning matching block in the monitoring gray image according to the positioning block;
further, the method for determining the positioning matching block comprises the following steps:
recognizing the gray value of each pixel in the monitoring gray image, and merging according to the gray values of adjacent pixels to obtain a plurality of monitoring blocks and the gray value and block shape corresponding to each monitoring block;
recognizing the gray value and block shape of each pixel block in the positioning block, and screening the monitoring blocks according to the block shapes of the pixel blocks to obtain a screened monitoring gray image;
traversing the positioning block over the monitoring gray image, calculating positioning matching values in real time, comparing the obtained positioning matching values, and determining the positioning matching block.
Further, the calculation method of the positioning matching value comprises the following steps:
marking the gray value of each pixel block in the positioning block as hi, wherein i denotes the corresponding pixel block, i = 1, 2, ..., n, and n is a positive integer;
identifying the monitoring block corresponding to each pixel block on the monitoring gray image, and marking the gray value of the corresponding monitoring block as ki;
calculating the corresponding positioning matching value DPW from the gray values hi and ki.
Step five, identifying the corresponding video target according to the positioning matching block and the initial identification features.
A video target detection tracking system based on artificial intelligence comprises a target analysis module, a positioning module and a tracking module;
The target analysis module is used for analyzing the marked video target and determining corresponding initial identification characteristics;
The positioning module is used for determining a positioning block corresponding to the video target according to the initial identification characteristics;
The tracking module is used for identifying and tracking the video target, acquiring the monitoring video in real time, carrying out gray processing on the monitoring video to obtain a corresponding monitoring gray image, determining a corresponding positioning matching block in the monitoring gray image according to the positioning block, and identifying the corresponding video target according to the positioning matching block and the initial identification feature.
Compared with the prior art, the invention has the beneficial effects that:
The method and system realize real-time identification and tracking of the video target. In particular, the positioning block allows the corresponding positioning matching block to be identified quickly, so the video target is located rapidly and efficiently, and because few resource-intensive models or algorithms are configured in the process, the method runs fast. Targets in the video stream are detected and tracked automatically in real time without manual intervention, which greatly improves monitoring efficiency.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a functional block diagram of the present invention.
Detailed Description
The technical solutions of the present invention will be clearly and completely described in connection with the embodiments, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in Fig. 1, a video target detection and tracking method based on artificial intelligence comprises:
Step one, marking a video target to be detected and tracked, extracting a target image of the video target, and performing corresponding verification and correction, namely showing the extracted target image to a manager, who determines whether it is a complete image of the marked target and, if not, makes corresponding adjustments such as cropping;
Step two, performing gray processing on the verified and adjusted target image to obtain a target gray image, and analyzing the target gray image to obtain the initial identification features of the video target;
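As a concrete illustration of the gray processing in step two, a minimal sketch using OpenCV follows; the file name is a placeholder and the BGR-to-gray conversion is one common choice rather than a detail fixed by the method.

```python
import cv2

# Read the verified and adjusted target image (placeholder file name) and
# convert it to the single-channel target gray image analyzed below.
target_image = cv2.imread("marked_target.png")
target_gray = cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY)
```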
The method for analyzing the target gray image comprises the following steps:
identifying the gray value of each pixel in the target gray image; merging according to the gray values of adjacent pixels to obtain a plurality of pixel blocks and the block attribute corresponding to each pixel block; marking each pixel block and its corresponding block attribute in the target gray image; and dividing the marked target gray image corresponding to the video target to obtain the initial identification features of the video target.
The method for merging according to gray values between adjacent pixels comprises the following steps:
Step SA1, marking each pixel as a single sample; identifying the gray difference between the gray values of each pair of adjacent single samples; merging adjacent single samples whose gray difference is smaller than a threshold X1 to obtain a merged sample; and calculating the gray value of the merged sample according to the merging formula;
the merging formula is Hd = HB / L, wherein Hd is the gray value of the merged sample, HB is the sum of the gray values of the single samples in the merged sample, and L is the number of single samples in the merged sample.
Step SA2, determining the samples to be merged of each merged sample, namely each single sample or merged sample adjacent to it; calculating the gray difference between the merged sample and each sample to be merged; merging the merged sample with any sample to be merged whose gray difference is smaller than the threshold X1 to obtain a new merged sample; and calculating the gray value of the new merged sample according to the merging formula;
Step SA3, looping step SA2 until no merged sample meets the merging requirement with its samples to be merged, and marking the corresponding merged samples as pixel blocks, the merging requirement being that the gray difference is smaller than the threshold X1;
Step SA4, judging whether a mergeable block still exists, and returning to step SA3 when one exists; when none exists, all merging is complete: the part corresponding to each pixel block is identified, the corresponding stable value is determined according to the identified part, and the block shape, position relation and gray value of each pixel block are identified, the position relation referring to connection and adjacency; the obtained stable value, part, block shape, position relation and gray value are integrated into the block attribute of the corresponding pixel block.
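A minimal, self-contained sketch of the SA1 to SA4 merging loop, assuming an 8-bit gray image, 4-neighbour adjacency and an illustrative threshold X1 = 10; the union-find bookkeeping and the outer stabilisation loop are implementation choices made for this sketch, not details prescribed by the method. Each merged sample's gray value is maintained as Hd = HB / L via per-block sums and counts.

```python
import numpy as np

X1 = 10  # illustrative threshold; the method leaves X1 to the practitioner


def merge_pixels(gray: np.ndarray) -> np.ndarray:
    """Merge adjacent pixels/blocks whose gray difference is below X1.

    Returns a label map in which every pixel block (steps SA1-SA4) has a
    unique integer id. Block gray values follow the merging formula
    Hd = HB / L, updated incrementally through per-root sums and counts.
    """
    h, w = gray.shape
    parent = np.arange(h * w)                        # union-find forest
    total = gray.astype(np.float64).ravel().copy()   # HB for each root
    count = np.ones(h * w)                           # L for each root

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]            # path halving
            i = parent[i]
        return i

    def try_merge(a: int, b: int) -> bool:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        # Gray difference between the two merged samples' Hd values.
        if abs(total[ra] / count[ra] - total[rb] / count[rb]) < X1:
            parent[rb] = ra
            total[ra] += total[rb]
            count[ra] += count[rb]
            return True
        return False

    merged = True
    while merged:                  # SA3/SA4: repeat until nothing merges
        merged = False
        for y in range(h):
            for x in range(w):
                i = y * w + x
                if x + 1 < w and try_merge(i, i + 1):   # right neighbour
                    merged = True
                if y + 1 < h and try_merge(i, i + w):   # bottom neighbour
                    merged = True

    return np.array([find(i) for i in range(h * w)]).reshape(h, w)
```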
The method for determining the stable value comprises the following steps:
identifying the part corresponding to each pixel block, for a person, for example, parts such as the coat, palm, hair, face and eyes;
counting the parts likely to be encountered in video detection and tracking applications and the variation probability of each part, such as the probability of changing or removing clothes, and recording the corresponding probability value, namely the percentage with the percent sign removed, from which the initial value of each part is matched;
analyzing each part according to the obtained environment information to obtain the adjustment coefficient of each part, that is, setting the adjustment coefficient according to the influence of the current environment. Specifically, a corresponding intelligent model is established based on a CNN or DNN network and trained on a manually built training set comprising input data (part and environment information) and output data (adjustment coefficient); after training succeeds, the intelligent model performs the analysis and outputs the corresponding adjustment coefficient;
and calculating a corresponding stable value according to the formula WD=CZ×τ, wherein WD is the stable value, CZ is the initial value, and τ is the adjustment coefficient.
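The stable value itself then reduces to one multiplication. In the sketch below the part names, initial values and adjustment coefficient are hypothetical, and the coefficient is assumed to come from the trained intelligent model described above.

```python
# Hypothetical initial values CZ per part: parts that change rarely
# (face, hair) get higher values than easily changed parts (coat).
INITIAL_VALUES = {"face": 0.95, "hair": 0.90, "coat": 0.40}


def stable_value(part: str, adjustment: float) -> float:
    """WD = CZ x tau: the part's initial value scaled by the adjustment
    coefficient the trained model outputs for the current environment."""
    return INITIAL_VALUES[part] * adjustment


wd_face = stable_value("face", 0.8)  # e.g. dim lighting gives tau = 0.8
```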
Step three, identifying each pixel block in the initial identification features, and determining a corresponding positioning block according to the block attribute of each pixel block, the positioning block being a pixel block of the video target, or a combined block formed from several pixel blocks, that is used to locate the target rapidly;
The method for determining the positioning block comprises the following steps:
Identifying block attributes of each pixel block, wherein the block attributes comprise stable values, parts, block shapes, position relations and gray values;
determining the possible pixel block combination modes, singly or in combination, according to the position relations among the pixel blocks, marking each such combination as a first combination, the pixel blocks within a first combination being referred to as unit blocks;
identifying the combined shape data of each first combination, the combined shape data comprising the overall boundary shape, area, gray values and similar data of its unit blocks, and acquiring preset environmental background characteristics, which represent the shape data that may occur in the current environment; since the environmental background in a monitoring setting is basically fixed, controllable and predictable, the corresponding environmental background characteristics can be preset according to the actual situation;
determining the background most similar to the combined shape data from the preset environmental background characteristics and marking it as the reference background; identifying the similarity between the reference background and the combined shape data and marking it as the reference similarity value; and counting the occurrence probability of the reference background from historical data over a period of time and marking it as the reference probability;
performing boundary assimilation evaluation on the combined shape data and the environmental background characteristics: the unit blocks at the boundary of the first combination may be assimilated by the environment; for example, if both are pure red, their gray values differ little, they blend into the environment easily, and their specificity is insufficient for quick identification against the environment. A gray-difference interval is therefore preset, and the assimilation evaluation standard is set according to this interval and the occurrence probability of backgrounds meeting it, that is, assimilation is judged only when the gray difference falls within the interval and the occurrence probability exceeds the corresponding level; a corresponding assimilation evaluation model is then set according to this standard, its input data x being the combined shape data together with the environmental background characteristics;
calculating the corresponding first shape value BA from the reference similarity value SL and the reference probability gL.
identifying the stable value of each unit block, and selecting the lowest stable value among the unit blocks as the representative stable value of the first combination;
calculating a corresponding positioning evaluation value according to the formula QYU = (b1×WDB) × (b2×BA), wherein QYU is the positioning evaluation value, b1 and b2 are proportionality coefficients with 0 < b1 ≤ 1 and 0 < b2 ≤ 1, WDB is the representative stable value, and BA is the first shape value.
And selecting the first combination with the largest positioning evaluation value as a positioning block.
The positioning block is the optimal pixel block combination and is convenient to identify and locate; the number of its pixel blocks is preferably not more than 5, since the fewer the pixel blocks, the higher the identification efficiency and precision.
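A sketch of the step-three selection under stated assumptions: the first shape value BA of each candidate combination is supplied by a `shape_value` callback (its formula, built on SL and gL, is computed elsewhere), the proportionality coefficients default to b1 = b2 = 1, and candidates are limited to connected groups of at most 5 blocks as preferred above.

```python
from dataclasses import dataclass
from itertools import combinations
from typing import Callable, Tuple


@dataclass(frozen=True)
class PixelBlock:
    block_id: int
    stable_value: float      # WD from the previous step
    neighbours: frozenset    # ids of adjacent blocks (position relation)


def positioning_block(blocks: list,
                      shape_value: Callable[[Tuple[int, ...]], float],
                      b1: float = 1.0, b2: float = 1.0, max_size: int = 5):
    """Return the first combination maximizing QYU = (b1*WDB) * (b2*BA)."""
    best, best_qyu = None, float("-inf")
    for size in range(1, min(max_size, len(blocks)) + 1):
        for combo in combinations(blocks, size):
            if size > 1 and not _connected(combo):
                continue                               # unit blocks must adjoin
            wdb = min(b.stable_value for b in combo)   # representative WD
            ba = shape_value(tuple(b.block_id for b in combo))
            qyu = (b1 * wdb) * (b2 * ba)
            if qyu > best_qyu:
                best, best_qyu = combo, qyu
    return best


def _connected(combo) -> bool:
    """True if the blocks form one connected group via neighbour relations."""
    ids = {b.block_id for b in combo}
    by_id = {b.block_id: b for b in combo}
    seen, stack = set(), [combo[0].block_id]
    while stack:
        i = stack.pop()
        if i in seen:
            continue
        seen.add(i)
        stack.extend(by_id[i].neighbours & ids - seen)
    return seen == ids
```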
Step four, acquiring the monitoring video in real time; performing gray processing on the current monitoring picture to obtain a monitoring gray image; identifying the gray value of each pixel in the monitoring gray image; and merging according to the gray values of adjacent pixels to obtain a plurality of monitoring blocks and the gray value and block shape corresponding to each monitoring block.
The gray value and block shape of each pixel block in the positioning block are identified, and the monitoring blocks are screened according to the block shapes of the pixel blocks, namely, the monitoring blocks whose shapes could not correspond to any pixel block, allowing for the shape changes possible in the actual situation, are removed; a corresponding recognition evaluation model can be established for this evaluation according to basic common sense and the prior art.
The gray value of each pixel block in the positioning block is marked as hi, wherein i denotes the corresponding pixel block, i = 1, 2, ..., n, and n is a positive integer.
The positioning block is traversed over the monitoring gray image and the positioning matching value is calculated in real time; where the positioning block falls on removed monitoring blocks, i.e., on a hole in the screened monitoring gray image, it is skipped directly rather than having a matching value calculated. The obtained positioning matching values are compared to determine the positioning matching block, namely the combination of monitoring blocks corresponding to the positioning block at the position with the minimum positioning matching value; the positioning matching block is an image block on the monitoring gray image and is part of the video target.
The calculation method of the positioning matching value comprises the following steps:
identifying the monitoring block corresponding to each pixel block on the monitoring gray image and marking its gray value as ki, with i referring to the corresponding pixel block; in practical applications the corresponding monitoring blocks are generally matched quickly by means of a corresponding intelligent model, that is, the corresponding monitoring blocks are determined directly while the positioning block moves and the calculation then proceeds directly on the corresponding gray values, which improves efficiency; such an intelligent model can be established according to the prior art, for example based on a neural network;
calculating the corresponding positioning matching value DPW from the gray values hi and ki.
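A sketch of the step-four comparison. The DPW expression is left unspecified above, so the sketch assumes a summed absolute gray difference over the n pixel blocks, an assumption consistent with selecting the minimum positioning matching value as the best match; the candidate layout is likewise a simplification for the sketch.

```python
def positioning_match_value(h: list, k: list) -> float:
    """Assumed DPW: sum of absolute gray differences between the positioning
    block's pixel blocks (h_i) and the aligned monitoring blocks (k_i)."""
    return sum(abs(hi - ki) for hi, ki in zip(h, k))


def find_positioning_matching_block(positioning_grays: list, candidates: dict):
    """Traverse candidate alignments and keep the one with minimum DPW.

    `candidates` maps a traversal position to the gray values k_i of the
    monitoring blocks aligned with the positioning block there; positions
    falling on removed (screened-out) monitoring blocks are simply absent,
    matching the skip rule in step four.
    """
    best_pos, best_dpw = None, float("inf")
    for pos, k in candidates.items():
        dpw = positioning_match_value(positioning_grays, k)
        if dpw < best_dpw:
            best_pos, best_dpw = pos, dpw
    return best_pos, best_dpw
```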
Step five, identifying the corresponding video target according to the positioning matching block and the initial identification features.
The method and system realize real-time identification and tracking of the video target. In particular, the positioning block allows the corresponding positioning matching block to be identified quickly, so the video target is located rapidly and efficiently, and because few resource-intensive models or algorithms are configured in the process, the method runs fast. Targets in the video stream are detected and tracked automatically in real time without manual intervention, which greatly improves monitoring efficiency.
A video target detection tracking system based on artificial intelligence comprises a target analysis module, a positioning module and a tracking module;
The target analysis module is used for analyzing the marked video target and determining the corresponding initial identification characteristic.
The positioning module is used for determining a positioning block corresponding to the video target according to the initial identification characteristics.
The tracking module is used for identifying and tracking the video target, acquiring the monitoring video in real time, carrying out gray processing on the monitoring video to obtain a corresponding monitoring gray image, determining a corresponding positioning matching block in the monitoring gray image according to the positioning block, and identifying the corresponding video target according to the positioning matching block and the initial identification feature.
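A minimal skeleton of the three-module split described above; all class and method names are chosen for illustration and are not taken from the patent.

```python
class TargetAnalysisModule:
    def initial_identification_features(self, marked_target_image):
        """Analyze the marked video target (steps one and two) and return
        the initial identification features."""


class PositioningModule:
    def positioning_block(self, initial_features):
        """Determine the positioning block from the initial identification
        features (step three)."""


class TrackingModule:
    def track(self, video_stream, positioning_block, initial_features):
        """Gray-process each monitoring frame, find the positioning matching
        block and identify the video target (steps four and five)."""
```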
The above formulas are all dimensionless expressions calculated on numerical values; each formula is obtained by acquiring a large amount of data and performing software simulation so as to approximate the actual situation as closely as possible, and the preset parameters and preset thresholds in the formulas are set by a person skilled in the art according to the actual situation or obtained by simulation on a large amount of data.
The above embodiments are only for illustrating the technical method of the present invention and not for limiting the same, and it should be understood by those skilled in the art that the technical method of the present invention may be modified or substituted without departing from the spirit and scope of the technical method of the present invention.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202410639450.9A | 2024-05-22 | 2024-05-22 | A video target detection and tracking method and system based on artificial intelligence
Publications (2)

Publication Number | Publication Date
---|---
CN118587415A | 2024-09-03
CN118587415B | 2025-01-24

Family ID: 92527533
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant