CN110796157A - Image difference identification method and device and storage medium - Google Patents
Image difference identification method and device and storage medium
- Publication number
- CN110796157A CN201910809238.1A
- Authority
- CN
- China
- Prior art keywords
- image
- compared
- region
- area
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Abstract
The application relates to an image difference identification method, an image difference identification device, and a storage medium. The image difference identification method includes the following steps: acquiring an image to be identified and a reference image; determining at least one detection direction according to the image to be identified; determining a first region to be compared from the image to be identified according to the at least one detection direction, and determining, from the reference image, a second region to be compared located at the same position as the first region to be compared; determining the structural similarity between the first region to be compared and the second region to be compared; and performing difference identification on the image to be identified and the reference image according to the structural similarity and the at least one detection direction. Difference identification therefore uses the structural similarity of same-position regions in the two images as the identification criterion, which improves the accuracy of image difference identification and gives a good identification effect.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image difference recognition method, an image difference recognition apparatus, and a storage medium.
Background
GIF (Graphics Interchange Format) is an image file format developed by CompuServe in 1987. GIF data is compressed losslessly with the LZW algorithm, and multiple color images can be stored in a single GIF file to form an animation.
At present, the GIF image compression technology in the prior art determines the difference region between two consecutive frames by comparing the pixel color values of the two images, and then performs compression based on that difference region.
However, because this technology determines the difference region directly from pixel color value differences, it does not take into account how sensitive human vision actually is to image differences; that is, the difference region may contain differences that the human eye cannot perceive, which results in low accuracy of image difference recognition and a low image compression ratio.
Disclosure of Invention
The embodiment of the application provides an image difference identification method, an image difference identification device and a storage medium, so that the accuracy of image difference identification is improved.
The embodiment of the application provides an image difference identification method, which comprises the following steps:
acquiring an image to be identified and a reference image;
determining at least one detection direction according to the image to be identified;
determining a first region to be compared from the image to be identified according to the at least one detection direction, and determining, from the reference image, a second region to be compared located at the same position as the first region to be compared;
determining the structural similarity between the first region to be compared and the second region to be compared;
and performing difference identification on the image to be identified and the reference image according to the structural similarity and at least one detection direction.
Wherein, the determining a first region to be compared from the image to be recognized according to the at least one detection direction specifically includes:
determining a target direction according to the at least one detection direction;
determining a first region to be compared from the image to be identified according to the target direction;
the performing difference identification on the image to be identified and the reference image according to the structural similarity and the at least one detection direction specifically includes:
judging whether the structural similarity is greater than a preset threshold value or not;
if so, marking the first area to be compared as the same area, updating the first area to be compared according to the target direction, and then returning to the step of determining, from the reference image, a second area to be compared located at the same position as the first area to be compared;
if not, when the number of times the first area to be compared has been updated is greater than a preset number, determining a difference area between the image to be recognized and the reference image according to the same area, so as to perform difference recognition on the image to be recognized and the reference image.
The determining a target direction according to the at least one detection direction specifically includes:
determining a target direction from the plurality of detection directions;
before the determining the difference region between the image to be identified and the reference image according to the same region, the method further comprises:
and updating the target direction according to the remaining detection directions, updating the image to be recognized and the reference image according to the same region, and then returning to the step of determining the first region to be compared from the image to be recognized according to the target direction.
The determining a target direction according to the at least one detection direction specifically includes:
determining a target direction from the plurality of detection directions;
after the judging whether the structural similarity is greater than a preset threshold, the method further includes:
if not, when the number of times the first area to be compared has been updated is not greater than the preset number, updating the target direction according to the remaining detection directions, and then returning to the step of determining the first area to be compared from the image to be identified according to the target direction.
Wherein, the updating the first area to be compared according to the target direction specifically includes:
increasing the size of the first area to be compared by a preset offset distance along the target direction;
and taking the first area to be compared after the size increase as the updated first area to be compared.
Wherein, the determining the difference region between the image to be identified and the reference image according to the same region specifically comprises:
and taking the area except the same area in the image to be recognized as a difference area between the image to be recognized and a reference image.
The determining a first region to be compared from the image to be recognized according to the target direction specifically includes:
acquiring the position and the pixel value of a first pixel point in the image to be identified and the position and the pixel value of a second pixel point in the reference image;
comparing the pixel value of the first pixel point with the pixel value of the second pixel point;
when there is a first pixel point whose position is the same as that of a second pixel point but whose pixel value is different, taking that first pixel point as a difference pixel point;
determining an initial difference area from the image to be identified according to the positions of the difference pixel points;
and determining a first region to be compared from the initial difference region according to the target direction.
Wherein, the updating the first area to be compared according to the target direction specifically includes:
and updating the first area to be compared according to the target direction and the initial difference area.
An embodiment of the present application further provides an image difference recognition apparatus, including:
the acquisition module is used for acquiring an image to be identified and a reference image;
the first determining module is used for determining at least one detection direction according to the image to be identified;
the second determining module is used for determining a first region to be compared from the image to be identified according to the at least one detection direction, and for determining, from the reference image, a second region to be compared located at the same position as the first region to be compared;
the third determining module is used for determining the structural similarity between the first region to be compared and the second region to be compared;
and the identification module is used for carrying out difference identification on the image to be identified and the reference image according to the structural similarity and at least one detection direction.
Wherein the second determining module specifically includes:
a first determining unit for determining a target direction according to the at least one detection direction;
the second determining unit is used for determining a first region to be compared from the image to be identified according to the target direction;
a third determining unit configured to determine a second region to be compared having the same position as the first region to be compared from the reference image;
the identification module specifically comprises:
the judging unit is used for judging whether the structural similarity is larger than a preset threshold value or not;
a first updating unit, configured to mark the first region to be compared as the same region when the structural similarity is greater than the preset threshold, update the first region to be compared according to the target direction, and then trigger the third determining unit to determine, from the reference image, a second region to be compared, where the position of the second region to be compared is the same as that of the first region to be compared;
and the fourth determining unit is used for determining a difference area between the image to be recognized and the reference image according to the same area when the structural similarity is not greater than the preset threshold and the number of times the first area to be compared has been updated is greater than a preset number, so as to perform difference recognition on the image to be recognized and the reference image.
Wherein there are a plurality of detection directions, and the first determining unit is specifically configured to:
determining a target direction from the plurality of detection directions;
the identification module further comprises a second updating unit, wherein:
the second updating unit is configured to update the target direction according to the remaining detection directions when the structural similarity is not greater than the preset threshold, update the image to be recognized and the reference image according to the same region, and then trigger the second determining unit to determine the first region to be compared from the image to be recognized according to the target direction.
Wherein there are a plurality of detection directions, and the first determining unit is specifically configured to:
determining a target direction from the plurality of detection directions;
the identification module further comprises a third updating unit, wherein:
the third updating unit is configured to update the target direction according to the remaining detection directions when the structural similarity is not greater than the preset threshold and the updated number of times of the first region to be compared is not greater than a preset number of times, and then trigger the second determining unit to determine the first region to be compared from the image to be identified according to the target direction.
Wherein, the updating the first area to be compared according to the target direction specifically includes:
increasing the size of the first area to be compared by a preset offset distance along the target direction;
and taking the first area to be compared after the size increase as the updated first area to be compared.
Wherein the fourth determining unit is specifically configured to:
and when the structural similarity is not greater than the preset threshold, taking the area except the same area in the image to be recognized as a difference area between the image to be recognized and a reference image.
Wherein the second determining unit is specifically configured to:
acquiring the position and the pixel value of a first pixel point in the image to be identified and the position and the pixel value of a second pixel point in the reference image;
comparing the pixel value of the first pixel point with the pixel value of the second pixel point;
when there is a first pixel point whose position is the same as that of a second pixel point but whose pixel value is different, taking that first pixel point as a difference pixel point;
determining an initial difference area from the image to be identified according to the positions of the difference pixel points;
and determining a first region to be compared from the initial difference region according to the target direction.
Wherein, the updating the first area to be compared according to the target direction specifically includes:
and updating the first area to be compared according to the target direction and the initial difference area.
The embodiment of the application also provides a computer readable storage medium, wherein a plurality of instructions are stored in the storage medium, and the instructions are suitable for being loaded by a processor to execute any one of the image difference identification methods.
According to the image difference identification method, the image difference identification device, and the storage medium, an image to be identified and a reference image are obtained, at least one detection direction is determined according to the image to be identified, a first area to be compared is determined from the image to be identified according to the at least one detection direction, a second area to be compared located at the same position as the first area to be compared is determined from the reference image, the structural similarity between the first area to be compared and the second area to be compared is determined, and difference identification is performed on the image to be identified and the reference image according to the structural similarity and the at least one detection direction. In this way, the structural similarity of same-position areas in the two images is used as the difference identification criterion, which improves the accuracy of image identification and gives a good identification effect.
Drawings
The technical solution and other advantages of the present application will become apparent from the detailed description of the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a scene schematic diagram of an image difference identification system according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of an image difference identification method according to an embodiment of the present application.
Fig. 3 is a schematic diagram illustrating determination of a first region to be compared and a second region to be compared according to an embodiment of the present application.
Fig. 4 is another schematic flow chart of an image difference identification method according to an embodiment of the present application.
Fig. 5 is another schematic flow chart of an image difference identification method according to an embodiment of the present application.
Fig. 6 is another schematic flow chart of an image difference identification method according to an embodiment of the present application.
Fig. 7 is a schematic display diagram of a first region to be compared in an initial difference region according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an image difference identification apparatus according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides an image difference identification method, an image difference identification device and a storage medium.
Referring to fig. 1, fig. 1 is a schematic view of a scene of an image difference identification system according to an embodiment of the present disclosure, where the image difference identification system may include any one of the image difference identification devices according to the embodiments of the present disclosure, and the image difference identification device may be specifically integrated in an electronic device such as a terminal or a server, where the terminal may be a mobile phone, a tablet computer, a personal computer, and the like.
The electronic equipment can acquire an image to be identified and a reference image; determining at least one detection direction according to the image to be identified; determining a first region to be compared from the image to be identified according to at least one detection direction, and determining a second region to be compared with the first region to be compared in the same position from the reference image; determining the structural similarity between the first region to be compared and the second region to be compared; and performing difference identification on the image to be identified and the reference image according to the structural similarity and at least one detection direction.
The image to be recognized and the reference image may be input by a user, or acquired from a target database by an electronic device, and the two images have the same shape and size, and the at least one detection direction may be one or more of four directions in a cross shape in the image to be recognized, for example, the image to be recognized may be rectangular in shape, and the at least one detection direction may be at least one of four directions in the image to be recognized, namely, vertically upward, vertically downward, horizontally leftward and horizontally rightward.
For example, in fig. 1, when receiving an operation instruction input by a user, the electronic device first acquires an image to be recognized and a reference image, and determines that a detection direction Z is vertically downward in the image to be recognized; it then determines a first region to be compared from the image to be recognized along the detection direction Z and a corresponding second region to be compared from the reference image, determines the structural similarity between the first region to be compared and the second region to be compared, and performs difference recognition on the image to be recognized and the reference image according to the structural similarity and the detection direction Z.
As shown in fig. 2, fig. 2 is a schematic flow chart of an image difference identification method provided in an embodiment of the present application, and a specific flow of the image difference identification method may be as follows:
s101, acquiring an image to be identified and a reference image.
In the present embodiment, the image to be recognized and the reference image have the same shape and size. Specifically, the image to be recognized and the reference image may be input by a user, or may be obtained from a target image database by an image difference recognition device, or may be extracted from a target video or a target dynamic image by the image difference recognition device, for example, the reference image and the image to be recognized may correspond to two adjacent frames of images in the same video or dynamic image.
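As a concrete illustration (not part of the claimed method), the frame pair could be pulled from an animated GIF with Pillow; the function name and parameters below are hypothetical, and the only requirement carried over from this embodiment is that the two images have the same shape and size.

```python
from PIL import Image, ImageSequence

def adjacent_frames(gif_path: str, index: int = 0):
    """Return (reference image, image to be identified) as two adjacent frames
    of an animated image. `gif_path` and `index` are illustrative parameters."""
    with Image.open(gif_path) as animation:
        # Materialize frames as RGB copies so they outlive the file handle.
        frames = [frame.convert("RGB") for frame in ImageSequence.Iterator(animation)]
    if index + 1 >= len(frames):
        raise ValueError("need at least two frames starting at this index")
    return frames[index], frames[index + 1]
```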
S102, determining at least one detection direction according to the image to be identified.
The at least one detection direction may be one or more of four directions in a cross shape or an x shape in the image to be recognized, for example, the at least one detection direction may be at least one of four directions in a vertical upward direction, a vertical downward direction, a horizontal leftward direction, and a horizontal rightward direction in the image to be recognized.
In addition, in some alternative embodiments, the at least one detection direction may also be preset, for example, the at least one detection direction may be preset to at least one of four directions of vertical upward, vertical downward, horizontal left and horizontal right in the image to be recognized, so that after the image difference recognition apparatus performs the above S101, the subsequent S103 may be directly performed without performing the above S102.
S103, determining a first area to be compared from the image to be identified according to at least one detection direction, and determining, from the reference image, a second area to be compared located at the same position as the first area to be compared.
Wherein, S103 may specifically include:
and S1031, determining a target direction according to at least one detection direction.
When there is only one detection direction, that detection direction is the target direction. When there are a plurality of detection directions, one detection direction may be selected from them as the target direction, either randomly or according to a preset rule, where the preset rule may be a priority order of the detection directions; for example, the larger the clockwise angle from the vertically downward direction of the image to be recognized to a detection direction, the lower the priority of that detection direction.
S1032, determining a first area to be compared from the image to be recognized according to the target direction.
As shown in fig. 3, the image to be recognized T1 and the reference image T2 are each composed of a plurality of pixels P1/P2 arranged in rows and columns, which may be numbered row 1, row 2, and so on, and column 1, column 2, and so on.
In this embodiment, the image difference recognition apparatus may cut out an area of a preset size from the image to be recognized along the target direction, starting from a boundary of the image to be recognized, and take this area as the first area to be compared. For example, as shown in fig. 3, the target direction S may be the vertically downward direction in the image to be recognized T1; the image difference recognition apparatus may take the upper boundary B1 of the image to be recognized T1 as the long side and the target direction S as the width direction, and cut out from the image to be recognized T1 a rectangular region of a preset width W to obtain the first region to be compared D1. The preset width W may be the width of at least one row of pixels in the image to be recognized T1; for example, if the preset width W is the width of two adjacent rows of pixels, the corresponding first region to be compared D1 is the region formed by the pixels of rows 1 and 2 of the image to be recognized T1.
In some embodiments, when the image to be recognized is rectangular, the at least one detection direction may be at least one of the four directions from the intersection of the diagonals to the four vertices of the image to be recognized. For example, if the target direction is the direction from the diagonal intersection to the lower-right vertex, the corresponding first region to be compared may be located in the upper-left corner of the image to be recognized, e.g. the region where the pixels of rows 1 and 2 overlap the pixels of columns 1 and 2.
S1033, determining a second area to be compared, which is the same as the first area to be compared in position, from the reference image.
In this embodiment, the reference image and the image to be recognized have mutually independent coordinate systems, and the coordinate systems of the reference image and the image to be recognized are constructed in the same manner, for example, for the rectangular image to be recognized and the reference image, the coordinate system of the corresponding image is constructed by using the top right corner point of the corresponding rectangle as the origin coordinate, the horizontal line where the upper boundary is located as the X axis, and the vertical line where the right boundary is located as the Y axis.
Further, since the above-described reference image and the image to be recognized have the same shape and size, a region in the reference image having the same position coordinates as the first region to be compared, such as the region D2 in fig. 3, may be used as the second region to be compared, and the region D2 may be the region in the reference image T2 having the same position coordinates as the first region to be compared D1.
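As a minimal sketch of S1032/S1033 under the Fig. 3 setup (vertically downward target direction, both images held as NumPy arrays of identical shape — an assumption, not something the embodiment prescribes), the first region D1 and the same-position second region D2 are simply the same row slice of each array:

```python
import numpy as np

def regions_to_compare(image_to_identify: np.ndarray,
                       reference: np.ndarray,
                       preset_width: int = 2):
    """First region to be compared D1: the top `preset_width` rows of the image
    to be identified T1. Second region to be compared D2: the rows of the
    reference image T2 at the same position coordinates."""
    assert image_to_identify.shape == reference.shape  # same shape and size
    first = image_to_identify[:preset_width]   # D1
    second = reference[:preset_width]          # D2, same coordinates as D1
    return first, second
```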
And S104, determining the structural similarity between the first area to be compared and the second area to be compared.
The structural similarity between the first region to be compared and the second region to be compared can be calculated according to the following formula:
SSIM(x, y) = ((2·μ_x·μ_y + c_1)·(2·δ_xy + c_2)) / ((μ_x² + μ_y² + c_1)·(δ_x² + δ_y² + c_2));
wherein x is the first region to be compared, y is the second region to be compared, SSIM(x, y) is the structural similarity between x and y, μ_x is the mean of x, μ_y is the mean of y, δ_x² is the variance of x, δ_y² is the variance of y, δ_xy is the covariance of x and y, and c_1 and c_2 are constants used to maintain stability, with c_1 = (k_1·L)² and c_2 = (k_2·L)², where L is the dynamic range of the pixel values, k_1 = 0.01 and k_2 = 0.03.
Further, the value of the SSIM (x, y) ranges from-1 to 1, wherein the closer the value of the SSIM (x, y) is to 1, the better the structural similarity between the first region to be compared and the second region to be compared is, and when the value of the SSIM (x, y) is 1, the first region to be compared and the second region to be compared are the same image region.
It should be noted that the above-mentioned methods for calculating the mean, the variance and the covariance of the image area are the prior art, and are not described herein again.
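For reference, the following is a minimal sketch of the formula above in Python/NumPy, computed over the whole region in a single window (no sliding window or Gaussian weighting, which this embodiment does not mention); `dynamic_range` plays the role of L.

```python
import numpy as np

def ssim(x: np.ndarray, y: np.ndarray, dynamic_range: float = 255.0) -> float:
    """Structural similarity of two equally sized regions, following the
    formula above with k1 = 0.01 and k2 = 0.03; `dynamic_range` is L
    (255 for 8-bit pixel values)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * dynamic_range) ** 2
    c2 = (0.03 * dynamic_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```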
And S105, performing difference identification on the image to be identified and the reference image according to the structural similarity and at least one detection direction.
As shown in fig. 4, the S105 may specifically include:
s1051, judging whether the structural similarity is larger than a preset threshold value, if so, executing S1052, and if not, executing S1053.
When the structural similarity between the first region to be compared and the second region to be compared is greater than a preset threshold (for example, 0.975), it may be determined that the first region to be compared and the second region to be compared are the same image region, and when the structural similarity between the first region to be compared and the second region to be compared is not greater than the preset threshold, it may be determined that the first region to be compared and the second region to be compared are different image regions.
S1052, mark the first area to be compared as the same area, update the first area to be compared according to the target direction, and then return to S1033.
Wherein, the S1052 may specifically include:
s1-1, mark the first area to be compared as the same area.
When the structural similarity between the first to-be-compared area and the second to-be-compared area is greater than the preset threshold, that is, the first to-be-compared area and the second to-be-compared area are the same, the first to-be-compared area may be marked as the same area, so as to facilitate identification of the difference between the to-be-identified image and the reference image in the subsequent step.
S1-2, increasing the size of the first region to be compared by a preset offset distance in the target direction.
The preset offset distance may be the width of one row or two adjacent rows of pixels in the image to be recognized, or may also be the width of one column or two adjacent columns of pixels in the image to be recognized. For example, as shown in fig. 3, the current first region to be compared D1 is composed of two rows of pixels, row 1 and row 2, in the image to be recognized T1, the target direction S is vertically downward, and the lower boundary of the current first region to be compared D1 may be shifted in the target direction S by a preset shift distance to increase the width W of the first region to be compared D1 by the value of the preset shift distance.
And S1-3, taking the first area to be compared after the size increase as the updated first area to be compared.
Continuing the previous example with reference to fig. 3: if the preset offset distance is the width of two adjacent rows of pixels in the image to be recognized T1, the first area to be compared after the size increase corresponds to the area formed by the pixels of rows 1, 2, 3 and 4 of the image to be recognized T1.
Further, after step S1-3 is executed, steps S1033, S104 and S1051 may be executed in sequence, forming a loop. Whenever the structural similarity corresponding to the enlarged first region to be compared is still greater than the preset threshold, the next iteration is entered, so that the first region to be compared, and the corresponding same region, grow as the number of iterations increases. A larger same region can therefore be identified from the image to be identified, which helps improve the accuracy of the subsequent image difference identification operation; a sketch of this loop is given below.
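The following sketch shows one way the S1033-S104-S1051-S1052 loop could look for a vertically downward target direction. The strip height `step` and threshold are assumed values, `ssim` is the helper sketched after the formula above, and the returned remainder corresponds to the difference area that S1053 below derives from the same area.

```python
import numpy as np

def grow_same_region_downward(img: np.ndarray, ref: np.ndarray,
                              step: int = 2, threshold: float = 0.975):
    """Grow the first region to be compared strip by strip along a vertically
    downward target direction while its structural similarity with the
    same-position region of the reference image stays above the threshold.
    Returns the number of rows marked as the same region and the remaining
    rows of the image to be identified."""
    height = img.shape[0]
    same_rows = 0            # extent of the region marked as the same region
    width = step             # current height of the first region to be compared
    while width <= height:
        if ssim(img[:width], ref[:width]) <= threshold:
            break            # S1051: regions differ, stop growing
        same_rows = width    # S1052: mark the current first region as the same region
        width += step        # update along the target direction by the preset offset
    return same_rows, img[same_rows:]   # remaining rows: candidate difference area
```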
In addition, in some alternative embodiments, the above S1052 may further specifically include:
s1-1, mark the first area to be compared as the same area.
And S1-4, taking the area except the same area in the image to be recognized as a residual area, and intercepting the target area from the residual area along the target direction according to a preset intercepting distance from the boundary position of the first area to be compared. The preset intercepting distance may be the width of one row or two adjacent rows of pixels in the image to be recognized, or may also be the width of one column or two adjacent columns of pixels in the image to be recognized. For example, as shown in fig. 3, the first region to be compared D1 is composed of two rows of pixels, line 1 and line 2, in the image to be recognized T1, the target direction S is vertically downward, and the image difference recognition apparatus may intercept a target region having a width equal to a preset intercept distance in the target direction S from a lower boundary position of the first region to be compared D1, wherein if the preset intercept distance is a width of two adjacent rows of pixels in the image to be recognized T1, the target region may be a region composed of two rows of pixels, line 3 and line 4, in the image to be recognized T1.
And S1-5, updating the target area to be the first area to be compared.
After the step S1-5 is executed, the steps S1033, S104, and S1051 may be sequentially executed, so as to form a loop, and when the structural similarity corresponding to the updated first region to be compared is greater than the preset threshold, the next loop may be entered, so as to increase the size of the same region in the image to be recognized gradually with the increase of the number of loops, but the size of the first region to be compared for performing the structural similarity comparison does not increase gradually, so that the repeated calculation of the structural similarity of the image region marked as the same region may be avoided, thereby reducing the calculation amount and improving the working efficiency.
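A sketch of this alternative, under the same assumptions as the previous example: a fixed-height strip (the preset intercepting distance) slides along the target direction, so rows already marked as the same area are never passed to `ssim` again.

```python
import numpy as np

def slide_same_region_downward(img: np.ndarray, ref: np.ndarray,
                               step: int = 2, threshold: float = 0.975):
    """Variant of the previous sketch: a fixed strip of `step` rows slides
    downward, so the structural similarity of rows already marked as the same
    region is never recomputed."""
    height = img.shape[0]
    top = 0                  # lower boundary of the region marked as the same region
    while top + step <= height:
        strip = slice(top, top + step)
        if ssim(img[strip], ref[strip]) <= threshold:
            break            # the target area differs from the reference image
        top += step          # mark the strip as same, take the next target area
    return top, img[top:]    # remaining rows: candidate difference area
```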
And S1053, when the number of times the first area to be compared has been updated is greater than a preset number, determining a difference area between the image to be recognized and the reference image according to the same area, so as to perform difference recognition on the image to be recognized and the reference image.
The preset number value may be zero, and when the updated number of the first area to be compared is greater than zero, it indicates that the loop S1033-S104-S1051-S1052-S1033 has been performed at least once, and the same area has been detected in the image to be identified.
Specifically, when the structural similarity between the first area to be compared and the second area to be compared is not greater than the preset threshold and the number of times the first area to be compared has been updated is greater than the preset number, that is, when the first area to be compared differs from the second area to be compared and a same area has been detected in the image to be identified, the area of the image to be identified other than the same area may be used as the difference area between the image to be identified and the reference image.
It should be noted that, when the structural similarity of the first to-be-compared region is not greater than the preset threshold, and the updated number of times of the first to-be-compared region is not greater than the preset number, that is, the first to-be-compared region is different from the second to-be-compared region, and the same region is not detected in the target direction, the entire to-be-identified image may be used as the difference region between the to-be-identified image and the reference image.
In an alternative embodiment, the at least one detection direction may be multiple, and S1031 may specifically include:
s2-1, determining a target direction from the plurality of detected directions.
The image difference recognition device may select one of the detection directions as the target direction randomly or according to a preset rule, where the preset rule may be a priority order of the detection directions.
Further, in some specific embodiments, as shown in fig. 5, before the step S1053, the method may further include:
and S1054, when the updated times of the first area to be compared is larger than a preset time value, updating the target direction according to the remaining detection directions, updating the image to be recognized and the reference image according to the same area, and then returning to the S1032.
Specifically, when the structural similarity between the first region to be compared and the second region to be compared is not greater than a preset threshold and the updated number of times of the first region to be compared is greater than a preset number, the target direction may be marked as a detected direction, and the detection directions other than the detected direction among the plurality of detection directions are used as remaining detection directions, and then the image difference identification device may select one detection direction from the remaining detection directions randomly or according to a preset rule to update the target direction.
The image difference recognition device may update an area of the image to be recognized other than the same area as the image to be recognized, and update an area of the reference image at the same position as the updated image to be recognized as the reference image.
Specifically, after the above-mentioned S1054 is executed, S1032, S1033, S104, and S1051 may be sequentially executed, so as to form a loop, and when the structural similarity obtained based on the updated target direction, the image to be recognized, and the reference image is not greater than the preset threshold, and the updated number of times of the first area to be compared is greater than the preset number of times, the next loop can be entered, so that the difference recognition can be performed on the image to be recognized from multiple detection directions, which is beneficial to improving the accuracy of the image difference recognition.
Wherein, the S1053 may specifically include: and when the remaining detection directions do not exist, namely the detection directions are marked as detected directions, removing the area marked as the same area in the image to be identified so as to obtain a difference area between the image to be identified and the reference image.
Further, in other embodiments, after S1051, the method may further include:
and S1055, if the structural similarity between the first area to be compared and the second area to be compared is not greater than the preset threshold, updating the target direction according to the remaining detection directions when the updated number of times of the first area to be compared is not greater than the preset number, and then returning to the S1032.
Specifically, when the structural similarity between the first region to be compared and the second region to be compared is not greater than the preset threshold and the number of times the first region to be compared has been updated is not greater than the preset number, that is, when the first region to be compared differs from the second region to be compared and no same region has been detected in the target direction, the target direction may be marked as a detected direction and the detection directions other than the detected directions among the plurality of detection directions are taken as the remaining detection directions; the image difference identification apparatus may then select one detection direction from the remaining detection directions, randomly or according to a preset rule, to update the target direction.
In addition, after the above-mentioned S1055 is executed, S1032, S1033, S104, and S1051 may be sequentially executed to form a loop, and when the structural similarity obtained based on the updated target direction is not greater than the preset threshold and the updated number of times of the first area to be compared is not greater than the preset number of times, the next loop may be entered, so that the difference recognition of the images to be recognized from multiple detection directions may be performed, which is beneficial to improving the accuracy of the image difference recognition.
In the above embodiment, because the human visual system readily extracts structural information from a scene, using the structural similarity of same-position regions in the image to be recognized and the reference image as the difference recognition criterion yields a smaller difference region that is closer to the differences the human eye can actually perceive. After the difference region between the image to be recognized and the reference image is obtained, lossless image compression can be performed on the image to be recognized and the reference image based on that difference region, so the image compression rate can be improved without affecting image quality.
As can be seen from the above, in the image difference identification method provided in this embodiment, an image to be identified and a reference image are obtained, at least one detection direction is determined according to the image to be identified, a first region to be compared is determined from the image to be identified according to the at least one detection direction, a second region to be compared located at the same position as the first region to be compared is determined from the reference image, the structural similarity between the first region to be compared and the second region to be compared is determined, and difference identification is performed on the image to be identified and the reference image according to the structural similarity and the at least one detection direction. The structural similarity of same-position regions in the two images is thus used as the difference identification criterion, which improves the accuracy of image identification and gives a good identification effect.
As shown in fig. 6, fig. 6 is another schematic flow chart of the image difference identification method provided in the embodiment of the present application, and a specific flow of the image difference identification method may be as follows:
s201, acquiring an image to be identified and a reference image.
In this embodiment, the image to be recognized and the reference image have the same shape and size, and specifically, the image difference recognition apparatus may extract two adjacent frames of images from the target video file or the target moving image file to obtain the image to be recognized and the reference image.
S202, determining at least one detection direction according to the image to be identified.
For example, at least one of four directions of vertically upward, vertically downward, horizontally leftward and horizontally rightward in the image to be recognized may be taken as the above-described detection direction.
S203, determining a target direction according to the at least one detection direction.
For example, one detection direction may be selected as the target direction from the four detection directions of vertical upward, vertical downward, horizontal leftward and horizontal rightward in the priority order of the detection directions, where the four detection directions may be sorted from front to back in the priority order: vertical downward, horizontal leftward, vertical upward, and horizontal rightward, that is, a vertically downward direction with the top priority order may be taken as the target direction.
And S204, acquiring the position and the pixel value of a first pixel point in the image to be identified and the position and the pixel value of a second pixel point in the reference image.
The image to be recognized and the reference image may have mutually independent coordinate systems, and the coordinate systems of the image to be recognized and the reference image are constructed in the same manner, that is, the position coordinates of the first pixel point and the position coordinates of the second pixel point may correspond to each other one to one. Specifically, when the image to be recognized and the reference image are in RGB image formats, the pixel value is an RGB value of a corresponding pixel point, and when the image to be recognized and the reference image are in RGBA image formats, the pixel value is an RGBA value of a corresponding pixel point.
S205, comparing the pixel value of the first pixel point with the pixel value of the second pixel point.
For example, a first pixel point in the image to be recognized may be traversed, and a difference between a pixel value of the first pixel point and a pixel value of a second pixel point having the same position coordinate in the reference image may be compared.
S206, when a first pixel point which is the same as the second pixel point in position and different in pixel value exists, the corresponding first pixel point is used as a difference pixel point.
For example, when the image to be recognized and the reference image are two adjacent frames of images in a video file, the difference pixel point usually appears due to a change in the middle region of the images.
And S207, determining an initial difference area from the image to be identified according to the positions of the difference pixel points.
The shape of the initial difference region can be a rectangle, a circle or an ellipse, and all the difference pixel points are located in the initial difference region.
For example, if the initial difference region is rectangular, the image difference recognition device may traverse the position coordinates of the difference pixel points in the image to be recognized to obtain the minimum abscissa, minimum ordinate, maximum abscissa and maximum ordinate among the difference pixel points, and then determine the initial difference region from these four values, which may respectively correspond to the abscissa of the upper-left corner, the ordinate of the lower-right corner, the abscissa of the lower-right corner and the ordinate of the upper-left corner of the rectangular initial difference region.
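A compact sketch of S204 through S207 under the assumption that both frames are H x W x C NumPy arrays (RGB or RGBA) of identical shape; it returns the rectangular bounds of the initial difference region, or None when the frames are pixel-identical.

```python
import numpy as np

def initial_difference_region(img: np.ndarray, ref: np.ndarray):
    """Compare pixel values at identical coordinates and return the bounds
    (row_min, row_max, col_min, col_max) of the smallest rectangle containing
    every difference pixel point, or None if the two frames are identical."""
    diff_mask = np.any(img != ref, axis=-1)   # True where any channel differs
    rows, cols = np.nonzero(diff_mask)
    if rows.size == 0:
        return None
    return rows.min(), rows.max(), cols.min(), cols.max()
```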
And S208, determining a first area to be compared from the initial difference area according to the target direction.
The image difference recognition device may cut out an area having a preset size from the initial difference area in the target direction from a boundary position of the initial difference area, and use the area as the first area to be compared.
For example, as shown in fig. 7, the target direction S 'may be a vertically downward direction in the initial difference region C1, and the image difference recognition apparatus may cut out a rectangular region D1' having a preset width W ', which may be the width of at least one row of pixels in the image to be recognized, from the initial difference region C1 with the upper boundary L1 of the initial difference region C1 as a long side and the target direction S' as a width direction, to obtain the first region to be compared.
S209, determining a second area to be compared with the first area to be compared in the same position from the reference image.
The specific implementation manner of S209 in this embodiment may refer to the specific implementation manner of S1033 in the previous method embodiment, and therefore, the detailed description thereof is omitted here.
S210, determining the structural similarity between the first area to be compared and the second area to be compared.
The specific implementation manner of S210 in this embodiment may refer to the specific implementation manner of S104 in the previous method embodiment, and therefore, the detailed description thereof is omitted here.
And S211, judging whether the structural similarity is greater than a preset threshold value, if so, executing S212, and if not, executing S213.
The specific implementation manner of S211 in this embodiment may refer to the specific implementation manner of S1051 in the previous method embodiment, and therefore, details are not described herein again.
S212, mark the first area to be compared as the same area, update the first area to be compared according to the target direction, and then return to S209.
Wherein, the S212 may specifically include:
s4-1, mark the first area to be compared as the same area.
And S4-2, updating the first area to be compared according to the target direction and the initial difference area.
In an embodiment, the image difference identifying apparatus may increase the size of the first area to be compared by a preset offset distance along the target direction within the initial difference area, and take the enlarged area as the updated first area to be compared. The preset offset distance may be the width of one row or of two adjacent rows of pixels in the image to be recognized, or the width of one column or of two adjacent columns of pixels in the image to be recognized. For example, as shown in fig. 7, the lower boundary of the first region to be compared D1' may be shifted within the initial difference region C1 along the target direction S' by the preset offset distance, so that the width W' of the first region to be compared D1' increases by the value of the preset offset distance.
Further, after the step S4-2 is executed, the steps S209, S210, and S211 may be sequentially executed to form a loop, and when the structural similarity corresponding to the first region to be compared after the size increase is greater than the preset threshold, the next loop can be entered, so as to increase the size of the first region to be compared gradually and increase the size of the corresponding same region gradually as the number of loops increases, so that a larger same region can be identified from the initial difference region, thereby facilitating to improve the accuracy of the subsequent image difference identification operation.
And S213, when the number of times the first area to be compared has been updated is greater than a preset number, determining a difference area between the image to be recognized and the reference image according to the same area, so as to perform difference recognition on the image to be recognized and the reference image.
For example, when the structural similarity between the first to-be-compared area and the second to-be-compared area is not greater than a predetermined threshold, and the updated number of times of the first to-be-compared area is greater than a predetermined number (e.g., 0), that is, the first to-be-compared area and the second to-be-compared area are not the same, and the same area is detected in the initial difference area, the image difference identifying apparatus may use an area other than the same area in the initial difference area as the difference area between the to-be-identified image and the reference image.
It should be noted that, when the structural similarity of the first region to be compared is not greater than the preset threshold and the number of times the first region to be compared has been updated is not greater than the preset number, that is, when no same region is detected in the target direction, the entire initial difference region may be used as the difference region between the image to be identified and the reference image.
In an alternative embodiment, the at least one detection direction may be multiple, and the S203 may specifically include:
S5-1, determining a target direction from the plurality of detection directions.
For example, one detection direction may be randomly selected from the plurality of detection directions as the target direction.
Further, before S213, the method may further include:
and S214, when the updated times of the first area to be compared is greater than the preset times, respectively updating the initial difference area and the target direction according to the same area and the rest detection directions, and then returning to the S208.
Specifically, when the structural similarity between the first region to be compared and the second region to be compared is not greater than a predetermined threshold and the updated number of times of the first region to be compared is greater than a predetermined number, the image difference recognition apparatus may update a region other than the same region in the initial difference region to the initial difference region and may mark the target direction as a detected direction, and then may randomly select one of the detection directions other than the detected direction from the remaining detection directions to update the target direction.
Specifically, after the above S214 is executed, S208, S209, S210, and S211 may be sequentially executed to form a loop, and when the structural similarity obtained based on the updated target direction and the initial difference region is not greater than the preset threshold and the updated number of times of the first region to be compared is greater than the preset number, the next loop may be entered, so that the difference recognition of the image to be recognized from multiple detection directions may be performed, which is beneficial to improving the accuracy of the image difference recognition.
Wherein, the S213 may specifically include: and when the remaining detection directions do not exist, namely the detection directions are marked as detected directions, removing the area marked as the same area in the initial difference area to obtain the difference area between the image to be identified and the reference image.
Further, after the above S211, the method may further include:
s215, if the structural similarity between the first area to be compared and the second area to be compared is not greater than the preset threshold, when the updated number of times of the first area to be compared is not greater than the preset number, the target direction is updated according to the remaining detection directions, and then the process returns to the step S208.
Specifically, when the structural similarity between the first region to be compared and the second region to be compared is not greater than a preset threshold and the updated number of times of the first region to be compared is not greater than a preset number, that is, the first region to be compared and the second region to be compared are different and the same region is not detected in the target direction, the target direction may be marked as a detected direction, and the detection directions other than the detected direction among the detection directions may be set as remaining detection directions, and then the image difference identification apparatus may select one detection direction from the remaining detection directions at random or according to a preset rule to update the target direction.
In addition, after the above S215 is executed, S208, S209, S210, and S211 may be sequentially executed to form a loop, and when the structural similarity obtained based on the updated target direction is not greater than the preset threshold and the updated number of times of the first region to be compared is not greater than the preset number, the next loop may be entered, so that the difference recognition of the images to be recognized from the multiple detection directions may be performed, which is beneficial to improving the accuracy of the image difference recognition.
Therefore, an initial difference region between the image to be recognized and the reference image is obtained by comparing the pixel values of pixel points at the same position in the image to be recognized and the reference image, and structural similarity analysis is performed on the initial difference region and the same-position region in the reference image to remove, from the initial difference region, differences that the human visual system cannot perceive, so that a smaller difference region, closer to the differences the human eye can actually perceive, is obtained. After the difference region between the image to be recognized and the reference image is obtained, lossless image compression processing can be performed on the corresponding image to be recognized and reference image based on the difference region, so that the image compression rate can be improved without affecting the image quality.
On the basis of the method in the foregoing embodiment, the present embodiment will be further described from the perspective of an image difference recognition apparatus, please refer to fig. 8, where fig. 8 specifically describes the image difference recognition apparatus provided in the embodiment of the present application, which may include: an obtaining module 110, a first determining module 120, a second determining module 130, a third determining module 140, and a recognition module 150, wherein:
(1) acquisition module 110
The acquiring module 110 is configured to acquire an image to be recognized and a reference image.
In the present embodiment, the image to be recognized and the reference image have the same shape and size. Specifically, the image to be recognized and the reference image may be input by a user, or may be obtained by the obtaining module 110 from a target image database, or may be obtained by the obtaining module 110 by extracting from a target video or a target dynamic image, for example, the reference image and the image to be recognized may correspond to two adjacent frames of images in the same video or dynamic image.
(2) First determination module 120
A first determining module 120, configured to determine at least one detection direction according to the image to be identified.
The at least one detection direction may be one or more of four directions in a cross shape or an x shape in the image to be recognized, for example, the at least one detection direction may be at least one of four directions in a vertical upward direction, a vertical downward direction, a horizontal leftward direction, and a horizontal rightward direction in the image to be recognized.
(3) Second determination module 130
A second determining module 130, configured to determine a first region to be compared from the image to be identified according to the at least one detection direction, and determine a second region to be compared, which is located at the same position as the first region to be compared, from the reference image.
The second determining module 130 may specifically include:
(a) a first determining unit 131 for determining a target direction based on at least one detection direction.
When there is only one detection direction, that detection direction is the target direction. When there are multiple detection directions, the first determining unit 131 may select one of them as the target direction randomly or according to a preset rule, where the preset rule may be a priority order of the multiple detection directions; for example, the larger the included angle between the vertically downward direction in the image to be recognized and a detection direction, the lower the priority of that detection direction.
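By way of illustration only, and not as part of the disclosed embodiments, the priority rule mentioned above (the larger the angle from the vertically downward direction, the lower the priority) could be sketched in Python as follows; the direction names, the vector convention, and the function names are assumptions introduced for this example.

```python
import math

# Axis-aligned detection directions as (dx, dy) unit vectors, assuming image
# coordinates in which +y points down the rows and +x points right along the columns.
DETECTION_DIRECTIONS = {
    "down":  (0.0, 1.0),
    "up":    (0.0, -1.0),
    "left":  (-1.0, 0.0),
    "right": (1.0, 0.0),
}

def angle_from_vertical_down(direction):
    """Included angle (radians) between a unit direction vector and vertically down (0, 1)."""
    dx, dy = direction
    return math.acos(max(-1.0, min(1.0, dy)))  # dot product with (0, 1) is simply dy

def pick_target_direction(candidates):
    """Pick the direction with the smallest angle from vertically down,
    i.e. the highest-priority direction under the rule described above."""
    return min(candidates, key=lambda name: angle_from_vertical_down(candidates[name]))
```

Under this sketch, pick_target_direction(DETECTION_DIRECTIONS) returns "down" first; selecting at random instead would simply replace the min() call with random.choice(list(candidates)).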
(b) A second determining unit 132, configured to determine the first to-be-compared area from the to-be-identified image according to the target direction.
As shown in fig. 3, the image to be recognized T1 and the reference image T2 are both composed of a plurality of pixels P1/P2, and the pixels P1/P2 are distributed in rows and columns, that is, arranged in the 1st, 2nd, and subsequent rows and in the 1st, 2nd, and subsequent columns of the corresponding image.
In the present embodiment, the second determination unit 132 may cut out an area of a preset size from the image to be recognized along the target direction, starting from the boundary of the image to be recognized, and take this area as the first region to be compared. For example, as shown in fig. 3, the target direction S may be the vertically downward direction in the image to be recognized T1. The second determining unit 132 may take the upper boundary B1 of the image to be recognized T1 as the long side and cut out, with the target direction S as the width direction, a rectangular region of a preset width W from the image to be recognized T1 to obtain the first region to be compared D1. The preset width W may be the width of at least one row of pixels in the image to be recognized T1; for example, if the preset width W is the width of two adjacent rows of pixels in the image to be recognized T1, the corresponding first region to be compared D1 may be the region formed by the 1st and 2nd rows of pixels in the image to be recognized T1.
In some embodiments, when the image to be recognized is rectangular, the at least one detection direction may be at least one of the four directions from the diagonal intersection to the four vertices of the image to be recognized. The target direction may be, for example, the direction from the diagonal intersection to the lower-right vertex of the image to be recognized, and the corresponding first region to be compared may be located at the upper-left vertex of the image to be recognized, for example the overlapping region between the two rows of pixels of rows 1 and 2 and the two columns of pixels of columns 1 and 2.
(c) A third determining unit 133, configured to determine a second region to be compared, which is located at the same position as the first region to be compared, from the reference image.
In this embodiment, the reference image and the image to be recognized have mutually independent coordinate systems, and the coordinate systems of the reference image and the image to be recognized are constructed in the same manner, for example, for the rectangular image to be recognized and the reference image, the coordinate system of the corresponding image is constructed by using the top right corner point of the corresponding rectangle as the origin coordinate, the horizontal line where the upper boundary is located as the X axis, and the vertical line where the right boundary is located as the Y axis.
Further, since the above-described reference image and the image to be recognized have the same shape and size, the third determining unit 133 can take the region in the reference image having the same position coordinates as the first region to be compared as the second region to be compared, for example, the region D2 in fig. 3, the region D2 being the region in the reference image T2 having the same position coordinates as the first region to be compared D1.
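As a minimal sketch of the behaviour of the first, second, and third determining units described above, assuming grayscale images stored as NumPy arrays of identical shape and only the four axis-aligned detection directions, the first and second regions to be compared could be obtained as follows; the helper names and the band-width parameter are assumptions of this example, not part of the disclosure.

```python
import numpy as np

def first_region_to_compare(image, target_direction, band_px=2):
    """Cut a band of preset width from the boundary of `image` along the target
    direction, mirroring the region-determination behaviour described above.
    Only the four axis-aligned directions are handled in this sketch."""
    if target_direction == "down":    # start at the upper boundary, proceed downward
        return image[:band_px, :]
    if target_direction == "up":      # start at the lower boundary, proceed upward
        return image[-band_px:, :]
    if target_direction == "right":   # start at the left boundary, proceed rightward
        return image[:, :band_px]
    if target_direction == "left":    # start at the right boundary, proceed leftward
        return image[:, -band_px:]
    raise ValueError(f"unsupported direction: {target_direction}")

def second_region_to_compare(reference, target_direction, band_px=2):
    """Because the reference image has the same shape and coordinate construction,
    the same-position region is obtained with the identical slice."""
    return first_region_to_compare(reference, target_direction, band_px)
```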
(4) Third determination module 140
And a third determining module 140, configured to determine a structural similarity between the first region to be compared and the second region to be compared.
The structural similarity between the first region to be compared and the second region to be compared can be calculated according to the following formula:
SSIM(x, y) = ((2μ_x μ_y + c_1)(2δ_xy + c_2)) / ((μ_x^2 + μ_y^2 + c_1)(δ_x^2 + δ_y^2 + c_2));
wherein x is the first region to be compared, y is the second region to be compared, SSIM(x, y) is the structural similarity between x and y, μ_x is the mean value of x, μ_y is the mean value of y, δ_x^2 is the variance of x, δ_y^2 is the variance of y, δ_xy is the covariance of x and y, and c_1 and c_2 are constants used to maintain numerical stability, where c_1 = (k_1 L)^2, c_2 = (k_2 L)^2, L is the dynamic range of the pixel values, k_1 = 0.01, and k_2 = 0.03.
Further, the value of the SSIM (x, y) ranges from-1 to 1, wherein the closer the value of the SSIM (x, y) is to 1, the better the structural similarity between the first region to be compared and the second region to be compared is, and when the value of the SSIM (x, y) is 1, the first region to be compared and the second region to be compared are the same image region.
It should be noted that the above-mentioned methods for calculating the mean, the variance and the covariance of the image area are the prior art, and are not described herein again.
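Purely as an illustration of the formula above, the structural similarity of two equally sized regions could be evaluated globally as in the following sketch; computing the statistics over the whole region rather than over a sliding window, and the default dynamic range of 255 for 8-bit pixels, are assumptions of this example.

```python
import numpy as np

def ssim(x, y, dynamic_range=255.0, k1=0.01, k2=0.03):
    """Global structural similarity between two equally sized image regions,
    following the formula above: mean, variance and covariance are computed
    over the whole region."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (k1 * dynamic_range) ** 2
    c2 = (k2 * dynamic_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

For two identical regions this sketch returns 1, consistent with the observation above; a windowed SSIM could be substituted without changing the surrounding flow.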
(5) Identification module 150
And the identifying module 150 is configured to perform difference identification on the image to be identified and the reference image according to the structural similarity and the at least one detection direction.
The identification module 150 may specifically include:
(A) and a judging unit 151, configured to judge whether the structural similarity is greater than a preset threshold.
When the structural similarity between the first region to be compared and the second region to be compared is greater than a preset threshold (for example, 0.975), it may be determined that the first region to be compared and the second region to be compared are the same image region, and when the structural similarity between the first region to be compared and the second region to be compared is not greater than the preset threshold, it may be determined that the first region to be compared and the second region to be compared are different image regions.
(B) The first updating unit 152 is configured to mark the first region to be compared as the same region when the structural similarity between the first region to be compared and the second region to be compared is greater than a preset threshold, update the first region to be compared according to the target direction, and then trigger the third determining unit 133 to determine the second region to be compared, which is located at the same position as the first region to be compared, from the reference image.
The first updating unit 152 may specifically be configured to:
S1-1, marking the first area to be compared as the same area.
When the structural similarity between the first to-be-compared area and the second to-be-compared area is greater than the preset threshold, that is, the first to-be-compared area and the second to-be-compared area are the same, the first to-be-compared area may be marked as the same area, so as to facilitate identification of the difference between the to-be-identified image and the reference image in the subsequent step.
S1-2, increasing the size of the first region to be compared by a preset offset distance in the target direction.
The preset offset distance may be the width of one row or two adjacent rows of pixels in the image to be recognized, or may also be the width of one column or two adjacent columns of pixels in the image to be recognized. For example, as shown in fig. 3, the current first region to be compared D1 is composed of two rows of pixels, row 1 and row 2, in the image to be recognized T1, the target direction S is vertically downward, and the lower boundary of the current first region to be compared D1 may be shifted in the target direction S by a preset shift distance to increase the width W of the first region to be compared D1 by the value of the preset shift distance.
And S1-3, updating the first area to be compared after the size is increased to be the first area to be compared.
Continuing the previous example with reference to fig. 3, if the preset offset distance is the width of two adjacent rows of pixels in the image to be recognized T1, the first region to be compared after the size increase corresponds to the region formed by the 1st to 4th rows of pixels in the image to be recognized T1.
Further, after performing the above S1-3, the first updating unit 152 may trigger the third determining unit 133 to determine a second region to be compared again from the reference image, the second region to be compared having the same position as the first region to be compared, so as to form a loop, and when the structural similarity corresponding to the increased size of the first region to be compared is greater than the preset threshold, the next loop can be entered, so that as the number of loops increases, the size of the first region to be compared gradually increases, and the size of the corresponding same region also gradually increases, so that a larger same region can be identified from the image to be identified, which is favorable for improving the accuracy of the subsequent image difference identifying operation.
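Assuming the ssim helper and the band-cropping convention sketched earlier, and taking the vertically downward target direction and the example threshold of 0.975 given above, the growing loop performed by the first updating unit 152 might be sketched as follows; the step size and function name are assumptions, and the bookkeeping of the preset number of updates described elsewhere is omitted for brevity.

```python
def grow_same_region(image, reference, band_px=2, step_px=2, threshold=0.975):
    """Sketch of the growing loop above for a vertically downward target direction:
    keep enlarging the compared band by `step_px` rows while the band in `image`
    and the same-position band in `reference` remain structurally similar.
    Returns the number of rows marked as the same region (0 if none)."""
    rows = image.shape[0]
    same_rows = 0
    width = band_px
    while width <= rows:
        region_a = image[:width, :]          # first region to be compared
        region_b = reference[:width, :]      # second region to be compared (same position)
        if ssim(region_a, region_b) <= threshold:
            break                            # not greater than the preset threshold: stop growing
        same_rows = width                    # mark the current band as the same region
        width += step_px                     # increase the size by the preset offset distance
    return same_rows
```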
In addition, in some alternative embodiments, the first updating unit 152 may be further specifically configured to:
S1-1, marking the first area to be compared as the same area.
And S1-4, taking the area of the image to be recognized other than the same area as a residual area, and intercepting a target area from the residual area along the target direction, starting from the boundary of the first area to be compared, according to a preset intercepting distance. The preset intercepting distance may be the width of one row or two adjacent rows of pixels in the image to be recognized, or the width of one column or two adjacent columns of pixels in the image to be recognized. For example, as shown in fig. 3, the first region to be compared D1 is composed of the 1st and 2nd rows of pixels in the image to be recognized T1 and the target direction S is vertically downward; if the preset intercepting distance is the width of two adjacent rows of pixels in the image to be recognized T1, the first updating unit 152 may intercept, from the lower boundary of the first region to be compared D1 along the target direction S, a target region whose width equals the preset intercepting distance, namely the region composed of the 3rd and 4th rows of pixels in the image to be recognized T1.
And S1-5, updating the target area to be the first area to be compared.
After executing the above S1-5, the first updating unit 152 may trigger the third determining unit 133 to determine the second region to be compared, which is at the same position as the first region to be compared, from the reference image again to form a loop, and enter the next loop when the structural similarity corresponding to the updated first region to be compared is greater than the preset threshold, so that as the number of loops increases, the size of the same region in the image to be identified increases gradually, but the size of the first region to be compared, which is subjected to structural similarity comparison, does not increase gradually, so that repeated calculation of the structural similarity of the image region that is marked as the same region can be avoided, thereby reducing the amount of calculation and improving the work efficiency.
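The alternative behaviour of S1-4 and S1-5 can be contrasted with the previous sketch: the band fed to the structural-similarity computation keeps a fixed size and simply slides along the target direction, so rows already marked as the same region are never recomputed. A hedged sketch under the same assumptions (NumPy arrays, the ssim helper above, vertically downward target direction):

```python
def slide_same_region(image, reference, band_px=2, threshold=0.975):
    """Sketch of the alternative above: compare fixed-size bands one after another
    in the target direction, so that pixels already marked as the same region are
    not fed into the SSIM computation again."""
    rows = image.shape[0]
    same_rows = 0
    start = 0
    while start + band_px <= rows:
        region_a = image[start:start + band_px, :]      # target region intercepted from the residual area
        region_b = reference[start:start + band_px, :]  # same-position region in the reference image
        if ssim(region_a, region_b) <= threshold:
            break
        same_rows = start + band_px      # extend the same region by the matched band
        start += band_px                 # the next target region begins where this one ends
    return same_rows
```

Compared with grow_same_region, each iteration here costs the same regardless of how much has already been matched, which reflects the reduced calculation amount noted above.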
In addition, in a specific embodiment, the second determining unit 132 may be specifically configured to:
acquiring the position and the pixel value of a first pixel point in an image to be identified and the position and the pixel value of a second pixel point in a reference image;
comparing the pixel value of the first pixel point with the pixel value of the second pixel point;
when a first pixel point which is the same as the second pixel point in position and has a different pixel value exists, taking the corresponding first pixel point as a difference pixel point;
determining an initial difference area from the image to be identified according to the positions of the difference pixel points;
and determining a first region to be compared from the initial difference region according to the target direction.
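As one possible reading of the pixel-wise comparison in the list above (illustrative only), the initial difference region could be taken as the bounding box of all difference pixel points; the NumPy representation and the bounding-box interpretation are assumptions of this sketch.

```python
import numpy as np

def initial_difference_region(image, reference):
    """Pixels at the same position with different values are difference pixels;
    this sketch determines the initial difference region as their bounding box."""
    diff_mask = np.any(image != reference, axis=-1) if image.ndim == 3 else (image != reference)
    ys, xs = np.nonzero(diff_mask)
    if ys.size == 0:
        return None                        # no difference pixels: the images are identical
    top, bottom = ys.min(), ys.max() + 1   # row bounds of the initial difference region
    left, right = xs.min(), xs.max() + 1   # column bounds of the initial difference region
    return (top, bottom, left, right)
```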
Further, the first updating unit 152 may specifically be configured to:
when the structural similarity between the first area to be compared and the second area to be compared is greater than the preset threshold, the first area to be compared is marked as the same area, the first area to be compared is updated according to the target direction and the initial difference area, and then the third determining unit 133 is triggered to determine the second area to be compared, which has the same position as the first area to be compared, from the reference image.
(C) A fourth determining unit 153, configured to determine, according to the same region, a difference region between the image to be recognized and the reference image to perform difference recognition on the image to be recognized and the reference image when the structural similarity is not greater than the preset threshold and the updated number of times of the first region to be compared is greater than the preset number value.
Specifically, when the structural similarity between the first to-be-compared area and the second to-be-compared area is not greater than the preset threshold, and the updated number of times of the first to-be-compared area is greater than the preset number, that is, the first to-be-compared area is different from the second to-be-compared area, and the same area has been detected in the to-be-identified image, the fourth determining unit 153 may use an area other than the same area in the to-be-identified image as the difference area between the to-be-identified image and the reference image.
It should be noted that, when the structural similarity of the first to-be-compared region is not greater than the preset threshold, and the updated number of times of the first to-be-compared region is not greater than the preset number, that is, the first to-be-compared region is different from the second to-be-compared region, and the same region is not detected in the target direction, the entire to-be-identified image may be used as the difference region between the to-be-identified image and the reference image.
In an alternative embodiment, the at least one detection direction may include a plurality of detection directions, and the first determining unit 131 may be specifically configured to:
S2-1, determining a target direction from the plurality of detection directions.
The first determining unit 131 may select one of the detection directions as the target direction randomly or according to a preset rule, where the preset rule may be a priority order of the plurality of detection directions.
Further, the identification module 150 may further include:
(D) the second updating unit 154 is configured to update the target direction according to the remaining detection directions when the structural similarity between the first to-be-compared area and the second to-be-compared area is not greater than the preset threshold and the updated number of times of the first to-be-compared area is greater than the preset number, update the to-be-identified image and the reference image according to the same area, and then trigger the second determining unit 132 to determine the first to-be-compared area from the to-be-identified image according to the target direction.
Specifically, when the structural similarity between the first area to be compared and the second area to be compared is not greater than a predetermined threshold and the updated number of times of the first area to be compared is greater than a predetermined number, the second updating unit 154 may mark the target direction as the detected direction, and set the detection directions other than the detected direction as the remaining detection directions, and then the second updating unit 154 may select one detection direction from the remaining detection directions randomly or according to a predetermined rule to update the target direction.
The second updating unit 154 may update the image to be recognized to the area of the image to be recognized other than the same area, and update the reference image to the area of the reference image located at the same position as the updated image to be recognized.
Specifically, after the target direction, the image to be recognized, and the reference image are updated, the second updating unit 154 may trigger the second determining unit 132 to determine the first region to be compared from the image to be recognized according to the target direction to form a loop, and when the structural similarity obtained based on the updated target direction, the image to be recognized, and the reference image is not greater than the preset threshold, and the updated number of times of the first region to be compared is greater than the preset number, the next loop may be entered.
The fourth determining unit 153 may be specifically configured to: and when the remaining detection directions do not exist, namely the detection directions are marked as detected directions, removing the area marked as the same area in the image to be identified so as to obtain a difference area between the image to be identified and the reference image.
Further, the identification module 150 may further include:
(E) the third updating unit 155 is configured to update the target direction according to the remaining detection directions when the structural similarity is not greater than the preset threshold and the updated number of times of the first to-be-compared region is not greater than the preset number, and then trigger the second determining unit 132 to determine the first to-be-compared region from the to-be-identified image according to the target direction.
Specifically, when the structural similarity between the first region to be compared and the second region to be compared is not greater than a predetermined threshold, and the updated number of times of the first region to be compared is not greater than a predetermined number of times, that is, the first region to be compared is different from the second region to be compared, and the same region is not detected in the target direction, the third updating unit 155 may mark the target direction as a detected direction, and set a detection direction other than the detected direction among the plurality of detection directions as a remaining detection direction, and then the third updating unit 155 may select one detection direction from the remaining detection directions randomly or according to a predetermined rule, so as to update the target direction.
In addition, after the third updating unit 155 updates the target direction, the second determining unit 132 may be triggered to determine the first region to be compared from the image to be recognized according to the target direction to form a loop, and when the structural similarity obtained based on the updated target direction is not greater than the preset threshold and the updated number of times of the first region to be compared is not greater than the preset number, the next loop may be entered.
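Tying the units above together, one possible high-level control flow is sketched below; because the per-direction matching can be realised in several of the ways described above, it is passed in as a callable, and every name here is an illustrative assumption rather than part of the disclosure.

```python
import numpy as np

def identify_difference(image, reference, directions, match_from_direction, threshold=0.975):
    """Sketch of the multi-direction flow: pick a target direction, mark whatever
    the per-direction matcher reports as the same region, move on to the remaining
    detection directions, and report everything never marked as the difference region.
    `match_from_direction(image, reference, direction, threshold)` is assumed to
    return a boolean mask (same height/width as the image) of the matched area."""
    same_mask = np.zeros(image.shape[:2], dtype=bool)
    remaining = list(directions)
    while remaining:
        target = remaining.pop(0)        # select and consume a target direction
        same_mask |= match_from_direction(image, reference, target, threshold)
    return ~same_mask                    # True where the two images are treated as different
```

The early-exit conditions and the updating of the initial difference region described above are omitted here so that the overall structure stays visible.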
In the above embodiment, since the human visual system readily extracts structural information from a field of view, using the structural similarity of same-position regions in the image to be recognized and the reference image as the difference recognition standard yields a smaller difference region, closer to the differences the human eye can actually perceive. After the difference region between the image to be recognized and the reference image is obtained, lossless image compression processing can be performed on the corresponding image to be recognized and reference image based on the difference region, so that the image compression rate can be improved without affecting the image quality.
In specific implementation, each of the foregoing modules, units, and sub-units may be implemented as an independent entity, or may be combined arbitrarily and implemented as one or several entities; for the specific implementation of each of the foregoing modules, units, and sub-units, reference may be made to the foregoing method embodiments, and details are not described herein again.
As can be seen from the above, the image difference identification apparatus provided in this embodiment obtains the image to be identified and the reference image, determines at least one detection direction according to the image to be identified, then determines the first region to be compared from the image to be identified according to the at least one detection direction, determines the second region to be compared having the same position as the first region to be compared from the reference image, then determines the structural similarity between the first region to be compared and the second region to be compared, and performs difference identification on the image to be identified and the reference image according to the structural similarity and the at least one detection direction, so that difference identification can be performed by using the structural similarity of the regions having the same positions in the two images as a difference identification standard, thereby improving image identification accuracy and achieving a good identification effect.
Accordingly, an electronic device according to an embodiment of the present application is further provided, as shown in fig. 9, which shows a schematic structural diagram of the electronic device according to an embodiment of the present application, and specifically:
the electronic device may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, Radio Frequency (RF) circuitry 403, a power supply 404, an input unit 405, and a display unit 406. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 9 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 401 is a control center of the electronic device, connects various parts of the whole electronic device by various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by operating the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 access to the memory 402.
The RF circuit 403 may be used for receiving and transmitting signals during information transmission and reception, and in particular, for receiving downlink information of a base station and then processing the received downlink information by the one or more processors 401; in addition, data relating to uplink is transmitted to the base station. In general, the RF circuitry 403 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 403 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The electronic device further includes a power supply 404 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 404 is logically connected to the processor 401 via a power management system, so that functions of managing charging, discharging, and power consumption are implemented via the power management system. The power supply 404 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The electronic device may further include an input unit 405, and the input unit 405 may be used to receive input numeric or character information and generate a keyboard, mouse, joystick, optical or trackball signal input in relation to user settings and function control. Specifically, in one particular embodiment, input unit 405 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user (e.g., operations by a user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) thereon or nearby, and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 401, and can receive and execute commands sent by the processor 401. In addition, touch sensitive surfaces may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 405 may include other input devices in addition to the touch-sensitive surface. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The electronic device may also include a display unit 406, and the display unit 406 may be used to display information input by or provided to the user as well as various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 406 may include a Display panel, and optionally, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 401 to determine the type of the touch event, and then the processor 401 provides a corresponding visual output on the display panel according to the type of the touch event. Although in FIG. 9 the touch sensitive surface and the display panel are two separate components to implement input and output functions, in some embodiments the touch sensitive surface may be integrated with the display panel to implement input and output functions.
Although not shown, the electronic device may further include a camera, a bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 401 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application program stored in the memory 402, thereby implementing various functions as follows:
acquiring an image to be identified and a reference image;
determining at least one detection direction according to the image to be identified;
determining a first region to be compared from the image to be identified according to at least one detection direction, and determining a second region to be compared with the first region to be compared in the same position from the reference image;
determining the structural similarity between the first region to be compared and the second region to be compared;
and performing difference identification on the image to be identified and the reference image according to the structural similarity and at least one detection direction.
The electronic device can achieve the beneficial effects that can be achieved by any image difference recognition apparatus provided in the embodiments of the present application; for details, refer to the foregoing embodiments, which are not described herein again.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The foregoing describes in detail an image difference identification method, apparatus, and storage medium provided in the embodiments of the present application, and specific examples are applied herein to explain the principles and implementations of the present application, and the description of the foregoing embodiments is only used to help understand the method and core ideas of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (10)
1. An image difference identification method is characterized by comprising the following steps:
acquiring an image to be identified and a reference image;
determining at least one detection direction according to the image to be identified;
determining a first region to be compared from the image to be identified according to the at least one detection direction, and determining a second region to be compared with the first region to be compared in the same position from the reference image;
determining the structural similarity between the first region to be compared and the second region to be compared;
and performing difference identification on the image to be identified and the reference image according to the structural similarity and at least one detection direction.
2. The image difference recognition method according to claim 1, wherein the determining a first region to be compared from the image to be recognized according to the at least one detection direction specifically includes:
determining a target direction according to the at least one detection direction;
determining a first region to be compared from the image to be identified according to the target direction;
the performing difference identification on the image to be identified and the reference image according to the structural similarity and the at least one detection direction specifically includes:
judging whether the structural similarity is greater than a preset threshold value or not;
if so, marking the first area to be compared as the same area, updating the first area to be compared according to the target direction, and then returning to the step of determining a second area to be compared with the first area to be compared in the reference image;
if not, when the updated times of the first area to be compared is larger than a preset time value, determining a difference area between the image to be recognized and the reference image according to the same area so as to perform difference recognition on the image to be recognized and the reference image.
3. The image difference recognition method according to claim 2, wherein the at least one detection direction is plural, and the determining a target direction according to the at least one detection direction specifically includes:
determining a target direction from the plurality of detected directions;
before the determining the difference region between the image to be identified and the reference image according to the same region, the method further comprises:
and updating the target direction according to the rest detection directions, updating the image to be recognized and the reference image according to the same region, and then returning to the step of determining the first region to be compared from the image to be recognized according to the target direction.
4. The image difference recognition method according to claim 2, wherein the at least one detection direction is plural, and the determining a target direction according to the at least one detection direction specifically includes:
determining a target direction from the plurality of detected directions;
after the judging whether the structural similarity is greater than a preset threshold, the method further includes:
if not, when the updated times of the first area to be compared is not larger than the preset times value, updating the target direction according to the remaining detection directions, and then returning to the step of determining the first area to be compared from the image to be identified according to the target direction.
5. The image difference recognition method according to claim 2, wherein the updating the first area to be compared according to the target direction specifically includes:
increasing the size of the first area to be compared by a preset offset distance along the target direction;
and updating the first area to be compared after the size is increased to be the first area to be compared.
6. The image difference identification method according to claim 2, wherein the determining the difference region between the image to be identified and the reference image according to the same region specifically comprises:
and taking the area except the same area in the image to be recognized as a difference area between the image to be recognized and a reference image.
7. The image difference recognition method according to claim 2, wherein the determining a first region to be compared from the image to be recognized according to the target direction specifically includes:
acquiring the position and the pixel value of a first pixel point in the image to be identified and the position and the pixel value of a second pixel point in the reference image;
comparing the pixel value of the first pixel point with the pixel value of the second pixel point;
when a first pixel point which is the same as the second pixel point in position and has a different pixel value exists, taking the corresponding first pixel point as a difference pixel point;
determining an initial difference area from the first image according to the position of the difference pixel point;
and determining a first region to be compared from the initial difference region according to the target direction.
8. The image difference recognition method according to claim 7, wherein the updating the first area to be compared according to the target direction specifically includes:
and updating the first area to be compared according to the target direction and the initial difference area.
9. An image difference recognition apparatus, comprising:
the acquisition module is used for acquiring an image to be identified and a reference image;
the first determining module is used for determining at least one detection direction according to the image to be identified;
the second determining module is used for determining a first region to be compared from the image to be identified according to the at least one detection direction and determining a second region to be compared with the first region to be compared in the same position from the reference image;
the third determining module is used for determining the structural similarity between the first region to be compared and the second region to be compared;
and the identification module is used for carrying out difference identification on the image to be identified and the reference image according to the structural similarity and at least one detection direction.
10. A computer-readable storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor to perform the image difference identification method of any of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910809238.1A CN110796157B (en) | 2019-08-29 | 2019-08-29 | Image difference recognition method, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110796157A true CN110796157A (en) | 2020-02-14 |
CN110796157B CN110796157B (en) | 2024-08-06 |
Family
ID=69427139
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910809238.1A Active CN110796157B (en) | 2019-08-29 | 2019-08-29 | Image difference recognition method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110796157B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070025592A1 (en) * | 2005-07-27 | 2007-02-01 | Kabushiki Kaisha Toshiba | Target-region detection apparatus, method and program |
CN103186897A (en) * | 2011-12-29 | 2013-07-03 | 北京大学 | Method and device for obtaining image diversity factor result |
CN103955678A (en) * | 2014-05-13 | 2014-07-30 | 深圳市同洲电子股份有限公司 | Image recognition method and device |
CN108241645A (en) * | 2016-12-23 | 2018-07-03 | 腾讯科技(深圳)有限公司 | Image processing method and device |
CN109034185A (en) * | 2018-06-08 | 2018-12-18 | 汪俊 | A kind of street view image contrast difference method and device |
CN109447154A (en) * | 2018-10-29 | 2019-03-08 | 网易(杭州)网络有限公司 | Picture similarity detection method, device, medium and electronic equipment |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113554050A (en) * | 2020-04-26 | 2021-10-26 | 阿里巴巴集团控股有限公司 | Image information analysis method and device, electronic equipment and computer storage medium |
CN113812252A (en) * | 2020-06-18 | 2021-12-21 | 纳恩博(北京)科技有限公司 | Method for controlling operation of apparatus, robot apparatus, and storage medium |
CN112001238A (en) * | 2020-07-14 | 2020-11-27 | 浙江大华技术股份有限公司 | Terminal block wiring state identification method, identification device and storage medium |
CN112070113A (en) * | 2020-07-28 | 2020-12-11 | 北京旷视科技有限公司 | Camera scene change judgment method and device, electronic equipment and readable storage medium |
CN112070113B (en) * | 2020-07-28 | 2024-03-26 | 北京旷视科技有限公司 | Imaging scene change judging method and device, electronic equipment and readable storage medium |
CN111966600B (en) * | 2020-08-31 | 2023-08-04 | 平安健康保险股份有限公司 | Webpage testing method, webpage testing device, computer equipment and computer readable storage medium |
CN111966600A (en) * | 2020-08-31 | 2020-11-20 | 平安健康保险股份有限公司 | Webpage testing method and device, computer equipment and computer readable storage medium |
CN114359764A (en) * | 2020-09-28 | 2022-04-15 | 艾索擘(上海)科技有限公司 | A method, system and related equipment for identifying illegal buildings based on image data |
WO2022115987A1 (en) * | 2020-12-01 | 2022-06-09 | 浙江吉利控股集团有限公司 | Method and system for automatic driving data collection and closed-loop management |
CN112734747B (en) * | 2021-01-21 | 2024-06-25 | 腾讯科技(深圳)有限公司 | Target detection method and device, electronic equipment and storage medium |
CN112734747A (en) * | 2021-01-21 | 2021-04-30 | 腾讯科技(深圳)有限公司 | Target detection method and device, electronic equipment and storage medium |
CN112883827A (en) * | 2021-01-28 | 2021-06-01 | 腾讯科技(深圳)有限公司 | Method and device for identifying designated target in image, electronic equipment and storage medium |
CN112883827B (en) * | 2021-01-28 | 2024-03-29 | 腾讯科技(深圳)有限公司 | Method and device for identifying specified target in image, electronic equipment and storage medium |
CN113409312A (en) * | 2021-08-03 | 2021-09-17 | 广东博创佳禾科技有限公司 | Image processing method and device for biomedical images |
CN115131722A (en) * | 2022-06-30 | 2022-09-30 | 重庆中科云从科技有限公司 | A target space monitoring method, computer-readable storage medium and electronic device |
CN117110330B (en) * | 2023-10-25 | 2024-01-30 | 山西慧达澳星科技有限公司 | Conveying belt flaw detection method, device, equipment and storage medium |
CN117110330A (en) * | 2023-10-25 | 2023-11-24 | 山西慧达澳星科技有限公司 | Conveying belt flaw detection method, device, equipment and storage medium |
CN118764722A (en) * | 2024-09-06 | 2024-10-11 | 浙江大华技术股份有限公司 | Image processing method and related device, system and storage medium |
CN118764722B (en) * | 2024-09-06 | 2024-12-17 | 浙江大华技术股份有限公司 | Image processing method, related device, system and storage medium |
CN119027655A (en) * | 2024-10-29 | 2024-11-26 | 北京城建智控科技股份有限公司 | Object detection method and device |
CN119027655B (en) * | 2024-10-29 | 2025-02-07 | 北京城建智控科技股份有限公司 | Object detection method and device |
Also Published As
Publication number | Publication date |
---|---|
CN110796157B (en) | 2024-08-06 |
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40022030; Country of ref document: HK
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |