
CN111028276A - Image alignment method and device, storage medium and electronic equipment - Google Patents

Image alignment method and device, storage medium and electronic equipment

Info

Publication number
CN111028276A
Authority
CN
China
Prior art keywords
image
contour
affine transformation
homography matrix
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911253884.0A
Other languages
Chinese (zh)
Inventor
贾玉虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911253884.0A
Publication of CN111028276A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/02 - Affine transformations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20172 - Image enhancement details
    • G06T2207/20208 - High dynamic range [HDR] image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application disclose an image alignment method and apparatus, a storage medium, and an electronic device. The embodiments acquire a first image and a second image of the same shooting scene; perform downsampling processing on the first image to obtain a third image, and perform downsampling processing on the second image to obtain a fourth image; extract a first contour image from the third image and a second contour image from the fourth image based on a contour detection algorithm; perform feature point matching on the first contour image and the second contour image to obtain a first feature point pair; and perform affine transformation processing on the second image according to the first feature point pair to obtain a third image aligned with the first image. Because the scheme extracts feature point pairs from contour images, interference from invalid information is eliminated, the accuracy of the extracted feature point pairs is improved, and the image alignment effect is thereby improved.

Figure 201911253884

Description

Image alignment method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image alignment method, an image alignment apparatus, a storage medium, and an electronic device.
Background
With the continuous development of intelligent terminal technology, electronic devices (such as smart phones, tablet computers, and the like) are becoming more and more popular. Most electronic devices have built-in cameras, and with the enhancement of the processing capability of mobile terminals and the development of camera technologies, users have increasingly high requirements for the quality of captured images.
In order to capture an image with a better effect, image synthesis algorithms such as an HDR (High Dynamic Range) synthesis algorithm or a multi-frame noise reduction algorithm are used to improve the quality of the output image. These algorithms require image alignment processing, but the accuracy of the feature points detected by a conventional image alignment scheme is low, which results in a poor image alignment effect.
Disclosure of Invention
The embodiment of the application provides an image alignment method, an image alignment device, a storage medium and an electronic device, which can improve the accuracy of feature point detection and improve the image alignment effect.
In a first aspect, an embodiment of the present application provides an image alignment method, including:
acquiring a first image and a second image of the same shooting scene;
carrying out downsampling processing on the first image to obtain a third image, and carrying out downsampling processing on the second image to obtain a fourth image;
extracting a first contour image from the third image and a second contour image from the fourth image based on a contour detection algorithm;
matching feature points of the first contour image and the second contour image to obtain a first feature point pair;
and performing affine transformation processing on the second image according to the first feature point pair to obtain a third image aligned with the first image.
In a second aspect, an embodiment of the present application further provides an image alignment apparatus, including:
the image acquisition module is used for acquiring a first image and a second image of the same shooting scene;
the down-sampling module is used for performing down-sampling processing on the first image to obtain a third image and performing down-sampling processing on the second image to obtain a fourth image;
a contour detection module for extracting a first contour image from the third image and a second contour image from the fourth image based on a contour detection algorithm;
the feature matching module is used for performing feature point matching on the first contour image and the second contour image to obtain a first feature point pair;
and the image alignment module is used for performing affine transformation processing on the second image according to the first feature point pair so as to obtain a third image aligned with the first image.
In a third aspect, embodiments of the present application further provide a storage medium having a computer program stored thereon, where, when the computer program is executed on a computer, the computer is caused to execute the image alignment method provided in any embodiment of the present application.
In a fourth aspect, an embodiment of the present application further provides an electronic device, including a processor and a memory, where the memory has a computer program, and the processor is configured to execute the image alignment method provided in any embodiment of the present application by calling the computer program.
According to the technical scheme, a first image and a second image obtained by shooting the same shooting scene are acquired; downsampling processing is performed on the first image and the second image respectively to obtain a third image and a fourth image; a first contour image is obtained from the third image and a second contour image is obtained from the fourth image based on a contour detection algorithm; feature point matching is performed on the first contour image and the second contour image to obtain a first feature point pair; and affine transformation processing is performed on the second image based on the first feature point pair to obtain a third image aligned with the first image. Because contour information is extracted from the images obtained by downsampling and feature point pairs are extracted from the contour images, interference from invalid information can be eliminated, the accuracy of the extracted feature point pairs is improved, and the image alignment effect is further improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a first image alignment method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of down-sampling an image in an image alignment method according to an embodiment of the present application.
Fig. 3 is a schematic flowchart of a second image alignment method according to an embodiment of the present disclosure.
Fig. 4 is a schematic image partition diagram of an image alignment method according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an image alignment apparatus according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a first electronic device according to an embodiment of the present application.
Fig. 7 is a second structural schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The embodiment of the present application provides an image alignment method, and an execution subject of the image alignment method may be the image alignment apparatus provided in the embodiment of the present application, or an electronic device integrated with the image alignment apparatus, where the image alignment apparatus may be implemented in a hardware or software manner. The electronic device may be a smart phone, a tablet computer, a palm computer, a notebook computer, or a desktop computer.
Referring to fig. 1, fig. 1 is a first flowchart illustrating an image alignment method according to an embodiment of the present disclosure. The specific flow of the image alignment method provided by the embodiment of the application can be as follows:
101. a first image and a second image of the same shooting scene are acquired.
102. And performing downsampling processing on the first image to obtain a third image, and performing downsampling processing on the second image to obtain a fourth image.
The scheme of the embodiment of the application can be applied to various occasions needing image alignment, such as HDR image shooting, HDR video recording, multi-frame noise reduction processing and the like. For example, the electronic device may receive the first image and the second image obtained by shooting the same shooting scene sent by another terminal. Or, the electronic device starts the camera to shoot a shooting scene in a shooting mode to obtain an image to be processed, and determines a first image and a second image from the image to be processed. The following describes a scheme of an embodiment of the present application in detail, taking HDR image capture as an example.
In some embodiments, the electronic device may continuously expose a shooting scene, acquire more frames than the number of images required for HDR synthesis, and then select several images with the best sharpness as the images to be processed, where the image with the highest sharpness is used as the reference image and denoted as the first image, and the remaining images are used as the images to be aligned and denoted as second images. The first image may have only one frame, and the second image may have one or more frames.
In some embodiments, the multiple frames of images to be processed may have different exposure parameters. In other embodiments, the multiple frames of images to be processed may have the same exposure parameters. For example, when acquiring the images to be processed, the electronic device determines the exposure parameters of a normal exposure according to the automatic photometry system of the camera, then adjusts these parameters to increase the degree of exposure, and then performs shooting, for example increasing the exposure by 1 EV (exposure value, a quantity reflecting the amount of exposure), for example by extending the exposure time. The specific number of second images may be set according to actual needs, which is not limited in this application.
After obtaining a plurality of frames of images to be processed of the same shooting scene, determining one frame of image from the images to be processed as a reference image for image alignment, marking the reference image as a first image, and marking other images except the reference image as second images. In some embodiments, sharpness detection is performed on multiple frames of images to be processed, for example, sharpness of the images is detected through edge information, gradient information, and the like, and one frame of image with the highest sharpness is used as a reference image.
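As a concrete illustration of the sharpness-based reference selection described above, the following sketch scores each candidate frame by the variance of its Laplacian response, a common sharpness proxy. The disclosure only mentions edge and gradient information, so the Laplacian-variance measure and the function names are illustrative assumptions.

```python
import cv2
import numpy as np

def select_reference(frames):
    """Pick the sharpest frame as the reference (first image); the remaining
    frames become the images to be aligned (second images). Frames are
    assumed to be BGR arrays; sharpness is approximated by the variance of
    the Laplacian response (an assumption, not the disclosure's exact metric)."""
    def sharpness(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    scores = [sharpness(f) for f in frames]
    ref_idx = int(np.argmax(scores))
    reference = frames[ref_idx]
    to_align = [f for i, f in enumerate(frames) if i != ref_idx]
    return reference, to_align
```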
After the first image and the second image are obtained, downsampling processing is respectively carried out on the first image and the second image to obtain a third image and a fourth image. The number of the third image and the fourth image is not limited. Taking the first image as an example, in some embodiments, the downsampling operation may be performed only once. Or, in other embodiments, multiple continuous downsampling may be performed according to a preset sampling multiple to obtain multiple frames of third images with successively reduced resolutions.
Referring to fig. 2, fig. 2 is a schematic diagram illustrating image down-sampling in an image alignment method according to an embodiment of the present disclosure. In some embodiments, the first image P10 and the second image P20 are downsampled twice in succession according to the preset sampling multiple of 4, so as to obtain third images P11 and P12, and obtain fourth images P21 and P22. The sampling multiple and the number of downsamplings are merely examples, and in other embodiments, other sampling multiples and downsampling numbers may be used.
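A minimal sketch of the successive downsampling shown in fig. 2, assuming the preset sampling multiple of 4 is applied per axis in each pass; the disclosure does not state whether the multiple is per axis or per area, so that reading is an assumption.

```python
import cv2

def build_pyramid(image, factor=4, levels=2):
    """Downsample `image` `levels` times by `factor` per pass, returning the
    successively reduced frames (e.g. P11 and P12 for P10)."""
    pyramid = []
    current = image
    for _ in range(levels):
        h, w = current.shape[:2]
        current = cv2.resize(current, (w // factor, h // factor),
                             interpolation=cv2.INTER_AREA)
        pyramid.append(current)
    return pyramid

# third_images = build_pyramid(first_image)    # P11, P12
# fourth_images = build_pyramid(second_image)  # P21, P22
```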
103. Based on a contour detection algorithm, a first contour image is extracted from the third image, and a second contour image is extracted from the fourth image.
Next, image contours are extracted from the downsampled images. For example, contour information is extracted from the third image P12 to constitute the first contour image. The contour information consists of the pixel values along the contour lines of objects in the image, and a contour image is an image having pixel values only at contour lines. The contour detection algorithm may be the Sobel operator, the Canny operator, or the like. The second contour image corresponding to the second image is obtained in the same manner.
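A minimal sketch of step 103 under the assumption that the Canny operator is the chosen contour detection algorithm; the threshold values are illustrative and not taken from the disclosure.

```python
import cv2

def extract_contour_image(image, low=50, high=150):
    """Produce a contour image that has non-zero pixel values only along
    object contour lines. Canny thresholds are illustrative assumptions."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (3, 3), 0)  # suppress noise before edge detection
    return cv2.Canny(blurred, low, high)
```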
104. And matching the first contour image with the second contour image to obtain a first characteristic point pair.
After the first contour image and the second contour image are obtained, the feature points in the contour images are detected according to a feature point detection algorithm, and the feature points in the two contour images are matched. For example, the Harris corner detection algorithm is used to detect corners in the contour images, and the corners in the two contour images are matched to obtain a feature point pair, which is denoted as a first feature point pair. For another example, a Scale-Invariant Feature Transform (SIFT) feature detection algorithm is used to extract feature points and perform feature point matching to obtain the first feature point pair.
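A sketch of step 104 using SIFT feature detection and brute-force descriptor matching. The ratio test used to reject ambiguous matches is a common addition and is an assumption here, not something the disclosure specifies.

```python
import cv2

def match_contour_features(contour_first, contour_second, ratio=0.75):
    """Detect feature points on the two contour images and match them,
    returning pairs as (point_in_first_image, point_in_second_image)."""
    sift = cv2.SIFT_create()
    kp_f, des_f = sift.detectAndCompute(contour_first, None)
    kp_s, des_s = sift.detectAndCompute(contour_second, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = []
    for candidates in matcher.knnMatch(des_f, des_s, k=2):
        if len(candidates) < 2:
            continue
        m, n = candidates
        if m.distance < ratio * n.distance:  # ratio test (assumption, see above)
            pairs.append((kp_f[m.queryIdx].pt, kp_s[m.trainIdx].pt))
    return pairs
```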
105. And performing affine transformation processing on the second image according to the first characteristic point pairs to obtain a third image aligned with the first image.
After the first feature point pair is obtained, affine transformation processing may be performed on all pixel points in the second image based on the feature point pairs matched in the first contour image and the second contour image, so as to obtain a third image aligned with the first image. For example, based on the first feature point pairs, calculating a target homography matrix of the second image relative to the first image; and performing affine transformation processing on the second image based on the target homography matrix to obtain a third image aligned with the first image.
A homography transformation describes the position mapping of an object between the world coordinate system and the pixel coordinate system, and the corresponding transformation matrix is called a homography matrix. In the embodiment of the present application, the target homography matrix represents the position mapping relationship of an object between the first image and the second image. Feature point pairs matched between the two images are obtained by registering the second image against the first image (the pixel points representing the same object in the two images form one feature point pair), and the homography matrix is then calculated from the positions of the feature point pairs in the images, where at least three feature point pairs are used for the calculation.
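The following sketch ties steps 104 and 105 together: it estimates the target homography from the matched pairs and warps the full-resolution second image into the first image's frame. Two points are assumptions rather than statements of the disclosure: RANSAC stands in for the feature point screening described below, and the `scale` factor maps coordinates from the downsampled contour images back to full resolution (two passes of multiple 4 in the fig. 2 example).

```python
import cv2
import numpy as np

def align_second_image(first_image, second_image, pairs, scale=16.0):
    """Estimate the target homography of the second image relative to the
    first image from the matched pairs and warp the second image onto the
    first image. `scale` restores coordinates from the downsampled contour
    images to full resolution (assumption)."""
    dst = np.float32([p for p, _ in pairs]) * scale  # points in the first image
    src = np.float32([q for _, q in pairs]) * scale  # points in the second image

    # RANSAC stands in here for the feature point screening described below.
    H, _mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = first_image.shape[:2]
    aligned = cv2.warpPerspective(second_image, H, (w, h))
    return aligned, H
```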
In particular implementation, the present application is not limited by the execution sequence of the described steps, and some steps may be performed in other sequences or simultaneously without conflict.
As can be seen from the above, the image alignment method provided in the embodiment of the present application acquires a first image and a second image obtained by shooting the same shooting scene, performs downsampling processing on the first image and the second image respectively to obtain a third image and a fourth image, obtains a first contour image from the third image and a second contour image from the fourth image based on a contour detection algorithm, performs feature point matching on the first contour image and the second contour image to obtain a first feature point pair, and performs affine transformation processing on the second image based on the first feature point pair to obtain a third image aligned with the first image. In this scheme, contour information is extracted from the downsampled images and feature point pairs are extracted from the contour images, so interference from invalid information is eliminated, the accuracy of the extracted feature point pairs is improved, and the image alignment effect is further improved.
In some embodiments, calculating the target homography matrix for the second image relative to the first image based on the first pairs of feature points comprises: and screening target characteristic point pairs from the first characteristic point pairs according to a characteristic point screening algorithm, and determining a target homography matrix according to the target characteristic point pairs.
For example, any three or more sets of candidate pairs of feature points are selected from the first pair of feature points; calculating a candidate homography matrix of the second image relative to the first image based on the candidate characteristic point pairs, and carrying out affine transformation processing on the characteristic points in the second image based on the candidate homography matrix; judging whether the accuracy of the feature points obtained by affine transformation is greater than a preset threshold value or not according to the feature points in the first image; if not, returning to execute the selection of any three or more groups of candidate characteristic point pairs from the first characteristic point pair; and if so, taking the candidate homography matrix as a target homography matrix.
Because the feature point pairs obtained by matching may have a certain error rate, the feature points are screened in order to further improve the accuracy of image alignment. Any three groups of candidate feature point pairs are randomly selected from the first feature point pairs and a homography matrix is calculated; the coordinates of all the feature points detected in 104 in the second image are transformed with this homography matrix to obtain estimated coordinates in the first image, and the accuracy of the affine transformation is judged from the calculated coordinates and the real coordinates of the feature points in the first image. If the accuracy reaches a preset threshold (for example, 95%), the homography matrix is judged to meet the requirements; otherwise, any three groups of candidate feature point pairs are randomly selected again, the homography matrix is recalculated, and the judgment is repeated until the calculated homography matrix meets the requirements.
In some embodiments, determining whether the accuracy of the feature points obtained by the affine transformation processing is greater than a preset threshold according to the feature points in the first image includes: calculating a mean square value of coordinate differences between the feature points obtained by affine transformation and the feature points at the corresponding positions in the first image; counting the proportion of the number of the pixel points with the coordinate difference mean square value smaller than the preset value in the total number of the pixel points in the first image, and taking the proportion as the accuracy of the feature points obtained by affine transformation; and judging whether the accuracy is greater than a preset threshold value.
In this embodiment, the accuracy of the affine transformation is judged by the mean square value of the coordinate difference between a feature point obtained by the affine transformation and the feature point at the corresponding position in the first image: a smaller mean square value of the coordinate difference indicates that the point is transformed more accurately, and the larger the number of accurately transformed points, the higher the accuracy of the affine transformation.
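A sketch of the screening procedure and the mean-square accuracy criterion described above. The disclosure speaks of selecting three or more candidate pairs; this sketch samples four per iteration because OpenCV's findHomography requires at least four point pairs for a projective model. The error threshold and the iteration cap are illustrative assumptions.

```python
import random
import cv2
import numpy as np

def screen_homography(src_pts, dst_pts, accuracy_threshold=0.95,
                      max_sq_error=4.0, max_iterations=1000):
    """Screen the first feature point pairs: repeatedly pick candidate pairs,
    fit a candidate homography, and accept it once the accuracy of the
    transformed feature points exceeds the preset threshold (e.g. 95%)."""
    src = np.float32(src_pts)  # feature points in the second image
    dst = np.float32(dst_pts)  # matching feature points in the first image
    n = len(src)

    for _ in range(max_iterations):
        idx = random.sample(range(n), 4)           # candidate pairs
        H, _ = cv2.findHomography(src[idx], dst[idx], 0)
        if H is None:
            continue
        # Transform all feature points of the second image with the candidate.
        proj = cv2.perspectiveTransform(src.reshape(-1, 1, 2), H).reshape(-1, 2)
        sq_err = np.sum((proj - dst) ** 2, axis=1)  # squared coordinate difference
        accuracy = np.mean(sq_err < max_sq_error)   # proportion of accurate points
        if accuracy > accuracy_threshold:
            return H                                # target homography matrix
    return None
```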
In some embodiments, downsampling the first image to obtain a third image comprises: carrying out down-sampling processing on the first image for multiple times according to a preset sampling multiple to obtain multiple frames of third images with sequentially reduced resolution; based on a contour detection algorithm, obtaining a first contour image according to the third image, wherein the contour detection algorithm comprises the following steps: performing Gaussian filtering processing on the third image with the lowest resolution; and obtaining a first contour image according to the third image after the Gaussian filtering processing based on a contour detection algorithm. The contour image can be extracted from the downsampled image with the lowest resolution, and the efficiency of image processing can be improved.
Alternatively, in another embodiment, the downsampling the first image to obtain the third image includes: carrying out down-sampling processing on the first image for multiple times according to a preset sampling multiple to obtain multiple frames of third images with sequentially reduced resolution; extracting a first contour image from the third image based on a contour detection algorithm, comprising: extracting first contour information of the first image based on a contour detection algorithm, and extracting a plurality of second contour information from a plurality of frames of third images respectively based on the contour detection algorithm; and combining the first contour information and the plurality of second contour information to obtain a first contour image corresponding to the first image.
In this embodiment, more effective contour information can be extracted by extracting contours from both the original image and the downsampled images. Referring to fig. 2, taking the first image as an example, the first image P10 and the corresponding third images P11 and P12 are first subjected to Gaussian filtering to eliminate the influence of noise and non-texture regions on the detection. The principle of Gaussian filtering is as follows: each pixel in the image is scanned using a Gaussian kernel (e.g., a 2-dimensional convolution operator) and a weighted average is computed, which blurs the image and removes detail and noise. For example, a 3 × 3 Gaussian kernel is used for filtering, that is, a weighted average is computed over a pixel and its neighbors using 3 × 3 weights. Then, contour information is extracted from the first image P10 and from the corresponding third images P11 and P12 by the Sobel operator, and the extracted pieces of contour information are combined into one image to form the first contour image corresponding to the first image. The contour information consists of the pixel values along the contour lines of objects in the image, and a contour image is an image having pixel values only at contour lines. When the contour information is combined, a contour pixel point in a downsampled image is restored to its position in the original image. For example, if the pixel point in the first row and first column of the third image P11 is a contour pixel point and its position in the first image P10 is the second row and second column, then its position in the first contour image needs to be restored to the second row and second column.
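A sketch of the multi-scale contour extraction and merging described in this paragraph, assuming the Sobel operator with a simple magnitude threshold and nearest-neighbour upscaling to restore contour pixels to their positions in the original image; the threshold value and the upscaling choice are assumptions.

```python
import cv2
import numpy as np

def multi_scale_contour_image(image, pyramid, threshold=50):
    """Merge contour information from the original (BGR) image and its
    downsampled frames into a single contour image at full resolution."""
    h, w = image.shape[:2]
    merged = np.zeros((h, w), dtype=np.uint8)

    for frame in [image] + list(pyramid):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        smoothed = cv2.GaussianBlur(gray, (3, 3), 0)  # 3x3 Gaussian kernel
        gx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1)
        magnitude = cv2.magnitude(gx, gy)
        contour = (magnitude > threshold).astype(np.uint8) * 255
        # Restore contour pixels to their positions in the original image
        # (nearest-neighbour upscaling is an illustrative assumption).
        restored = cv2.resize(contour, (w, h), interpolation=cv2.INTER_NEAREST)
        merged = cv2.bitwise_or(merged, restored)
    return merged
```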
In some embodiments, before down-sampling the first image to obtain the third image and down-sampling the second image to obtain the fourth image, the method further includes: counting the brightness of each pixel point of the first image to obtain a first brightness distribution of the first image, and counting the brightness of each pixel point of the second image to obtain a second brightness distribution of the second image; and adjusting the second brightness distribution according to the first brightness distribution, and adjusting the brightness of the second image based on the adjusted second brightness distribution to obtain a second image with brightness aligned with the first image.
In this embodiment, the brightness of the first image and the second image is adjusted by using a histogram equalization algorithm so that the brightness distributions of the two images are balanced at the same level, which can effectively prevent unclear image textures caused by over-brightness or over-darkness. The luminance distribution in the embodiment of the present application may be represented as a luminance distribution histogram. Histogram statistics are performed on the first image and the second image, with every 16 pixel values grouped into one level, so that an 8-bit image is divided into 16 levels. The pixel values of the whole image are counted into these 16 levels to generate the luminance distribution histogram corresponding to the image. Then, image histogram equalization processing is performed.
The histogram equalization processing may be performed on the second image with the first luminance distribution of the first image as a reference. When the luminance of both the first image and the second image is low, histogram equalization processing may be performed on both images. In either case, the result is that the luminance distributions of the two images are balanced at the same level.
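A sketch of adjusting the second image's luminance distribution toward the first image's. It uses full 256-bin histogram specification on grayscale inputs rather than the 16-level statistic described above; that simplification and the function name are assumptions.

```python
import numpy as np

def match_brightness(first_gray, second_gray):
    """Adjust the second image's brightness distribution toward the first
    image's so both distributions sit at the same level (a histogram
    specification sketch on 8-bit grayscale inputs)."""
    hist_1, _ = np.histogram(first_gray.ravel(), 256, (0, 256))
    hist_2, _ = np.histogram(second_gray.ravel(), 256, (0, 256))
    cdf_1 = np.cumsum(hist_1) / first_gray.size
    cdf_2 = np.cumsum(hist_2) / second_gray.size

    # For each grey level of the second image, pick the level of the first
    # image whose cumulative probability is closest from above.
    mapping = np.searchsorted(cdf_1, cdf_2).clip(0, 255).astype(np.uint8)
    return mapping[second_gray]
```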
The method according to the preceding embodiment is illustrated in further detail below by way of example.
Referring to fig. 3, fig. 3 is a second flowchart of an image alignment method according to an embodiment of the invention. The method comprises the following steps:
201. a first image and a second image of the same shooting scene are acquired.
The electronic device may continuously expose a shooting scene, acquire more frames than the number of image frames required for HDR synthesis, and then select several images with the best sharpness as the images to be processed, where the image with the highest sharpness is used as the reference image and denoted as the first image, and the remaining images are used as the images to be aligned and denoted as second images.
For example, the electronic device acquires several frames of images to be processed, labeled A, B, C, D, E, and F respectively. Image A has the highest sharpness, so image A is used as the first image, and the remaining images B, C, D, E, and F are used as second images and aligned with image A.
202. A first luminance distribution histogram of the first image is acquired, and a second luminance distribution histogram of the second image is acquired.
Histogram statistics are performed on the first image and the second image, with every 16 pixel values grouped into one level, so that an 8-bit image is divided into 16 levels. The pixel values of the whole image are counted into these 16 levels to generate the luminance distribution histogram corresponding to the image.
203. And adjusting the brightness distribution of the second image according to the first brightness distribution histogram and the second brightness distribution histogram to obtain a second image with brightness aligned with the first image.
The second luminance distribution histogram is adjusted based on the first luminance distribution histogram by using a histogram equalization algorithm to obtain a third luminance distribution histogram, and the luminance distribution of the second image is adjusted based on the third luminance distribution histogram to obtain a second image whose brightness is aligned with the first image. In this way, the brightness distributions of the two images are balanced at the same level, which effectively prevents unclear image textures caused by over-brightness or over-darkness.
204. And carrying out continuous multi-time downsampling processing on the first image to obtain a plurality of frames of third images with sequentially reduced resolution, and carrying out continuous multi-time downsampling processing on the second image to obtain a plurality of frames of fourth images with sequentially reduced resolution.
205. And obtaining a first contour image according to the third image with the minimum resolution, and obtaining a second contour image according to the fourth image with the minimum resolution.
After the first image and the second image are obtained, downsampling processing is respectively carried out on the first image and the second image to obtain a third image and a fourth image. The number of the third image and the fourth image is not limited. And extracting the image contour according to the image obtained by down sampling to obtain a first contour image and a second contour image.
206. And matching the first contour image with the second contour image to obtain a first characteristic point pair.
207. And screening target characteristic point pairs from the first characteristic point pairs according to a characteristic point screening algorithm, and determining a target homography matrix according to the target characteristic point pairs.
After the first contour image and the second contour image are obtained, feature points in the contour images are detected according to a feature point detection algorithm, feature point pairs in the two frame images are matched, and a homography matrix is calculated according to the feature points. Please refer to the above embodiments, which are not described herein.
208. The second image is divided into a plurality of regions.
Next, taking the alignment of image B with image A as an example, please refer to fig. 4, which is a schematic diagram of image partitioning in the image alignment method according to the embodiment of the present application. Image B is segmented into M x N regions, where M and N may be set according to the required alignment accuracy. By dividing the images into regions and calculating a homography matrix between the images for each region separately, the accuracy of image alignment can be improved.
In some embodiments, for each region, feature points may also be screened in the same manner as for the target homography matrix, and the homography matrix corresponding to the feature point pairs with the highest screening accuracy is then determined as the final homography matrix for that region.
It is to be understood that the first image and the second image have the same resolution, and when the partition processing is performed, the partition processing needs to be performed in the same manner, and the number of the areas is also equal.
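A sketch of splitting the second image into M × N regions and collecting, for each region, the feature point pairs whose second-image point falls inside it; the choice M = N = 4 is purely illustrative.

```python
def group_pairs_by_region(pairs, image_shape, m=4, n=4):
    """Split the second image into m x n regions and collect, for each
    region, the feature point pairs whose second-image point falls inside it."""
    h, w = image_shape[:2]
    regions = {(r, c): [] for r in range(m) for c in range(n)}
    for pt_first, pt_second in pairs:
        x, y = pt_second
        r = min(int(y * m / h), m - 1)   # row index of the region
        c = min(int(x * n / w), n - 1)   # column index of the region
        regions[(r, c)].append((pt_first, pt_second))
    return regions
```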
209. And determining, in each region, the second feature point pairs which conform to the target homography matrix, and calculating a local homography matrix of the second image in each region relative to the first image according to the corresponding second feature point pairs.
The feature points determined in 206 are distributed in the entire image, and after the image B is divided into a plurality of regions, the feature points are distributed in each region of the image B, and for each region, a homography matrix can be calculated according to the feature point pairs corresponding to the feature points in the region.
In this embodiment, for each region, for example the first region in the first row of image B, at least three sets of feature point pairs that conform to the target homography matrix are acquired, and the local homography matrix of that region of image B is calculated from them. In this way, a local homography matrix is obtained for each region.
A feature point pair K12, consisting of a feature point K1 in the second image and a feature point K2 in the first image, conforms to the homography matrix when the mean square value of the coordinate difference between the position K1' obtained by transforming K1 with the homography matrix and the position of K2 in the first image is smaller than a preset value.
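A sketch of computing a local homography per region from the second feature point pairs that conform to the target homography under the mean-square criterion above. Falling back to the target homography when a region has too few conforming pairs is an added assumption, not part of the disclosure.

```python
import cv2
import numpy as np

def local_homographies(regions, target_H, max_sq_error=4.0):
    """For each region, keep the feature point pairs that conform to the
    target homography and fit a local homography from them."""
    result = {}
    for key, pairs in regions.items():
        if len(pairs) < 4:
            result[key] = target_H           # fallback (assumption)
            continue
        dst = np.float32([p for p, _ in pairs]).reshape(-1, 1, 2)  # first image
        src = np.float32([q for _, q in pairs]).reshape(-1, 1, 2)  # second image
        proj = cv2.perspectiveTransform(src, target_H)
        sq_err = np.sum((proj - dst) ** 2, axis=2).ravel()
        keep = sq_err < max_sq_error         # pairs conforming to target_H
        if keep.sum() < 4:
            result[key] = target_H
            continue
        H, _ = cv2.findHomography(src[keep], dst[keep], 0)
        result[key] = H if H is not None else target_H
    return result
```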
210. And performing affine transformation processing on each area of the second image respectively based on the corresponding local homography matrix to obtain a third image aligned with the first image.
In the embodiment of the present application, with the first image as the reference image, a local homography matrix of the second image relative to the first image is calculated for each region, so that each region corresponds to one local homography matrix. Affine transformation is then performed on each region according to its corresponding local homography matrix; hole pixel points may appear at the junctions of regions, and these hole pixel points are filled by bilinear interpolation.
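A sketch of applying the per-region homographies. Instead of forward-warping each region and then filling hole pixel points, it uses a backward-mapping formulation: each output pixel is mapped through the inverse of its region's local homography and the second image is sampled with bilinear interpolation via cv2.remap. This differs in mechanics from the forward warp plus hole filling described above, but relies on the same bilinear interpolation.

```python
import cv2
import numpy as np

def piecewise_align(second_image, homographies, m=4, n=4):
    """Warp the second image region by region using the local homographies
    returned by local_homographies(); m and n must match the partitioning."""
    h, w = second_image.shape[:2]
    map_x = np.zeros((h, w), dtype=np.float32)
    map_y = np.zeros((h, w), dtype=np.float32)

    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    ones = np.ones_like(xs)

    for r in range(m):
        for c in range(n):
            y0, y1 = r * h // m, (r + 1) * h // m
            x0, x1 = c * w // n, (c + 1) * w // n
            H_inv = np.linalg.inv(homographies[(r, c)])
            pts = np.stack([xs[y0:y1, x0:x1], ys[y0:y1, x0:x1],
                            ones[y0:y1, x0:x1]], axis=-1)
            src = pts @ H_inv.T                      # backward-map output pixels
            map_x[y0:y1, x0:x1] = src[..., 0] / src[..., 2]
            map_y[y0:y1, x0:x1] = src[..., 1] / src[..., 2]

    # Bilinear sampling fills the region junctions without leaving holes.
    return cv2.remap(second_image, map_x, map_y, cv2.INTER_LINEAR)
```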
As can be seen from the above, the image alignment method provided in the embodiment of the present application matches the luminance distributions of the two images by histogram equalization, so that the textures of both images are clearly presented. A multi-scale image edge detection method is then used to extract image edges at different image scales, so the extracted edge information is more comprehensive, more accurate, and more robust to noise, which avoids the difficulty of detecting feature points on a gradient difference image in the presence of noise. Feature point screening and region-by-region registration achieve more accurate feature point detection and registration, so a more accurate homography matrix can be obtained, and a more accurately aligned image is obtained based on the interpolation processing. Accurately aligning the images can effectively reduce the ghosting produced when multiple frames are synthesized, and reduce the erroneous texture expansion caused by super-resolution during multi-frame synthesis.
An image alignment apparatus is also provided in an embodiment of the present application. Referring to fig. 5, fig. 5 is a schematic structural diagram of an image alignment apparatus 300 according to an embodiment of the present disclosure. The image alignment apparatus 300 is applied to an electronic device and includes an image acquisition module 301, a down-sampling module 302, a contour detection module 303, a feature matching module 304, and an image alignment module 305, as follows:
an image obtaining module 301, configured to obtain a first image and a second image of the same shooting scene;
a down-sampling module 302, configured to perform down-sampling on the first image to obtain a third image, and perform down-sampling on the second image to obtain a fourth image;
a contour detection module 303, configured to extract a first contour image from the third image and a second contour image from the fourth image based on a contour detection algorithm;
a feature matching module 304, configured to perform feature point matching on the first contour image and the second contour image to obtain a first feature point pair;
an image alignment module 305, configured to perform affine transformation on the second image according to the first feature point pair to obtain a third image aligned with the first image.
In some embodiments, the image alignment module 305 is further configured to:
calculating a target homography matrix of the second image relative to the first image based on the first feature point pairs;
and carrying out affine transformation processing on the second image based on the target homography matrix so as to obtain a third image aligned with the first image.
In some embodiments, the image alignment module 305 is further configured to: selecting any three or more groups of candidate characteristic point pairs from the first characteristic point pair;
calculating a candidate homography matrix of the second image relative to the first image based on the candidate feature point pairs, and performing affine transformation processing on feature points in the second image based on the candidate homography matrix;
judging whether the accuracy of the feature points obtained by the affine transformation is greater than a preset threshold value or not according to the feature points in the first image;
if not, returning to execute the selection of any three or more groups of candidate characteristic point pairs from the first characteristic point pair;
and if so, taking the candidate homography matrix as a target homography matrix.
In some embodiments, the image alignment module 305 is further configured to: calculating a mean square value of coordinate differences between the feature points obtained by affine transformation and the feature points at the corresponding positions in the first image;
counting the proportion of the number of the pixel points with the coordinate difference mean square value smaller than a preset value in the total number of the pixel points in the first image, and taking the proportion as the accuracy of the feature points obtained by the affine transformation;
and judging whether the accuracy is greater than a preset threshold value.
In some embodiments, the image alignment module 305 is further configured to: segmenting the second image into a plurality of regions;
determining, in each region, second feature point pairs corresponding to the feature points that conform to the target homography matrix, and calculating a local homography matrix of the second image in each region according to the corresponding second feature point pairs;
and performing affine transformation processing on each region of the second image respectively based on the corresponding local homography matrix to obtain a third image aligned with the first image.
In some embodiments, the image alignment module 305 is further configured to: performing affine transformation processing on each region of the second image based on the corresponding local homography matrix;
judging whether the second image subjected to affine transformation processing has a hole pixel point;
and if so, filling the hollow pixel points based on a bilinear interpolation algorithm to obtain a third image aligned with the first image.
In some embodiments, the downsampling module 302 is further to: carrying out down-sampling processing on the first image for multiple times according to a preset sampling multiple to obtain multiple frames of third images with sequentially reduced resolution;
the contour detection module 303 is further configured to: performing Gaussian filtering processing on the third image with the lowest resolution; and obtaining a first contour image according to the third image after the Gaussian filtering processing based on a contour detection algorithm.
In some embodiments, the image alignment apparatus 300 further comprises a brightness adjustment module for: counting the brightness of each pixel point of the first image to obtain a first brightness distribution of the first image, and counting the brightness of each pixel point of the second image to obtain a second brightness distribution of the second image; and performing histogram equalization processing on the first image according to the first brightness distribution, and performing histogram equalization processing on the second image according to the second brightness distribution to obtain a first image and a second image with aligned brightness.
In specific implementation, the above modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the above modules may refer to the foregoing method embodiments, which are not described herein again.
It should be noted that the image alignment apparatus provided in the embodiment of the present application and the image alignment method in the foregoing embodiment belong to the same concept, and any method provided in the embodiment of the image alignment method may be run on the image alignment apparatus, and a specific implementation process thereof is described in detail in the embodiment of the image alignment method, and is not described herein again.
As can be seen from the above, the image alignment apparatus provided in the embodiment of the present application acquires a first image and a second image obtained by shooting the same shooting scene, performs downsampling processing on the first image and the second image respectively to obtain a third image and a fourth image, obtains a first contour image from the third image and a second contour image from the fourth image based on a contour detection algorithm, performs feature point matching on the first contour image and the second contour image to obtain a first feature point pair, and performs affine transformation processing on the second image based on the first feature point pair to obtain a third image aligned with the first image. In this scheme, contour information is extracted from the downsampled images and feature point pairs are extracted from the contour images, so interference from invalid information is eliminated, the accuracy of the extracted feature point pairs is improved, and the image alignment effect is further improved.
The embodiment of the application also provides the electronic equipment. The electronic device can be a smart phone, a tablet computer and the like. Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 400 comprises a processor 401 and a memory 402. The processor 401 is electrically connected to the memory 402.
The processor 401 is a control center of the electronic device 400, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or calling a computer program stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device.
Memory 402 may be used to store computer programs and data. The memory 402 stores computer programs containing instructions executable in the processor. The computer program may constitute various functional modules. The processor 401 executes various functional applications and data processing by calling a computer program stored in the memory 402.
In this embodiment, the processor 401 in the electronic device 400 loads instructions corresponding to one or more processes of the computer program into the memory 402 according to the following steps, and the processor 401 runs the computer program stored in the memory 402, so as to implement various functions:
acquiring a first image and a second image of the same shooting scene;
carrying out downsampling processing on the first image to obtain a third image, and carrying out downsampling processing on the second image to obtain a fourth image;
extracting a first contour image from the third image and a second contour image from the fourth image based on a contour detection algorithm;
matching feature points of the first contour image and the second contour image to obtain a first feature point pair;
and performing affine transformation processing on the second image according to the first feature point pair to obtain a third image aligned with the first image.
In some embodiments, please refer to fig. 7, and fig. 7 is a second structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 400 further comprises: radio frequency circuit 403, display 404, control circuit 405, input unit 406, audio circuit 407, sensor 408, and power supply 409. The processor 401 is electrically connected to the radio frequency circuit 403, the display 404, the control circuit 405, the input unit 406, the audio circuit 407, the sensor 408, and the power source 409.
The radio frequency circuit 403 is used for transceiving radio frequency signals to communicate with a network device or other electronic devices through wireless communication.
The display screen 404 may be used to display information entered by or provided to the user as well as various graphical user interfaces of the electronic device, which may be comprised of images, text, icons, video, and any combination thereof.
The control circuit 405 is electrically connected to the display screen 404, and is configured to control the display screen 404 to display information.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 406 may include a fingerprint recognition module.
The audio circuit 407 may provide an audio interface between the user and the electronic device through a speaker, microphone. Wherein the audio circuit 407 comprises a microphone. The microphone is electrically connected to the processor 401. The microphone is used for receiving voice information input by a user.
The sensor 408 is used to collect external environmental information. The sensors 408 may include one or more of ambient light sensors, acceleration sensors, gyroscopes, etc.
The power supply 409 is used to power the various components of the electronic device 400. In some embodiments, the power source 409 may be logically connected to the processor 401 through a power management system, so that functions of managing charging, discharging, and power consumption are implemented through the power management system.
Although not shown in fig. 7, the electronic device 400 may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
In this embodiment, the processor 401 in the electronic device 400 loads instructions corresponding to one or more processes of the computer program into the memory 402 according to the following steps, and the processor 401 runs the computer program stored in the memory 402, so as to implement various functions:
acquiring a first image and a second image of the same shooting scene;
carrying out downsampling processing on the first image to obtain a third image, and carrying out downsampling processing on the second image to obtain a fourth image;
extracting a first contour image from the third image and a second contour image from the fourth image based on a contour detection algorithm;
matching feature points of the first contour image and the second contour image to obtain a first feature point pair;
and performing affine transformation processing on the second image according to the first feature point pair to obtain a third image aligned with the first image.
As can be seen from the above, an embodiment of the present application provides an electronic device. The electronic device acquires a first image and a second image obtained by shooting the same shooting scene, performs downsampling processing on the first image and the second image respectively to obtain a third image and a fourth image, obtains a first contour image from the third image and a second contour image from the fourth image based on a contour detection algorithm, performs feature point matching on the first contour image and the second contour image to obtain a first feature point pair, and performs affine transformation processing on the second image based on the first feature point pair to obtain a third image aligned with the first image. In this scheme, contour information is extracted from the downsampled images and feature point pairs are extracted from the contour images, so interference from invalid information is eliminated, the accuracy of the extracted feature point pairs is improved, and the image alignment effect is further improved.
An embodiment of the present application further provides a storage medium, where a computer program is stored in the storage medium, and when the computer program runs on a computer, the computer executes the image alignment method according to any of the above embodiments.
It should be noted that all or part of the steps in the methods of the above embodiments may be implemented by instructing relevant hardware through a computer program, and the computer program may be stored in a computer-readable storage medium, which may include, but is not limited to: a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
Furthermore, the terms "first", "second", and "third", etc. in this application are used to distinguish different objects, and are not used to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to only those steps or modules listed, but rather, some embodiments may include other steps or modules not listed or inherent to such process, method, article, or apparatus.
The image alignment method, the image alignment device, the storage medium, and the electronic device provided in the embodiments of the present application are described in detail above. The principle and the implementation of the present application are explained herein by applying specific examples, and the above description of the embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (11)

1. An image alignment method, comprising:
acquiring a first image and a second image of the same shooting scene;
carrying out downsampling processing on the first image to obtain a third image, and carrying out downsampling processing on the second image to obtain a fourth image;
extracting a first contour image from the third image and a second contour image from the fourth image based on a contour detection algorithm;
matching feature points of the first contour image and the second contour image to obtain a first feature point pair;
and performing affine transformation processing on the second image according to the first feature point pair to obtain a third image aligned with the first image.
2. The image alignment method according to claim 1, wherein performing affine transformation processing on the second image according to the first feature point pairs to obtain a third image aligned with the first image, includes:
calculating a target homography matrix of the second image relative to the first image based on the first feature point pairs;
and carrying out affine transformation processing on the second image based on the target homography matrix so as to obtain a third image aligned with the first image.
3. The image alignment method of claim 2, wherein said calculating a target homography matrix of the second image relative to the first image based on the first feature point pair comprises:
selecting any three or more groups of candidate feature point pairs from the first feature point pair;
calculating a candidate homography matrix of the second image relative to the first image based on the candidate feature point pairs, and performing affine transformation processing on feature points in the second image based on the candidate homography matrix;
determining, according to the feature points in the first image, whether the accuracy of the feature points obtained by the affine transformation processing is greater than a preset threshold;
if not, returning to the step of selecting any three or more groups of candidate feature point pairs from the first feature point pair;
and if so, taking the candidate homography matrix as a target homography matrix.
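
One way to read the loop of claim 3 is as a RANSAC-style search. The sketch below assumes `pairs` is a list of (point_in_second, point_in_first) tuples and uses the hypothetical helper `accuracy()`, a possible form of which is sketched after claim 4; four pairs are drawn per iteration since four are needed for an exact homography.

```python
import random
import cv2
import numpy as np

def find_target_homography(pairs, threshold=0.8, max_iters=1000):
    all_src = np.float32([p[0] for p in pairs]).reshape(-1, 1, 2)
    all_dst = np.float32([p[1] for p in pairs]).reshape(-1, 1, 2)
    for _ in range(max_iters):
        # Select candidate feature point pairs from the first feature point pair.
        sample = random.sample(pairs, 4)
        src = np.float32([p[0] for p in sample])
        dst = np.float32([p[1] for p in sample])
        candidate = cv2.getPerspectiveTransform(src, dst)

        # Transform the second image's feature points with the candidate matrix.
        projected = cv2.perspectiveTransform(all_src, candidate)

        # Keep the candidate once its accuracy clears the preset threshold;
        # otherwise loop back and select another group of candidate pairs.
        if accuracy(projected, all_dst) > threshold:
            return candidate
    return None
```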
4. The image alignment method according to claim 3, wherein said determining, according to the feature points in the first image, whether the accuracy of the feature points obtained by the affine transformation processing is greater than a preset threshold comprises:
calculating a mean square value of coordinate differences between the feature points obtained by the affine transformation processing and the feature points at the corresponding positions in the first image;
counting the proportion of the number of pixel points whose mean square value of coordinate differences is smaller than a preset value to the total number of pixel points in the first image, and taking the proportion as the accuracy of the feature points obtained by the affine transformation processing;
and judging whether the accuracy is greater than a preset threshold value.
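
A possible form of the check in claim 4, written against the matched feature points rather than every pixel of the first image (an assumption of this sketch), with `tol` standing in for the preset value:

```python
import numpy as np

def accuracy(projected, reference, tol=4.0):
    # Mean square value of the x/y coordinate differences for each point.
    diff = projected.reshape(-1, 2) - reference.reshape(-1, 2)
    mse = np.mean(diff ** 2, axis=1)
    # Proportion of points whose mean square value is below the preset value.
    return float(np.count_nonzero(mse < tol)) / len(mse)
```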
5. The image alignment method according to claim 2, wherein performing affine transformation processing on the second image based on the target homography matrix to obtain a third image aligned with the first image comprises:
segmenting the second image into a plurality of regions;
determining, in each region, a second feature point pair corresponding to the feature points of the target homography matrix, and calculating a local homography matrix of the second image in each region according to the corresponding second feature point pair;
and performing affine transformation processing on each region of the second image respectively based on the corresponding local homography matrix to obtain a third image aligned with the first image.
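
A rough sketch of the per-region variant of claim 5, assuming the second image is cut into a regular grid, that `pairs` holds (point_in_second, point_in_first) matches in full-resolution coordinates, and that the inputs are 3-channel images; the grid size and RANSAC threshold are arbitrary.

```python
import cv2
import numpy as np

def warp_by_regions(second, first_shape, pairs, grid=(2, 2)):
    h, w = second.shape[:2]
    out_h, out_w = first_shape[:2]
    out = np.zeros((out_h, out_w, second.shape[2]), dtype=second.dtype)
    region_h, region_w = h // grid[0], w // grid[1]

    for row in range(grid[0]):
        for col in range(grid[1]):
            x0, y0 = col * region_w, row * region_h
            # Second feature point pairs falling inside this region.
            local = [(s, d) for s, d in pairs
                     if x0 <= s[0] < x0 + region_w and y0 <= s[1] < y0 + region_h]
            if len(local) < 4:
                continue  # too few points for a local homography
            src = np.float32([s for s, _ in local])
            dst = np.float32([d for _, d in local])
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            if H is None:
                continue

            # Warp only this region with its local homography and composite it.
            region = np.zeros_like(second)
            region[y0:y0 + region_h, x0:x0 + region_w] = \
                second[y0:y0 + region_h, x0:x0 + region_w]
            warped = cv2.warpPerspective(region, H, (out_w, out_h))
            out = np.where(warped > 0, warped, out)
    return out
```

Because each region is warped with its own matrix, seams and unmapped pixels can appear between regions, which is exactly what the hole handling of claim 6 addresses.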
6. The image alignment method according to claim 5, wherein performing affine transformation processing on each region of the second image based on the corresponding local homography matrix to obtain a third image aligned with the first image comprises:
performing affine transformation processing on each region of the second image based on the corresponding local homography matrix;
judging whether the second image subjected to the affine transformation processing has hole pixel points;
and if so, filling the hole pixel points based on a bilinear interpolation algorithm to obtain a third image aligned with the first image.
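
The hole handling of claim 6 could look like the sketch below: pixels that no region maps onto (`valid` is a hypothetical boolean mask produced alongside the warp) are filled with a distance-weighted, bilinear-style blend of the nearest valid neighbours in the four axis directions; cv2.inpaint would be a ready-made alternative.

```python
import numpy as np

def fill_holes(warped, valid):
    # warped: HxWxC image after the per-region warps; valid: HxW bool mask.
    out = warped.astype(np.float32)
    h, w = valid.shape
    for y, x in zip(*np.where(~valid)):
        samples, weights = [], []
        # Walk in each axis direction until a valid pixel is found.
        for dy, dx in ((0, -1), (0, 1), (-1, 0), (1, 0)):
            yy, xx = y + dy, x + dx
            while 0 <= yy < h and 0 <= xx < w and not valid[yy, xx]:
                yy += dy
                xx += dx
            if 0 <= yy < h and 0 <= xx < w:
                samples.append(out[yy, xx])
                weights.append(1.0 / (abs(yy - y) + abs(xx - x)))
        if samples:
            # Distance-weighted blend of the neighbouring valid pixels.
            out[y, x] = sum(wt * s for wt, s in zip(weights, samples)) / sum(weights)
    return out.astype(warped.dtype)
```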
7. The image alignment method of claim 1, wherein the down-sampling the first image to obtain a third image comprises:
performing down-sampling processing on the first image multiple times according to a preset sampling multiple to obtain a plurality of frames of the third image with sequentially decreasing resolutions;
the extracting a first contour image from the third image based on a contour detection algorithm comprises:
performing Gaussian filtering processing on the third image with the lowest resolution;
and obtaining a first contour image according to the third image after the Gaussian filtering processing based on a contour detection algorithm.
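
A minimal sketch of the multi-scale path in claim 7, assuming OpenCV; the pyramid depth, Gaussian kernel, and Canny thresholds are choices of this sketch.

```python
import cv2

def contour_from_pyramid(image, levels=3):
    # Repeated downsampling, halving the resolution at each level.
    pyramid = [image]
    for _ in range(levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))

    # Gaussian filtering on the lowest-resolution frame, then contour detection.
    lowest = cv2.cvtColor(pyramid[-1], cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(lowest, (5, 5), 1.5)
    return cv2.Canny(blurred, 50, 150)
```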
8. The image alignment method according to claim 1, wherein before the down-sampling the first image to obtain a third image and the down-sampling the second image to obtain a fourth image, the method further comprises:
counting the brightness of each pixel point of the first image to obtain a first brightness distribution of the first image, and counting the brightness of each pixel point of the second image to obtain a second brightness distribution of the second image;
and adjusting the second brightness distribution according to the first brightness distribution, and adjusting the brightness of the second image based on the adjusted second brightness distribution, so as to obtain a second image whose brightness is aligned with that of the first image.
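
Claim 8's brightness alignment can be read as histogram matching of the second image's luminance to the first. The sketch below works on the Y channel of a YCrCb conversion, which is an implementation choice of this sketch rather than something stated in the claim.

```python
import cv2
import numpy as np

def match_brightness(first, second):
    ycc1 = cv2.cvtColor(first, cv2.COLOR_BGR2YCrCb)
    ycc2 = cv2.cvtColor(second, cv2.COLOR_BGR2YCrCb)

    # First and second brightness distributions as cumulative histograms.
    hist1, _ = np.histogram(ycc1[..., 0], bins=256, range=(0, 256))
    hist2, _ = np.histogram(ycc2[..., 0], bins=256, range=(0, 256))
    cdf1 = np.cumsum(hist1) / hist1.sum()
    cdf2 = np.cumsum(hist2) / hist2.sum()

    # Map each second-image level to the first-image level with the same rank.
    lut = np.interp(cdf2, cdf1, np.arange(256)).astype(np.uint8)
    ycc2[..., 0] = lut[ycc2[..., 0]]
    return cv2.cvtColor(ycc2, cv2.COLOR_YCrCb2BGR)
```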
9. An image alignment apparatus, comprising:
the image acquisition module is used for acquiring a first image and a second image of the same shooting scene;
the down-sampling module is used for performing down-sampling processing on the first image to obtain a third image and performing down-sampling processing on the second image to obtain a fourth image;
a contour detection module for extracting a first contour image from the third image and a second contour image from the fourth image based on a contour detection algorithm;
the feature matching module is used for matching feature points of the first contour image and the second contour image to obtain a first feature point pair;
and the image alignment module is used for performing affine transformation processing on the second image according to the first feature point pair so as to obtain a third image aligned with the first image.
10. A storage medium having stored thereon a computer program, characterized in that, when the computer program is run on a computer, it causes the computer to execute the image alignment method according to any one of claims 1 to 8.
11. An electronic device comprising a processor and a memory, the memory storing a computer program, wherein the processor is configured to execute the image alignment method according to any one of claims 1 to 8 by calling the computer program.
CN201911253884.0A 2019-12-09 2019-12-09 Image alignment method and device, storage medium and electronic equipment Pending CN111028276A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911253884.0A CN111028276A (en) 2019-12-09 2019-12-09 Image alignment method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911253884.0A CN111028276A (en) 2019-12-09 2019-12-09 Image alignment method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN111028276A true CN111028276A (en) 2020-04-17

Family

ID=70208324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911253884.0A Pending CN111028276A (en) 2019-12-09 2019-12-09 Image alignment method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111028276A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105262958A (en) * 2015-10-15 2016-01-20 电子科技大学 Panoramic feature splicing system with virtual viewpoint and method thereof
CN106257535A (en) * 2016-08-11 2016-12-28 河海大学常州校区 Electrical equipment based on SURF operator is infrared and visible light image registration method
CN108154526A (en) * 2016-12-06 2018-06-12 奥多比公司 The image alignment of burst mode image
CN107301661A (en) * 2017-07-10 2017-10-27 中国科学院遥感与数字地球研究所 High-resolution remote sensing image method for registering based on edge point feature
CN107454330A (en) * 2017-08-24 2017-12-08 维沃移动通信有限公司 A kind of image processing method, mobile terminal and computer-readable recording medium
CN109166077A (en) * 2018-08-17 2019-01-08 广州视源电子科技股份有限公司 Image alignment method and device, readable storage medium and computer equipment
CN109785371A (en) * 2018-12-19 2019-05-21 昆明理工大学 A kind of sun image method for registering based on normalized crosscorrelation and SIFT
CN109816619A (en) * 2019-01-28 2019-05-28 努比亚技术有限公司 Image interfusion method, device, terminal and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QI, Xi et al.: "Remote Sensing Image Registration Algorithm Combining SIFT and Delaunay Triangulation", Computer Systems & Applications, pages 163-164 *
QI, Xi et al.: "Remote Sensing Image Registration Algorithm Combining SIFT and Delaunay Triangulation", pages 163-164 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111639655A (en) * 2020-05-20 2020-09-08 北京百度网讯科技有限公司 Image local information generation method and device, electronic equipment and storage medium
CN111639655B (en) * 2020-05-20 2023-10-13 北京百度网讯科技有限公司 Image local information generation method, device, electronic equipment and storage medium
CN114096994A (en) * 2020-05-29 2022-02-25 北京小米移动软件有限公司南京分公司 Image alignment method and device, electronic equipment and storage medium
CN112201000A (en) * 2020-10-10 2021-01-08 广东省构建工程建设有限公司 Dynamic fire monitoring system and method applied to construction stage
CN113674220A (en) * 2021-07-28 2021-11-19 浙江大华技术股份有限公司 Image difference detection method, detection device and storage medium
CN114298902A (en) * 2021-12-02 2022-04-08 上海闻泰信息技术有限公司 An image alignment method, apparatus, electronic device and storage medium
CN114429480A (en) * 2022-01-26 2022-05-03 Oppo广东移动通信有限公司 Image processing method and device, chip and electronic equipment
CN114390206A (en) * 2022-02-10 2022-04-22 维沃移动通信有限公司 Shooting method, device and electronic device
CN114821030A (en) * 2022-04-11 2022-07-29 苏州振旺光电有限公司 Planet image processing method, system and device
CN118887428A (en) * 2024-09-29 2024-11-01 奥谱天成(成都)信息科技有限公司 Image alignment method, device, storage medium and electronic device

Similar Documents

Publication Publication Date Title
CN111028276A (en) Image alignment method and device, storage medium and electronic equipment
US11882357B2 (en) Image display method and device
US11138709B2 (en) Image fusion processing module
WO2022021999A1 (en) Image processing method and image processing apparatus
US10853927B2 (en) Image fusion architecture
CN108898567A (en) Image denoising method, apparatus and system
CN111091590A (en) Image processing method, image processing device, storage medium and electronic equipment
US9058655B2 (en) Region of interest based image registration
US10880455B2 (en) High dynamic range color conversion using selective interpolation
CN110930329A (en) Starry sky image processing method and device
US11024006B2 (en) Tagging clipped pixels for pyramid processing in image signal processor
CN112602088A (en) Method, system and computer readable medium for improving quality of low light image
CN111080683B (en) Image processing method, device, storage medium and electronic equipment
CN114096994A (en) Image alignment method and device, electronic equipment and storage medium
CN110728705B (en) Image processing method, device, storage medium and electronic device
US20240205363A1 (en) Sliding Window for Image Keypoint Detection and Descriptor Generation
CN111091513B (en) Image processing method, device, computer-readable storage medium, and electronic device
US20230016350A1 (en) Configurable keypoint descriptor generation
CN115564694A (en) Image processing method and device, computer-readable storage medium, and electronic device
CN108805883B (en) A kind of image segmentation method, image segmentation device and electronic equipment
CN115205779B (en) People Detection Method Based on Crowd Image Template
CN111754411A (en) Image noise reduction method, image noise reduction device and terminal equipment
US11810266B2 (en) Pattern radius adjustment for keypoint descriptor generation
CN112329606B (en) Living body detection method, living body detection device, electronic equipment and readable storage medium
CN111353929A (en) Image processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20241008