CN111861959A - An ultra-long depth of field and ultra-wide dynamic image synthesis algorithm - Google Patents
- Publication number
- CN111861959A (application CN202010681348.7A, filed 2020-07-15)
- Authority
- CN
- China
- Prior art keywords
- ultra
- wide dynamic
- image
- pixel
- long depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20208—High dynamic range [HDR] image processing
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an ultra-long depth-of-field, ultra-wide dynamic-range image synthesis algorithm comprising the following steps: S1: acquire at least two images; S2: compute the sharpness C, saturation S, and exposure level E of each image; S3: compute the weight W of each image from its sharpness C, saturation S, and exposure level E; S4: perform a weighted average over all images using the weights W to synthesize the ultra-long depth-of-field, ultra-wide dynamic-range image. The algorithm operates directly on RGB images, so no RAW-format processing is required and processing is convenient; its computational complexity is low and it needs no tone mapping, so it can run in real time; and because it considers sharpness, saturation, and exposure simultaneously, it synthesizes an ultra-long depth of field and an ultra-wide dynamic range at the same time while preserving color information well.
Description
Technical Field
The present invention relates to an image synthesis algorithm, and in particular to an ultra-long depth-of-field, ultra-wide dynamic-range image synthesis algorithm.
Background Art
Owing to the limitations of its physical and digital characteristics, a camera has a finite depth of field and a finite dynamic range, that is, a limited range of distances that can be resolved sharply and a limited span of brightness levels that can be distinguished. In fields that demand a large depth of field and a wide dynamic range, these limits cause considerable trouble. For example, when an endoscopic camera system is used clinically, the limited depth of field makes it difficult to see clearly, at the same time, tissues at markedly different working distances within the surgical field of view, so the working distance or focal length must be adjusted repeatedly, which is inconvenient to operate; likewise, the limited dynamic range makes it difficult to see clearly, at the same time, objects with a large brightness difference, causing discomfort for the surgeon.
Existing high-dynamic-range (HDR) synthesis algorithms weight multiple RAW-format images with different exposures according to their exposure times; the synthesized HDR image is represented by a matrix with a very wide value range and is finally mapped to 0–255 by tone mapping for correct color display. Such algorithms have high computational complexity, making it difficult to meet the 50- or 60-frame real-time display requirement of a camera system. In addition, they must operate on RAW-format images, so the commonly used RGB images have to be converted to RAW format first, further increasing the computational cost. Moreover, the quality of the synthesized image depends mainly on the tone-mapping method, and tone mapping is prone to loss of detail, halos, color casts, and similar artifacts. Most importantly, such algorithms only address the camera's limited dynamic range, not its limited depth of field.
The existing technology therefore still needs improvement and development.
Summary of the Invention
The object of the present invention is to provide an ultra-long depth-of-field, ultra-wide dynamic-range image synthesis algorithm that produces an output image with both an ultra-long depth of field and an ultra-wide dynamic range.
The technical scheme of the present invention is as follows: an ultra-long depth-of-field, ultra-wide dynamic-range image synthesis algorithm comprising the following steps:

S1: acquire at least two images;

S2: compute the sharpness C, saturation S, and exposure level E of each image;

S3: compute the weight W of each image from its sharpness C, saturation S, and exposure level E;

S4: perform a weighted average over all images using the weights W to synthesize the ultra-long depth-of-field, ultra-wide dynamic-range image.
In the algorithm above, in S1, the images include an image in which the near field is sharp and not overexposed and an image in which the far field is sharp and not underexposed.

In the algorithm above, in S2, a 3×3 Laplacian operator is slid over every pixel of each image to obtain the sharpness C(i, j) of each pixel.

In the algorithm above, the sharpness C(i, j) of each pixel is computed by the following formula:

f(i, j) is the gray value of the pixel at coordinates (i, j).
In the algorithm above, in S2, the saturation S(i, j) of each pixel is computed from the pixel's RGB values.

In the algorithm above, the saturation S(i, j) of each pixel is computed by the following formula:

where R(i, j), G(i, j), and B(i, j) are the red, green, and blue components of the pixel at coordinates (i, j), and M(i, j) is the gray value of the pixel at coordinates (i, j).
In the algorithm above, the exposure level E(i, j) of each pixel is computed from the pixel's RGB values.

In the algorithm above, the exposure level E(i, j) of each pixel is computed by the following formula:

where G is a preset value, and R(i, j), G(i, j), and B(i, j) are the red, green, and blue components of the pixel at coordinates (i, j).
In the algorithm above, in S3, after the sharpness C, saturation S, and exposure level E of every pixel in each image have been obtained, the weight W of every pixel in each image is computed; the weight W(i, j) of each pixel is computed from C(i, j), S(i, j), and E(i, j) by the following formula:
In the algorithm above, in S4, the weights of the pixels at each coordinate of all the images are used, coordinate by coordinate, to take a weighted average, yielding the pixel value at the same coordinate in the ultra-long depth-of-field, ultra-wide dynamic-range image; assembling all such pixels produces the synthesized image. The weighting formula for each pixel is:

where Wn is the weight of the pixel at coordinates (i, j) in the nth image, and In(i, j) is the pixel value at coordinates (i, j) in the nth image.
Beneficial effects of the present invention: by providing an ultra-long depth-of-field, ultra-wide dynamic-range image synthesis algorithm, the invention can synthesize directly on RGB images, so no RAW-format processing is needed and processing is convenient; the algorithm's computational complexity is low and no tone mapping is required, so it can run in real time; and because it considers sharpness, saturation, and exposure simultaneously, it synthesizes an ultra-long depth of field and an ultra-wide dynamic range at the same time while preserving color information well.
Brief Description of the Drawings
FIG. 1 is a flow chart of the steps of the ultra-long depth-of-field, ultra-wide dynamic-range image synthesis algorithm of the present invention.
Detailed Description of Embodiments
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements, or elements with the same or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary, serve only to explain the present invention, and are not to be construed as limiting it.

In the description of the present invention, it should be understood that orientation or positional terms such as "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", and "counterclockwise" are based on the orientations or positional relationships shown in the drawings, are used only to facilitate and simplify the description of the invention, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the invention. In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or the number of the technical features concerned. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present invention, "plurality" means two or more, unless expressly and specifically defined otherwise.

In the description of the present invention, it should also be noted that, unless expressly specified and limited otherwise, the terms "mounted", "connected", and "coupled" are to be understood broadly: for example, a connection may be fixed, detachable, or integral; mechanical, electrical, or a communication link; direct, or indirect through an intermediary; or the internal communication of two elements or an interaction between two elements. For those of ordinary skill in the art, the specific meanings of these terms in the present invention can be understood according to the specific circumstances.

In the present invention, unless expressly specified and limited otherwise, a first feature being "on" or "under" a second feature may mean that the two features are in direct contact, or that they are in contact through an additional feature between them. Moreover, a first feature being "on", "over", or "above" a second feature includes the first feature being directly above or obliquely above the second feature, or merely means that the first feature is at a higher level than the second; a first feature being "under", "below", or "beneath" a second feature includes the first feature being directly below or obliquely below the second feature, or merely means that the first feature is at a lower level than the second.

The disclosure below provides many different embodiments or examples for implementing different structures of the present invention. To simplify the disclosure, the components and arrangements of specific examples are described below; of course, they are merely examples and are not intended to limit the invention. Furthermore, reference numerals and/or letters may be repeated in different examples; such repetition is for simplicity and clarity and does not in itself indicate a relationship between the various embodiments and/or arrangements discussed. In addition, examples of various specific processes and materials are provided, but a person of ordinary skill in the art will recognize the applicability of other processes and/or the use of other materials.
As shown in FIG. 1, the ultra-long depth-of-field, ultra-wide dynamic-range image synthesis algorithm comprises the following steps:

S1: Acquire at least two images.

Specifically, at least one image image1 in which the near field is sharp and not overexposed, and at least one image image2 in which the far field is sharp and not underexposed, are acquired for synthesis.

S2: Compute the sharpness C, saturation S, and exposure level E of each image.
The sharpness is obtained by sliding a 3×3 Laplacian operator over every pixel of the image, yielding the sharpness C(i, j) of each pixel; the formula is as follows:

f(i, j) is the gray value of the pixel at coordinates (i, j). The larger the value of C, the sharper the pixel.
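The sharpness formula is rendered as an image in the original publication and is not reproduced in this text; a minimal sketch, assuming the common choice of the absolute response of the 4-neighbour 3×3 Laplacian kernel, is:

```python
import numpy as np

# 3x3 Laplacian kernel (4-neighbour form). The patent's exact kernel is
# not shown in the text, so this standard choice is an assumption.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def sharpness(gray):
    """Per-pixel sharpness C(i, j): absolute 3x3 Laplacian response of a
    grayscale image with values in [0, 1]. Borders use edge replication."""
    h, w = gray.shape
    padded = np.pad(gray, 1, mode="edge")
    out = np.zeros((h, w), dtype=np.float64)
    # Correlate with the (symmetric) kernel by shifted accumulation.
    for di in range(3):
        for dj in range(3):
            out += LAPLACIAN[di, dj] * padded[di:di + h, dj:dj + w]
    return np.abs(out)
```

On a perfectly flat image the response is zero everywhere, while a brightness step produces a large response at the edge, which is why a high C marks in-focus detail.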
The saturation S(i, j) of each pixel is computed from the pixel's RGB values; the formula is as follows:

where R(i, j), G(i, j), and B(i, j) are the red, green, and blue components of the pixel at coordinates (i, j), and M(i, j) is its gray value. The larger the value of S, the higher the saturation.
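The saturation formula is likewise an image in the original and not reproduced here; since the text defines it in terms of R, G, B and the gray value M(i, j), a plausible sketch is the standard deviation of the three channels around their mean, as used in Mertens-style exposure fusion (this specific form is an assumption):

```python
import numpy as np

def saturation(rgb):
    """Per-pixel saturation S(i, j): standard deviation of the R, G, B
    components around the pixel's mean intensity M(i, j).
    rgb: H x W x 3 array with values in [0, 1]."""
    m = rgb.mean(axis=2, keepdims=True)            # M(i, j)
    return np.sqrt(((rgb - m) ** 2).mean(axis=2))  # per-channel spread
```

A gray pixel (R = G = B) scores zero, while a pure primary color scores highest, matching the stated property that larger S means higher saturation.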
The exposure level E(i, j) of each pixel is computed from the pixel's RGB values; the formula is as follows:

where G is a preset value, which may be set to 0.2. The larger the value of E, the better the exposure.

The values R(i, j), G(i, j), B(i, j), and f(i, j) above range from 0 to 1; if they do not, they must be normalized.
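The exposure formula is also an image in the original. Given that channels are normalized to [0, 1] and the preset G = 0.2, a sketch consistent with Mertens-style "well-exposedness" (a Gaussian around mid-gray per channel, scores multiplied; the exact form is an assumption) is:

```python
import numpy as np

def exposedness(rgb, g=0.2):
    """Per-pixel exposure level E(i, j): each channel is scored by a
    Gaussian centred at 0.5 (mid-grey) with width g, and the three
    channel scores are multiplied. g = 0.2 matches the patent's preset
    value G. rgb: H x W x 3 array with values in [0, 1]."""
    per_channel = np.exp(-((rgb - 0.5) ** 2) / (2.0 * g ** 2))
    return per_channel.prod(axis=2)
```

A mid-gray pixel scores 1.0 (best exposure), while pixels near pure black or pure white score close to zero, so over- and underexposed regions are down-weighted in the fusion.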
S3: Compute the weight W of each image from its sharpness C, saturation S, and exposure level E.

After the sharpness C, saturation S, and exposure level E of every pixel in each image have been obtained, the weight W of every pixel in each image is computed; the weight W(i, j) of each pixel is computed from C(i, j), S(i, j), and E(i, j) according to the following formula:
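The combination formula for W(i, j) is not reproduced in this text; a plain per-pixel product of the three measures, as in Mertens-style exposure fusion, is assumed in this sketch:

```python
import numpy as np

def weight(C, S, E):
    """Per-pixel weight W(i, j) combining sharpness C, saturation S, and
    exposure level E. A simple product is assumed here, since the
    patent's formula image is not reproduced in the text. All inputs are
    H x W arrays."""
    return C * S * E
```

With a product, a pixel must score well on all three measures to receive a large weight; any single near-zero measure suppresses it.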
S4: Perform a weighted average over all images using the weights W to synthesize the ultra-long depth-of-field, ultra-wide dynamic-range image.

Suppose the weights of the pixels at a given coordinate in the n images are W1…Wn. A weighted average of the pixel values at that coordinate over the n images yields the pixel value at the same coordinate in the synthesized image; assembling all such pixels produces the ultra-long depth-of-field, ultra-wide dynamic-range image. The weighting formula for each pixel is as follows:

where I1(i, j)…In(i, j) are the pixel values at coordinates (i, j) in the 1st through nth images.
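The weighted-average formula itself is an image in the original; a sketch assuming the usual normalized form, fused(i, j) = Σ Wn(i, j)·In(i, j) / Σ Wn(i, j), is:

```python
import numpy as np

def fuse(images, weights, eps=1e-12):
    """S4: per-pixel weighted average of n images.
    images: list of n H x W x 3 arrays with values in [0, 1].
    weights: list of n H x W weight maps W1..Wn.
    The weights are normalised to sum to 1 at every pixel; eps guards
    against division by zero where all weights vanish. The normalised
    form is an assumption, as the patent's formula image is not shown."""
    w = np.stack(weights, axis=0)[..., None]  # n x H x W x 1
    imgs = np.stack(images, axis=0)           # n x H x W x 3
    return (w * imgs).sum(axis=0) / (w.sum(axis=0) + eps)
```

For example, fusing a black and a white image with equal weights yields mid-gray everywhere; in practice the weight maps concentrate on whichever input is sharp and well exposed at each pixel.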
In the description of this specification, reference to terms such as "one embodiment", "some embodiments", "an exemplary embodiment", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to these terms do not necessarily refer to the same embodiment or example, and the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
It should be understood that the application of the present invention is not limited to the examples above; a person of ordinary skill in the art may make improvements or modifications in light of the description above, and all such improvements and modifications fall within the protection scope of the appended claims of the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010681348.7A CN111861959A (en) | 2020-07-15 | 2020-07-15 | An ultra-long depth of field and ultra-wide dynamic image synthesis algorithm |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN111861959A true CN111861959A (en) | 2020-10-30 |
Family
ID=72984318
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010681348.7A Pending CN111861959A (en) | 2020-07-15 | 2020-07-15 | An ultra-long depth of field and ultra-wide dynamic image synthesis algorithm |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111861959A (en) |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101021945A (en) * | 2007-03-23 | 2007-08-22 | 北京中星微电子有限公司 | Image composing method and device |
| US20130070965A1 (en) * | 2011-09-21 | 2013-03-21 | Industry-University Cooperation Foundation Sogang University | Image processing method and apparatus |
| CN104616273A (en) * | 2015-01-26 | 2015-05-13 | 电子科技大学 | Multi-exposure image fusion method based on Laplacian pyramid decomposition |
| CN106030614A (en) * | 2014-04-22 | 2016-10-12 | 史內普艾德有限公司 | System and method for controlling one camera based on processing of images captured by another camera |
| CN106408518A (en) * | 2015-07-30 | 2017-02-15 | 展讯通信(上海)有限公司 | Image fusion method and apparatus, and terminal device |
| CN106550194A (en) * | 2016-12-26 | 2017-03-29 | 珠海格力电器股份有限公司 | Photographing method and device and mobile terminal |
| CN107220956A (en) * | 2017-04-18 | 2017-09-29 | 天津大学 | A kind of HDR image fusion method of the LDR image based on several with different exposures |
| CN108921806A (en) * | 2018-08-07 | 2018-11-30 | Oppo广东移动通信有限公司 | Image processing method, image processing device and terminal equipment |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110827200B (en) | Image super-resolution reconstruction method, image super-resolution reconstruction device and mobile terminal | |
| JP5529048B2 (en) | Interpolation system and method | |
| CN112381743B (en) | Image processing methods, apparatus, devices and storage media | |
| JP5291084B2 (en) | Edge mapping incorporating panchromatic pixels | |
| US8947501B2 (en) | Scene enhancements in off-center peripheral regions for nonlinear lens geometries | |
| JP5199471B2 (en) | Color constancy method and system | |
| WO2014044045A1 (en) | Image processing method and device | |
| WO2019183813A1 (en) | Image capture method and device | |
| CN114240767B (en) | Image wide dynamic range processing method and device based on exposure fusion | |
| WO2022253014A1 (en) | Underwater image color restoration method and apparatus | |
| TW201432616A (en) | Image capturing device and image processing method thereof | |
| WO2020107995A1 (en) | Imaging method and apparatus, electronic device, and computer readable storage medium | |
| WO2022116989A1 (en) | Image processing method and apparatus, and device and storage medium | |
| JP2012124877A (en) | Image processing device | |
| WO2020134123A1 (en) | Panoramic photographing method and device, camera and mobile terminal | |
| WO2019184667A1 (en) | Color correction method for panoramic image and electronic device | |
| CN114283100A (en) | High dynamic range image synthesis and tone mapping method and electronic equipment | |
| CN112381724A (en) | Image width dynamic enhancement method based on multi-exposure fusion framework | |
| JP2020145553A (en) | Image processing equipment, image processing methods, and programs | |
| TWI694722B (en) | Exposure level control, system and method for high dynamic range imaging | |
| WO2019196109A1 (en) | Method and apparatus for suppressing image pseudo-colour | |
| CN111861959A (en) | An ultra-long depth of field and ultra-wide dynamic image synthesis algorithm | |
| JP5952574B2 (en) | Image processing apparatus and control method thereof | |
| JP2022086311A (en) | Image pickup device, control method of image pickup device, and program | |
| CN118200748A (en) | Image processing method, electronic device, storage medium and program product |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 2020-10-30 |