
CN107563964A - Rapid splicing method for large-area-array sub-meter-level night scene remote sensing images - Google Patents

Rapid splicing method for large-area-array sub-meter-level night scene remote sensing images

Info

Publication number
CN107563964A
CN107563964A (application CN201710722702.4A)
Authority
CN
China
Prior art keywords
image
rpc
night scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710722702.4A
Other languages
Chinese (zh)
Other versions
CN107563964B (en)
Inventor
武红宇
白杨
王灵丽
谷文双
潘征
陆晗
钟兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chang Guang Satellite Technology Co Ltd
Original Assignee
Chang Guang Satellite Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chang Guang Satellite Technology Co Ltd filed Critical Chang Guang Satellite Technology Co Ltd
Priority to CN201710722702.4A priority Critical patent/CN107563964B/en
Publication of CN107563964A publication Critical patent/CN107563964A/en
Application granted granted Critical
Publication of CN107563964B publication Critical patent/CN107563964B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Processing (AREA)

Abstract

The present invention relates to a rapid splicing method for large-area-array sub-meter-level night scene remote sensing images, comprising the following steps. Step 1: relative radiometric correction. Step 2: removal of isolated noise. Step 3: RPC-based block adjustment without ground control. Step 4: dodging and color balancing. Step 5: RPC-based orthorectification and image resampling. In the method of the present invention, relative radiometric correction, image denoising, RPC-based block adjustment without ground control, dodging and color balancing, RPC-based orthorectification and image resampling are applied to the raw images to obtain a large-area night scene remote sensing mosaic, and the algorithm is accelerated on a GPU, which both guarantees the accuracy of the algorithm and greatly improves the processing speed. The algorithm is simple and easy to implement and can be applied directly in engineering processing.

Description

Rapid splicing method for large-area-array sub-meter-level night scene remote sensing images
Technical field
The present invention relates to the field of remote sensing image processing, and in particular to a rapid splicing method for large-area-array sub-meter-level night scene remote sensing images.
Background art
Image splicing, also known as image mosaicking, is the technical process of joining two or more images with a certain degree of overlap into a single overall image. In remote sensing applications, obtaining imagery over a large area usually requires images of smaller swaths, or images from different sensors, to be processed and stitched together. Simple splicing alone leaves obvious geometric dislocations and radiometric differences at the seams. Geometric dislocations arise because the relative spatial relationship between the spliced images is incorrect or because local areas of the images carry geometric distortion, while radiometric differences are caused by factors such as differences in acquisition season or sensor type. Therefore, during mosaicking, the geometric dislocations and radiometric differences between the images must be eliminated before the mosaic can satisfy practical applications.
A night scene image is surface data acquired by a remote sensing satellite at night. Using the low-light imaging capability of the sensor, it can effectively detect urban lighting, and even low-intensity visible light sources such as small settlements, traffic flows, fishing-boat lights and fires, capturing the footprint of human activity against a dark background. Night scene remote sensing imagery is widely used in research fields such as socio-economic parameter estimation, regional development studies, urbanization monitoring and light pollution, and can objectively reflect socio-economic trends. However, when night scene images are acquired, factors such as the sensor manufacturing process and the satellite orbital altitude limit the coverage of a single image; for example, a single image from the super-large 12K × 5K area-array, 0.92 m resolution Video 03 satellite of Chang Guang Satellite covers a ground area of only about 50 square kilometres. A single large-area-array sub-meter-level image therefore cannot cover the broad extent of an urban area in one acquisition, so the satellite must execute a multi-point imaging task to acquire several remote sensing images over the task area, and the images obtained in this multi-point imaging mode are then spliced into the final remote sensing image according to the overlap between neighbouring acquisitions. Because the satellite attitude differs between acquisitions, each image carries a different perspective distortion. During imaging the sensor requires a higher gain and a longer exposure time, which inevitably introduces bright isolated noise and chrominance noise into the images; in addition, night scene images are dark overall, which hinders visual interpretation and the search for splicing lines in the overlap regions. The night scene images acquired by the different imaging tasks therefore require distortion rectification, image denoising, and dodging and color balancing. Since the RPC coefficients of a single-frame night scene image contain a certain error, RPC-based block adjustment without ground control is performed; the adjustment requires SIFT feature extraction and matching over the overlap regions, and a GPU-based SIFT matching algorithm is used to increase the processing speed. Because the images are central projections acquired with a certain viewing angle, RPC-based orthorectification is also required, and resampling the rectified images yields the final seamless mosaic.
Image mosaicking must balance the smoothness of the image gray-level surface against image sharpness; mosaicking algorithms for remote sensing imagery mainly involve image registration and image synthesis. In 2009, Yang Liping, Lin Guangfa and Chen Youfei published "Study on the dodging approach before mosaicking SPOT5 images of different phases" in Remote Sensing Technology and Application; using numerical adjustment, grid editing and filling, and feature extraction and classification, they processed image patches with widely differing tones and irregular fragmented ground objects, reducing tonal differences with strong applicability. In 2006, Li Deren, Wang Mi and Pan Junzhen published "Automatic dodging of optical remote sensing images and its application" in Geomatics and Information Science of Wuhan University; addressing the color imbalance that optical remote sensing images exhibit both within a single image and between several images over a region, the paper proposed an automatic dodging method and processing workflow, implemented it in dodging software, and achieved good results in practical engineering applications. In 2001, Liu Xiaolong published "A mosaicking technique for digital orthoimages based on image matching and edge-fit correction" in the Journal of Remote Sensing, covering preprocessing, mosaicking, workflow and application prospects; the preprocessing includes color balancing, image matching and edge-fit correction, where color balancing corrects radiometry and image matching with edge-fit correction removes geometric differences. However, existing methods rarely address the splicing of large-area-array sub-meter-level night scene remote sensing images.
Summary of the invention
The present invention solves the above technical problems of the prior art by providing a rapid splicing method for large-area-array sub-meter-level night scene remote sensing images.
To solve the above technical problems, the technical scheme of the present invention is as follows:
A rapid splicing method for large-area-array sub-meter-level night scene remote sensing images comprises the following steps:
Step 1: Relative radiometric correction
The quantized radiance values of each pixel of the original night scene remote sensing image are normalized and corrected, reducing or eliminating the response differences between the individual detector elements of the sensor so that the detector elements respond consistently to radiance; the correction uses the coefficients obtained from relative radiometric calibration;
Step 2: Removal of isolated noise
First, the raw data are separated into the three bands R, G and B, and median filtering is applied to each band, i.e.
I_med(R, G, B) = medfilt(I_ori)   (1)
where I_ori is the raw data and I_med is the median-filtered image;
Next, the median-filtered image, from which the isolated noise has been filtered out, is binarized, i.e.
I_bw(R, G, B) = im2bw(I_med(R, G, B), thre)   (2)
where I_bw is the binary image and thre is the binarization threshold;
The binary image is then multiplied point by point with the raw data, i.e.
I_denoise(i, j) = I_bw(i, j) × I_ori(i, j)   (3)
where I_denoise is the denoised image and I_denoise(i, j) is the gray value at row i, column j of the image;
Step 3: RPC-based block adjustment without ground control
GPU-based SIFT feature point extraction and matching are carried out on the overlap regions of neighbouring images of the multi-point imaging task, and the matching results are used to perform RPC-based block adjustment without ground control on the multi-frame night scene data, eliminating the errors present in the RPC coefficients;
Step 4: Dodging and color balancing
Mask dodging and Wallis-transform color balancing are applied to the night scene images;
The Mask dodging method works by subtraction: a background image of the raw image is obtained with a Gaussian low-pass filter, and subtracting the background image from the raw image yields an image of the illumination distribution; the subtraction uses the formula shown in equation (6);
I_out(x, y) = I_in(x, y) - I_back(x, y) + offset   (6)
where offset in equation (6) is a gray-level offset;
A piecewise linear stretch is then applied according to the maximum f_max, minimum f_min and mean f_mean of the raw image gray levels and the maximum g_max, minimum g_min and mean g_mean of the result image gray levels; the stretch is given by equation (7):
g'(x, y) = (f_mean - f_min)/(g_mean - g_min) · (g(x, y) - g_min) + f_min,   for g_min ≤ g(x, y) < g_mean
g'(x, y) = (f_max - f_mean)/(g_max - g_mean) · (g(x, y) - g_mean) + f_mean,   for g_mean ≤ g(x, y) ≤ g_max   (7)
where g(x, y) in equation (7) is the dodged result image and g'(x, y) is the image after stretching the dodged result image;
The Wallis transform can be expressed as:
f(x, y) = [g(x, y) - m_g] · c·s_f / (c·s_g + (1 - c)·s_f) + b·m_f + (1 - b)·m_g   (8)
where g(x, y) and f(x, y) in equation (8) are the gray values of the raw image and of the Wallis-transformed result image respectively; m_g and s_g are the local gray-level mean and standard deviation of the raw image; m_f and s_f are the target values of the local gray-level mean and standard deviation of the result image; c ∈ [0, 1] is the expansion constant of the image variance; b ∈ [0, 1] is the image brightness coefficient: as b → 1 the image mean is forced to m_f, and as b → 0 the image mean is forced to m_g;
Step 5: RPC-based orthorectification and image resampling
The night scene images obtained by multi-point imaging are orthorectified based on their RPCs, and the rectified images are resampled to obtain the final mosaic.
In the above technical scheme, in Step 5, the RPC-based orthorectification comprises the following steps:
1) Computing the corner-point object-space coordinates
Starting from the image-space coordinates of the four corner points of the image and the initial object-space coordinates given by the normalization coefficients, the ground coordinates are computed with the RPC forward model and the initial affine transformation coefficients are solved; the object-space point coordinates corresponding to the image-point coordinates are then computed based on the RPC;
2) Constructing the result image
The minimum and maximum longitude and latitude of the object-space plane coordinates of the ground area corresponding to the multi-point imaging give the image coverage (lat0~lat1, lon0~lon1); with the orthoimage resolution gsd set, the image size (W, H) can be computed:
W = (lon1 - lon0) / gsd
H = (lat1 - lat0) / gsd   (9)
3) Traversing pixel by pixel and mapping to the raw image
For each pixel (s, l) of the orthoimage, the initial object-space plane coordinates (lat, lon) are computed by the projection formula:
lat = l * gsd + lat0
lon = s * gsd + lon0   (10)
The elevation H of the (lat, lon) position is obtained from the DEM data and substituted into the RPC model formula to compute the image-point coordinates (x, y);
4) Interpolating the gray value and assigning it
Using the image-point coordinates (x, y) obtained by the inverse computation in 3), the gray value is interpolated on the raw image; after the gray value p has been computed, it is assigned to position (s, l) of the result image, and the final night scene remote sensing mosaic is output.
In the above technical scheme, in step 1), computing the object-space point coordinates with the RPC forward model comprises the following steps:
(1) The initial object-space plane coordinates (Lon, Lat) are given for the image-point coordinates according to the initial affine transformation parameters;
(2) The DEM data are read at the given initial object-space plane coordinates to obtain the elevation H, the corresponding image-point coordinates are solved with the RPC, and new affine transformation coefficients are solved from the image points and the object-space point coordinates;
(3) The object-space plane coordinates are given for the image-point coordinates according to the new affine transformation parameters, and the corresponding DEM elevation is read; if the difference between the two successively solved elevation values is less than a threshold, the solution is finished; otherwise the above process is repeated until the difference between the two successively computed object-space elevations is less than the threshold.
In the above technical scheme, in step 4), the interpolation uses bilinear interpolation, with the formula: p = p(i, j)*(1-dx)*(1-dy) + p(i+1, j)*dx*(1-dy) + p(i+1, j+1)*dx*dy + p(i, j+1)*(1-dx)*dy.
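For illustration only, the bilinear interpolation above can be written as the following small helper; the indexing convention, with the first index of p along x and the second along y, simply mirrors the formula as printed and is an assumption rather than something fixed by the patent.

    def bilinear(p, x, y):
        """Sample raster p at fractional position (x, y); p is indexed as p[i][j]
        with i varying along x and j along y, matching the formula above."""
        i, j = int(x), int(y)
        dx, dy = x - i, y - j
        return (p[i][j] * (1 - dx) * (1 - dy) + p[i + 1][j] * dx * (1 - dy)
                + p[i + 1][j + 1] * dx * dy + p[i][j + 1] * (1 - dx) * dy)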
The present invention has the following beneficial effects:
The present invention builds on existing image mosaicking algorithms and fully takes into account the large attitude changes of an optical satellite during multi-point imaging, the imaging characteristics of night scenes, whose targets are light spots accompanied by isolated noise, and the large data volume of large-area-array high-resolution imagery. Relative radiometric correction, image denoising, RPC-based block adjustment without ground control, dodging and color balancing, RPC-based orthorectification and image resampling are applied to the raw images to obtain a large-area night scene remote sensing mosaic.
In the rapid splicing method for large-area-array sub-meter-level night scene remote sensing images provided by the present invention, relative radiometric correction, image denoising, RPC-based block adjustment without ground control, dodging and color balancing, RPC-based orthorectification and image resampling are applied to the raw images to obtain a large-area night scene remote sensing mosaic, and the algorithm is accelerated on a GPU. This both guarantees the accuracy of the algorithm and greatly improves the processing speed; the algorithm is simple and easy to implement and can be applied directly in engineering processing.
Brief description of the drawings
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flow chart of the rapid splicing method for large-area-array sub-meter-level night scene remote sensing images of the present invention.
Fig. 2 and Fig. 3 compare the effect before and after night scene image splicing according to the present invention: Fig. 2 shows the images before splicing and Fig. 3 the spliced image.
Detailed description of the embodiments
In the present invention, the original color images acquired by the sensor are given a relative radiometric correction to obtain radiometrically consistent night scene images; image denoising is applied to remove the isolated noise; RPC-based block adjustment without ground control is carried out on the denoised night scene images; dodging and color balancing are applied and RPC-based orthorectification is performed; finally, splicing lines are searched among the color-consistent night scene images to obtain a good night scene mosaic.
The present invention is described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, a rapid splicing method for large-area-array sub-meter-level night scene remote sensing images comprises the following steps:
Step 1: Relative radiometric correction
Relative radiometric correction, also called normalization of the sensor detector elements, normalizes and corrects the quantized radiance values (DN) of each pixel, reducing or eliminating the response differences between the detector elements of the sensor so that the detector elements respond consistently to radiance. The correction uses the coefficients obtained from relative radiometric calibration.
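As an illustration only, a minimal sketch of such a per-detector correction is given below, assuming that relative radiometric calibration has already produced one gain and one bias per detector column; the function name and the column-wise layout are assumptions.

    import numpy as np

    def relative_radiometric_correction(dn, gain, bias):
        """Apply per-column gain/bias (from relative radiometric calibration) to a raw band.
        dn: 2-D array of raw DN values (rows, cols); gain, bias: 1-D arrays of length cols."""
        corrected = dn.astype(np.float32) * gain[np.newaxis, :] + bias[np.newaxis, :]
        return np.clip(corrected, 0, None)  # keep the corrected response non-negative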
Step 2: Removal of isolated noise
First, the raw data are separated into the three bands R, G and B, and median filtering is applied to each band, i.e.
I_med(R, G, B) = medfilt(I_ori)   (1)
where I_ori is the raw data and I_med is the median-filtered image. Because the bright noise in the image consists of isolated noise points, median filtering removes it effectively.
Next, the median-filtered image, from which the isolated noise has been filtered out, is binarized, i.e.
I_bw(R, G, B) = im2bw(I_med(R, G, B), thre)   (2)
where I_bw is the binary image and thre is the binarization threshold. The binarization separates the background noise from the foreground information.
The binary image is then multiplied point by point with the raw data, i.e.
I_denoise(i, j) = I_bw(i, j) × I_ori(i, j)   (3)
where I_denoise is the denoised image and I_denoise(i, j) is the gray value at row i, column j. The denoised image obtained in this way removes both the background noise in the dark areas of the image and the isolated bright noise, while preserving the high-frequency information in the brighter areas.
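Purely as an illustration of this step, a minimal sketch of the denoising chain of equations (1)-(3) follows; the 3 × 3 median window and the default threshold are assumptions, not values fixed by the patent.

    import numpy as np
    from scipy.ndimage import median_filter

    def remove_isolated_noise(band, thre=6, kernel=3):
        """Median-filter one band, binarize the result, and mask the raw band.
        band: 2-D array of raw DN values of one band (R, G or B)."""
        i_med = median_filter(band, size=kernel)       # equation (1): medfilt
        i_bw = (i_med > thre).astype(band.dtype)       # equation (2): im2bw with threshold thre
        return band * i_bw                             # equation (3): point-by-point product

    # e.g. denoised_r = remove_isolated_noise(raw_r, thre=6)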
Step 3: RPC-based block adjustment without ground control
The rational function model expresses the image-point coordinates (r, c) as ratios of polynomials whose independent variables are the corresponding ground-point coordinates (X, Y, Z), i.e.
r_n = P1(X_n, Y_n, Z_n) / P2(X_n, Y_n, Z_n)   (4)
c_n = P3(X_n, Y_n, Z_n) / P4(X_n, Y_n, Z_n)   (5)
where (r_n, c_n) and (X_n, Y_n, Z_n) are the normalized coordinates of the image point (r, c) and of the ground point (X, Y, Z) after translation and scaling, with values between -1.0 and +1.0, and P1 to P4 are cubic polynomials whose coefficients are the rational function coefficients (RPC). Through the RPC, a relationship is established between the image coordinate system and the ground coordinate system.
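For illustration only, the sketch below evaluates such a rational function model for one normalized ground point. The 20-term cubic monomial ordering shown follows the common RPC00B convention and is an assumption here, since the text does not spell out the term order.

    import numpy as np

    def rpc_terms(P, L, H):
        """20 cubic monomials of the normalized ground point (P = lat, L = lon, H = height)."""
        return np.array([1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
                         P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H, L*L*H, P*P*H, H**3])

    def rfm_project(num_line, den_line, num_samp, den_samp, P, L, H):
        """Normalized line/sample coordinates from the four 20-coefficient RPC
        polynomials, i.e. equations (4) and (5)."""
        t = rpc_terms(P, L, H)
        return np.dot(num_line, t) / np.dot(den_line, t), np.dot(num_samp, t) / np.dot(den_samp, t)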
When no ground control points are available, the RPC model generated directly from the satellite attitude and orbit parameters often contains a large systematic error, which degrades the geolocation accuracy so that the overlap regions of several orthorectified scenes do not coincide exactly. RPC-based block adjustment without ground control must therefore be carried out on the night scene images acquired by multi-point imaging. The adjustment requires feature-point matching in the overlap regions of the night scene images; SIFT matching, which is rotation-invariant, is used, and to achieve rapid splicing of the night scene images a GPU-based SIFT matcher is adopted, shortening the matching time by a factor of about 140 compared with a conventional CPU-based SIFT implementation.
The tie points of the same ground features acquired by the multi-point imaging are used to perform image-space block adjustment without ground control, which simultaneously determines the errors in the RPC models of the single scenes to be spliced; once these errors are removed, the ground features in the overlap regions of the images show almost no relative geolocation offset.
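As an illustration of the tie-point matching step only: the patent relies on a GPU SIFT implementation, while the CPU OpenCV sketch below, including its ratio-test threshold, is merely an assumption to show the idea of matching the overlap regions.

    import cv2

    def match_overlap(img_a, img_b, ratio=0.75):
        """SIFT keypoint extraction and descriptor matching between two overlapping
        8-bit night-scene image crops; returns matched point pairs."""
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(img_a, None)
        kp_b, des_b = sift.detectAndCompute(img_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
                if m.distance < ratio * n.distance]   # Lowe ratio test
        return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]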
Step 4: Dodging and color balancing
Because of the acquisition time, viewing angle, external illumination, atmospheric attenuation and other factors, the acquired night scene images show color differences of varying degree, so Mask dodging and Wallis color balancing are applied to the night scene images.
The Mask dodging method works by subtraction: a background image of the raw image is obtained with a Gaussian low-pass filter, and subtracting the background image from the raw image yields an image of the illumination distribution; the subtraction uses the formula shown in equation (6).
I_out(x, y) = I_in(x, y) - I_back(x, y) + offset   (6)
where offset in equation (6) is a gray-level offset.
To increase local detail contrast and improve the overall contrast of the whole image, the processed image must be given a piecewise linear stretch, applied according to the maximum f_max, minimum f_min and mean f_mean of the raw image gray levels and the maximum g_max, minimum g_min and mean g_mean of the result image gray levels. The stretch is given by equation (7):
g'(x, y) = (f_mean - f_min)/(g_mean - g_min) · (g(x, y) - g_min) + f_min,   for g_min ≤ g(x, y) < g_mean
g'(x, y) = (f_max - f_mean)/(g_max - g_mean) · (g(x, y) - g_mean) + f_mean,   for g_mean ≤ g(x, y) ≤ g_max   (7)
where g(x, y) in equation (7) is the dodged result image and g'(x, y) is the image after stretching it. This piecewise linear stretch needs no additional parameters and restores the gray-level dynamic range of the processed image to the tonal range of the raw image.
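A minimal sketch of the Mask dodging and piecewise linear stretch of equations (6) and (7) follows, for illustration only; the Gaussian sigma and the offset value are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mask_dodging(img, sigma=101, offset=128.0):
        """Equation (6): subtract a Gaussian low-pass background estimate from the image."""
        img = img.astype(np.float32)
        return img - gaussian_filter(img, sigma) + offset

    def piecewise_stretch(g, f):
        """Equation (7): stretch the dodged image g back to the tonal range of the raw image f."""
        f_min, f_max, f_mean = f.min(), f.max(), f.mean()
        g_min, g_max, g_mean = g.min(), g.max(), g.mean()
        low = (f_mean - f_min) / (g_mean - g_min) * (g - g_min) + f_min
        high = (f_max - f_mean) / (g_max - g_mean) * (g - g_mean) + f_mean
        return np.where(g < g_mean, low, high)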
To adjust the color balance between the night scene remote sensing images, a color-balancing method based on the Wallis transform is applied; the Wallis transform suppresses noise while enhancing the local contrast of the original sub-meter-level image and is locally adaptive. The Wallis transform can be expressed as:
f(x, y) = [g(x, y) - m_g] · c·s_f / (c·s_g + (1 - c)·s_f) + b·m_f + (1 - b)·m_g   (8)
where g(x, y) and f(x, y) in equation (8) are the gray values of the raw image and of the Wallis-transformed result image respectively; m_g and s_g are the local gray-level mean and standard deviation of the raw image; m_f and s_f are the target values of the local gray-level mean and standard deviation of the result image; c ∈ [0, 1] is the expansion constant of the image variance; b ∈ [0, 1] is the image brightness coefficient: as b → 1 the image mean is forced to m_f, and as b → 0 the image mean is forced to m_g.
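For illustration, equation (8) with local statistics estimated over a sliding window can be sketched as below; the window size and the choices c = 0.8, b = 0.9 are assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def wallis(g, m_f, s_f, c=0.8, b=0.9, win=127):
        """Equation (8): push the local mean/std of g towards the targets m_f, s_f."""
        g = g.astype(np.float32)
        m_g = uniform_filter(g, win)                                           # local mean
        s_g = np.sqrt(np.maximum(uniform_filter(g * g, win) - m_g**2, 1e-6))   # local std
        gain = c * s_f / (c * s_g + (1.0 - c) * s_f)
        return (g - m_g) * gain + b * m_f + (1.0 - b) * m_g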
Step 5: RPC-based orthorectification and image resampling
Large-area-array night scene remote sensing images are central projections and are acquired with a certain tilt angle. To remove the deformation caused by image tilt and terrain relief, the night scene images obtained by multi-point imaging must be orthorectified based on their RPCs, and the rectified images resampled to obtain the final mosaic. The RPC-based orthorectification proceeds as follows:
1) Computing the corner-point object-space coordinates
Starting from the image-space coordinates of the four corner points of the image and the initial object-space coordinates given by the normalization coefficients, the ground coordinates are computed with the RPC forward model and the initial affine transformation coefficients are solved. The object-space point coordinates corresponding to any image-point coordinates are then computed based on the RPC, using the following steps (a code sketch of this iteration is given after the list):
(1) The initial object-space plane coordinates (Lon, Lat) are given for the image-point coordinates according to the initial affine transformation parameters;
(2) The DEM data are read at the given initial object-space plane coordinates to obtain the elevation H, the corresponding image-point coordinates are solved with the RPC, and new affine transformation coefficients are solved from the image points and the object-space point coordinates;
(3) The object-space plane coordinates are given for the image-point coordinates according to the new affine transformation parameters, and the corresponding DEM elevation is read; if the difference between the two successively solved elevation values is less than a threshold, the solution is finished; otherwise the above process is repeated until the difference between the two successively computed object-space elevations is less than the threshold.
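The sketch below illustrates only the convergence loop of this iteration; the two callables are hypothetical helpers, and the affine refinement of step (2) is abstracted into ground_from_image, so this is not the patent's implementation.

    def image_to_ground(x, y, ground_from_image, dem_height, tol=0.1, max_iter=20):
        """Iterate between a planimetric solution and the DEM until the elevation stabilizes.
        ground_from_image: callable (x, y, h) -> (lat, lon), e.g. an RPC/affine-based
        inverse projection at a fixed height; dem_height: callable (lat, lon) -> elevation."""
        h = 0.0                                        # initial elevation guess
        for _ in range(max_iter):
            lat, lon = ground_from_image(x, y, h)      # plane coordinates at the current height
            h_new = dem_height(lat, lon)               # read the DEM at the new position
            if abs(h_new - h) < tol:                   # successive elevations agree
                break
            h = h_new
        return lat, lon, h_new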
2) Constructing the result image
The minimum and maximum longitude and latitude of the object-space plane coordinates of the ground area corresponding to the multi-point imaging give the image coverage (lat0~lat1, lon0~lon1); with the orthoimage resolution gsd set, the image size (W, H) can be computed:
W = (lon1 - lon0) / gsd
H = (lat1 - lat0) / gsd   (9)
3) Traversing pixel by pixel and mapping to the raw image
For each pixel (s, l) of the orthoimage, the initial object-space plane coordinates (lat, lon) are computed by the projection formula:
lat = l * gsd + lat0
lon = s * gsd + lon0   (10)
The elevation H of the (lat, lon) position is obtained from the DEM data and substituted into the RPC model formula to compute the image-space coordinates (x, y).
4) Interpolating the gray value and assigning it
Using the image-point coordinates (x, y) obtained by the inverse computation in 3), the gray value is interpolated on the raw image. Interpolation methods include nearest-neighbour, bilinear and bicubic interpolation; to balance efficiency and accuracy, bilinear interpolation is used, with the formula:
p = p(i, j)*(1-dx)*(1-dy) + p(i+1, j)*dx*(1-dy) + p(i+1, j+1)*dx*dy + p(i, j+1)*(1-dx)*dy
After the gray value p has been computed, it is assigned to position (s, l) of the result image, and the final night scene remote sensing mosaic is output (see Figs. 2 and 3). A sketch of steps 2) to 4) is given below.
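The following sketch, offered only as an illustration, strings together equations (9), (10) and the bilinear interpolation above; the RPC projection and DEM lookup are passed in as callables, and the plain Python loop is written for clarity rather than speed (the patent accelerates this stage on a GPU).

    import numpy as np

    def orthorectify(raw, rpc_to_image, dem_height, lat0, lat1, lon0, lon1, gsd):
        """Build the orthoimage grid (eq. 9), map each pixel to the raw image
        (eq. 10 plus the RPC model), and resample with bilinear interpolation.
        rpc_to_image: callable (lat, lon, h) -> (x, y) with x = column, y = row."""
        W = int(round((lon1 - lon0) / gsd))
        H = int(round((lat1 - lat0) / gsd))
        ortho = np.zeros((H, W), dtype=np.float32)
        rows, cols = raw.shape
        for l in range(H):
            for s in range(W):
                lat = l * gsd + lat0                   # equation (10)
                lon = s * gsd + lon0
                x, y = rpc_to_image(lat, lon, dem_height(lat, lon))
                i, j = int(np.floor(y)), int(np.floor(x))
                if 0 <= i < rows - 1 and 0 <= j < cols - 1:
                    dy, dx = y - i, x - j              # bilinear interpolation of gray value p
                    ortho[l, s] = (raw[i, j] * (1-dx) * (1-dy) + raw[i, j+1] * dx * (1-dy)
                                   + raw[i+1, j+1] * dx * dy + raw[i+1, j] * (1-dx) * dy)
        return ortho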
In the rapid splicing method for large-area-array sub-meter-level night scene remote sensing images provided by the present invention, relative radiometric correction, image denoising, RPC-based block adjustment without ground control, dodging and color balancing, RPC-based orthorectification and image resampling are applied to the raw images to obtain a large-area night scene remote sensing mosaic, and the algorithm is accelerated on a GPU. This both guarantees the accuracy of the algorithm and greatly improves the processing speed; the algorithm is simple and easy to implement and can be applied directly in engineering processing.
The night scene image denoising and enhancement processing is described in detail below, taking the Video 03 satellite launched by Chang Guang Satellite Technology Co., Ltd. as an example.
The Video 03 satellite carries a camera with a principal distance of 3200 mm; its ground resolution at nadir is 0.92 m, and a single acquired night scene frame is 12000 × 5000 pixels. At 5:43 on April 1, 2017, the Video 03 satellite imaged a night scene of London; the imaging point lies at longitude -0.179° and latitude 51.4628°. This multi-point night scene imaging task is used to illustrate the rapid splicing method for large-area-array sub-meter-level night scene remote sensing images of the present invention, which comprises the following steps:
Step 1: Relative radiometric correction
Relative radiometric correction is applied to the night scene images using the relative radiometric calibration results.
Step 2: Removal of isolated noise
First, the raw data are separated into the three bands R, G and B, whose sizes are 3000 × 1250 pixels, 6000 × 2500 pixels and 3000 × 1250 pixels respectively; median filtering is applied to each band according to equation (1), giving the filtered image I_med(R, G, B) with the bright noise removed.
Next, the median-filtered image I_med(R, G, B), from which the isolated noise has been filtered out, is binarized according to equation (2); based on the acquired image data, the binarization threshold thre is set to 6, giving the binary image I_bw. The binarization effectively separates the background noise from the foreground information. The binary image is then multiplied point by point with the raw data according to equation (3) to obtain the denoised image I_denoise.
The denoised image obtained in this way removes both the background noise in the dark areas of the image and the isolated bright noise, while preserving the high-frequency information in the brighter areas.
Step 3: RPC-based block adjustment without ground control
GPU-based SIFT feature point extraction and matching are carried out on the overlap regions of neighbouring images of the multi-point imaging task, and the matching results are used to perform RPC-based block adjustment without ground control on the multi-frame night scene data, eliminating the errors present in the RPC coefficients.
Step 4: Dodging and color balancing
Mask-based dodging and Wallis-transform color balancing are applied to the multi-frame night scene images to obtain night scene remote sensing images with consistent color.
Step 5: RPC-based orthorectification and resampling
RPC-based orthorectification removes the image-point displacements caused by the imaging angle and terrain relief; the size of the mosaic is computed from the object-space extent of the images, and for each pixel of the mosaic its position in the corresponding single-scene night scene image is computed and bilinear interpolation is applied, yielding the final multi-point night scene remote sensing mosaic.
Obviously, the above embodiment is merely an example given for clarity of description and does not limit the embodiments. For a person of ordinary skill in the art, other changes or modifications in different forms can be made on the basis of the above description; it is neither necessary nor possible to enumerate all embodiments here, and the obvious changes or modifications derived therefrom remain within the scope of protection of the present invention.

Claims (4)

1. A rapid splicing method for large-area-array sub-meter-level night scene remote sensing images, characterised by comprising the following steps:
Step 1: Relative radiometric correction
The quantized radiance values of each pixel of the original night scene remote sensing image are normalized and corrected, reducing or eliminating the response differences between the detector elements of the sensor so that the detector elements respond consistently to radiance; the correction uses the coefficients obtained from relative radiometric calibration;
Step 2: Removal of isolated noise
First, the raw data are separated into the three bands R, G and B, and median filtering is applied to each band, i.e.
I_med(R, G, B) = medfilt(I_ori)   (1)
where I_ori is the raw data and I_med is the median-filtered image;
Next, the median-filtered image, from which the isolated noise has been filtered out, is binarized, i.e.
I_bw(R, G, B) = im2bw(I_med(R, G, B), thre)   (2)
where I_bw is the binary image and thre is the binarization threshold;
The binary image is then multiplied point by point with the raw data, i.e.
I_denoise(i, j) = I_bw(i, j) × I_ori(i, j)   (3)
where I_denoise is the denoised image and I_denoise(i, j) is the gray value at row i, column j of the image;
Step 3: RPC-based block adjustment without ground control
GPU-based SIFT feature point extraction and matching are carried out on the overlap regions of neighbouring images of the multi-point imaging task, and the matching results are used to perform RPC-based block adjustment without ground control on the multi-frame night scene data, eliminating the errors present in the RPC coefficients;
Step 4: Dodging and color balancing
Mask dodging and Wallis-transform color balancing are applied to the night scene images;
The Mask dodging method works by subtraction: a background image of the raw image is obtained with a Gaussian low-pass filter, and subtracting the background image from the raw image yields an image of the illumination distribution; the subtraction uses the formula shown in equation (6);
I_out(x, y) = I_in(x, y) - I_back(x, y) + offset   (6)
where offset in equation (6) is a gray-level offset;
A piecewise linear stretch is applied according to the maximum f_max, minimum f_min and mean f_mean of the raw image gray levels and the maximum g_max, minimum g_min and mean g_mean of the result image gray levels; the stretch is given by equation (7):
g'(x, y) = (f_mean - f_min)/(g_mean - g_min) · (g(x, y) - g_min) + f_min,   for g_min ≤ g(x, y) < g_mean
g'(x, y) = (f_max - f_mean)/(g_max - g_mean) · (g(x, y) - g_mean) + f_mean,   for g_mean ≤ g(x, y) ≤ g_max   (7)
where g(x, y) in equation (7) is the dodged result image and g'(x, y) is the image after stretching the dodged result image;
The Wallis transform can be expressed as:
f(x, y) = [g(x, y) - m_g] · c·s_f / (c·s_g + (1 - c)·s_f) + b·m_f + (1 - b)·m_g   (8)
where g(x, y) and f(x, y) in equation (8) are the gray values of the raw image and of the Wallis-transformed result image respectively; m_g and s_g are the local gray-level mean and standard deviation of the raw image; m_f and s_f are the target values of the local gray-level mean and standard deviation of the result image; c ∈ [0, 1] is the expansion constant of the image variance; b ∈ [0, 1] is the image brightness coefficient: as b → 1 the image mean is forced to m_f, and as b → 0 the image mean is forced to m_g;
Step 5: RPC-based orthorectification and image resampling
The night scene images obtained by multi-point imaging are orthorectified based on their RPCs, and the rectified images are resampled to obtain the final mosaic.
2. The rapid splicing method for large-area-array sub-meter-level night scene remote sensing images according to claim 1, characterised in that in Step 5 the RPC-based orthorectification comprises the following steps:
1) Computing the corner-point object-space coordinates
Starting from the image-space coordinates of the four corner points of the image and the initial object-space coordinates given by the normalization coefficients, the ground coordinates are computed with the RPC forward model and the initial affine transformation coefficients are solved; the object-space point coordinates corresponding to the image-point coordinates are then computed based on the RPC;
2) Constructing the result image
The minimum and maximum longitude and latitude of the object-space plane coordinates of the ground area corresponding to the multi-point imaging give the image coverage (lat0~lat1, lon0~lon1); with the orthoimage resolution gsd set, the image size (W, H) can be computed:
W = (lon1 - lon0) / gsd
H = (lat1 - lat0) / gsd   (9)
3) Traversing pixel by pixel and mapping to the raw image
For each pixel (s, l) of the orthoimage, the initial object-space plane coordinates (lat, lon) are computed by the projection formula:
lat = l * gsd + lat0
lon = s * gsd + lon0   (10)
The elevation H of the (lat, lon) position is obtained from the DEM data and substituted into the RPC model formula to compute the image-point coordinates (x, y);
4) Interpolating the gray value and assigning it
Using the image-point coordinates (x, y) obtained by the inverse computation in 3), the gray value is interpolated on the raw image; after the gray value p has been computed, it is assigned to position (s, l) of the result image, and the final night scene remote sensing mosaic is output.
3. The rapid splicing method for large-area-array sub-meter-level night scene remote sensing images according to claim 2, characterised in that in step 1) computing the object-space point coordinates with the RPC forward model comprises the following steps:
(1) The initial object-space plane coordinates (Lon, Lat) are given for the image-point coordinates according to the initial affine transformation parameters;
(2) The DEM data are read at the given initial object-space plane coordinates to obtain the elevation H, the corresponding image-point coordinates are solved with the RPC, and new affine transformation coefficients are solved from the image points and the object-space point coordinates;
(3) The object-space plane coordinates are given for the image-point coordinates according to the new affine transformation parameters, and the corresponding DEM elevation is read; if the difference between the two successively solved elevation values is less than a threshold, the solution is finished; otherwise the above process is repeated until the difference between the two successively computed object-space elevations is less than the threshold.
4. The rapid splicing method for large-area-array sub-meter-level night scene remote sensing images according to claim 2, characterised in that in step 4) the interpolation uses bilinear interpolation, with the formula:
p = p(i, j)*(1-dx)*(1-dy) + p(i+1, j)*dx*(1-dy) + p(i+1, j+1)*dx*dy + p(i, j+1)*(1-dx)*dy.
CN201710722702.4A 2017-08-22 2017-08-22 Rapid splicing method for large-area-array sub-meter-level night scene remote sensing images Active CN107563964B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710722702.4A CN107563964B (en) 2017-08-22 2017-08-22 Rapid splicing method for large-area-array sub-meter-level night scene remote sensing images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710722702.4A CN107563964B (en) 2017-08-22 2017-08-22 Rapid splicing method for large-area-array sub-meter-level night scene remote sensing images

Publications (2)

Publication Number Publication Date
CN107563964A true CN107563964A (en) 2018-01-09
CN107563964B CN107563964B (en) 2020-09-04

Family

ID=60976225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710722702.4A Active CN107563964B (en) 2017-08-22 2017-08-22 Rapid splicing method for large-area-array sub-meter-level night scene remote sensing images

Country Status (1)

Country Link
CN (1) CN107563964B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108550129A (en) * 2018-04-20 2018-09-18 北京航天宏图信息技术股份有限公司 Even color method and device based on geographical template
CN108734685A (en) * 2018-05-10 2018-11-02 中国矿业大学(北京) A kind of joining method of UAV system EO-1 hyperion linear array remote sensing image
CN109063711A (en) * 2018-07-06 2018-12-21 航天星图科技(北京)有限公司 A kind of satellite image based on LLTS frame just penetrates correct algorithm
CN109118421A (en) * 2018-07-06 2019-01-01 航天星图科技(北京)有限公司 A kind of image light and color homogenization method based on Distributed Architecture
CN109118429A (en) * 2018-08-02 2019-01-01 武汉大学 A kind of medium-wave infrared-visible light multispectral image rapid generation
CN110276280A (en) * 2019-06-06 2019-09-24 刘嘉津 A kind of optical processing method of crop pests image automatic identification
CN110660023A (en) * 2019-09-12 2020-01-07 中国测绘科学研究院 Video stitching method based on image semantic segmentation
CN110988908A (en) * 2019-12-19 2020-04-10 长光卫星技术有限公司 Quantitative analysis method for influence of spectral shift of optical filter on imaging of spatial optical remote sensor
CN112017115A (en) * 2020-07-09 2020-12-01 卢凯旋 Remote sensing image splicing method, device, equipment and storage medium
CN112184546A (en) * 2020-06-10 2021-01-05 中国人民解放军32023部队 Satellite remote sensing image data processing method
CN112233190A (en) * 2020-05-19 2021-01-15 同济大学 Satellite remote sensing image color balancing method based on block adjustment
CN112288650A (en) * 2020-10-28 2021-01-29 武汉大学 A multi-source remote sensing satellite image geometry and semantic integration processing method and system
CN112465986A (en) * 2020-11-27 2021-03-09 航天恒星科技有限公司 Method and device for inlaying satellite remote sensing image
CN113469899A (en) * 2021-06-04 2021-10-01 中国资源卫星应用中心 Optical remote sensing satellite relative radiation correction method based on radiant energy reconstruction
US11430128B2 (en) * 2018-02-07 2022-08-30 Chang'an University Geological linear body extraction method based on tensor voting coupled with Hough transformation


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104282006A (en) * 2014-09-30 2015-01-14 中国科学院国家天文台 High-resolution image splicing method based on CE-2 data
CN106373088A (en) * 2016-08-25 2017-02-01 中国电子科技集团公司第十研究所 Quick mosaic method for aviation images with high tilt rate and low overlapping rate

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Taoyang et al.: "Regional orthorectification of satellite remote sensing imagery", Geomatics and Information Science of Wuhan University *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11430128B2 (en) * 2018-02-07 2022-08-30 Chang'an University Geological linear body extraction method based on tensor voting coupled with Hough transformation
CN108550129B (en) * 2018-04-20 2019-04-09 北京航天宏图信息技术股份有限公司 Even color method and device based on geographical template
CN108550129A (en) * 2018-04-20 2018-09-18 北京航天宏图信息技术股份有限公司 Even color method and device based on geographical template
CN108734685A (en) * 2018-05-10 2018-11-02 中国矿业大学(北京) A kind of joining method of UAV system EO-1 hyperion linear array remote sensing image
CN108734685B (en) * 2018-05-10 2022-06-03 中国矿业大学(北京) Splicing method for unmanned aerial vehicle-mounted hyperspectral line array remote sensing images
CN109063711B (en) * 2018-07-06 2021-10-29 中科星图股份有限公司 Satellite image orthorectification algorithm based on LLTS framework
CN109118421A (en) * 2018-07-06 2019-01-01 航天星图科技(北京)有限公司 A kind of image light and color homogenization method based on Distributed Architecture
CN109063711A (en) * 2018-07-06 2018-12-21 航天星图科技(北京)有限公司 A kind of satellite image based on LLTS frame just penetrates correct algorithm
CN109118429B (en) * 2018-08-02 2023-04-25 武汉大学 Method for rapidly generating intermediate wave infrared-visible light multispectral image
CN109118429A (en) * 2018-08-02 2019-01-01 武汉大学 A kind of medium-wave infrared-visible light multispectral image rapid generation
CN110276280A (en) * 2019-06-06 2019-09-24 刘嘉津 A kind of optical processing method of crop pests image automatic identification
CN110660023A (en) * 2019-09-12 2020-01-07 中国测绘科学研究院 Video stitching method based on image semantic segmentation
CN110988908A (en) * 2019-12-19 2020-04-10 长光卫星技术有限公司 Quantitative analysis method for influence of spectral shift of optical filter on imaging of spatial optical remote sensor
CN110988908B (en) * 2019-12-19 2023-06-09 长光卫星技术股份有限公司 Quantitative analysis method for imaging influence of spectral shift of optical filter on space optical remote sensor
CN112233190A (en) * 2020-05-19 2021-01-15 同济大学 Satellite remote sensing image color balancing method based on block adjustment
CN112233190B (en) * 2020-05-19 2023-04-07 同济大学 Satellite remote sensing image color balancing method based on block adjustment
CN112184546A (en) * 2020-06-10 2021-01-05 中国人民解放军32023部队 Satellite remote sensing image data processing method
CN112184546B (en) * 2020-06-10 2024-03-15 中国人民解放军32023部队 Satellite remote sensing image data processing method
CN112017115A (en) * 2020-07-09 2020-12-01 卢凯旋 Remote sensing image splicing method, device, equipment and storage medium
CN112288650A (en) * 2020-10-28 2021-01-29 武汉大学 A multi-source remote sensing satellite image geometry and semantic integration processing method and system
CN112465986A (en) * 2020-11-27 2021-03-09 航天恒星科技有限公司 Method and device for inlaying satellite remote sensing image
CN113469899A (en) * 2021-06-04 2021-10-01 中国资源卫星应用中心 Optical remote sensing satellite relative radiation correction method based on radiant energy reconstruction
CN113469899B (en) * 2021-06-04 2023-12-29 中国资源卫星应用中心 Optical remote sensing satellite relative radiation correction method based on radiation energy reconstruction

Also Published As

Publication number Publication date
CN107563964B (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN107563964A (en) The quick joining method of large area array sub-meter grade night scene remote sensing image
CN110310248B (en) A kind of real-time joining method of unmanned aerial vehicle remote sensing images and system
CN111080724B (en) Fusion method of infrared light and visible light
CN110648398B (en) Real-time ortho image generation method and system based on unmanned aerial vehicle aerial data
CN103413272B (en) Low spatial resolution multi-source Remote Sensing Images Space Consistency bearing calibration
WO2021120406A1 (en) Infrared and visible light fusion method based on saliency map enhancement
CN106373088B (en) The quick joining method of low Duplication aerial image is tilted greatly
TWI322261B (en)
Grumpe et al. Construction of lunar DEMs based on reflectance modelling
CN109934788B (en) Remote sensing image missing data restoration method based on standard remote sensing image
EP2095330A1 (en) Panchromatic modulation of multispectral imagery
CN102073874A (en) Geometric constraint-attached spaceflight three-line-array charged coupled device (CCD) camera multi-image stereo matching method
EP2100268A2 (en) Structured smoothing for superresolution of multispectral imagery based on registered panchromatic image
CN108734685A (en) A kind of joining method of UAV system EO-1 hyperion linear array remote sensing image
CN111693025B (en) Remote sensing image data generation method, system and equipment
CN103810706B (en) A kind of roughness of ground surface participates in the remote sensing images inverted stereo bearing calibration of shadow model
CN108335261B (en) A kind of Optical remote satellite orthography garland region automatic testing method
US20220020178A1 (en) Method and system for enhancing images using machine learning
CN102938145A (en) Consistency regulating method and system of splicing panoramic picture
CN108230326A (en) Satellite image garland based on GPU-CPU collaborations deforms rapid detection method
CN116245757B (en) Multi-scene universal remote sensing image cloud restoration method and system for multi-mode data
CN104202538A (en) Double-registration method for different-exposure images in wide dynamic camera
CN111340895A (en) Image color uniformizing method based on pyramid multi-scale fusion
CN112016478A (en) Complex scene identification method and system based on multispectral image fusion
CN104700356B (en) A Method of Anti-Stereo Correction for Remote Sensing Image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: No. 1299, Mingxi Road, Beihu science and Technology Development Zone, Changchun City, Jilin Province

Patentee after: Changguang Satellite Technology Co.,Ltd.

Address before: 130032 No. 1759, Mingxi Road, Gaoxin North District, Changchun City, Jilin Province

Patentee before: CHANG GUANG SATELLITE TECHNOLOGY Co.,Ltd.

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Fast Splicing Method for Large Area Array Submeter Level Night Scene Remote Sensing Images

Granted publication date: 20200904

Pledgee: Jilin credit financing guarantee Investment Group Co.,Ltd.

Pledgor: Changguang Satellite Technology Co.,Ltd.

Registration number: Y2024220000032
