Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an image space scanning imaging method based on an achromatic cascade prism.
The purpose of the invention can be realized by the following technical scheme:
an image space scanning imaging method based on an achromatic cascade prism is characterized in that an imaging system comprises an optical lens group, an achromatic cascade prism device and a camera which are sequentially arranged, wherein the optical lens group is used for expanding an imaging field range and projecting scattered light from a wide-range target scene onto a virtual primary image surface; the achromatic cascade prism device is used for changing the direction of an imaging visual axis of the camera so as to capture the imaging light of the primary image surface sequentially and regionally; the camera is used for recording image information under different imaging visual angles and generating a high-resolution regional image sequence; the image space scanning imaging method comprises the following steps:
S1, constructing a parameter matching and model system: combining the field angle and the resolution of the imaging system, determining the optical parameters and the structural parameters of the camera, the achromatic cascade prism and the optical lens group, and constructing an image-space scanning imaging model system and its working coordinate system according to the relative pose relationship of the three;
S2, establishing a primary imaging projection model: according to the structural parameters and the arrangement parameters of the optical lens group, describing the multiple refractions of light rays by the optical lens group with geometrical optics, and establishing the imaging projection model and the spatial mapping relation of rays that are incident from the object onto the lens group and then emergent onto the primary image surface;
S3, planning the scanning motion of the achromatic cascade prism: determining a sub-region division strategy for image-space scanning secondary imaging by combining the coverage of the primary image surface with the transient field of view of the camera, calculating the visual axis pointing angle required for the camera to image each sub-region, and thereby designing the rotation-angle variation law of the cascade prism during visual axis adjustment;
S4, acquiring and correcting image-space sub-region images: each time the achromatic cascade prism rotates to a specified rotation-angle position, triggering the camera to perform secondary imaging of the image-space sub-region under the current visual axis pointing, and correcting the image-space sub-region image into an object-space sub-region image by combining the reverse ray tracing model with the primary imaging projection model;
S5, registering the object-space sub-region image sequence: utilizing the variation of adjacent imaging visual axis directions to pre-locate the overlapping area of two object-space sub-region images, extracting and matching a certain number of feature point pairs within the overlapping area, estimating the perspective transformation matrix of the adjacent images, and establishing a coarse-to-fine two-stage registration relation for the image sequence;
S6, generating a large-field-of-view high-resolution image: based on the accurately registered object-space sub-region image sequence, processing the intensity information of adjacent sub-region images in the overlapping region with a linear fusion strategy, and finally stitching all the sub-region images into a large-field-of-view high-resolution image.
Further, in step S1, a working coordinate system O-XYZ of the image space scanning imaging model system is established according to the right-hand rule, the origin O is fixed at the optical center position of the camera, the Z axis coincides with the optical axis direction of the camera, the X axis and the Y axis are both orthogonal to the Z axis, and the X axis and the Y axis respectively correspond to the row scanning direction and the column scanning direction of the image sensor in the camera.
Further, in step S2, the propagation of the object-side light sequentially passing through each element in the optical lens group is described by the vector refraction law, and the projection model F from the object to the primary image plane is expressed as:
s_img = F(s_obj) = s_obj ⊗ n_1 ⊗ n_2 ⊗ … ⊗ n_2k
where the symbol ⊗ represents the process of refracting the projection ray propagating along the left-hand vector with the right-hand vector as the normal vector; s_obj is the object-side ray vector incident on the optical lens group; s_img is the ray vector projected onto the primary image surface after the multiple refractions; n_1, n_2, …, n_2k denote the normal vectors of the lens surfaces through which the primary imaging ray passes in sequence; and k denotes the number of lens elements included in the optical lens group.
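For reference, the per-surface operation denoted by ⊗ above is the standard vector form of the refraction law. The sign convention below (unit surface normal n pointing toward the incident side, relative index μ taken as the ratio of the incident index to the transmitted index) is an assumed convention, since the specification does not fix one:
s_t = μ·s_i + (μ·cosθ_i − cosθ_t)·n, with cosθ_i = −s_i·n and cosθ_t = √(1 − μ²·(1 − cos²θ_i))
where s_i and s_t are the unit incident and refracted ray vectors.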
Further, the step S3 specifically includes:
S31, calculating the horizontal angle and the vertical angle covered by the primary image surface according to the structural parameters and the optical parameters of the optical lens group, comparing them with the horizontal angle and the vertical angle of the transient field of view of the camera, and dividing the sub-regions of the image-space scanning secondary imaging into an n_v × n_h array, where n_v and n_h are respectively the numbers of rows and columns, ensuring that the system can collect all information on the primary image surface through n_v × n_h sub-region scanning imaging and that an overlapping region of a certain size exists between all adjacent sub-regions;
S32, estimating the imaging boresight orientation corresponding to the center of each sub-region in combination with the sub-region division of the image-space scanning secondary imaging, described by a pitch angle Φ and an azimuth angle Θ and expressed as:
where i and j are respectively the row number and the column number of the sub-region, atan2 is the arctangent function with value range (−π, π], and λ_v and λ_h respectively represent the overlap coefficients of adjacent sub-regions in the vertical direction and the horizontal direction;
S33, for the pitch angle and the azimuth angle of the center of each image-space scanning sub-region, solving the corresponding rotation angles of the achromatic cascade prism by a two-step method so that the camera imaging visual axis points to the sub-region center, the analytic form being:
where θ_1 and θ_2 are respectively the rotation angles of the two prisms, θ_d is the difference between the rotation angles of the two prisms, and b_1 and c_1 are intermediate variables, respectively expressed as:
where α and n are respectively the wedge angle and the equivalent refractive index of the achromatic cascade prism;
S34, given the series of rotation-angle data of the achromatic cascade prism, designing the rotational motion law of the achromatic cascade prism on the principle that the prism motion time is shortest, thereby determining the sequence in which the cascade prism reaches the successive rotation angles.
Further, the step S4 specifically includes:
S41, each time the achromatic cascade prism rotates to an expected group of rotation-angle positions, triggering the camera by software to capture the image information of the corresponding image-space sub-region under the current imaging visual axis pointing;
S42, determining the secondary imaging ray vector from the actually collected image-space sub-region image by the reverse ray tracing method, and, taking it as the emergent ray of the achromatic cascade prism, determining the corresponding incident ray, expressed as:
where the normal vectors of the prism refracting surfaces are taken in the order in which the reverse-traced ray passes through them, starting from the camera imaging plane;
S43, since in the actual imaging process the light enters the achromatic cascade prism directly after reaching the primary image surface, the primary imaging projection ray can be determined from the incident ray of the achromatic cascade prism, that is, the two rays coincide; the primary imaging projection model is then used in reverse to calculate the corresponding object-space projection ray vector s_obj, expressed as:
where F⁻¹ denotes the reverse process of the primary imaging projection model F;
and S44, acquiring all secondary imaging ray vectors from the image-space sub-region image collected by the camera and substituting them into steps S42 and S43 to determine the corresponding object-space projection rays, so that the distorted image-space sub-region image is restored to an undistorted object-space sub-region image.
Further, the step S5 specifically includes:
S51, combining the deflection characteristic of the achromatic cascade prism on the camera imaging visual axis direction, establishing the relation by which the primary imaging ray vector is solved in reverse from the secondary imaging ray vector, expressed as:
R(Φ,Θ) = A(Θ) + [I − A(Θ)]·cosΦ + B(Θ)·sinΦ
where I is the third-order identity matrix, and the matrices A and B are both related to the azimuth angle Θ and are represented as:
S52, determining the relative position of one image within the other according to the approximate transformation matrix between adjacent image-space sub-region images, thereby determining the boundary of the overlapping region of the two images; taking the sub-region image I_ij in the i-th row and j-th column and the adjacent sub-region image I_i(j+1) in the i-th row and (j+1)-th column as an example, the boundary of image I_i(j+1) in the coordinate system of image I_ij is expressed as:
where p_i(j+1) represents the homogeneous image coordinates of any point on the boundary of image I_i(j+1), p̃_i(j+1) is the corresponding homogeneous image coordinates after conversion into the coordinate system of image I_ij, and ω is a scale factor; comparing the boundary positions of the adjacent images I_ij and I_i(j+1) in the coordinate system of image I_ij determines the boundary of their overlapping region;
S53, using the primary imaging projection model, the overlapping-region boundary E_img of the adjacent image-space sub-region images is mapped into the overlapping-region boundary E_obj of the adjacent object-space sub-region images, expressed as:
where E_obj provides a coarse registration constraint for the adjacent object-space sub-region images;
S54, extracting a certain number of image features in the overlapping region of the adjacent object-space sub-region images and establishing a feature matching relationship between the two images, thereby estimating the projective transformation matrix M of the two images, with the fine registration relationship expressed as:
where K_i(j+1) and K̃_i(j+1) respectively represent the homogeneous image coordinates of the object-space sub-region image in the i-th row and (j+1)-th column and the corresponding homogeneous image coordinates after registration to the object-space sub-region image in the i-th row and j-th column.
Further, in step S6, for any two adjacent object-space sub-region images, the intensity information in the overlapping region is processed by a linear fusion strategy, that is, the distances from a given point to the centers of the two images are used as the weights of the fused intensity, expressed as:
D̃(x, y) = ω_ij·D_ij(x, y) + ω_i(j+1)·D_i(j+1)(x, y)
where (x, y) are the image coordinates of an image point within the overlapping region, D_ij and D_i(j+1) respectively represent the two adjacent object-space sub-region images, D̃ represents the fused image, and ω_ij and ω_i(j+1) are weights that vary with the Euclidean distances from the image point to the centers of the two images, each with value range [0, 1].
Furthermore, the achromatic cascade prism device comprises a pair of achromatic prisms and respective rotary driving mechanisms, and the two achromatic prisms keep optical axes aligned with each other and adopt an arrangement form of plane opposition or wedge surface opposition.
Furthermore, the rotary driving mechanism adopts a torque motor direct drive or gear drive, synchronous belt drive or worm and gear drive mode.
Furthermore, the camera, the achromatic cascade prism and the optical lens group all satisfy the coaxial arrangement relationship, and the imaging target surface of the camera is parallel to the plane sides of the two achromatic prisms.
Compared with the prior art, the invention has the following beneficial effects:
1. according to the invention, the achromatic cascade prism and the optical lens group are introduced in front of the single camera, and the object space view field expansion function of the optical lens group and the image space scanning imaging function of the achromatic cascade prism are combined, so that large-range, high-efficiency and wide-spectrum imaging is realized on the basis of ensuring the compactness of the overall structure and the flexibility of moving parts.
2. The invention provides an automatic division strategy for the image-space scanning imaging sub-regions and quickly obtains the corresponding achromatic cascade prism rotation angles by a reverse analytic method, so as to control the scanning motion of the camera imaging visual axis and capture the image information of each image-space scanning sub-region in the shortest time, improving the real-time performance and adaptability of the whole imaging process.
3. The invention utilizes the vector refraction law and the reverse ray tracing method to establish the vector mapping relation of the object space projection ray, the primary imaging ray and the secondary imaging ray, can correct the actually collected image space scanning image into an undistorted object space image, and overcomes the problem of image degradation caused by introducing a refraction optical element.
4. The invention provides a coarse and fine two-stage image registration method facing cascaded prism scanning imaging, which comprises the steps of firstly positioning the overlapping area of images of adjacent subregions in advance to realize coarse registration, and then estimating a transformation matrix from an image characteristic matching relation to realize fine registration, so that the accuracy and the reliability of the image space scanning image sequence splicing process can be fully ensured.
5. The invention restrains the information fusion process of the image space scanning image sequence in the pre-positioned overlapping area, does not need to cover the whole range of each sub-area image, can greatly reduce the time complexity of fusion operation, and improves the generation efficiency of the large-view-field high-resolution image.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
The embodiment provides an image space scanning imaging method based on an achromatic cascade prism.
As shown in fig. 1, the imaging system includes an optical lens group 3, an achromatic cascade prism apparatus, and a camera 1, which are arranged in this order.
The camera 1 includes an image sensor and a lens, parameters such as a target surface size and a pixel size of the image sensor, a focal length and a depth of field of the lens are determined by a range of a target scene, and a detection band of the image sensor is determined by an attribute of the target scene and is a visible light band or an infrared light band.
The achromatic cascade prism includes a first achromatic prism 21 and a second achromatic prism 22, each of which is composed of a combination of elements made of two different materials (e.g., germanium and silicon, lithium fluoride, zinc sulfide, etc.). The two achromatic prisms keep optical axes aligned with each other, adopt an arrangement form of plane opposite or wedge surface opposite, and simultaneously keep the plane sides of the two prisms parallel to the sensor target surface of the camera 1; the two achromatic prisms are fixed on respective supporting structures in an optical glue bonding mode and are driven by an independent rotating mechanism to realize rotating motion around the optical axis direction; the rotating mechanism adopts the modes of torque motor direct drive or gear drive, synchronous belt drive, worm and gear drive and the like.
The optical lens group 3 comprises a plurality of lens elements in different forms, all the elements meet the coaxial relationship with the camera 1 and the achromatic cascade prism, the optical parameters and the arrangement scheme are designed and matched according to the range requirement of an imaging view field, and the film coating treatment is carried out on the detection waveband of the camera 1 so as to increase the light transmittance.
The imaging system of the embodiment introduces the achromatic cascade prism and the optical lens group in front of the camera, can capture large-range target scene information through the visual field expansion and the primary imaging action of the optical lens group, projects the target scene information onto a primary image surface, collects all information on the primary image surface in different areas through the visual axis adjustment and the secondary imaging action of the achromatic cascade prism, and finally splices to obtain a large-visual-field high-resolution image. Compared with the existing multi-camera imaging system and single-camera imaging system, the image space scanning imaging system of the embodiment does not need the camera body to move in any form, does not introduce a reflecting element sensitive to error disturbance, and can simultaneously meet the performance requirements of structural compactness, imaging field range, image resolution, imaging efficiency, flexibility and the like.
As shown in fig. 2 to 5, the image space scanning imaging method includes the specific steps of:
step S1, parameter matching and model system construction
Determining optical parameters and structural parameters of the camera 1, the first achromatic prism 21, the second achromatic prism 22 and the optical lens group 3 according to requirements of the field angle, the resolution and the like of the imaging system, and constructing an image space scanning imaging model system based on the achromatic cascade prism according to the coaxial arrangement relationship of the three;
and establishing a working coordinate system O-XYZ of the image space scanning imaging model system according to a right-hand rule, wherein an origin O is fixed at the optical center position of the camera 1, a Z axis is overlapped with the optical axis direction of the camera 1, and an X axis and a Y axis are both orthogonal to the Z axis and respectively correspond to the row scanning direction and the column scanning direction of the image sensor.
Step S2, establishing a primary imaging projection model
According to the structural parameters and the arrangement parameters of the optical lens group, the multiple refractions of the light rays by the optical lens group are described with geometrical optics, the propagation of the object-side light sequentially passing through each element of the lens group is described by the vector refraction law, and the imaging projection model F of the rays incident from the object onto the lens group and then emergent onto the primary image surface is established, expressed as:
s_img = F(s_obj) = s_obj ⊗ n_1 ⊗ n_2 ⊗ … ⊗ n_2k
where the symbol ⊗ represents the process of refracting the projection ray propagating along the left-hand vector with the right-hand vector as the normal vector; s_obj is the object-side ray vector incident on the optical lens group; s_img is the ray vector projected onto the primary image surface after the multiple refractions; n_1, n_2, …, n_2k denote the normal vectors of the lens surfaces through which the primary imaging ray passes in sequence; and k denotes the number of lens elements included in the optical lens group.
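As an illustration of how such a surface-by-surface model can be evaluated numerically, the sketch below applies the vector refraction law at a sequence of surfaces. The function names, the single-element geometry, the flat tilted surfaces and the BK7-like index are illustrative assumptions, not the parameters of the actual lens group:

```python
import numpy as np

def refract(s, n, mu):
    """Refract unit ray s at a surface with unit normal n (pointing toward the
    incident side); mu = n_incident / n_transmitted. Returns None on total
    internal reflection."""
    s = s / np.linalg.norm(s)
    n = n / np.linalg.norm(n)
    cos_i = -np.dot(s, n)
    sin2_t = mu ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return mu * s + (mu * cos_i - cos_t) * n

def primary_projection(s_obj, surfaces):
    """Chain the refraction through the 2k lens surfaces, i.e.
    s_img = s_obj (x) n_1 (x) ... (x) n_2k, with `surfaces` given as
    (normal, mu) pairs in the order the ray meets them."""
    s = s_obj
    for n, mu in surfaces:
        s = refract(s, n, mu)
        if s is None:
            raise ValueError("ray lost to total internal reflection")
    return s

# Toy example: a single element (k = 1) with a slightly tilted front face.
glass = 1.5168                                        # assumed BK7-like index
tilt = np.radians(2.0)
front = np.array([0.0, np.sin(tilt), -np.cos(tilt)])  # normals point toward the incident side
back = np.array([0.0, 0.0, -1.0])
s_img = primary_projection(np.array([0.0, 0.0, 1.0]),
                           [(front, 1.0 / glass), (back, glass)])
print(s_img)
```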
Step S3, planning the scanning motion of the cascaded prism
Step S31, calculating the horizontal angle and the vertical angle covered by the primary image surface according to the structural parameters and the optical parameters of the optical lens group, comparing them with the horizontal angle and the vertical angle of the transient field of view of the camera, and dividing the sub-regions of the image-space scanning secondary imaging into a 4 × 4 array, ensuring that the system can collect all information on the primary image surface through the sub-region scanning imaging and that an overlapping region of a certain size exists between all adjacent sub-regions.
Step S32, estimating the imaging boresight orientation corresponding to the center of each sub-region according to the sub-region division of the image-space scanning secondary imaging, described by the pitch angle Φ and the azimuth angle Θ and expressed as:
where i and j are respectively the row number and the column number of the sub-region, atan2 is the arctangent function with value range (−π, π], n_v = 4 and n_h = 4 respectively denote the numbers of rows and columns of the sub-region division, and λ_v = 0.15 and λ_h = 0.15 represent the overlap coefficients of adjacent sub-regions in the vertical direction and the horizontal direction.
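The exact expressions for Φ and Θ are not reproduced above. A minimal sketch of how the 4 × 4 sub-region centers could be turned into boresight angles is given below, assuming the primary image surface covers field angles of 50° × 40° (placeholder values) and that the sub-region centers map linearly to field angle with the overlap coefficients λ_v = λ_h = 0.15; both the angular spans and the linear mapping are assumptions made only for illustration:

```python
import numpy as np

def subregion_boresights(n_v=4, n_h=4, omega_v=40.0, omega_h=50.0,
                         lam_v=0.15, lam_h=0.15):
    """Return {(i, j): (pitch, azimuth)} in degrees for each sub-region center.
    omega_v / omega_h are the assumed vertical / horizontal angles covered by
    the primary image surface; lam_v / lam_h are the overlap coefficients."""
    step_v = omega_v * (1.0 - lam_v) / n_v       # angular step between adjacent centers
    step_h = omega_h * (1.0 - lam_h) / n_h
    out = {}
    for i in range(n_v):
        for j in range(n_h):
            dv = (i - (n_v - 1) / 2.0) * step_v  # offsets of the (i, j) center from
            dh = (j - (n_h - 1) / 2.0) * step_h  # the optical axis (assumed linear map)
            phi = np.hypot(dv, dh)               # pitch angle of the required boresight
            theta = np.degrees(np.arctan2(dv, dh))   # azimuth, in (-180, 180]
            out[(i, j)] = (phi, theta)
    return out

for (i, j), (phi, theta) in sorted(subregion_boresights().items()):
    print(f"sub-region ({i},{j}): pitch {phi:6.2f} deg, azimuth {theta:7.2f} deg")
```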
Step S33, for the pitch angle and the azimuth angle of the center of each image-space scanning sub-region, solving the corresponding cascade prism rotation angles by the two-step method so that the camera imaging visual axis points to the sub-region center, the analytic form being:
where θ_1 and θ_2 are respectively the rotation angles of the two prisms, θ_d is the difference between the rotation angles of the two prisms, and b_1 and c_1 are intermediate variables, respectively expressed as:
the wedge angle of the achromatic cascade prism in this embodiment is 5 °, and the equivalent refractive index is 3.
Step S34, given the 4 × 4 groups of rotation-angle data of the achromatic cascade prism, designing the rotational motion law of the achromatic cascade prism on the principle that the prism motion time is shortest, thereby determining the rotation-angle sequences {θ_1}_ij and {θ_2}_ij that the cascade prism reaches in succession.
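The specification does not spell out how the shortest-motion-time ordering is computed; one plausible heuristic, given only as an assumption, is a greedy nearest-neighbour ordering over the 4 × 4 rotation-angle pairs in which the cost of a move is the larger of the two wrapped angular travels, since the two prisms rotate simultaneously:

```python
def wrapped_travel(a, b):
    """Smallest angular travel in degrees between two prism orientations."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def greedy_scan_order(angle_pairs, start=(0.0, 0.0)):
    """Order the (theta1, theta2) pairs so each move minimizes the travel of the
    slower prism. angle_pairs: {(i, j): (theta1, theta2)}; returns the visiting order."""
    remaining = dict(angle_pairs)
    current = start
    order = []
    while remaining:
        key = min(remaining,
                  key=lambda k: max(wrapped_travel(current[0], remaining[k][0]),
                                    wrapped_travel(current[1], remaining[k][1])))
        order.append(key)
        current = remaining.pop(key)
    return order

# Illustrative 2 x 2 subset of rotation-angle pairs (placeholder values).
pairs = {(0, 0): (15.0, 75.0), (0, 1): (40.0, 100.0),
         (1, 0): (200.0, 260.0), (1, 1): (225.0, 285.0)}
print(greedy_scan_order(pairs))
```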
Step S4, image-space sub-region image acquisition and correction
Step S41, controlling the achromatic cascade prism to rotate to the expected rotation-angle positions {θ_1}_ij and {θ_2}_ij, and triggering the camera by software to capture the image information of the corresponding image-space sub-region under the current imaging visual axis pointing.
Step S42, determining the secondary imaging ray vector from the actually collected image-space sub-region image by the reverse ray tracing method, and, taking it as the emergent ray vector of the achromatic cascade prism, determining the corresponding incident ray, expressed as:
where the normal vectors of the prism refracting surfaces are taken in the order in which the reverse-traced ray passes through them, starting from the camera imaging plane.
Step S43, since in the actual imaging process the light enters the achromatic cascade prism directly after reaching the primary image surface, the primary imaging projection ray can be determined from the incident ray of the cascade prism, that is, the two rays coincide; the primary imaging projection model is then used in reverse to calculate the corresponding object-space projection ray vector s_obj, expressed as:
where F⁻¹ denotes the reverse process of the primary imaging projection model F.
Step S44, acquiring all secondary imaging ray vectors from the image-space sub-region images collected by the camera and substituting them into steps S42 and S43 to determine the corresponding object-space projection rays, thereby restoring the distorted image-space sub-region images {I_img}_ij to undistorted object-space sub-region images {I_obj}_ij.
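A sketch of the reverse tracing idea of steps S42 and S43: the secondary imaging ray is propagated backwards through the four refracting surfaces of the cascade prism to recover the prism's incident ray, which is then handed to the reverse primary-imaging projection model F⁻¹. The surface normals (drawn for zero prism rotation), the use of the equivalent index for every surface, and the placeholder inverse_primary_projection function are assumptions for illustration only:

```python
import numpy as np

def refract(s, n, mu):
    """Vector refraction law; n is the unit normal on the incident side, mu = n_in / n_out."""
    s, n = s / np.linalg.norm(s), n / np.linalg.norm(n)
    cos_i = -np.dot(s, n)
    sin2_t = mu ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None
    return mu * s + (mu * cos_i - np.sqrt(1.0 - sin2_t)) * n

def reverse_trace_prism(s_2nd, surfaces):
    """Step S42: trace the secondary imaging ray backwards through the cascade prism.
    `surfaces` lists (normal, mu) pairs in the order the reversed ray meets them,
    starting from the camera imaging plane."""
    s = -s_2nd / np.linalg.norm(s_2nd)         # propagate from the camera back toward the prism
    for n, mu in surfaces:
        s = refract(s, n, mu)
        if s is None:
            raise ValueError("reverse-traced ray lost to total internal reflection")
    return -s                                  # incident ray of the prism, forward convention

def inverse_primary_projection(s_in):
    """Step S43: placeholder for F^-1, the reverse of the primary imaging projection
    model; a real implementation would chain back through the lens-group surfaces."""
    return s_in

# Assumed geometry: wedge-opposed pair at zero rotation, equivalent index 3, wedge 5 deg.
n_eq, wedge = 3.0, np.radians(5.0)
surfaces = [
    (np.array([0.0, 0.0, 1.0]), 1.0 / n_eq),                       # flat face nearest the camera
    (np.array([0.0, np.sin(wedge), np.cos(wedge)]), n_eq),         # its wedge face
    (np.array([0.0, -np.sin(wedge), np.cos(wedge)]), 1.0 / n_eq),  # wedge face of the farther prism
    (np.array([0.0, 0.0, 1.0]), n_eq),                             # its flat face
]
s_obj = inverse_primary_projection(reverse_trace_prism(np.array([0.0, 0.0, 1.0]), surfaces))
print(s_obj)
```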
Step S5, registration of the object-space sub-region image sequence
Step S51, combining the deflection characteristic of the achromatic cascade prism on the camera imaging visual axis direction, establishing the relation by which the primary imaging ray vector is solved in reverse from the secondary imaging ray vector, expressed as:
R(Φ,Θ) = A(Θ) + [I − A(Θ)]·cosΦ + B(Θ)·sinΦ
where I is the third-order identity matrix, and the matrices A and B are both related to the azimuth angle Θ and are represented as:
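The explicit forms of A(Θ) and B(Θ) are not reproduced above. The quoted expression matches the Rodrigues rotation formula, so one consistent reading, given here as an assumption about the rotation-axis convention, is A(Θ) = u·uᵀ and B(Θ) = [u]× with u(Θ) = (−sinΘ, cosΘ, 0):

```python
import numpy as np

def rotation_phi_theta(phi, theta):
    """R(phi, theta) = A + (I - A) * cos(phi) + B * sin(phi), angles in radians.
    Assumes the Rodrigues form with rotation axis u = (-sin(theta), cos(theta), 0)."""
    u = np.array([-np.sin(theta), np.cos(theta), 0.0])
    A = np.outer(u, u)                              # A(theta) = u u^T
    B = np.array([[0.0, -u[2], u[1]],               # B(theta) = [u]_x, the cross-product matrix
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])
    return A + (np.eye(3) - A) * np.cos(phi) + B * np.sin(phi)

# Example: recover the primary imaging ray from a secondary imaging ray along +Z.
phi, theta = np.radians(8.0), np.radians(30.0)
s_1st = rotation_phi_theta(phi, theta) @ np.array([0.0, 0.0, 1.0])
print(s_1st)   # -> [sin(phi)cos(theta), sin(phi)sin(theta), cos(phi)]
```

With this choice, applying R(Φ, Θ) to a ray along the Z axis yields a ray with pitch Φ and azimuth Θ, consistent with the boresight definition in step S32.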
step S52, determining the relative position of one image in the other image according to the approximate transformation matrix between the adjacent subarea images on the image side, thereby determining the boundary of the overlapping area of the two images; the sub-area image I in the ith row and the jth columnijAnd the adjacent sub-area image I of the ith row and the (j + 1) th columni(j+1)As an example, image Ii(j+1)Is in the image IijCan be expressed as:
wherein is p
i(j+1)Representing an image I
i(j+1)Homogeneous image coordinates of any point on the boundary,
to convert it to image I
ijThe coordinates of subsequent homogeneous images under a coordinate system, wherein omega is a scale factor; in picture I
ijIn a coordinate system of (1) comparing adjacent images I
ijAnd I
i(j+1)The boundary position of the two can be determined, namely the boundary of the overlapped area of the two is determined
Step S53, using the primary imaging projection model, the overlapping-region boundary E_img of the adjacent image-space sub-region images is mapped into the overlapping-region boundary E_obj of the adjacent object-space sub-region images, expressed as:
where E_obj provides a coarse registration constraint for the adjacent object-space sub-region images.
Step S54, in the overlapping region of the adjacent object-space sub-region images, extracting no fewer than 4 image features from each of the two images with the scale-invariant feature transform (SIFT) algorithm, establishing an accurate matching relationship between the features by combining a fast approximate nearest-neighbour matching algorithm with the random sample consensus (RANSAC) method, and thereby estimating the projective transformation matrix M of the two images, with the fine registration relationship expressed as:
where K_i(j+1) and K̃_i(j+1) respectively represent the homogeneous image coordinates of the object-space sub-region image in the i-th row and (j+1)-th column and the corresponding homogeneous image coordinates after registration to the object-space sub-region image in the i-th row and j-th column.
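A minimal sketch of the feature-based fine registration of step S54, using OpenCV's SIFT detector, a FLANN-based approximate nearest-neighbour matcher with Lowe's ratio test, and RANSAC homography estimation; the file names, ratio threshold and RANSAC tolerance are illustrative choices rather than values fixed by the specification:

```python
import cv2
import numpy as np

def estimate_projective_matrix(img_left, img_right, ratio=0.75, ransac_thresh=3.0):
    """Estimate the 3x3 projective transform M mapping points of the right
    (i, j+1) sub-region image into the left (i, j) sub-region image."""
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(img_left, None)
    kp_r, des_r = sift.detectAndCompute(img_right, None)

    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    matches = flann.knnMatch(des_r, des_l, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    if len(good) < 4:                     # a homography needs at least 4 correspondences
        raise RuntimeError("not enough matched features")

    src = np.float32([kp_r[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_l[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    M, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    return M

# Illustrative usage on two adjacent (grayscale) object-space sub-region images.
I_ij = cv2.imread("subregion_i_j.png", cv2.IMREAD_GRAYSCALE)
I_ij1 = cv2.imread("subregion_i_j+1.png", cv2.IMREAD_GRAYSCALE)
print(estimate_projective_matrix(I_ij, I_ij1))
```

In the full flow the detection would be restricted to the overlapping region pre-located in steps S52 and S53, which is what keeps the feature search and matching small.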
Step S6, generating a large-field-of-view high-resolution image
For the accurately registered object-space sub-region image sequence, the intensity information of adjacent sub-region images in the overlapping region is processed with a linear fusion strategy, that is, the distances from an image point to the centers of the two images are used as weights to calculate the intensity value of the fused image at that point, expressed as:
D̃(x, y) = ω_ij·D_ij(x, y) + ω_i(j+1)·D_i(j+1)(x, y)
where (x, y) are the image coordinates of any image point within the overlapping region, D_ij and D_i(j+1) respectively represent the intensity images of the two adjacent object-space sub-regions, D̃ represents the fused image, and ω_ij and ω_i(j+1) are weights that vary with the Euclidean distances from the image point to the centers of the two images, each with value range [0, 1].
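A sketch of the distance-weighted linear fusion of step S6, applied only inside the pre-located overlapping region; normalizing each weight by the sum of the two center distances (so that ω_ij + ω_i(j+1) = 1) is an assumption consistent with, but not dictated by, the description above:

```python
import numpy as np

def fuse_overlap(D_ij, D_ij1, overlap_mask, c_ij, c_ij1):
    """Blend two registered sub-region images inside their overlap region.
    c_ij, c_ij1: (x, y) image centers; overlap_mask: boolean mask of the overlap."""
    out = np.where(overlap_mask, 0.0, D_ij + D_ij1)  # outside the overlap only one image is nonzero here
    ys, xs = np.nonzero(overlap_mask)
    d_ij = np.hypot(xs - c_ij[0], ys - c_ij[1])      # distances of each overlap pixel to the two centers
    d_ij1 = np.hypot(xs - c_ij1[0], ys - c_ij1[1])
    w_ij = d_ij1 / (d_ij + d_ij1)                    # closer to I_ij's center -> larger weight on D_ij
    out[ys, xs] = w_ij * D_ij[ys, xs] + (1.0 - w_ij) * D_ij1[ys, xs]
    return out

# Tiny synthetic example: two 4 x 6 intensity images overlapping in the middle two columns.
D_a = np.zeros((4, 6)); D_a[:, :4] = 100.0
D_b = np.zeros((4, 6)); D_b[:, 2:] = 200.0
mask = np.zeros((4, 6), dtype=bool); mask[:, 2:4] = True
print(fuse_overlap(D_a, D_b, mask, c_ij=(1.5, 1.5), c_ij1=(4.0, 1.5)))
```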
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.