
CN110313020A - Image processing method, equipment and computer readable storage medium - Google Patents

Image processing method, equipment and computer readable storage medium Download PDF

Info

Publication number
CN110313020A
CN110313020A (application CN201880012242.9A)
Authority
CN
China
Prior art keywords
points
folding
deformation
point
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880012242.9A
Other languages
Chinese (zh)
Inventor
周游
刘洁
唐克坦
刘昂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN110313020A publication Critical patent/CN110313020A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

An image processing method, equipment, and a computer-readable storage medium. The method comprises: obtaining multiple 2D original images containing a target object; obtaining a first-class point cloud of the 3D scene of the target object from the multiple 2D original images; folding the first-class point cloud using a deformation function to obtain a second-class point cloud; and projecting the second-class point cloud onto the image plane to obtain a 2D virtual image with a folding effect. The embodiments of the present invention reduce the complexity of obtaining images with a folding effect (such as the folded-city special effect), allow the user to shoot images with a folding effect, and improve the user experience.

Description

Image processing method, equipment and computer readable storage medium
Technical field
The present invention relates to the technical field of image processing, and more particularly to an image processing method, equipment, and a computer-readable storage medium.
Background technique
Computer animation is the technique of producing animation by computer, including two-dimensional (2D) animation and three-dimensional (3D) animation; computer animation can be accomplished by means of CG (Computer Graphics). The field of visual design and production using computer technology is known as CG; CG is the general term for all graphics drawn by computer, covering, for example, web design, film and television special effects, and multimedia technology.
Among the film and television special effects of computer animation, the folded-city effect is a typical example. However, producing images with the folded-city effect currently relies mostly on CG techniques, which is rather complex and gives a poor experience.
Summary of the invention
The present invention provides an image processing method, equipment, and a computer-readable storage medium, which can reduce the complexity of obtaining images with a folding effect (such as the folded-city effect) and improve the user experience.
In a first aspect, an embodiment of the present invention provides an image processing method, the method comprising:
obtaining multiple 2D original images containing a target object;
obtaining a first-class point cloud of the 3D scene of the target object from the multiple 2D original images;
folding the first-class point cloud using a deformation function to obtain a second-class point cloud; and
projecting the second-class point cloud onto the image plane to obtain a 2D virtual image with a folding effect.
In a second aspect, an embodiment of the present invention provides image processing equipment comprising a memory and a processor;
the memory is configured to store program code;
the processor is configured to call the program code and, when the program code is executed, to perform the following operations: obtaining multiple 2D original images containing a target object;
obtaining a first-class point cloud of the 3D scene of the target object from the multiple 2D original images;
folding the first-class point cloud using a deformation function to obtain a second-class point cloud; and
projecting the second-class point cloud onto the image plane to obtain a 2D virtual image with a folding effect.
In a third aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer instructions which, when executed, implement the image processing method described above, i.e., the image processing method proposed in the first aspect of the embodiments of the present invention.
Based on the above technical solutions, in the embodiments of the present invention, since the movable platform has good trajectory planning and supports automatic flight and intelligent following, after the movable platform obtains multiple 2D original images containing the target object, a 2D virtual image with a folding effect can be obtained from the 2D original images. This reduces the complexity of obtaining images with a folding effect (such as the folded-city effect). The above process is completed automatically, with no user intervention required; the user can simply shoot images with a folding effect, which improves the user experience.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments recorded in the present invention; those of ordinary skill in the art can derive other drawings from these drawings according to the embodiments of the present invention.
Fig. 1 is a schematic structural diagram of a UAV;
Fig. 2 is a schematic diagram of an embodiment of an image processing method;
Fig. 3 is a schematic diagram of a deformation function;
Fig. 4 is a schematic diagram of another embodiment of an image processing method;
Fig. 5A to Fig. 5F are schematic diagrams of transforming feature points;
Fig. 6 is a schematic diagram of yet another embodiment of an image processing method;
Fig. 7 is a schematic diagram of yet another embodiment of an image processing method;
Fig. 8 is a block diagram of an embodiment of image processing equipment.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention. In addition, provided there is no conflict, the features in the following embodiments may be combined with each other.
The terminology used in the present invention is for the purpose of describing particular embodiments only and is not intended to limit the present invention. The singular forms "a", "said", and "the" used in the present invention and the claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should be understood that the term "and/or" used herein refers to and includes any and all possible combinations of one or more of the associated listed items.
Although the terms first, second, third, etc. may be used in the present invention to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present invention, first information may also be called second information, and similarly, second information may also be called first information. In addition, depending on the context, the word "if" as used herein may be interpreted as "when", "while", or "in response to determining".
An image processing method is proposed in the embodiments of the present invention for obtaining images with a folding effect (such as the folded-city effect); the method may be applied to image processing equipment. The image processing equipment may be a movable platform: after the movable platform obtains multiple original images containing the target object, it uses these original images to obtain an image with a folding effect. The image processing equipment may also be a control device: after the movable platform obtains multiple original images containing the target object, it sends these original images to the control device, and after the control device receives the multiple original images containing the target object, it uses these original images to obtain an image with a folding effect. The movable platform may include, but is not limited to, a robot, a UAV, an unmanned vehicle, VR glasses, or AR glasses; there is no limitation, as long as it is a carrier with multiple cameras. The control device may include, but is not limited to, a remote controller, a smartphone/mobile phone, a tablet computer, a personal digital assistant (PDA), a laptop computer, a desktop computer, a media content player, a video game station/system, a virtual reality system, an augmented reality system, a wearable device (e.g., a watch, glasses, gloves, headwear such as a hat, a helmet, a virtual reality headset, an augmented reality headset, a head-mounted device (HMD), or a headband, a pendant, an armband, a leg ring, shoes, or a vest), a gesture recognition device, a microphone, or any electronic device capable of providing or rendering image data, without limitation.
For ease of description, the image processing equipment is taken to be a movable platform, and the movable platform is taken to be a UAV. The UAV carries a gimbal, and a shooting device (such as a camera or video camera) is fixed on the gimbal. The UAV has good trajectory planning and supports automatic flight and intelligent following; it can obtain multiple original images containing the target object through the shooting device and obtain an image with a folding effect from the original images.
Referring to Fig. 1, a schematic structural diagram of a UAV: 10 denotes the nose of the UAV, 11 the propellers, 12 the fuselage, 13 the landing gear, 14 the gimbal on the UAV, and 15 the shooting device carried by the gimbal 14; the shooting device 15 is connected to the fuselage 12 of the UAV through the gimbal 14; 16 denotes the lens of the shooting device, and 17 the target object.
The gimbal 14 may be a three-axis gimbal, i.e., the gimbal 14 rotates about its roll axis, pitch axis, and yaw axis. As shown in Fig. 1, 1 denotes the roll axis of the gimbal, 2 the pitch axis, and 3 the yaw axis. When the gimbal rotates about the roll axis, the roll angle of the gimbal changes; when the gimbal rotates about the pitch axis, the pitch angle changes; when the gimbal rotates about the yaw axis, the yaw angle changes. Moreover, when the gimbal rotates about one or more of the roll, pitch, and yaw axes, the shooting device 15 rotates with the gimbal 14, so the shooting device 15 shoots the target object 17 from different shooting directions and shooting angles.
Similarly to the gimbal 14, the fuselage 12 of the UAV can also rotate about the fuselage's roll, pitch, and yaw axes. When the fuselage of the UAV rotates about the roll axis, the roll angle of the fuselage changes; when it rotates about the pitch axis, the pitch angle changes; when it rotates about the yaw axis, the yaw angle changes.
Based on the above application scenario, refer to Fig. 2, a flowchart of the image processing method proposed in an embodiment of the present invention. The method may be applied to image processing equipment and may include:
Step 201: obtain multiple 2D original images containing the target object.
Obtaining multiple 2D original images containing the target object includes: planning the flight trajectory of a movable platform (such as a UAV) according to the position of the target object, controlling the movable platform to fly along the trajectory, and acquiring multiple 2D original images containing the target object during the flight of the movable platform.
Taking a UAV as an example: it has good trajectory planning and supports automatic flight and intelligent following, so its flight trajectory can be planned according to the position of the target object, and the UAV then flies along that trajectory automatically. During automatic flight, the UAV keeps the orientation of the shooting device updated in real time to ensure the target object is always at the center of the frame, so that multiple images containing the target object can be obtained through the shooting device. For ease of distinction, an image containing the target object is called a 2D original image.
Step 202: obtain the first-class point cloud of the 3D scene of the target object from the multiple 2D original images.
Obtaining the first-class point cloud of the 3D scene of the target object from the multiple 2D original images includes: processing the multiple 2D original images with an image processing algorithm to obtain the first-class point cloud of the 3D scene of the target object; the first-class point cloud may include multiple feature points carrying 3D information.
The image processing algorithm may include, but is not limited to, an SfM (Structure from Motion) algorithm; the SfM algorithm may be a sparse SfM algorithm or a dense SfM algorithm.
In one example, there is no restriction on the process by which the image processing algorithm handles the multiple 2D original images to obtain the first-class point cloud. For example, the input data are the multiple 2D original images; through processes such as initializing the picture sequence and camera calibration, extracting feature points and computing feature descriptors, image matching, solving the structure-from-motion problem, and optional optimization, the output data can be obtained. The output data are the first-class point cloud of the 3D scene of the target object: a set of feature points, each carrying 3D information, i.e., 3D coordinates (X, Y, Z). Each feature point may also carry contents such as laser reflection intensity and color information, without limitation.
To obtain the first-class point cloud, a sparse SfM algorithm may be used; the resulting first-class point cloud is a sparse point cloud, i.e., the number of feature points is small and the spacing between feature points is large. Alternatively, a dense SfM algorithm may be used; the resulting first-class point cloud is a dense point cloud, i.e., the number of feature points is large and the spacing between feature points is small.
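As a rough illustration of what each feature point of the first-class point cloud may carry (3D coordinates plus the optional attributes mentioned above), a minimal sketch; the class and field names are ours, not the patent's:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FeaturePoint:
    """One entry of a point cloud: 3D coordinates (X, Y, Z) plus the
    optional attributes the text mentions (names are assumptions)."""
    x: float
    y: float
    z: float
    intensity: Optional[float] = None             # laser reflection intensity
    color: Optional[Tuple[int, int, int]] = None  # (r, g, b)
```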
Step 203: fold the first-class point cloud using a deformation function to obtain the second-class point cloud.
The deformation function may include, but is not limited to, an exponential function, a parabolic function, or an Archimedean spiral function, without restriction; the deformation function can be chosen empirically, as long as it can fold the first-class point cloud. For ease of description, the exponential function is used as the deformation function below. Referring to Fig. 3, the plane at the bottom is the original ground (tilted slightly here for ease of viewing); by constructing a deformation function, i.e., the exponential function y = e^x, the ground can be folded into a curved surface.
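As a sketch of the candidate deformation functions named above (the exponential form y = e^x is the one the text carries forward; the parabolic alternative shown is one possible empirical choice, and the function names are ours):

```python
import math

def exponential(x: float) -> float:
    """Exponential deformation function y = e^x, as used in the text."""
    return math.exp(x)

def parabolic(x: float) -> float:
    """A parabolic alternative, y = x^2 (one possible empirical choice)."""
    return x * x
```

Either function bends upward with increasing x, which is what folds the far part of the ground toward the sky in Fig. 3.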
Referring to Fig. 4, folding the first-class point cloud using the deformation function to obtain the second-class point cloud (which may likewise include multiple feature points carrying 3D information) may include:
Step 2031: choose N deformation points on the curve corresponding to the deformation function, where N is a positive integer greater than or equal to 1. The value of N can be chosen empirically, e.g., 5 or 10, without limitation.
Choosing N deformation points on the curve corresponding to the deformation function may include: choosing one deformation point every first distance along the abscissa of the curve; or choosing one deformation point every second distance along the ordinate of the curve. Both the first distance and the second distance can be set empirically, without limitation.
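The first of these sampling schemes (one deformation point every first distance along the abscissa) can be sketched as follows; the function name and the loop tolerance are ours:

```python
def sample_by_abscissa(f, x_start, x_end, first_distance):
    """Choose one deformation point every `first_distance` along the
    abscissa of the curve y = f(x) (step 2031, abscissa variant)."""
    points = []
    x = x_start
    while x <= x_end + 1e-12:   # small tolerance against float drift
        points.append((x, f(x)))
        x += first_distance
    return points
```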
Referring to Fig. 5A, a schematic diagram of the curve corresponding to the deformation function: Fig. 5A takes a lateral sectional view of the plane as an example. Assume the heading is the x direction and skyward is the z direction; the y direction follows from the right-hand rule and may, for example, point into the page. Through multi-segment folding, the target curve can be approached step by step: the more segments the curve is divided into, the more closely the true curve can be approximated. Therefore, N deformation points can first be chosen on the curve corresponding to the deformation function, with each pair of adjacent deformation points forming a line segment; the more deformation points there are, the more closely the polyline formed by the deformation points approximates the curve, so that a polyline is used to approximate the curve.
Referring to Fig. 5B, deformation points O, A, B, C, and D can be chosen on the curve corresponding to the deformation function. Fig. 5B takes 5 deformation points as an example; in practice, there can be more. In Fig. 5B, one deformation point is chosen every first distance along the abscissa; in practice, deformation points can also be chosen in other ways, e.g., the distance between O and A may differ from the distance between A and B, without limitation.
Referring to Fig. 5C, after choosing deformation points O, A, B, C, and D, the segments between O and A, between A and B, between B and C, and between C and D can be used to approximate the curve corresponding to the deformation function, i.e., the curve shown in Fig. 5A.
In practice, to fit the segments to the curve, the RDP (Ramer-Douglas-Peucker) algorithm can also be used; of course, RDP is only one example, without limitation. RDP is an algorithm that approximates a curve as a series of points and reduces the number of points. As the number of points increases, more segments are fitted and the result is closer to the original curve, but the complexity is higher; as the number of points decreases, fewer segments are fitted and the result differs more from the original curve, but the complexity is lower. A suitable number of points can therefore be chosen empirically, without limitation.
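The RDP simplification mentioned here can be sketched as the textbook Ramer-Douglas-Peucker recursion; the tolerance `epsilon` plays the role of the empirically chosen point count (the patent does not fix a value):

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: approximate a polyline by fewer points.
    Keeps a point if it lies farther than epsilon from the chord
    joining the first and last points, then recurses on both halves."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    norm = math.hypot(x2 - x1, y2 - y1)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        # perpendicular distance from (x0, y0) to the chord
        d = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1) / norm
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]
    left = rdp(points[: index + 1], epsilon)
    right = rdp(points[index:], epsilon)
    return left[:-1] + right   # drop the duplicated split point
```

A larger `epsilon` keeps fewer points (lower complexity, coarser fold); a smaller one keeps more, matching the trade-off described above.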
Step 2032: for each feature point in the first-class point cloud, determine at least one deformation point corresponding to that feature point. For example, the deformation points corresponding to the feature point can be determined from the feature point's abscissa value.
Determining at least one deformation point corresponding to the feature point may include: based on the abscissa value of each of the N deformation points, choosing the deformation points whose abscissa value is smaller than the abscissa value of the feature point; the chosen deformation points serve as the deformation points corresponding to that feature point.
For example, referring to Fig. 5D, the abscissa values of deformation points O, A, B, C, and D are X_O, X_A, X_B, X_C, and X_D, respectively. Consider feature point 1 in the first-class point cloud (feature point 1 is used as the example below; each feature point in the first-class point cloud is processed similarly and is not described again). Assume the abscissa value of feature point 1 lies between X_C and X_D. Since the abscissa values of deformation points O, A, B, and C are all smaller than that of feature point 1, the deformation points corresponding to feature point 1 are determined to be deformation points O, A, B, and C.
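Step 2032 then reduces to a simple filter over the sampled deformation points; a sketch (the function name is ours), mirroring Fig. 5D, where a feature point between X_C and X_D picks up O, A, B, and C but not D:

```python
def deformation_points_for(feature_x, deform_points):
    """Return the deformation points whose abscissa is smaller than the
    feature point's abscissa (step 2032); these are the folds that
    apply to this feature point."""
    return [p for p in deform_points if p[0] < feature_x]
```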
Step 2033: fold the feature point using the slope of each determined deformation point to obtain the folded feature point, i.e., the folded feature point corresponding to each feature point in the first-class point cloud.
Folding the feature point using the slopes of the determined deformation points to obtain the folded feature point may include: if the feature point corresponds to M deformation points, then, in order from the first deformation point to the M-th deformation point, folding the feature point M times using the slopes corresponding to the M deformation points to obtain the folded feature point; M is a positive integer greater than or equal to 1 and less than or equal to N.
For example, feature point 1 is folded using the slope corresponding to deformation point O (i.e., the slope of the segment between O and A), yielding feature point 1a. Then feature point 1a is folded using the slope corresponding to deformation point A (i.e., the slope of the segment between A and B), yielding feature point 1b. Then feature point 1b is folded using the slope corresponding to deformation point B (i.e., the slope of the segment between B and C), yielding feature point 1c. Then feature point 1c is folded using the slope corresponding to deformation point C (i.e., the slope of the segment between C and D), yielding feature point 1d; the folded feature point of feature point 1 is feature point 1d.
Equivalently, if the feature point corresponds to M deformation points, then the i-th fold is applied to the feature point using the slope corresponding to the i-th deformation point, obtaining the feature point after the i-th fold; i takes the values 1, 2, ..., M in turn, where M is a positive integer greater than or equal to 1 and less than or equal to N.
For example, the 1st fold is applied to feature point 1 using the slope corresponding to the 1st deformation point O, yielding feature point 1a after the 1st fold. The 2nd fold is applied to feature point 1a using the slope corresponding to the 2nd deformation point A, yielding feature point 1b after the 2nd fold. The 3rd fold is applied to feature point 1b using the slope corresponding to the 3rd deformation point B, yielding feature point 1c after the 3rd fold. The 4th fold is applied to feature point 1c using the slope corresponding to the 4th deformation point C, yielding feature point 1d after the 4th fold; thus the folded feature point of feature point 1 is feature point 1d.
In one example, applying the i-th fold to the feature point using the slope corresponding to the i-th deformation point, to obtain the feature point after the i-th fold, may comprise the following steps:
Step 20331: obtain the rotation parameter of the coordinate system using the slope corresponding to the i-th deformation point.
Obtaining the rotation parameter of the coordinate system using the slope corresponding to the i-th deformation point may include: obtaining the first angle, between the i-th deformation point and the abscissa, using the slope corresponding to the i-th deformation point; obtaining the target angle corresponding to the i-th deformation point using the first angle and the second angle, where the second angle may be the angle between the (i-1)-th deformation point and the abscissa; and obtaining the rotation parameter of the coordinate system using the target angle corresponding to the i-th deformation point (the rotation parameter may also be called a rotation matrix).
Obtaining the first angle between the i-th deformation point and the abscissa using the slope corresponding to the i-th deformation point may include: obtaining the slope corresponding to the i-th deformation point using the coordinate values of the i-th deformation point and the (i+1)-th deformation point, and obtaining the first angle from that slope.
Referring to Fig. 5E, assume the i-th deformation point is deformation point C. The coordinate values of deformation points C and D can be used to obtain the slope corresponding to C, and the first angle (angle 4) is obtained from that slope. Similarly, the first angle of deformation point B (angle 3), of deformation point A (angle 2), and of deformation point O (angle 1) can be obtained.
Obtaining the target angle corresponding to the i-th deformation point using the first angle and the second angle may include: taking the difference between the first angle and the second angle as the target angle corresponding to the i-th deformation point.
For example, assume the i-th deformation point is deformation point C; the (i-1)-th deformation point is then deformation point B, so the target angle is the difference between angle 4 (corresponding to C) and angle 3 (corresponding to B).
Obtaining the rotation parameter of the coordinate system using the target angle corresponding to the i-th deformation point may include: converting the target angle into a rotation matrix, and taking the rotation matrix as the rotation parameter of the coordinate system. Further, the target angle θ may be converted into a rotation matrix using a standard rotation-matrix formula (the formula itself is not reproduced legibly in this text), where θ is the target angle.
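The rotation-matrix formula is not reproduced legibly in this text. Under the coordinate convention of Fig. 5A (x forward, z up, y by the right-hand rule), a fold in the x-z plane is a single-axis rotation, so a plausible reconstruction is the sketch below; the sign convention (+x rotated toward +z) and the function names are our assumptions:

```python
import math

def slope_to_angle(p_i, p_next):
    """First angle of a deformation point (step 20331): the angle
    between its segment and the abscissa, from the segment's slope."""
    (x1, z1), (x2, z2) = p_i, p_next
    return math.atan2(z2 - z1, x2 - x1)

def rotation_xz(theta):
    """3x3 matrix rotating the x-z plane by theta, taking +x toward +z
    (a fold that lifts the ground upward); our reconstruction of the
    elided formula, where theta is the target angle."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, -s],
            [0.0, 1.0, 0.0],
            [s, 0.0, c]]
```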
Step 20332: obtain the displacement parameter of the coordinate system using the abscissa value of the i-th deformation point.
Obtaining the displacement parameter of the coordinate system using the abscissa value of the i-th deformation point may include: constructing a translation matrix using the abscissa value of the i-th deformation point, and taking the translation matrix as the displacement parameter of the coordinate system.
Further, constructing the translation matrix using the abscissa value of the i-th deformation point may include constructing it with the following formula: [x, 0, 0]^T, where x is the abscissa value of the i-th deformation point.
Step 20333: apply the i-th fold to the feature point using the rotation parameter and the displacement parameter, obtaining the feature point after the i-th fold. For example, the i-th fold is applied, using the rotation parameter and the displacement parameter, to the feature point after the (i-1)-th fold, obtaining the feature point after the i-th fold.
Applying the i-th fold to the feature point after the (i-1)-th fold, using the rotation parameter and the displacement parameter, to obtain the feature point after the i-th fold, comprises:
obtaining the 3D information of the feature point after the i-th fold using the following formula: P_i = R * (P_{i-1} - T) + T, where P_i denotes the 3D information of the feature point after the i-th fold, P_{i-1} the 3D information of the feature point after the (i-1)-th fold, R the rotation parameter, and T the displacement parameter.
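The recurrence P_i = R * (P_{i-1} - T) + T can be sketched directly with plain lists; `R` is a 3x3 rotation matrix (row-major) and `T` the translation built from the deformation point's abscissa in step 20332. This is a sketch, not the patent's code:

```python
def fold_once(p, R, T):
    """One fold: P_i = R * (P_{i-1} - T) + T, with p and T as 3-vectors
    and R a 3x3 matrix given as nested lists."""
    d = [p[k] - T[k] for k in range(3)]
    rotated = [sum(R[r][k] * d[k] for k in range(3)) for r in range(3)]
    return [rotated[k] + T[k] for k in range(3)]
```

With the identity matrix for R the point is unchanged; a 90-degree rotation about the fold line at x = 1 lifts a point one unit past the fold straight up.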
Below in conjunction with concrete application scene, the process of above-mentioned steps 2033 is described in detail.In step 2033, it can use the corresponding slope of DEFORMATION POINTS O and the 1st folding carried out to characteristic point 1, obtain characteristic point 1a, the 2nd folding is carried out to characteristic point 1a using the corresponding slope of DEFORMATION POINTS A, characteristic point 1b is obtained, and the 3rd folding is carried out to characteristic point 1b using the corresponding slope of DEFORMATION POINTS B, obtains characteristic point 1c, the 4th folding is carried out to characteristic point 1c using DEFORMATION POINTS C corresponding slope, obtains characteristic point 1d.Referring to shown in Fig. 5 F, above-mentioned folding process can be different coordinates (such as F 0、F 1、F 2、F 3Deng) between transformation problem, therefore, as long as knowing the rotation parameter R (such as Rotation Matrix) and displacement parameter T (such as Translation Matrix) between coordinate system, so that it may realize folding process.
When solving for the rotation parameter R (i.e., the rotation relationship between adjacent coordinate systems), adjacent coordinate systems differ only by a target angle: the target angle θ1 of deformation point O is the difference between angle 1 and 0, the target angle θ2 of deformation point A is the difference between angle 2 and angle 1, the target angle θ3 of deformation point B is the difference between angle 3 and angle 2, and the target angle θ4 of deformation point C is the difference between angle 4 and angle 3. Further, each target angle can be converted into a rotation matrix using the conversion formula given above: substituting θ1 into the formula yields the rotation parameter R1 of deformation point O, substituting θ2 yields the rotation parameter R2 of deformation point A, substituting θ3 yields the rotation parameter R3 of deformation point B, and substituting θ4 yields the rotation parameter R4 of deformation point C.
When solving for the displacement parameter T (i.e., the displacement relationship between adjacent coordinate systems), the displacement is the distance of each deformation point along the horizontal axis, such as the abscissa value X_O of deformation point O, the abscissa value X_A of deformation point A, the abscissa value X_B of deformation point B, and the abscissa value X_C of deformation point C. Therefore, X_O can be used to construct the displacement parameter T1 of deformation point O, X_A to construct the displacement parameter T2 of deformation point A, X_B to construct the displacement parameter T3 of deformation point B, and X_C to construct the displacement parameter T4 of deformation point C.
Then, the 1st folding is performed on characteristic point 1 using the following formula: P_1 = R1*(P_0 - T1) + T1, where P_0 is the three-dimensional information of characteristic point 1 (i.e., its coordinates in coordinate system F0) and P_1 is the three-dimensional information of characteristic point 1a (i.e., its coordinates in coordinate system F1). Then, the 2nd folding is performed using the following formula: P_2 = R2*(P_1 - T2) + T2, where P_2 is the three-dimensional information of characteristic point 1b (i.e., its coordinates in coordinate system F2), and so on. Finally, the three-dimensional information of characteristic point 1d is obtained.
In practical applications, when there are multiple characteristic points, they can be folded together. For example, all characteristic points corresponding to deformation point O are folded first (the specific folding formula is the same as for characteristic point 1 and is not repeated here), transforming them from the original coordinate system F0 into the folded coordinate system F1. Then all characteristic points corresponding to deformation point A are folded, transforming them from coordinate system F1 into the folded coordinate system F2, and so on.
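The batch folding described above can be sketched as follows. The sketch assumes that a characteristic point participates in a fold only when the deformation point's abscissa value is smaller than the point's original abscissa value, matching the selection rule described earlier; the function name and the y-axis hinge convention are illustrative assumptions:

```python
import numpy as np

def fold_point_cloud(points, deform_xs, target_angles):
    """Fold every characteristic point through deformation points in order.

    points        : (N, 3) first kind point cloud
    deform_xs     : abscissa values of deformation points O, A, B, ...,
                    in increasing order
    target_angles : target angle of each deformation point, in radians
    Returns the second class point cloud (N, 3).
    """
    folded = points.astype(float).copy()
    for x_hinge, theta in zip(deform_xs, target_angles):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])
        T = np.array([x_hinge, 0.0, 0.0])
        # Only points whose ORIGINAL abscissa lies beyond this deformation
        # point take part in the fold; the rest keep their coordinates.
        mask = points[:, 0] > x_hinge
        folded[mask] = (folded[mask] - T) @ R.T + T
    return folded

cloud = np.array([[0.5, 0.0, 0.0], [2.0, 0.0, 0.0]])
result = fold_point_cloud(cloud, [1.0], [np.pi / 2])  # second point folds up
```

Applying each deformation point to all of its characteristic points at once, as here, is equivalent to folding each point individually and matches the coordinate-system view: each loop iteration transforms the masked points from F_{i-1} into F_i.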
Step 2034, the folded characteristic points corresponding to the characteristic points in the first kind point cloud are determined as the second class point cloud. That is, after each characteristic point in the first kind point cloud has been folded, all folded characteristic points are determined as the second class point cloud.
For example, the second class point cloud may include the folded characteristic point 1d corresponding to characteristic point 1, the folded characteristic point 2d corresponding to characteristic point 2, the folded characteristic point 3c corresponding to characteristic point 3, and so on.
Step 204, the second class point cloud is projected onto an image plane to obtain a 2D virtual image with a folding effect. For example, each characteristic point in the second class point cloud (i.e., each folded characteristic point) is projected onto the image plane.
Wherein, as shown in Fig. 6, projecting the second class point cloud onto the image plane to obtain the 2D virtual image with a folding effect (such as a folded-city special effect), i.e., the final target image, may include:
Step 2041, the location information and posture information corresponding to the mobile platform are obtained.
Wherein, obtaining the location information and posture information corresponding to the mobile platform may include: obtaining the camera position and shooting posture corresponding to the mobile platform according to the multiple 2D original images; alternatively, obtaining the camera position and shooting posture according to the multiple 2D original images, and then obtaining a virtual location and a virtual posture according to the camera position and the shooting posture. Of course, in practical applications, the camera position and shooting posture may also be obtained in other ways; for example, they may be measured directly by sensors of the mobile platform, which is not limited herein.
Wherein, obtaining the camera position and shooting posture corresponding to the mobile platform according to the multiple 2D original images may include: after the multiple 2D original images are obtained, using these images to recover the actual camera pose of the mobile platform (i.e., its actual position and posture), thereby obtaining the camera position (translation) and the shooting posture (rotation). The manner of obtaining the camera position and shooting posture is not limited herein.
Further, the mobile platform may have a multi-sensor fusion positioning system (such as VO, IMU, GPS), which can provide the position and posture relationship of each 2D original image and thereby reduce positional uncertainty. By using this position and posture relationship as an initial value when obtaining the camera position and shooting posture of the mobile platform, the computation can be accelerated and the consumption of computing resources reduced.
Wherein, obtaining a virtual location and virtual posture according to the camera position and the shooting posture may include constructing a virtual camera pose, i.e., a virtual location and a virtual posture, from the actual camera positions and shooting postures. For example, a curve may be fitted through multiple actual camera positions and shooting postures, and a viewpoint on the curve selected as the virtual location and virtual posture. Alternatively, a virtual camera pose may be constructed manually, e.g., by the user dragging and rotating the 3D scene, to obtain the virtual location and virtual posture.
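One possible sketch of selecting a virtual viewpoint from the actual shooting positions is given below; it models the "curve" through the camera positions as a simple polyline and omits orientation interpolation (e.g., quaternion slerp for the posture). The function name and this linear model are assumptions for illustration only:

```python
import numpy as np

def virtual_position(shot_positions, alpha):
    """Pick a virtual camera position on the polyline through the
    actual shooting positions; alpha in [0, 1] is the fraction of the
    total arc length travelled along the path."""
    pts = np.asarray(shot_positions, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative length
    s = alpha * cum[-1]
    i = np.searchsorted(cum, s, side="right") - 1
    i = min(i, len(seg) - 1)
    w = (s - cum[i]) / seg[i]
    return (1 - w) * pts[i] + w * pts[i + 1]

# Halfway along an L-shaped flight path of three shooting positions:
pos = virtual_position([[0, 0, 0], [2, 0, 0], [2, 2, 0]], 0.5)
```

In practice a smoother parametric curve (or a manually dragged viewpoint, as described above) would replace the polyline, but the idea of parameterizing the recovered camera path and sampling a pose from it is the same.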
Step 2042, according to the location information and posture information corresponding to the mobile platform, each characteristic point in the second class point cloud is projected onto the image plane of the mobile platform to obtain the 2D virtual image with a folding effect.
Wherein, projecting each characteristic point in the second class point cloud onto the image plane of the mobile platform according to the corresponding location information and posture information, to obtain the 2D virtual image with a folding effect, may include: using the intrinsic parameters of the mobile platform, the location information, and the posture information, projecting each characteristic point in the second class point cloud onto the image plane of the mobile platform to obtain the 2D pixel corresponding to each characteristic point; and composing the 2D pixels corresponding to all characteristic points in the second class point cloud into the 2D virtual image.
Further, obtaining the 2D pixel corresponding to each characteristic point may include applying the following formula to each characteristic point in the second class point cloud: s*[u, v, 1]^T = K*(R*[x_w, y_w, z_w]^T + T). In this formula, K is the intrinsic parameter matrix of the mobile platform, R is the rotation matrix corresponding to the posture information, T is the translation corresponding to the location information, (x_w, y_w, z_w) is the three-dimensional information of the characteristic point in the second class point cloud, and (u, v) is the two-dimensional information of the corresponding 2D pixel.
For example, the second class point cloud obtained in step 203 may include multiple characteristic points (i.e., multiple folded characteristic points), each with three-dimensional information (x_w, y_w, z_w). Taking characteristic point 1d in the second class point cloud as an example (the other characteristic points are processed similarly and are not described again): the three-dimensional information (x_w, y_w, z_w) of characteristic point 1d is obtained and substituted into the above formula, yielding the 2D pixel corresponding to characteristic point 1d.
Further, after the above processing is performed on each characteristic point in the second class point cloud, the 2D pixel corresponding to each characteristic point is obtained; these 2D pixels can then be composed into the 2D virtual image, i.e., a single 2D image with the folding effect.
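The pinhole projection used above can be sketched as follows, with the posture expressed as a world-to-camera rotation matrix R and the location as a translation vector T; the concrete intrinsic values are illustrative assumptions:

```python
import numpy as np

def project_points(points_w, K, R, T):
    """Project world points onto the image plane: s*[u, v, 1]^T = K*(R*X_w + T).

    points_w : (N, 3) folded characteristic points in world coordinates
    K        : (3, 3) camera intrinsic matrix
    R, T     : world-to-camera rotation (3, 3) and translation (3,)
    Returns (N, 2) pixel coordinates (u, v).
    """
    cam = points_w @ R.T + T          # world -> camera coordinates
    uvw = cam @ K.T                   # apply intrinsics (homogeneous)
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide by depth s

# With identity extrinsics, a point on the optical axis at depth 2
# projects to the principal point of this (assumed) intrinsic matrix.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
uv = project_points(np.array([[0.0, 0.0, 2.0]]), K, np.eye(3), np.zeros(3))
```

Running every folded characteristic point through this function and rasterizing the resulting (u, v) pairs is exactly the "compose the 2D pixels into the 2D virtual image" step.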
In the above formula, the intrinsic parameter K of the mobile platform is a known matrix (the camera intrinsics), which is not limited herein. In addition, the posture information can be expressed as a rotation matrix R and the location information as a translation matrix T; the rotation matrix R and translation matrix T are the camera extrinsics, expressing the rotation and translation from the world coordinate system to the camera coordinate system in three-dimensional space, which is not limited herein.
In one example, since the folding process operates on the three-dimensional model, i.e., each characteristic point in the second class point cloud is a 3D characteristic point, the above processing converts the 3D characteristic points into 2D pixels and composes all 2D pixels into the 2D virtual image. The folded three-dimensional model is thus converted back into a 2D image, i.e., the 2D virtual image with a folding effect.
Based on the above technical solution, in the embodiment of the present invention, since the mobile platform has good trajectory planning, automatic flight, and intelligent-following functions, after the mobile platform obtains multiple 2D original images containing the target object, the 2D virtual image with a folding effect can be obtained from those 2D original images. This reduces the complexity of acquiring images with a folding effect (such as a folded-city special effect). The above process is completed automatically without user intervention, so the user can shoot images with a folding effect, improving the user experience.
In one example, when the first kind point cloud is obtained using a dense SfM algorithm, most or all 2D points of the 2D original image correspond to 3D characteristic points in the first kind point cloud, and hence to 3D characteristic points in the second class point cloud. In another example, when the first kind point cloud is obtained using a sparse SfM algorithm, only some 2D points in the 2D original image (for convenience, called 2D characteristic points) correspond to 3D characteristic points in the first kind point cloud, and hence in the second class point cloud. In this case the 2D virtual image initially contains only the 2D pixels corresponding to these 2D characteristic points; the remaining 2D points in the 2D original image (for convenience, called 2D original points) may also be mapped to the 2D virtual image, as illustrated below.
As shown in Fig. 7, after the second class point cloud is projected onto the image plane to obtain the 2D virtual image, in order to map the remaining 2D original points of the 2D original image into the 2D virtual image, the method may further include:
Step 701, determining, in the 2D original image, the 2D characteristic point corresponding to each characteristic point in the second class point cloud. Since the characteristic points in the second class point cloud correspond to the characteristic points in the first kind point cloud, and the characteristic points in the first kind point cloud are obtained based on the 2D characteristic points in the 2D original image, each characteristic point in the second class point cloud has a corresponding 2D characteristic point in the 2D original image.
Step 702, dividing the 2D original image into multiple triangular regions using the 2D characteristic points, where each triangular region contains 3 2D characteristic points and multiple 2D original points.
Wherein, dividing the 2D original image into multiple triangular regions using the 2D characteristic points may include connecting the 2D characteristic points in the 2D original image into multiple triangular regions using triangulation.
Specifically, after all 2D characteristic points in the 2D original image are obtained, these points can be triangulated using a triangulation algorithm (such as Delaunay triangulation) to partition the image into multiple triangular regions; the triangulation method is not limited herein. The three vertices of each triangular region are 3 2D characteristic points, and the other points inside the region are 2D original points.
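One possible implementation of this triangulation step, assuming SciPy is available, is shown below; the point coordinates are illustrative values:

```python
import numpy as np
from scipy.spatial import Delaunay

# 2D characteristic points of the 2D original image (illustrative values).
feature_pts = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [1.0, 4.0]])

tri = Delaunay(feature_pts)
# Each row of tri.simplices lists the indices of the 3 2D characteristic
# points forming one triangular region.
n_regions = len(tri.simplices)

# A remaining 2D original point can then be assigned to the triangular
# region that contains it (find_simplex returns -1 for points outside).
region = tri.find_simplex(np.array([[2.0, 1.0]]))[0]
```

The `find_simplex` lookup is what ties each 2D original point to the triangle, and hence to the projective parameters, used in the following steps.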
Step 703, for each triangular region, obtaining the projective parameters corresponding to that region (used for projecting 2D original points into the 2D virtual image) from the 3 2D characteristic points the region contains.
Wherein, obtaining the projective parameters corresponding to the triangular region from its 3 2D characteristic points may include: obtaining the projective parameters using the two-dimensional information of the 3 2D characteristic points in the 2D original image and their two-dimensional information in the 2D virtual image.
Further, obtaining the projective parameters of the triangular region from the two-dimensional information of its 3 2D characteristic points in the 2D original image and in the 2D virtual image may include using the following formula: p' = A*p + t,
where p' is the two-dimensional information of a 2D characteristic point in the 2D virtual image, p is its two-dimensional information in the 2D original image, and A and t are the projective parameters corresponding to the triangular region.
In the above formula, A = [a11, a12; a21, a22] and t = [t1, t2]^T; that is, there are 6 unknown parameters in total, while the triangular region includes 3 2D characteristic points, assumed to be 2D characteristic points 1, 2, and 3. Since the two-dimensional information p_1, p_2, p_3 of these points in the 2D original image is known, and their two-dimensional information p_1', p_2', p_3' in the 2D virtual image is also known, the 6 unknown parameters can be calculated from p_1, p_1', p_2, p_2', p_3, p_3', thereby obtaining the projective parameters A and t corresponding to the triangular region.
Step 704, performing projection processing on all 2D original points in the triangular region using the region's projective parameters, i.e., projecting all 2D original points in the triangular region into the 2D virtual image.
Wherein, performing projection processing on the 2D original points in the triangular region using its projective parameters may include applying the following formula to each 2D original point: p' = A*p + t, where p is the two-dimensional information of the 2D original point in the 2D original image, A and t are the projective parameters of the triangular region, and p' is the two-dimensional information of the 2D original point in the 2D virtual image. Since p, A, and t are known, p' can be calculated from this formula; that is, the 2D original point is projected into the 2D virtual image.
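The solve-and-project procedure of steps 703–704 can be sketched as follows; the helper names are assumptions, and the shifted triangle is a toy example in which the virtual image simply translates the triangle by (5, 7):

```python
import numpy as np

def solve_affine(src, dst):
    """Solve p' = A @ p + t from 3 point correspondences.

    src, dst : (3, 2) arrays, the 3 2D characteristic points of one
               triangle in the original image and in the virtual image.
    Returns A (2, 2) and t (2,).
    """
    # Each correspondence gives two linear equations in the six
    # unknowns (a11, a12, a21, a22, t1, t2).
    M = np.zeros((6, 6))
    b = np.zeros(6)
    for i, ((x, y), (xp, yp)) in enumerate(zip(src, dst)):
        M[2 * i] = [x, y, 0, 0, 1, 0]
        M[2 * i + 1] = [0, 0, x, y, 0, 1]
        b[2 * i], b[2 * i + 1] = xp, yp
    sol = np.linalg.solve(M, b)
    return sol[:4].reshape(2, 2), sol[4:]

def map_point(A, t, p):
    """Project one 2D original point into the virtual image."""
    return A @ p + t

src = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
dst = src + np.array([5.0, 7.0])     # toy virtual-image positions
A, t = solve_affine(src, dst)
p_virtual = map_point(A, t, np.array([0.5, 0.5]))
```

Each triangular region gets its own A and t, so adjacent triangles can deform differently while agreeing exactly on their shared vertices.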
After the above processing is performed on all 2D original points in a triangular region, all of them have been projected into the 2D virtual image; and after each triangular region has been processed, all 2D original points of all triangular regions, i.e., all 2D original points in the 2D original image, have been projected into the 2D virtual image.
Based on the same concept as the above method, as shown in Fig. 8, an embodiment of the present invention also provides an image processing device 80, including a memory 801 and a processor 802 (such as one or more processors).
Wherein, the memory is configured to store program code, and the processor is configured to call the program code and, when the program code is executed, to perform the following operations: obtaining multiple 2D original images containing a target object; obtaining the first kind point cloud of the 3D scene of the target object according to the multiple 2D original images; carrying out folding on the first kind point cloud using a deformation function to obtain the second class point cloud; and projecting the second class point cloud onto an image plane to obtain a 2D virtual image with a folding effect.
Preferably, the processor is specifically used for when obtaining and including multiple 2D original images of target object: the flight path of mobile platform is planned according to the position of target object, control mobile platform flies according to the flight path, and acquisition includes multiple 2D original images of target object in flight course.
Preferably, the processor is specifically used in the first kind point cloud for obtaining the 3D scene of the target object according to the multiple 2D original image: being handled by image processing algorithm the multiple 2D original image, obtains the first kind point cloud of the 3D scene of the target object;The first kind point cloud includes multiple characteristic points with three-dimensional information.
Preferably, the processor is carrying out folding to the first kind point cloud using deformation function, obtains being specifically used for when the second class point cloud: N number of DEFORMATION POINTS is chosen on the corresponding curve of deformation function, N is the positive integer more than or equal to 1;For each characteristic point in first kind point cloud, at least one corresponding DEFORMATION POINTS of this feature point is determined;Folding, characteristic point after being folded are carried out to this feature point using the slope of determining each DEFORMATION POINTS;By characteristic point after the corresponding folding of each characteristic point in first kind point cloud, it is determined as the second class point cloud.
Preferably, the processor is specifically used for when choosing N number of DEFORMATION POINTS on the corresponding curve of deformation function: in the abscissa direction of deformation function homologous thread, choosing a DEFORMATION POINTS every first distance;Alternatively, choosing a DEFORMATION POINTS every second distance in the ordinate direction of deformation function homologous thread.
Preferably, the processor is specifically used for when determining corresponding at least one DEFORMATION POINTS of this feature point: the abscissa value based on each DEFORMATION POINTS in N number of DEFORMATION POINTS, the DEFORMATION POINTS that abscissa value is less than the abscissa value of this feature point is chosen, as the corresponding DEFORMATION POINTS of this feature point.
Preferably, the processor carries out folding to this feature point in the slope using determining each DEFORMATION POINTS, it is specifically used for when characteristic point after being folded: if this feature point corresponds to M DEFORMATION POINTS, then according to the sequence of first DEFORMATION POINTS to m-th DEFORMATION POINTS, utilize the corresponding slope of the M DEFORMATION POINTS, M folding, characteristic point after being folded are carried out to this feature point;M is the positive integer more than or equal to 1, and M is less than or equal to N.
Preferably, the processor carries out folding to this feature point in the slope using determining each DEFORMATION POINTS, it is specifically used for when characteristic point after being folded: if this feature point corresponds to M DEFORMATION POINTS, then utilize the corresponding slope of i-th of DEFORMATION POINTS, i-th folding is carried out to this feature point, obtains characteristic point after the folding of i-th;Wherein, the value of the i is followed successively by 1,2 ... M, the M are the positive integer more than or equal to 1, and M is less than or equal to N.
Preferably, the processor is utilizing the corresponding slope of i-th of DEFORMATION POINTS, i-th folding is carried out to this feature point, it obtains being specifically used for when characteristic point after the folding of i-th: the corresponding slope of i-th of DEFORMATION POINTS being utilized to obtain the rotation parameter of coordinate system, the displacement parameter of coordinate system is obtained using the abscissa value of i-th of DEFORMATION POINTS, i-th folding is carried out to this feature point using the rotation parameter and the displacement parameter, obtains characteristic point after the folding of i-th.
Preferably, when obtaining the rotation parameter of the coordinate system using the slope corresponding to the i-th deformation point, the processor is specifically configured to: obtain a first angle between the i-th deformation point and the abscissa using the slope corresponding to the i-th deformation point; obtain the target angle corresponding to the i-th deformation point using the first angle and a second angle, where the second angle is the angle between the (i-1)-th deformation point and the abscissa; and obtain the rotation parameter of the coordinate system using the target angle corresponding to the i-th deformation point.
Preferably, when obtaining the displacement parameter of the coordinate system using the abscissa value of the i-th deformation point, the processor is specifically configured to: construct a translation matrix using the abscissa value of the i-th deformation point; and determine the translation matrix as the displacement parameter of the coordinate system.
Preferably, the processor is carrying out i-th folding to this feature point using the rotation parameter and the displacement parameter, it obtains being specifically used for when characteristic point after the folding of i-th: utilizing the rotation parameter and the displacement parameter, i-th folding is carried out to characteristic point after corresponding (i-1)-th folding of this feature point, obtains characteristic point after the folding of the corresponding i-th of this feature point.
Preferably, the processor is projected to by the second class point cloud as plane, obtains being specifically used for when the 2D virtual image with folding effect: obtaining the corresponding location information of mobile platform and posture information;The picture plane of each projecting characteristic points in the second class point cloud to mobile platform is obtained into the 2D virtual image with folding effect according to the corresponding location information of the mobile platform and posture information.
Preferably, the processor is specifically used for when obtaining the corresponding location information of mobile platform and posture information: obtaining the corresponding camera site of mobile platform and shooting posture according to the multiple 2D original image;Alternatively, obtaining the corresponding camera site of mobile platform and shooting posture according to the multiple 2D original image, and virtual location and virtual posture are obtained according to the camera site and the shooting posture.
Preferably, the processor is according to the corresponding location information of the mobile platform and posture information, by the picture plane of each projecting characteristic points in the second class point cloud to mobile platform, it obtains being specifically used for when the 2D virtual image with folding effect: utilizing the corresponding internal reference of mobile platform, the location information, the posture information, each projecting characteristic points in second class point cloud are obtained into the corresponding 2D pixel of each characteristic point as plane to described;The corresponding 2D pixel of each characteristic point in second class point cloud is formed into the 2D virtual image.
Preferably, the processor is also used to after the second class point cloud is projected to the 2D virtual image for obtaining having folding effect as plane: from the corresponding 2D characteristic point of each characteristic point determined in the second class point cloud in 2D original image;The 2D original image is marked off into multiple Delta Regions using the 2D characteristic point;Wherein, for each Delta Region, including 3 2D characteristic points, multiple 2D original points;The corresponding projective parameter in the Delta Region is obtained using 3 2D characteristic points;Projection process is carried out to all 2D original points in the Delta Region using the projective parameter.
Preferably, the processor is specifically used for when the 2D original image is marked off multiple Delta Regions using the 2D characteristic point: the 2D characteristic point in 2D original image being connected into multiple Delta Regions using triangulation.
Preferably, when obtaining the projective parameters corresponding to the triangular region using the 3 2D characteristic points, the processor is specifically configured to: obtain the projective parameters of the triangular region using the two-dimensional information of the 3 2D characteristic points in the 2D original image and their two-dimensional information in the 2D virtual image.
Based on similarly conceiving with the above method, the embodiment of the present invention also provides a kind of computer readable storage medium, is stored with computer instruction on the computer readable storage medium, is performed in the computer instruction, realizes above-mentioned image processing method.
System, device, module or the unit that above-described embodiment illustrates can be realized by computer chip or entity, or be realized by the product with certain function.A kind of typically to realize that equipment is computer, the concrete form of computer can be the combination of any several equipment in personal computer, laptop computer, cellular phone, camera phone, smart phone, personal digital assistant, media player, navigation equipment, E-mail receiver/send equipment, game console, tablet computer, wearable device or these equipment.
For convenience of description, it is divided into various units when description apparatus above with function to describe respectively.Certainly, the function of each unit can be realized in the same or multiple software and or hardware in carrying out the present invention.
It should be understood by those skilled in the art that, the embodiment of the present invention can provide as method, system or computer program product.Therefore, the form of complete hardware embodiment, complete software embodiment or embodiment combining software and hardware aspects can be used in the present invention.Moreover, the form for the computer program product implemented in the computer-usable storage medium (including but not limited to magnetic disk storage, CD-ROM, optical memory etc.) that one or more wherein includes computer usable program code can be used in the embodiment of the present invention.
The present invention be referring to according to the method for the embodiment of the present invention, the flowchart and/or the block diagram of equipment (system) and computer program product describes.It is generally understood that the combination of process and/or box in each flow and/or block and flowchart and/or the block diagram that are realized by computer program instructions in flowchart and/or the block diagram.These computer program instructions be can provide to the processor of general purpose computer, special purpose computer, Embedded Processor or other programmable data processing devices to generate a machine, so that generating by the instruction that computer or the processor of other programmable data processing devices execute for realizing the device for the function of specifying in one or more flows of the flowchart and/or one or more blocks of the block diagram.
And, these computer program instructions also can store in being able to guide computer or other programmable data processing devices computer-readable memory operate in a specific manner, so that instruction stored in the computer readable memory generates the manufacture including command device, which realizes the function of specifying in one process of flow chart or multiple processes and/or one box of block diagram or multiple boxes.
These computer program instructions can also be loaded into computer or other programmable data processing devices, so that series of operation steps are executed on computer or other programmable devices to generate computer implemented processing, thus the step of instruction executed on computer or other programmable devices is provided for realizing the function of specifying in one or more flows of the flowchart and/or one or more blocks of the block diagram.
The foregoing is merely the embodiment of the present invention, are not intended to restrict the invention.To those skilled in the art, the invention may be variously modified and varied.All any modification, equivalent replacement, improvement within the spirit and principles of the present invention, should be included within scope of the presently claimed invention.

Claims (39)

  1. An image processing method, characterized in that the method comprises:
    Obtaining multiple 2D original images comprising a target object;
    Obtaining a first kind point cloud of the 3D scene of the target object according to the multiple 2D original images;
    Carrying out folding on the first kind point cloud using a deformation function to obtain a second class point cloud;
    Projecting the second class point cloud onto an image plane to obtain a 2D virtual image with a folding effect.
  2. The method according to claim 1, characterized in that obtaining the multiple 2D original images that include the target object comprises:
    planning a flight path of a mobile platform according to a position of the target object, controlling the mobile platform to fly along the flight path, and capturing the multiple 2D original images that include the target object during the flight.
  3. The method according to claim 1, characterized in that obtaining the first-type point cloud of the 3D scene of the target object according to the multiple 2D original images comprises:
    processing the multiple 2D original images with an image processing algorithm to obtain the first-type point cloud of the 3D scene of the target object, the first-type point cloud including multiple feature points carrying three-dimensional information.
  4. The method according to claim 3, characterized in that:
    the image processing algorithm includes a structure-from-motion (SfM) algorithm;
    wherein the SfM algorithm includes a sparse SfM algorithm or a dense SfM algorithm.
  5. The method according to claim 1, characterized in that performing folding deformation on the first-type point cloud using the deformation function to obtain the second-type point cloud comprises:
    selecting N deformation points on a curve corresponding to the deformation function, N being a positive integer greater than or equal to 1;
    for each feature point in the first-type point cloud, determining at least one deformation point corresponding to the feature point, and folding the feature point using the slope of each determined deformation point to obtain a folded feature point; and
    determining the folded feature points corresponding to the feature points in the first-type point cloud as the second-type point cloud.
  6. The method according to claim 1 or 5, characterized in that the deformation function includes:
    an exponential function; or a parabolic function; or an Archimedean spiral function.
  7. The method according to claim 5, characterized in that selecting the N deformation points on the curve corresponding to the deformation function comprises:
    selecting one deformation point at every first distance along the abscissa direction of the curve corresponding to the deformation function; or selecting one deformation point at every second distance along the ordinate direction of the curve corresponding to the deformation function.
  8. The method according to claim 5, characterized in that determining the at least one deformation point corresponding to the feature point comprises:
    based on the abscissa values of the N deformation points, selecting each deformation point whose abscissa value is smaller than the abscissa value of the feature point as a deformation point corresponding to the feature point.
  9. The method according to claim 5, characterized in that folding the feature point using the slope of each determined deformation point to obtain the folded feature point comprises:
    if the feature point corresponds to M deformation points, folding the feature point M times, in order from the first deformation point to the M-th deformation point, using the slopes corresponding to the M deformation points, to obtain the folded feature point; M being a positive integer greater than or equal to 1 and less than or equal to N.
  10. The method according to claim 5, characterized in that folding the feature point using the slope of each determined deformation point to obtain the folded feature point comprises:
    if the feature point corresponds to M deformation points, performing an i-th folding on the feature point using the slope corresponding to the i-th deformation point to obtain the feature point after the i-th folding; wherein i takes the values 1, 2, ..., M in sequence, and M is a positive integer greater than or equal to 1 and less than or equal to N.
  11. The method according to claim 10, characterized in that performing the i-th folding on the feature point using the slope corresponding to the i-th deformation point to obtain the feature point after the i-th folding comprises:
    obtaining a rotation parameter of a coordinate system using the slope corresponding to the i-th deformation point, obtaining a displacement parameter of the coordinate system using the abscissa value of the i-th deformation point, and performing the i-th folding on the feature point using the rotation parameter and the displacement parameter to obtain the feature point after the i-th folding.
  12. The method according to claim 11, characterized in that obtaining the rotation parameter of the coordinate system using the slope corresponding to the i-th deformation point comprises:
    obtaining a first angle between the i-th deformation point and the abscissa axis using the slope corresponding to the i-th deformation point;
    obtaining a target angle corresponding to the i-th deformation point using the first angle and a second angle, the second angle being the angle between the (i-1)-th deformation point and the abscissa axis; and
    obtaining the rotation parameter of the coordinate system using the target angle corresponding to the i-th deformation point.
  13. The method according to claim 11, characterized in that obtaining the displacement parameter of the coordinate system using the abscissa value of the i-th deformation point comprises:
    constructing a translation matrix using the abscissa value of the i-th deformation point; and
    determining the translation matrix as the displacement parameter of the coordinate system.
  14. The method according to claim 11, characterized in that performing the i-th folding on the feature point using the rotation parameter and the displacement parameter to obtain the feature point after the i-th folding comprises:
    performing the i-th folding, using the rotation parameter and the displacement parameter, on the feature point obtained after the (i-1)-th folding, to obtain the feature point after the i-th folding corresponding to the feature point.
  15. The method according to claim 1, characterized in that projecting the second-type point cloud onto the image plane to obtain the 2D virtual image with the folding effect comprises:
    obtaining position information and attitude information corresponding to a mobile platform; and
    projecting each feature point in the second-type point cloud onto an image plane of the mobile platform according to the position information and attitude information corresponding to the mobile platform, to obtain the 2D virtual image with the folding effect.
  16. The method according to claim 15, characterized in that obtaining the position information and attitude information corresponding to the mobile platform comprises:
    obtaining a shooting position and a shooting attitude corresponding to the mobile platform according to the multiple 2D original images;
    or obtaining the shooting position and the shooting attitude corresponding to the mobile platform according to the multiple 2D original images, and obtaining a virtual position and a virtual attitude according to the shooting position and the shooting attitude.
  17. The method according to claim 15, characterized in that projecting each feature point in the second-type point cloud onto the image plane of the mobile platform according to the position information and attitude information corresponding to the mobile platform, to obtain the 2D virtual image with the folding effect, comprises:
    projecting each feature point in the second-type point cloud onto the image plane using intrinsic parameters corresponding to the mobile platform, the position information, and the attitude information, to obtain a 2D pixel corresponding to each feature point; and
    composing the 2D virtual image from the 2D pixels corresponding to the feature points in the second-type point cloud.
  18. The method according to claim 1, characterized in that, after projecting the second-type point cloud onto the image plane to obtain the 2D virtual image with the folding effect, the method further comprises:
    determining, in a 2D original image, the 2D feature point corresponding to each feature point in the second-type point cloud;
    dividing the 2D original image into multiple triangular regions using the 2D feature points, wherein each triangular region includes three 2D feature points and multiple 2D original points;
    obtaining a projection parameter corresponding to each triangular region using its three 2D feature points; and
    performing projection processing on all 2D original points in the triangular region using the projection parameter.
  19. The method according to claim 18, characterized in that dividing the 2D original image into the multiple triangular regions using the 2D feature points comprises:
    connecting the 2D feature points in the 2D original image into multiple triangular regions by triangulation.
  20. The method according to claim 18, characterized in that obtaining the projection parameter corresponding to the triangular region using the three 2D feature points comprises:
    obtaining the projection parameter corresponding to the triangular region using the two-dimensional information of the three 2D feature points in the 2D original image and the two-dimensional information of the three 2D feature points in the 2D virtual image.
  21. An image processing device, characterized in that it comprises a memory and a processor;
    the memory is configured to store program code; and
    the processor is configured to call the program code and, when the program code is executed, to perform the following operations:
    obtaining multiple 2D original images that include a target object;
    obtaining a first-type point cloud of a 3D scene of the target object according to the multiple 2D original images;
    performing folding deformation on the first-type point cloud using a deformation function to obtain a second-type point cloud; and
    projecting the second-type point cloud onto an image plane to obtain a 2D virtual image with a folding effect.
  22. The device according to claim 21, characterized in that, when obtaining the multiple 2D original images that include the target object, the processor is specifically configured to: plan a flight path of a mobile platform according to a position of the target object, control the mobile platform to fly along the flight path, and capture the multiple 2D original images that include the target object during the flight.
  23. The device according to claim 21, characterized in that, when obtaining the first-type point cloud of the 3D scene of the target object according to the multiple 2D original images, the processor is specifically configured to: process the multiple 2D original images with an image processing algorithm to obtain the first-type point cloud of the 3D scene of the target object, the first-type point cloud including multiple feature points carrying three-dimensional information.
  24. The device according to claim 21, characterized in that, when performing folding deformation on the first-type point cloud using the deformation function to obtain the second-type point cloud, the processor is specifically configured to:
    select N deformation points on a curve corresponding to the deformation function, N being a positive integer greater than or equal to 1;
    for each feature point in the first-type point cloud, determine at least one deformation point corresponding to the feature point, and fold the feature point using the slope of each determined deformation point to obtain a folded feature point; and
    determine the folded feature points corresponding to the feature points in the first-type point cloud as the second-type point cloud.
  25. The device according to claim 24, characterized in that, when selecting the N deformation points on the curve corresponding to the deformation function, the processor is specifically configured to: select one deformation point at every first distance along the abscissa direction of the curve corresponding to the deformation function; or select one deformation point at every second distance along the ordinate direction of the curve.
  26. The device according to claim 24, characterized in that, when determining the at least one deformation point corresponding to the feature point, the processor is specifically configured to: based on the abscissa values of the N deformation points, select each deformation point whose abscissa value is smaller than the abscissa value of the feature point as a deformation point corresponding to the feature point.
  27. The device according to claim 24, characterized in that, when folding the feature point using the slope of each determined deformation point to obtain the folded feature point, the processor is specifically configured to: if the feature point corresponds to M deformation points, fold the feature point M times, in order from the first deformation point to the M-th deformation point, using the slopes corresponding to the M deformation points, to obtain the folded feature point; M being a positive integer greater than or equal to 1 and less than or equal to N.
  28. The device according to claim 24, characterized in that, when folding the feature point using the slope of each determined deformation point to obtain the folded feature point, the processor is specifically configured to: if the feature point corresponds to M deformation points, perform an i-th folding on the feature point using the slope corresponding to the i-th deformation point to obtain the feature point after the i-th folding; wherein i takes the values 1, 2, ..., M in sequence, and M is a positive integer greater than or equal to 1 and less than or equal to N.
  29. The device according to claim 28, characterized in that, when performing the i-th folding on the feature point using the slope corresponding to the i-th deformation point to obtain the feature point after the i-th folding, the processor is specifically configured to: obtain a rotation parameter of a coordinate system using the slope corresponding to the i-th deformation point, obtain a displacement parameter of the coordinate system using the abscissa value of the i-th deformation point, and perform the i-th folding on the feature point using the rotation parameter and the displacement parameter to obtain the feature point after the i-th folding.
  30. The device according to claim 29, characterized in that, when obtaining the rotation parameter of the coordinate system using the slope corresponding to the i-th deformation point, the processor is specifically configured to: obtain a first angle between the i-th deformation point and the abscissa axis using the slope corresponding to the i-th deformation point; obtain a target angle corresponding to the i-th deformation point using the first angle and a second angle, the second angle being the angle between the (i-1)-th deformation point and the abscissa axis; and obtain the rotation parameter of the coordinate system using the target angle corresponding to the i-th deformation point.
  31. The device according to claim 29, characterized in that, when obtaining the displacement parameter of the coordinate system using the abscissa value of the i-th deformation point, the processor is specifically configured to: construct a translation matrix using the abscissa value of the i-th deformation point, and determine the translation matrix as the displacement parameter of the coordinate system.
  32. The device according to claim 29, characterized in that, when performing the i-th folding on the feature point using the rotation parameter and the displacement parameter to obtain the feature point after the i-th folding, the processor is specifically configured to: perform the i-th folding, using the rotation parameter and the displacement parameter, on the feature point obtained after the (i-1)-th folding, to obtain the feature point after the i-th folding corresponding to the feature point.
  33. The device according to claim 21, characterized in that, when projecting the second-type point cloud onto the image plane to obtain the 2D virtual image with the folding effect, the processor is specifically configured to: obtain position information and attitude information corresponding to a mobile platform; and project each feature point in the second-type point cloud onto an image plane of the mobile platform according to the position information and attitude information corresponding to the mobile platform, to obtain the 2D virtual image with the folding effect.
  34. The device according to claim 33, characterized in that, when obtaining the position information and attitude information corresponding to the mobile platform, the processor is specifically configured to: obtain a shooting position and a shooting attitude corresponding to the mobile platform according to the multiple 2D original images; or obtain the shooting position and the shooting attitude corresponding to the mobile platform according to the multiple 2D original images, and obtain a virtual position and a virtual attitude according to the shooting position and the shooting attitude.
  35. The device according to claim 33, characterized in that, when projecting each feature point in the second-type point cloud onto the image plane of the mobile platform according to the position information and attitude information corresponding to the mobile platform, to obtain the 2D virtual image with the folding effect, the processor is specifically configured to:
    project each feature point in the second-type point cloud onto the image plane using intrinsic parameters corresponding to the mobile platform, the position information, and the attitude information, to obtain a 2D pixel corresponding to each feature point; and
    compose the 2D virtual image from the 2D pixels corresponding to the feature points in the second-type point cloud.
  36. The device according to claim 21, characterized in that, after projecting the second-type point cloud onto the image plane to obtain the 2D virtual image with the folding effect, the processor is further configured to:
    determine, in a 2D original image, the 2D feature point corresponding to each feature point in the second-type point cloud;
    divide the 2D original image into multiple triangular regions using the 2D feature points, wherein each triangular region includes three 2D feature points and multiple 2D original points;
    obtain a projection parameter corresponding to each triangular region using its three 2D feature points; and
    perform projection processing on all 2D original points in the triangular region using the projection parameter.
  37. The device according to claim 36, characterized in that, when dividing the 2D original image into the multiple triangular regions using the 2D feature points, the processor is specifically configured to: connect the 2D feature points in the 2D original image into multiple triangular regions by triangulation.
  38. The device according to claim 36, characterized in that, when obtaining the projection parameter corresponding to the triangular region using the three 2D feature points, the processor is specifically configured to: obtain the projection parameter corresponding to the triangular region using the two-dimensional information of the three 2D feature points in the 2D original image and the two-dimensional information of the three 2D feature points in the 2D virtual image.
  39. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer instructions which, when executed, implement the image processing method according to any one of claims 1-20.
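To make the claimed pipeline concrete (fold a reconstructed point cloud along a deformation curve, then re-project it through a camera model, then warp texture per triangular region), the following NumPy sketch is illustrative only and is not the patented implementation: every function name is invented, the deformation points are sampled at equal abscissa spacing with central-difference slope estimates, and the fold axis is assumed to lie in the plane z = 0.

```python
import numpy as np

def deformation_points(f, x_max, n, eps=1e-4):
    """Sample n deformation points at equal abscissa spacing along the
    deformation curve y = f(x) and estimate the slope at each one
    (cf. claims 5 and 7; f must accept NumPy arrays)."""
    xs = np.linspace(x_max / n, x_max, n)
    slopes = (f(xs + eps) - f(xs - eps)) / (2.0 * eps)
    return xs, slopes

def fold_cloud(points, f, x_max, n=16):
    """Fold an (N, 3) point cloud across every deformation point whose
    abscissa is smaller than each point's abscissa (cf. claims 8-14).
    Each fold combines a rotation (from the change in slope angle) with
    a translation (from the deformation point's abscissa)."""
    xs, slopes = deformation_points(f, x_max, n)
    angles = np.arctan(slopes)              # angle with the abscissa axis (claim 12)
    # Accumulate the folded position of each deformation point (hinge).
    hinges = np.zeros((n, 2))
    hinges[0] = (xs[0], 0.0)
    for i in range(1, n):
        phi = angles[i - 1]
        step = xs[i] - xs[i - 1]
        hinges[i] = hinges[i - 1] + step * np.array([np.cos(phi), np.sin(phi)])
    out = points.astype(float)
    for j, p in enumerate(points):
        x, z = p[0], p[2]
        k = int(np.searchsorted(xs, x))     # deformation points with abscissa < x (claim 8)
        if k == 0:
            continue                        # no fold applies to this feature point
        phi = angles[k - 1]                 # total rotation after k folds (claims 9-10)
        dx, dz = x - xs[k - 1], z
        out[j, 0] = hinges[k - 1, 0] + np.cos(phi) * dx - np.sin(phi) * dz
        out[j, 2] = hinges[k - 1, 1] + np.sin(phi) * dx + np.cos(phi) * dz
    return out

def project(points, K, R, t):
    """Pinhole projection of folded 3D points onto the image plane using
    camera intrinsics K and pose (R, t) (cf. claims 15-17)."""
    cam = R @ points.T + t.reshape(3, 1)
    uv = K @ cam
    return (uv[:2] / uv[2]).T

def triangle_affine(src_tri, dst_tri):
    """Projection parameter for one triangular region (cf. claims 18-20):
    the 2x3 affine map sending the region's three 2D feature points in the
    original image onto their positions in the virtual image."""
    A = np.hstack([src_tri, np.ones((3, 1))])   # rows [x, y, 1]
    return np.linalg.solve(A, dst_tri).T        # maps [x, y, 1] -> [u, v]
```

In a full renderer, the affine map of each triangular region would then be applied to every original pixel inside that region (claim 18), typically via an inverse warp.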
CN201880012242.9A 2018-01-22 2018-01-22 Image processing method, equipment and computer readable storage medium Pending CN110313020A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/073632 WO2019140688A1 (en) 2018-01-22 2018-01-22 Image processing method and apparatus and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN110313020A (en) 2019-10-08

Family

ID=67301242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880012242.9A Pending CN110313020A (en) 2018-01-22 2018-01-22 Image processing method, equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN110313020A (en)
WO (1) WO2019140688A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111699454B (en) * 2019-05-27 2024-04-12 深圳市大疆创新科技有限公司 Flight planning method and related equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101425183A (en) * 2008-11-13 2009-05-06 上海交通大学 Deformable body three-dimensional tracking method based on second order cone programing
US20130057542A1 (en) * 2011-09-07 2013-03-07 Ricoh Company, Ltd. Image processing apparatus, image processing method, storage medium, and image processing system
CN104915986A (en) * 2015-06-26 2015-09-16 北京航空航天大学 Physical three-dimensional model automatic modeling method
US9466143B1 (en) * 2013-05-03 2016-10-11 Exelis, Inc. Geoaccurate three-dimensional reconstruction via image-based geometry
GB201701383D0 (en) * 2017-01-27 2017-03-15 Ucl Business Plc Apparatus, method and system for alignment of 3D datasets

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100443854C * 2006-09-15 2008-12-17 Southeast University Phase Unwrapping Method Based on Gray Code in 3D Scanning System
US9087381B2 (en) * 2013-11-13 2015-07-21 Thomas Tsao Method and apparatus for building surface representations of 3D objects from stereo images
CA2948903C (en) * 2014-05-13 2020-09-22 Pcp Vr Inc. Method, system and apparatus for generation and playback of virtual reality multimedia
CN107576275A (en) * 2017-08-11 2018-01-12 哈尔滨工业大学 A kind of method for carrying out straining field measurement to inflatable structure using photogrammetric technology


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101935820B1 (en) 2017-08-28 2019-01-07 세종화학 (주) Method for manufacturing high purity ammonium dihydrogenphosphate using ammonium phosphate waste solutions
CN113362236A (en) * 2020-03-05 2021-09-07 北京京东乾石科技有限公司 Point cloud enhancement method, point cloud enhancement device, storage medium and electronic equipment
CN113362236B (en) * 2020-03-05 2024-03-05 北京京东乾石科技有限公司 Point cloud enhancement method, point cloud enhancement device, storage medium and electronic equipment
CN113628322A (en) * 2021-07-26 2021-11-09 阿里巴巴(中国)有限公司 Image processing method, AR display live broadcast method, AR display equipment, AR display live broadcast equipment and storage medium
CN113628322B (en) * 2021-07-26 2023-12-05 阿里巴巴(中国)有限公司 Image processing, AR display and live broadcast method, device and storage medium
CN113926191A (en) * 2021-11-18 2022-01-14 网易(杭州)网络有限公司 Method, device and electronic device for virtual camera control
CN113926191B (en) * 2021-11-18 2024-12-20 网易(杭州)网络有限公司 Virtual camera control method, device and electronic device

Also Published As

Publication number Publication date
WO2019140688A1 (en) 2019-07-25

Similar Documents

Publication Publication Date Title
US10349033B2 (en) Three-dimensional map generating and displaying apparatus and method
US11967162B2 (en) Method and apparatus for 3-D auto tagging
US10958891B2 (en) Visual annotation using tagging sessions
EP3970117B1 (en) Distributed pose estimation
US11417069B1 (en) Object and camera localization system and localization method for mapping of the real world
EP3786892B1 (en) Method, device and apparatus for repositioning in camera orientation tracking process, and storage medium
US20230045393A1 (en) Volumetric depth video recording and playback
US10732797B1 (en) Virtual interfaces for manipulating objects in an immersive environment
US20200320777A1 (en) Neural rerendering from 3d models
CN110313020A (en) Image processing method, equipment and computer readable storage medium
JP6775776B2 (en) Free viewpoint movement display device
US9978180B2 (en) Frame projection for augmented reality environments
CN105765631A (en) Large-scale surface reconstruction that is robust against tracking and mapping errors
EP3832605B1 (en) Method and device for determining potentially visible set, apparatus, and storage medium
US20240428437A1 (en) Object and camera localization system and localization method for mapping of the real world
US10325403B2 (en) Image based rendering techniques for virtual reality
CN111602104A (en) Method and apparatus for presenting synthetic reality content in association with identified objects
CN117152313A (en) Virtual paper
CN111222586A (en) Inclined image matching method and device based on three-dimensional inclined model visual angle
US20240161377A1 (en) Physics-based simulation of human characters in motion
US11385856B2 (en) Synchronizing positioning systems and content sharing between multiple devices
WO2017212999A1 (en) Video generation device, video generation method, and video generation program
Strothoff et al. Interactive generation of virtual environments using muavs
TW202311815A (en) Display of digital media content on physical surface
US12271992B2 (en) Hybrid depth maps

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20191008