
CN117745528A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN117745528A
CN117745528A (application CN202310918880.XA)
Authority
CN
China
Prior art keywords
image
processing
images
perspective correction
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310918880.XA
Other languages
Chinese (zh)
Inventor
叶伟文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202310918880.XA
Publication of CN117745528A
Legal status: Pending

Landscapes

  • Studio Devices (AREA)

Abstract

The application discloses an image processing method and device, belonging to the technical field of image processing. The scheme comprises the following steps: acquiring pose information of at least two images, wherein the at least two images are adjacent image frames; performing first processing on the at least two images according to the pose information, wherein the first processing comprises perspective correction processing; and performing stitching processing on the at least two images after the first processing to obtain a third image.

Description

Image processing method and device
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image processing method and an image processing device.
Background
In daily life, there are many scenes in which panoramic images need to be photographed, for example, when a user photographs a landscape.
In the related art, even an ultra-wide-angle lens has a limited photographing range; therefore, image stitching techniques can be used to improve the scene coverage of a panoramic image.
However, since it is difficult to keep the shooting angles of separately captured images consistent, panoramic images formed in the prior art by stitching a plurality of images suffer from a high degree of perspective distortion.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method and device, which can solve the problem of high perspective deformation of panoramic images formed by stitching a plurality of images.
In a first aspect, an embodiment of the present application provides an image processing method, including: acquiring pose information of at least two images, wherein the at least two images are adjacent image frames; performing first processing on the at least two images according to the pose information, wherein the first processing comprises perspective correction processing; and performing stitching processing on the at least two images after the first processing to obtain a third image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including an acquisition module and a processing module. The acquisition module is configured to acquire pose information of at least two images, wherein the at least two images are adjacent image frames. The processing module is configured to perform first processing on the at least two images according to the pose information, wherein the first processing comprises perspective correction processing, and to perform stitching processing on the at least two images after the first processing to obtain a third image.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, pose information of at least two images is first acquired, where the at least two images are adjacent image frames; perspective correction processing is then performed on the at least two images according to the pose information, which reduces the perspective deformation of adjacent image frames caused by different shooting angles; finally, stitching processing is performed on the at least two images after perspective correction to obtain a panoramic image with low perspective deformation, namely a third image. This scheme reduces the degree of perspective deformation of the panoramic image formed by stitching multiple images during panoramic shooting, thereby improving the user's shooting experience.
Drawings
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an image processing procedure of the image processing method provided in the embodiment of the present application;
fig. 3 is a second schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an image processing apparatus provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 6 is a hardware schematic of an electronic device according to an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application are within the scope of the protection of the present application.
The terms "first", "second" and the like in the description and in the claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
The image processing method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
The execution subject of the image processing method provided in the embodiment of the present application may be an electronic device, or a functional module or functional entity in an electronic device capable of implementing the image processing method. The electronic device in the embodiment of the present application includes, but is not limited to, a mobile phone, a tablet computer, a camera, a wearable device, and the like. The image processing method provided in the embodiment of the present application is described below by taking the electronic device as the execution subject.
As shown in fig. 1, an embodiment of the present application provides an image processing method, which may include steps 101 to 103:
and 101, acquiring the attitude information of at least two images.
Wherein the at least two images may be adjacent image frames.
Optionally, the electronic device in an embodiment of the present application may include an inertial measurement unit (IMU) that measures the pose of the electronic device while the electronic device captures the at least two images through the camera. Since the sampling frequency of the IMU is generally higher than the shooting frame rate of the camera, the electronic device may determine the pose information corresponding to each of the at least two images according to the timestamp of each image frame.
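Since the IMU samples faster than the camera, matching each frame to the nearest pose sample by timestamp is the natural strategy. The sketch below illustrates this; the function name and the nearest-sample choice are assumptions for illustration, not taken from the patent text.

```python
import bisect

def pose_for_frame(frame_ts, imu_ts, imu_poses):
    """Return the IMU pose sample closest in time to a frame timestamp.

    imu_ts must be sorted ascending; imu_poses[i] is the pose measured
    at time imu_ts[i]. Illustrative sketch only.
    """
    i = bisect.bisect_left(imu_ts, frame_ts)
    if i == 0:
        return imu_poses[0]
    if i == len(imu_ts):
        return imu_poses[-1]
    # pick whichever neighbouring IMU sample is closer in time
    before, after = imu_ts[i - 1], imu_ts[i]
    return imu_poses[i] if after - frame_ts < frame_ts - before else imu_poses[i - 1]
```

Because the IMU stream is dense relative to the frame rate, nearest-sample lookup is usually adequate; an implementation could also interpolate between the two neighbouring samples.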
Optionally, the triggering mechanism for the electronic device to acquire the pose information may be: when the electronic device is in the image composition shooting mode, a first input of a user is received, and in response to the first input the electronic device may determine at least two images and acquire the pose information of the at least two images.
It should be noted that, the image composition shooting mode is one of panorama shooting modes, and in the image composition shooting mode, the electronic device may continuously shoot at least two images and perform stitching processing on the at least two images to generate a panoramic image.
Optionally, in the image composition shooting mode, the user may customize the number of images continuously shot by the electronic device. The larger the number of images continuously shot by the electronic device, the less abrupt the finally generated panoramic image appears; the smaller the number of images continuously shot, the faster the panoramic image is generated.
Optionally, after acquiring the pose information of the at least two images, the electronic device may compare the pose information of the at least two images; if the pose difference between the at least two images is greater than a preset value, indicating that the content overlap between the at least two images is low, the electronic device may output first prompt information used to prompt the user to re-shoot the images.
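A minimal sketch of such an overlap check, assuming the pose difference is measured as the angle between two camera orientation vectors and using an illustrative 30° threshold (the patent only says "greater than a preset value"):

```python
import math

def needs_reshoot(v1, v2, max_angle_deg=30.0):
    """Flag a low-overlap pair: if the angle between the two camera
    orientation vectors exceeds a preset threshold, the user should be
    prompted to re-shoot. Both the vector representation of the poses
    and the default threshold are illustrative assumptions."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    # clamp to guard against floating-point drift outside [-1, 1]
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
    return angle > max_angle_deg
```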
Step 102, performing first processing on the at least two images according to the pose information.
Wherein the first process includes a perspective correction process.
Alternatively, the at least two images may include a first image and a second image. The pose information may include first pose information corresponding to the first image and second pose information corresponding to the second image. As shown in fig. 2, the electronic device may perform perspective correction processing on the first image according to the first pose information, and perform perspective correction processing on the second image according to the second pose information.
The perspective correction processing refers to transforming or editing an image so that vertical lines in the real scene are also represented as vertical lines in the image; that is, deformation of the image can be reduced through the perspective correction processing.
Based on this scheme, since perspective correction processing can be performed on the first image and the second image respectively, the problem of perspective deformation in the first image and the second image can be avoided, which provides a basis for synthesizing the third image.
Optionally, the electronic device performing perspective correction processing on the first image according to the first pose information includes: the electronic device determines a first rotation vector of the first image in the gravity direction according to the first pose information; determines a first rotation matrix according to the first rotation vector and the shooting parameters of the first image; and performs perspective correction processing on the first image according to the first rotation matrix.
Specifically, the electronic device may determine, from the first pose information v_1, the first rotation vector r_g of the first image in the gravity direction according to formula (1), where g is the vector of the gravity direction. That is, by rotating the first pose v_1 to the gravity direction, the first image can be subjected to perspective correction processing.

r_g = v_1 × g    (1)

Alternatively, the electronic device may determine the first rotation matrix K·R_g·K_inv from the first rotation vector r_g and the shooting parameters of the first image, where K is the imaging parameter (camera intrinsic) matrix, K_inv is the inverse matrix of K, and R_g is the rotation matrix corresponding to the pose information of the first image. R_g can be determined from the first rotation vector r_g according to formula (2):

R_g = cos(nr_g)·I + (1 − cos(nr_g))·(r_g/nr_g)·(r_g/nr_g)^T + sin(nr_g)·[r_g/nr_g]_× ,    (2)

where nr_g = norm(r_g), I is the 3×3 identity matrix, T denotes the matrix transpose, and [x]_× denotes the skew-symmetric cross-product matrix of a vector x.

Finally, the electronic device may perform perspective correction processing on the pixels in the first image according to the first rotation matrix K·R_g·K_inv.
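The per-pixel correction just described can be sketched in pure Python. The Rodrigues construction below follows the note later in this section that formulas (2), (4) and (5) are determined based on the Rodrigues rotation formula; the function names and the pure-Python matrix handling are illustrative, not the patent's implementation.

```python
import math

def rodrigues(r):
    """Rotation matrix from a rotation vector r via the Rodrigues formula:
    axis k = r/|r|, angle theta = |r|."""
    theta = math.sqrt(sum(c * c for c in r))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    k = [c / theta for c in r]
    c, s = math.cos(theta), math.sin(theta)
    kx = [[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]]  # [k]x
    return [[c * (i == j) + (1 - c) * k[i] * k[j] + s * kx[i][j]
             for j in range(3)] for i in range(3)]

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][t] * b[t][j] for t in range(3)) for j in range(3)]
            for i in range(3)]

def warp_pixel(k_mat, k_inv, rot, u, v):
    """Apply the per-pixel correction homography H = K * R * K_inv to a
    pixel (u, v) in homogeneous coordinates."""
    h = matmul(matmul(k_mat, rot), k_inv)
    q = [h[i][0] * u + h[i][1] * v + h[i][2] for i in range(3)]
    return q[0] / q[2], q[1] / q[2]
```

With an identity intrinsic matrix, warp_pixel reduces to applying R directly to the homogeneous pixel coordinates; with a real camera matrix, K and K_inv move between pixel and normalized camera coordinates.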
Similarly, the process of the electronic device performing perspective correction processing on the second image according to the second pose information includes: the electronic device determines a second rotation vector of the second image in the gravity direction according to the second pose information; determines a second rotation matrix according to the second rotation vector and the shooting parameters; and performs perspective correction processing on the second image according to the second rotation matrix.
Specifically, the electronic device may determine, from the second pose information v_i, the second rotation vector r_gi of the second image in the gravity direction according to formula (3); that is, by rotating the second pose v_i to the gravity direction, the second image can be subjected to perspective correction processing.

r_gi = v_i × g    (3)

The electronic device can then determine the second rotation matrix K·R_gi·K_inv from the second rotation vector r_gi and the shooting parameters, where R_gi is the rotation matrix corresponding to the pose information of the second image. R_gi can be determined from the second rotation vector r_gi according to formula (4):

R_gi = cos(nr_gi)·I + (1 − cos(nr_gi))·(r_gi/nr_gi)·(r_gi/nr_gi)^T + sin(nr_gi)·[r_gi/nr_gi]_× ,    (4)

where nr_gi = norm(r_gi).

In this embodiment of the application, the electronic device may perform perspective correction processing on the pixels in the second image according to the second rotation matrix K·R_gi·K_inv.
Based on the above scheme, since the first rotation vector of the first image in the gravity direction can be determined, the first pose can be rotated to the gravity direction, thereby realizing perspective correction processing of the first image.
Optionally, the first processing may further include scaling processing. The electronic device may determine a first pixel from the first image not subjected to perspective correction processing; determine a second pixel corresponding to the first pixel from the second image not subjected to perspective correction processing; determine a first coefficient according to the first pixel, the second pixel and the pose information; and scale the second image subjected to perspective correction processing according to the first coefficient.
Specifically, due to parallax, the images subjected to the perspective correction processing may be deformed to different degrees; in order to reduce the degree of deformation, with continued reference to fig. 2, the electronic device may perform the scaling processing on the second image according to the first image. Taking the first pose information v_1 corresponding to the first image as the reference pose information, the third rotation vector for aligning the second image to the first image is r = v_i × v_1, and the third rotation matrix corresponding to the third rotation vector is K·R·K_inv, where R can be determined from the third rotation vector r according to formula (5):

R = cos(nr)·I + (1 − cos(nr))·(r/nr)·(r/nr)^T + sin(nr)·[r/nr]_× ,    (5)

where nr = norm(r).

The first coefficient s between the first image not subjected to perspective correction processing and the second image subjected to perspective correction processing can then be determined using the result of formula (5) above; specifically, the first coefficient s can be determined according to formula (6), where P_1 is the pixel coordinate of the first pixel and P_i is the pixel coordinate of the second pixel.
Based on the scheme, the second image subjected to perspective correction processing can be subjected to scaling processing according to the first coefficient, so that deformation of the first image and the second image can be reduced, and the image display quality is improved.
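Formula (6) itself is not spelled out in this text, so the sketch below assumes, purely for illustration, that s is the ratio of the corresponding pixels' distances from a common reference point, and shows how such a coefficient would then scale a coordinate of the perspective-corrected second image:

```python
import math

def scale_coefficient(p1, pi_aligned, center=(0.0, 0.0)):
    """First coefficient s between corresponding pixels P_1 and P_i.

    Assumed definition (not taken from the patent): the ratio of the two
    pixels' distances from a common reference point, e.g. the image center.
    """
    return math.dist(p1, center) / math.dist(pi_aligned, center)

def scale_point(p, s, center=(0.0, 0.0)):
    """Scale a pixel coordinate about the reference point by factor s."""
    return (center[0] + s * (p[0] - center[0]),
            center[1] + s * (p[1] - center[1]))
```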
It should be noted that the above formula (2), formula (4) and formula (5) are determined based on the Rodrigues rotation formula.
Step 103, performing stitching processing on the at least two images after the first processing to obtain a third image.
Optionally, with continued reference to fig. 2, the electronic device may perform stitching processing on the first image and the second image after the first processing, to obtain the third image.
Optionally, the electronic device may extract image feature points from the at least two images after the first processing based on a first algorithm, and fuse the image feature points based on a second algorithm. The first algorithm may be one of: the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, and the oriented FAST and rotated BRIEF (ORB) feature point extraction and description algorithm. The second algorithm may be one of: a linear blending algorithm, a multi-resolution fusion algorithm, and a fusion algorithm based on energy optimization.
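Of the second-algorithm options, linear blending is the simplest to show concretely. The sketch below feathers two overlapping single-channel pixel rows with a linear weight ramp; it is an illustrative toy, not the patent's implementation:

```python
def linear_blend_row(left, right, overlap):
    """Feather two single-channel pixel rows that share `overlap` columns.

    The last `overlap` pixels of `left` and the first `overlap` pixels of
    `right` cover the same scene content; the blend weight of the right
    image ramps linearly from 0 to 1 across the overlap region."""
    out = list(left[:len(left) - overlap])
    for k in range(overlap):
        w = (k + 1) / (overlap + 1)   # weight of the right-hand image
        out.append((1 - w) * left[len(left) - overlap + k] + w * right[k])
    out.extend(right[overlap:])
    return out
```

Linear blending hides the seam but can ghost moving content; the multi-resolution and energy-optimization alternatives listed above address that at higher cost.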
The image processing method provided in the embodiment of the present application is described in full below.
As shown in fig. 3, the image processing method provided in the embodiment of the present application may include steps 301 to 308:
step 301, the electronic device may acquire first gesture information corresponding to the first image.
The electronic device may capture the first image, determine a timestamp of the first image, and determine first pose information corresponding to the first image from pose information measured by the IMU according to the timestamp of the first image, where the first pose information is pose information when the electronic device captures the first image.
Step 302, the electronic device may perform perspective correction processing on the first image according to the first pose information.
The electronic device can determine a first rotation vector of the first image in the gravity direction according to the first pose information, determine a first rotation matrix according to the first rotation vector and the shooting parameters of the first image, and finally perform perspective correction processing on the first image according to the first rotation matrix.
In step 303, the electronic device may determine whether a second input of the user is received, where the second input is used to trigger the electronic device to end the shooting process.
Step 304, if the second input is not received, the electronic device may acquire second pose information corresponding to the second image.
For the process of the electronic device obtaining the second pose information, refer to the process of obtaining the first pose information, which is not repeated here.
In step 305, the electronic device may perform perspective correction processing on the second image according to the second pose information.
The process of performing perspective correction processing on the second image by the electronic device may refer to the process of performing perspective correction processing on the first image by the electronic device, which is not described herein.
Step 306, the electronic device may scale the second image according to the first image.
Due to the influence of parallax, the images subjected to perspective correction processing can be deformed to different degrees, so that in order to reduce the deformation degree, the electronic device can perform scaling processing on the second image according to the first image.
In step 307, the electronic device may determine the stitched image of the first image and the second image as a new first image.
If a second input is received, the electronic device may output an image, step 308.
Alternatively, the electronic device may determine the first image as the third image and output the third image.
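The loop of steps 301 to 308 can be summarized as: correct the first frame, then repeatedly correct and stitch new frames into the running result until the second input arrives. A schematic sketch, with all four callables standing in for the patent's steps:

```python
def panorama_loop(capture_frame, stop_requested, correct, stitch):
    """Incremental loop of steps 301-308: correct each new frame and
    stitch it into the running panorama until the user's second input
    ends capture. The callables are illustrative stand-ins."""
    base = correct(capture_frame())        # steps 301-302
    while not stop_requested():            # step 303
        frame = correct(capture_frame())   # steps 304-306 (incl. scaling)
        base = stitch(base, frame)         # step 307: result becomes the new first image
    return base                            # step 308: output the image
```

Making the stitched result the new first image (step 307) is what lets the method grow the panorama one adjacent frame at a time instead of aligning all frames at once.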
In the embodiment of the application, pose information of at least two images is first acquired, where the at least two images are adjacent image frames; perspective correction processing is then performed on the at least two images according to the pose information, which reduces the perspective deformation of adjacent image frames caused by different shooting angles; finally, stitching processing is performed on the at least two images after perspective correction to obtain a panoramic image with low perspective deformation, namely a third image. This scheme reduces the degree of perspective deformation of the panoramic image formed by stitching multiple images during panoramic shooting, thereby improving the user's shooting experience.
The image processing method provided by the embodiments of the application may be executed by an image processing apparatus. In the embodiments of the present application, the image processing apparatus provided in the embodiments of the present application is described by taking the case in which the image processing apparatus executes the image processing method as an example.
As shown in fig. 4, the embodiment of the present application further provides an image processing apparatus 400, including: an acquisition module 401 and a processing module 402. An acquiring module 401, configured to acquire pose information of at least two images, where the at least two images are adjacent image frames; a processing module 402, configured to perform a first process on at least two images according to the pose information, where the first process includes perspective correction processing; and performing stitching processing on at least two images after the first processing to obtain a third image.
Optionally, the at least two images include a first image and a second image, and the pose information includes first pose information corresponding to the first image and second pose information corresponding to the second image; the processing module 402 is specifically configured to perform perspective correction processing on the first image according to the first pose information, and perform perspective correction processing on the second image according to the second pose information.
Optionally, the processing module 402 is specifically configured to determine a first rotation vector of the first image in the gravity direction according to the first pose information; determining a first rotation matrix according to the first rotation vector and shooting parameters of the first image; and performing perspective correction processing on the first image according to the first rotation matrix.
Optionally, the first processing further includes scaling processing; the processing module 402 is further configured to determine a first pixel from the first image not subjected to perspective correction processing; determine a second pixel corresponding to the first pixel from the second image not subjected to perspective correction processing; determine a first coefficient according to the first pixel, the second pixel and the pose information; and scale the second image subjected to perspective correction processing according to the first coefficient.
Optionally, the processing module 402 is specifically configured to extract image feature points from the at least two images after the first processing based on a first algorithm, and to fuse the image feature points based on a second algorithm; wherein the first algorithm is one of: the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, and the oriented FAST and rotated BRIEF (ORB) algorithm; and the second algorithm is one of: a linear blending algorithm, a multi-resolution fusion algorithm, and a fusion algorithm based on energy optimization.
In the embodiment of the application, pose information of at least two images is first acquired, where the at least two images are adjacent image frames; perspective correction processing is then performed on the at least two images according to the pose information, which reduces the perspective deformation of adjacent image frames caused by different shooting angles; finally, stitching processing is performed on the at least two images after perspective correction to obtain a panoramic image with low perspective deformation, namely a third image. This scheme reduces the degree of perspective deformation of the panoramic image formed by stitching multiple images during panoramic shooting, thereby improving the user's shooting experience.
The image processing apparatus in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (AR)/virtual reality (VR) device, robot, wearable device, ultra-mobile personal computer (UMPC), netbook or personal digital assistant (PDA), etc., and may also be a server, network attached storage (Network Attached Storage, NAS), personal computer (PC), television (TV), teller machine or self-service machine, etc.; this is not specifically limited in the embodiments of the present application.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The image processing apparatus provided in this embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 3; to avoid repetition, details are not repeated here.
Optionally, as shown in fig. 5, the embodiment of the present application further provides an electronic device 500, including a processor 501 and a memory 502, where the memory 502 stores a program or an instruction that can be executed on the processor 501, and the program or the instruction implements each step of the embodiment of the image processing method when executed by the processor 501, and the steps achieve the same technical effects, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 6 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 600 includes, but is not limited to: radio frequency unit 601, network module 602, audio output unit 603, input unit 604, sensor 605, display unit 606, user input unit 607, interface unit 608, memory 609, and processor 610.
Those skilled in the art will appreciate that the electronic device 600 may further include a power source (e.g., a battery) for powering the various components; the power source may be logically connected to the processor 610 through a power management system, so that functions such as charge management, discharge management and power consumption management are performed by the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components, which is not described in detail herein.
The processor 610 is configured to obtain pose information of at least two images, where the at least two images are adjacent image frames; a processor 610 for performing a first process on at least two images according to the pose information, the first process including perspective correction processing; and performing stitching processing on at least two images after the first processing to obtain a third image.
In the embodiment of the application, pose information of at least two images is first acquired, where the at least two images are adjacent image frames; perspective correction processing is then performed on the at least two images according to the pose information, which reduces the perspective deformation of adjacent image frames caused by different shooting angles; finally, stitching processing is performed on the at least two images after perspective correction to obtain a panoramic image with low perspective deformation, namely a third image. This scheme reduces the degree of perspective deformation of the panoramic image formed by stitching multiple images during panoramic shooting, thereby improving the user's shooting experience.
Optionally, the at least two images include a first image and a second image, and the pose information includes first pose information corresponding to the first image and second pose information corresponding to the second image; the processor 610 is specifically configured to perform perspective correction processing on the first image according to the first pose information, and perform perspective correction processing on the second image according to the second pose information.
In the embodiment of the application, since perspective correction processing can be performed on the first image and the second image respectively, the problem of perspective deformation in the first image and the second image can be mitigated, thereby providing a basis for synthesizing the third image.
Optionally, the processor 610 is specifically configured to determine a first rotation vector of the first image in a gravity direction according to the first pose information; determining a first rotation matrix according to the first rotation vector and shooting parameters of the first image; and performing perspective correction processing on the first image according to the first rotation matrix.
In the embodiment of the application, since the first rotation vector of the first image in the gravity direction can be determined, the first image can be rotated to align with the gravity direction, thereby realizing perspective correction processing of the first image.
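The patent does not give formulas for these steps. Under the common pinhole-camera model, rotating an image "as if" the camera pointed along the gravity direction amounts to warping it by the homography H = K·R·K⁻¹, where R is built from the rotation vector via Rodrigues' formula and K is the intrinsic matrix formed from the shooting parameters. The sketch below illustrates this standard relation; the function names and the parameters fx, fy, cx, cy are illustrative, not taken from the patent.

```python
import numpy as np

def rodrigues(rvec):
    """Convert a 3-element rotation vector to a 3x3 rotation matrix
    (Rodrigues' formula); a plausible stand-in for the patent's
    'first rotation vector' -> 'first rotation matrix' step."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta                      # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])    # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def correction_homography(rvec, fx, fy, cx, cy):
    """Homography H = K * R * K^-1 that warps the image as if the camera
    had been rotated by rvec (here: toward the gravity direction).
    fx, fy, cx, cy are the focal lengths and principal point taken from
    the shooting parameters."""
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    return K @ rodrigues(rvec) @ np.linalg.inv(K)
```

Applying H to every pixel coordinate (with any warping routine) then yields the perspective-corrected first image.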
Optionally, the first processing further includes scaling processing; the processor 610 is further configured to determine a first pixel from the first image that has not been subjected to perspective correction processing; determine a second pixel corresponding to the first pixel from the second image that has not been subjected to perspective correction processing; determine a first coefficient according to the first pixel, the second pixel, and the pose information; and scale the second image subjected to perspective correction processing according to the first coefficient.
In the embodiment of the application, since the second image subjected to perspective correction processing can be scaled according to the first coefficient, deformation between the first image and the second image can be reduced, thereby improving image display quality.
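The patent leaves the formula for the first coefficient open. The sketch below assumes one plausible choice — the ratio of the two corresponding pixels' radial distances from the principal point — and pairs it with a simple nearest-neighbour rescale; both function names and the coefficient formula are assumptions for illustration, not the patented method.

```python
import numpy as np

def first_coefficient(p1, p2, cx, cy):
    """Hypothetical formula: ratio of the correspondences' distances from
    the principal point (cx, cy). The patent does not specify how the
    coefficient is derived from the two pixels and the pose information."""
    d1 = np.hypot(p1[0] - cx, p1[1] - cy)
    d2 = np.hypot(p2[0] - cx, p2[1] - cy)
    return d1 / d2

def scale_image(img, s):
    """Nearest-neighbour rescale of an HxW or HxWxC array by factor s."""
    h, w = img.shape[:2]
    nh, nw = max(1, int(round(h * s))), max(1, int(round(w * s)))
    ys = np.clip((np.arange(nh) / s).astype(int), 0, h - 1)  # source rows
    xs = np.clip((np.arange(nw) / s).astype(int), 0, w - 1)  # source cols
    return img[np.ix_(ys, xs)]
```

In use, the corrected second image would be passed to `scale_image` with the coefficient returned by `first_coefficient`, bringing its scale in line with the first image before stitching.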
Optionally, the processor 610 is specifically configured to extract image feature points from the at least two images after the first processing based on a first algorithm, and perform fusion processing on the image feature points based on a second algorithm; wherein the first algorithm is one of the following: a scale-invariant feature transform (SIFT) algorithm, a speeded-up robust features (SURF) algorithm, and a feature point extraction and description algorithm; and the second algorithm is one of the following: a linear blending algorithm, a multi-resolution fusion algorithm, and an energy-optimization-based fusion algorithm.
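Feature extraction and matching (SIFT/SURF and the like) are typically delegated to a library such as OpenCV, but the simplest of the listed fusion options, linear blending, can be sketched with plain numpy once the overlap width between the two corrected images is known. The fixed-overlap interface below is an assumption for illustration; in practice the overlap would come from the matched feature points.

```python
import numpy as np

def linear_blend(left, right, overlap):
    """Linearly blend two equally tall images whose last/first `overlap`
    columns cover the same scene region. The blend weight ramps from the
    left image to the right image across the seam."""
    h, wl = left.shape[0], left.shape[1]
    w = wl + right.shape[1] - overlap
    out = np.zeros((h, w) + left.shape[2:], dtype=np.float64)
    out[:, :wl - overlap] = left[:, :-overlap]   # left-only region
    out[:, wl:] = right[:, overlap:]             # right-only region
    alpha = np.linspace(0.0, 1.0, overlap)       # ramp weight across the seam
    if left.ndim == 3:
        alpha = alpha[:, None]                   # broadcast over channels
    out[:, wl - overlap:wl] = ((1 - alpha) * left[:, -overlap:]
                               + alpha * right[:, :overlap])
    return out
```

A multi-resolution (pyramid) or energy-optimization fusion would replace the single alpha ramp with per-frequency-band or per-pixel seam weights, at the cost of more computation.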
It should be understood that, in the embodiment of the present application, the input unit 604 may include a graphics processor (Graphics Processing Unit, GPU) 6041 and a microphone 6042, and the graphics processor 6041 processes image data of still pictures or videos obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 607 includes at least one of a touch panel 6071 and other input devices 6072. The touch panel 6071 is also called a touch screen. The touch panel 6071 may include two parts: a touch detection device and a touch controller. The other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, a switch key, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, and application programs or instructions (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like. Further, the memory 609 may include a volatile memory or a nonvolatile memory, or the memory 609 may include both volatile and nonvolatile memories. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchronous link DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 609 in the present embodiment includes, but is not limited to, these and any other suitable types of memory.
The processor 610 may include one or more processing units; optionally, the processor 610 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The embodiment of the present application further provides a readable storage medium storing a program or an instruction which, when executed by a processor, implements the processes of the above image processing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here.
Wherein the processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is configured to run a program or an instruction to implement the processes of the above image processing method embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the image processing method described above, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, and may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by means of hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, or the part contributing to the related art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Enlightened by the present application, those of ordinary skill in the art may make many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (10)

1. An image processing method, comprising:
acquiring pose information of at least two images, wherein the at least two images are adjacent image frames;
performing first processing on the at least two images according to the pose information, wherein the first processing comprises perspective correction processing;
and performing stitching processing on the at least two images after the first processing to obtain a third image.
2. The image processing method according to claim 1, wherein the at least two images include a first image and a second image, and the pose information includes first pose information corresponding to the first image and second pose information corresponding to the second image;
the first processing of the at least two images according to the gesture information includes:
and performing perspective correction processing on the first image according to the first pose information, and performing perspective correction processing on the second image according to the second pose information.
3. The image processing method according to claim 2, wherein the performing perspective correction processing on the first image according to the first pose information comprises:
determining a first rotation vector of the first image in the gravity direction according to the first pose information;
determining a first rotation matrix according to the first rotation vector and shooting parameters of the first image;
and performing perspective correction processing on the first image according to the first rotation matrix.
4. The image processing method according to claim 1, wherein the first processing further comprises scaling processing;
the performing first processing on the at least two images according to the pose information comprises:
determining a first pixel from the first image that has not been subjected to perspective correction processing;
determining a second pixel corresponding to the first pixel from the second image that has not been subjected to perspective correction processing;
determining a first coefficient according to the first pixel, the second pixel, and the pose information;
and scaling the second image subjected to perspective correction processing according to the first coefficient.
5. The image processing method according to any one of claims 1 to 4, wherein the performing stitching processing on the at least two images after the first processing comprises:
extracting image feature points from the at least two images after the first processing based on a first algorithm;
carrying out fusion processing on the image characteristic points based on a second algorithm;
wherein the first algorithm is one of the following: a scale-invariant feature transform (SIFT) algorithm, a speeded-up robust features (SURF) algorithm, and a feature point extraction and description algorithm; and the second algorithm is one of the following: a linear blending algorithm, a multi-resolution fusion algorithm, and an energy-optimization-based fusion algorithm.
6. An image processing apparatus, comprising: the device comprises an acquisition module and a processing module;
the acquisition module is configured to acquire pose information of at least two images, wherein the at least two images are adjacent image frames;
the processing module is configured to perform first processing on the at least two images according to the pose information, wherein the first processing comprises perspective correction processing; and perform stitching processing on the at least two images after the first processing to obtain a third image.
7. The image processing apparatus according to claim 6, wherein the at least two images include a first image and a second image, and the pose information includes first pose information corresponding to the first image and second pose information corresponding to the second image;
the processing module is specifically configured to perform perspective correction processing on the first image according to the first posture information, and perform perspective correction processing on the second image according to the second posture information.
8. The image processing device according to claim 7, wherein the processing module is specifically configured to determine a first rotation vector of the first image in the gravity direction according to the first pose information; determine a first rotation matrix according to the first rotation vector and shooting parameters of the first image; and perform perspective correction processing on the first image according to the first rotation matrix.
9. The image processing apparatus according to claim 6, wherein the first processing further comprises scaling processing; the processing module is further configured to determine a first pixel from the first image that has not been subjected to perspective correction processing; determine a second pixel corresponding to the first pixel from the second image that has not been subjected to perspective correction processing; determine a first coefficient according to the first pixel, the second pixel, and the pose information; and scale the second image subjected to perspective correction processing according to the first coefficient.
10. The image processing device according to any one of claims 6-9, wherein the processing module is specifically configured to extract image feature points from the at least two images after the first processing based on a first algorithm, and perform fusion processing on the image feature points based on a second algorithm; wherein the first algorithm is one of the following: a scale-invariant feature transform (SIFT) algorithm, a speeded-up robust features (SURF) algorithm, and a feature point extraction and description algorithm; and the second algorithm is one of the following: a linear blending algorithm, a multi-resolution fusion algorithm, and an energy-optimization-based fusion algorithm.
CN202310918880.XA 2023-07-25 2023-07-25 Image processing method and device Pending CN117745528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310918880.XA CN117745528A (en) 2023-07-25 2023-07-25 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310918880.XA CN117745528A (en) 2023-07-25 2023-07-25 Image processing method and device

Publications (1)

Publication Number Publication Date
CN117745528A true CN117745528A (en) 2024-03-22

Family

ID=90251413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310918880.XA Pending CN117745528A (en) 2023-07-25 2023-07-25 Image processing method and device

Country Status (1)

Country Link
CN (1) CN117745528A (en)

Similar Documents

Publication Publication Date Title
CN110012209B (en) Panoramic image generation method, device, storage medium and electronic device
CN110636276B (en) Video shooting method and device, storage medium and electronic equipment
CN108776822B (en) Target area detection method, device, terminal and storage medium
CN112637500B (en) Image processing method and device
CN112422798A (en) Photographing method and device, electronic equipment and storage medium
CN114241127B (en) Panoramic image generation method, device, electronic device and medium
JP2010072813A (en) Image processing device and image processing program
CN114390206A (en) Shooting method, device and electronic device
CN112561787A (en) Image processing method, image processing device, electronic equipment and storage medium
CN115426444B (en) Shooting method and device
CN113891005B (en) Shooting method and device and electronic equipment
CN114785957A (en) Shooting method and device thereof
CN117135445A (en) Image processing methods and devices
CN112367470B (en) Image processing method and device and electronic equipment
WO2023241495A1 (en) Photographic method and apparatus
CN117745528A (en) Image processing method and device
CN115861110A (en) Image processing method, device, electronic device and storage medium
CN114049473A (en) Image processing method and device
CN114125297A (en) Video shooting method and device, electronic equipment and storage medium
CN115103119B (en) Shooting method, device and electronic equipment
CN117097982B (en) Target detection method and system
CN115118879B (en) Image capturing and displaying method, device, electronic device and readable storage medium
CN115118884B (en) Shooting method, device and electronic equipment
CN114143462B (en) Shooting method and device
CN116342992A (en) Image processing method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination