CN119697353A - Image perspective method, device, electronic device, wearable device and storage medium - Google Patents
- Publication number
- CN119697353A (application CN202311236070.2A)
- Authority
- CN
- China
- Prior art keywords
- black
- image
- color
- pixel point
- white image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/261—Image signal generators with monoscopic-to-stereoscopic image conversion
- H04N13/268—Image signal generators with monoscopic-to-stereoscopic image conversion based on depth image-based rendering [DIBR]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/15—Processing image signals for colour aspects of image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
Abstract
The invention provides an image perspective method and device, an electronic device, a wearable device, and a storage medium. Images shot by the wearable device are acquired, the images comprising a black-and-white image and at least one color image that are shot simultaneously. The at least one color image and the black-and-white image are converted into a first color image and a first black-and-white image respectively, these being the color image and black-and-white image corresponding to the respective virtual viewpoints. For each virtual viewpoint, the perspective image corresponding to that viewpoint is determined according to the brightness of its first color image and the brightness of its first black-and-white image.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image perspective method, an image perspective device, an electronic device, a wearable device, and a storage medium.
Background
Mixed reality (MR) technology is a further development of virtual reality: information from a real scene can be introduced into the virtual environment, enhancing the realism of the user experience.
By mounting a binocular camera on the wearable device, images are shot by the binocular camera and projected onto a screen; the user sees the projected images, and visual see-through is thereby achieved. In the related art, two color cameras are usually adopted as the input of the see-through source, but the resulting perspective effect is poor and cannot meet the actual demands of the human eye.
Disclosure of Invention
The invention provides an image perspective method and device, an electronic device, a wearable device and a storage medium, which are used to solve the problems that the existing visual see-through has a poor perspective effect and cannot meet the actual demands of the human eye.
In a first aspect, the present invention provides an image perspective method, the method being applied to a wearable device, the method comprising:
Acquiring a plurality of images shot by the wearable equipment, wherein the images comprise a black-and-white image and at least one color image;
converting the at least one color image and the black-and-white image into a first color image and a first black-and-white image respectively, wherein the first color image and the first black-and-white image are the color image and the black-and-white image corresponding to the corresponding virtual view point;
for one virtual viewpoint, determining a perspective image corresponding to the virtual viewpoint according to the brightness of the first color image and the brightness of the first black-and-white image corresponding to the virtual viewpoint.
Optionally, converting the at least one color image and the black-and-white image into the first color image and the first black-and-white image, respectively, includes:
For any color image, determining first depth information corresponding to each pixel point in the color image, and converting the color image into the first color image according to the first depth information;
And determining second depth information corresponding to each pixel point in the black-and-white image aiming at the black-and-white image, and converting the black-and-white image into the first black-and-white image according to the second depth information.
Optionally, determining the perspective image corresponding to the virtual viewpoint according to the brightness of the first color image corresponding to the virtual viewpoint and the brightness of the first black-and-white image includes:
Determining the pixel correspondence of the first color image and the first black-and-white image through optical flow;
Determining a second pixel point corresponding to the first pixel point in the first black-and-white image according to the pixel corresponding relation aiming at any one first pixel point in the first color image;
Determining a first brightness value corresponding to the first pixel point and a second brightness value corresponding to the second pixel point, and determining a fusion brightness value of the first pixel point according to the first brightness value and the second brightness value;
And determining the perspective image corresponding to the virtual viewpoint according to the fusion brightness value of each first pixel point.
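The pixel correspondence step above is described in terms of optical flow. The Python sketch below substitutes a toy exhaustive patch-matching search (sum of absolute differences over a small window) for a real dense-flow method, purely to make the idea of locating the second pixel point concrete; the function name and parameters are illustrative, not part of the patent.

```python
import numpy as np

def match_pixel(src, dst, y, x, search=3, patch=3):
    """Find the pixel in dst corresponding to (y, x) in src by exhaustive
    SAD patch matching over a small search window -- a toy stand-in for
    the dense optical flow the method actually uses. Assumes (y, x) is
    far enough from the border for a full patch."""
    r = patch // 2
    ref = src[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    best, best_cost = (y, x), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            # Skip candidates whose patch would leave the image.
            if cy - r < 0 or cx - r < 0 or cy + r + 1 > dst.shape[0] or cx + r + 1 > dst.shape[1]:
                continue
            cand = dst[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)
            cost = np.abs(cand - ref).sum()  # sum of absolute differences
            if cost < best_cost:
                best, best_cost = (cy, cx), cost
    return best
```

A production system would instead compute one dense flow field for the whole image pair and look up each first pixel point's offset in it.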
Optionally, determining the first luminance value corresponding to the first pixel point and the second luminance value corresponding to the second pixel point includes:
Determining the pyramid window size corresponding to the first pixel point according to the high-low frequency characteristic information of the first pixel point, wherein the high-low frequency characteristic information comprises high frequency or low frequency, and the pyramid window size corresponding to the high frequency is smaller than the pyramid window size corresponding to the low frequency;
And determining a first brightness value corresponding to the first pixel point and a second brightness value corresponding to the second pixel point according to the pyramid window size corresponding to the first pixel point.
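As a rough illustration of window sizing by frequency content, the sketch below classifies a pixel as high- or low-frequency by its local gradient magnitude and assigns the smaller window to high-frequency pixels, as the text requires. The gradient threshold and the window sizes are invented for the example; the patent does not specify them.

```python
import numpy as np

def pick_window_size(gray, y, x, grad_threshold=30.0, small=5, large=15):
    """Choose a sampling-window size for pixel (y, x): high-frequency
    regions (strong local gradient) get the smaller window, low-frequency
    regions the larger one. Threshold and sizes are illustrative."""
    h, w = gray.shape
    # Central-difference gradient, clipped at the image border.
    y0, y1 = max(y - 1, 0), min(y + 1, h - 1)
    x0, x1 = max(x - 1, 0), min(x + 1, w - 1)
    gy = float(gray[y1, x]) - float(gray[y0, x])
    gx = float(gray[y, x1]) - float(gray[y, x0])
    grad = (gx * gx + gy * gy) ** 0.5
    return small if grad > grad_threshold else large

def windowed_luminance(gray, y, x, win):
    """Mean luminance over a win x win neighborhood (clipped at borders)."""
    h, w = gray.shape
    r = win // 2
    patch = gray[max(y - r, 0):min(y + r + 1, h), max(x - r, 0):min(x + r + 1, w)]
    return float(patch.mean())
```

The intuition matches the claim: near edges and texture (high frequency) a small window avoids smearing detail, while in flat regions a large window suppresses noise.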
Optionally, determining the fused luminance value of the first pixel according to the first luminance value and the second luminance value includes:
Determining weight values respectively corresponding to the first brightness value and the second brightness value according to first shielding information corresponding to the first pixel point and second shielding information corresponding to the second pixel point for any one first pixel point, wherein the first shielding information represents shielding conditions of the first pixel point on the first color image;
And determining a fusion brightness value of the first pixel point according to the weight value, the first brightness value and the second brightness value.
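One plausible reading of this weighting scheme, sketched in Python: where the color-image pixel is occluded, the black-and-white luminance gets all the weight, and vice versa; where both are visible, a fixed base weight is used. The exact weighting rule is an assumption — the patent only states that the weights depend on the two occlusion states.

```python
import numpy as np

def fuse_luminance(l_color, l_mono, occ_color, occ_mono, base_w=0.5):
    """Fuse per-pixel luminances from the virtual-view color image and
    black-and-white image. occ_* are 0/1 masks (1 = occluded there).
    An occluded source's weight drops to 0; where both are visible a
    fixed base weight is used. This rule is an illustrative assumption,
    not the patent's exact formula."""
    l_color = np.asarray(l_color, dtype=np.float64)
    l_mono = np.asarray(l_mono, dtype=np.float64)
    w_color = np.where(occ_color == 1, 0.0,
                       np.where(occ_mono == 1, 1.0, base_w))
    w_mono = 1.0 - w_color
    return w_color * l_color + w_mono * l_mono
```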
The wearable device comprises a black-and-white image imaging device and two color cameras, wherein the two color cameras respectively correspond to a first eye and a second eye, and the black-and-white image imaging device is arranged between the two color cameras; the color images comprise a color image shot by the color camera corresponding to the first eye and a color image shot by the color camera corresponding to the second eye; the virtual viewpoints comprise a first eye viewpoint and a second eye viewpoint; the first color image comprises a first sub-color image under the first eye viewpoint and a second sub-color image under the second eye viewpoint; and the first black-and-white image comprises a first sub-black-and-white image under the first eye viewpoint and a second sub-black-and-white image under the second eye viewpoint;
Determining first depth information corresponding to each pixel point in the color image, including:
Determining first depth information corresponding to each pixel point in the color image according to the color image and the black-and-white image;
Determining second depth information corresponding to each pixel point in the black-and-white image comprises the following steps:
when a first sub-black-and-white image under a first eye viewpoint is determined, determining second depth information corresponding to each pixel point in the black-and-white image according to the black-and-white image and a color image shot by a color camera corresponding to the first eye;
And/or when a second sub-black-and-white image under the second eye viewpoint is determined, determining second depth information corresponding to each pixel point in the black-and-white image according to the black-and-white image and a color image shot by a color camera corresponding to the second eye.
The wearable device comprises a black-and-white image imaging device and a color camera, wherein the color camera corresponds to a first eye, the black-and-white image imaging device corresponds to a second eye, the color image is a color image shot by the color camera corresponding to the first eye, the black-and-white image is a black-and-white image output by the black-and-white image imaging device corresponding to the second eye, the first color image comprises a first sub-color image at a first eye viewpoint and a second sub-color image at a second eye viewpoint, the first black-and-white image comprises a first sub-black-and-white image at the first eye viewpoint and a second sub-black-and-white image at the second eye viewpoint, and the first pixel point is a pixel point in the second sub-black-and-white image at the second eye viewpoint;
Determining a perspective image corresponding to the virtual viewpoint according to the fusion brightness value of each first pixel point, including:
For any one first pixel point in a second sub-black-and-white image at the second eye viewpoint, extracting color information of a second pixel point corresponding to the first pixel point from a second sub-color image at the second eye viewpoint, and determining the extracted color information as the color information of the first pixel point;
and determining a perspective image corresponding to the second eye viewpoint according to the fusion brightness value of each first pixel point and the color information.
In a second aspect, the present invention provides an image perspective device, the device being applied to a wearable apparatus, the device comprising:
the wearable device comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a plurality of images shot by the wearable device, and the images comprise a black-and-white image and at least one color image;
The conversion module is used for converting the at least one color image and the black-and-white image into a first color image and a first black-and-white image respectively, wherein the first color image and the first black-and-white image are the color image and the black-and-white image corresponding to the corresponding virtual view point;
And the determining module is used for determining a perspective image corresponding to a virtual viewpoint according to the brightness of the first color image and the brightness of the first black-and-white image corresponding to the virtual viewpoint.
In a third aspect, the present invention provides an electronic device comprising at least one processor and a memory;
The memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the method of any one of the first aspects.
In a fourth aspect, the invention provides a wearable device comprising a computing unit, a black and white image imaging means and at least one color camera, the black and white image imaging means being a depth sensor and/or a black and white camera, the computing unit being adapted to perform the method according to any of the first aspects.
In a fifth aspect, the present invention provides a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement a method as in any of the first aspects.
In a sixth aspect, the invention provides a computer program product comprising a computer program which, when executed by a processor, implements the method of any of the first aspects.
The image perspective method, device, electronic device, wearable device and storage medium acquire a plurality of images shot by the wearable device, the images comprising a black-and-white image and at least one color image that are shot simultaneously. The at least one color image and the black-and-white image are converted into a first color image and a first black-and-white image respectively, these being the color image and black-and-white image corresponding to the respective virtual viewpoints. For each virtual viewpoint, the perspective image is determined according to the brightness of its first color image and the brightness of its first black-and-white image. By exploiting the high resolution of black-and-white images, the wearable device fuses the brightness information of the black-and-white image with that of the color image, improving the resolution of the perspective image and the perspective effect.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is an application scene diagram of an image perspective method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of an image perspective method according to an embodiment of the present invention;
FIG. 3 is a flowchart of another image perspective method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of two color cameras and a black-and-white image imaging device according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a color camera and a black-and-white image imaging device according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an image perspective device according to an embodiment of the present invention;
fig. 7 is a schematic hardware structure of an electronic device according to an embodiment of the present invention.
Specific embodiments of the present invention have been shown by way of the above drawings and will be described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but rather to illustrate the inventive concepts to those skilled in the art by reference to the specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention.
VR (Virtual Reality) refers to a technique that uses a computer graphics system and various real and control interface devices to provide an immersive sensation in a computer-generated, interactive, three-dimensional environment. A completely enclosed virtual space is typically formed by a wearable device such as a head-mounted display.
Some VR products provide a visual see-through function, through which a user can interact with the external world without removing the head-mounted display; information from the real scene can thus be introduced into the virtual environment, enhancing the realism of the user experience.
To give a VR product the visual see-through function, an imaging device and a screen are required: an image is acquired by the imaging device and projected onto the screen for the user to watch. In the prior art, the imaging devices on the wearable device are two color cameras, and the images they shoot are projected onto two screens respectively. However, a color camera contains a Bayer filter, which reflects part of the light passing through it, whereas a black-and-white image imaging device has no Bayer filter, so the light entering it is not reflected. The signal-to-noise ratio of a color image is therefore lower than that of a black-and-white image, and its resolution is lower as well. When two color images are used as the see-through input sources, the perspective effect is consequently poor and cannot satisfy the demands of the human eye.
In view of these problems, the present application exploits the richer detail, higher resolution, and higher signal-to-noise ratio of black-and-white images, fusing their brightness information with that of color images to improve the resolution of the resulting perspective image. In a dark scene there is little light, and the color image formed by a color camera carries correspondingly little signal, so the method particularly improves the overall effect of the perspective image in low-light scenes.
Fig. 1 is an application scenario diagram of an image perspective method provided by an embodiment of the present invention. As shown in Fig. 1, a black-and-white image imaging device, at least one color camera and a computing unit are disposed on the wearable device. The computing unit generates, from the captured images, a first color image and a first black-and-white image under each human-eye viewpoint (the left eye and the right eye each correspond to one such pair), then performs pixel alignment and brightness fusion on each pair to generate a perspective image, which is displayed on the corresponding screen.
Fig. 2 is a schematic flow chart of an image perspective method according to an embodiment of the present invention, where the method is applied to an image perspective device, and the image perspective device is disposed on an electronic device. The method includes steps S201 to S203:
Step S201, acquiring a plurality of images shot by the wearable equipment, wherein the images comprise a black-and-white image and at least one color image, and the black-and-white image and the at least one color image are shot simultaneously.
An imaging device is arranged on the wearable device, and the imaging device can be used for shooting images so as to be watched by a user. The wearable device may be a head-mounted display, and the imaging device may be provided outside the head-mounted display. In order to enhance the effect of the perspective image, the imaging device provided herein may include a black-and-white image imaging device, and at least one color camera, and thus, one black-and-white image and at least one color image may be obtained.
Alternatively, the black-and-white image imaging device may be a black-and-white camera or a depth sensor, such as an iToF (indirect Time-of-Flight) sensor or a dToF (direct Time-of-Flight) sensor. A depth sensor outputs depth information and simultaneously outputs a black-and-white image based on infrared imaging; such an image has a higher signal-to-noise ratio, so the resolution of the perspective image can be effectively improved.
Optionally, when shooting, the at least one color camera and the black-and-white image imaging device capture simultaneously, yielding at least one color image and one black-and-white image, so that a perspective image can be presented to the user at any moment.
For example, the black-and-white image imaging device and the at least one color camera may take images in real time or at preset intervals while the user is wearing the wearable device. In particular, the time interval at which the image is taken may be adjusted according to the computational effort of the computing unit in the wearable device.
For example, at a first instant, the black and white image imaging device and the at least one color camera simultaneously capture images and present perspective images of the first instant to the user based on the simultaneously captured images, and at a second instant, 5 seconds later from the first instant, the black and white image imaging device and the at least one color camera simultaneously capture images and present perspective images of the second instant to the user based on the simultaneously captured images.
Step S202, converting the at least one color image and the black-and-white image into a first color image and a first black-and-white image respectively, wherein the first color image and the first black-and-white image are the color image and the black-and-white image corresponding to the corresponding virtual view point.
Because the imaging devices are arranged on the outside of the wearable device, which is worn on the user's head, their positions differ from the positions of the human eyes. The images shot by the cameras therefore differ from what the eyes would actually see, and the acquired images cannot be presented to the eyes directly.
In order for the human eye to see an accurate image, it is necessary to convert at least one color image and a black-and-white image, i.e. to convert the color image into a first color image at the virtual viewpoint and to convert the black-and-white image into a first black-and-white image at the virtual viewpoint. The virtual viewpoint herein refers to a human eye, and since a human has two eyes, the virtual viewpoint includes a first human eye viewpoint and a second human eye viewpoint. Therefore, it is necessary to acquire a first color image and a first black-and-white image, which correspond to the first eye point and the second eye point, respectively.
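This conversion to a virtual viewpoint is a depth-image-based rendering (DIBR) step. The minimal sketch below forward-warps an image to a viewpoint displaced horizontally by the camera-to-eye baseline, shifting each pixel by its disparity f·B/Z; hole filling, rotation, and lens models are all omitted, and the function name and parameters are illustrative assumptions.

```python
import numpy as np

def reproject_to_virtual_view(img, depth, focal_px, baseline_m):
    """Forward-warp an image to a virtual viewpoint displaced horizontally
    by baseline_m (the camera-to-eye offset) -- a minimal DIBR step.
    Disparity in pixels is focal_px * baseline_m / Z. Disoccluded holes
    are left at 0; real systems inpaint them."""
    h, w = depth.shape
    out = np.zeros_like(img)
    # Process far-to-near so nearer pixels overwrite farther ones.
    order = np.argsort(-depth, axis=None)
    ys, xs = np.unravel_index(order, depth.shape)
    disp = focal_px * baseline_m / depth
    for y, x in zip(ys, xs):
        xv = int(round(x - disp[y, x]))
        if 0 <= xv < w:
            out[y, xv] = img[y, x]
    return out
```

The far-to-near ordering is the standard z-buffer trick: when two source pixels land on the same target pixel, the nearer one must win.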
Step S203, for one virtual viewpoint, determining a perspective image corresponding to the virtual viewpoint according to the brightness of the first color image and the brightness of the first black-and-white image corresponding to the virtual viewpoint.
After the first color image and the first black-and-white image at the same virtual viewpoint are acquired, the brightness of the first color image and the brightness of the first black-and-white image may be fused. In order to meet the actual demands of human eyes, the brightness of the first black-and-white image and the brightness of the first color image can be fused by taking the first color image as a reference, the fused brightness is taken as the brightness information of the first color image, and the first color image fused with the brightness of the first black-and-white image is projected to a corresponding screen to obtain a perspective image.
By way of example, when the brightness of the two images is fused, a neural-network fusion method, a Gaussian pyramid fusion method, an α-blending fusion method, or the like may be employed.
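As a concrete (and deliberately tiny) instance of the pyramid family of fusion methods, the sketch below fuses two luminance images with a two-level Laplacian-pyramid blend: the coarsest bands are averaged and the stronger detail coefficient is kept at each finer level. The 2×2-mean downsample stands in for a proper Gaussian filter, even image dimensions are assumed, and all names are illustrative.

```python
import numpy as np

def _down(img):
    """2x2 mean downsample (assumes even dimensions)."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def _up(img):
    """Nearest-neighbour 2x upsample."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def pyramid_fuse(lum_a, lum_b, levels=2):
    """Toy Laplacian-pyramid fusion of two luminance images: average the
    coarsest band, keep the stronger detail coefficient at each finer
    level, then reconstruct."""
    ga = [np.asarray(lum_a, float)]
    gb = [np.asarray(lum_b, float)]
    for _ in range(levels):
        ga.append(_down(ga[-1]))
        gb.append(_down(gb[-1]))
    fused = (ga[-1] + gb[-1]) / 2.0  # average the coarsest band
    for i in range(levels - 1, -1, -1):
        da = ga[i] - _up(ga[i + 1])  # detail band of image A
        db = gb[i] - _up(gb[i + 1])  # detail band of image B
        detail = np.where(np.abs(da) >= np.abs(db), da, db)
        fused = _up(fused) + detail
    return fused
```

Fusing an image with itself reconstructs it exactly, which is a handy sanity check for any pyramid blend.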
Wherein, two screens are arranged in the wearable device, which correspond to the first eye viewpoint and the second eye viewpoint respectively, and when the first color image fused with the brightness of the first black-and-white image is projected, the first color image is also required to be projected to the corresponding screen. For example, a first color image corresponding to the left eye may be projected onto the left screen and a first color image corresponding to the right eye may be projected onto the right screen.
With the image perspective method, a plurality of images shot by the wearable device are acquired, the images comprising a black-and-white image and at least one color image that are shot simultaneously. The at least one color image and the black-and-white image are converted into a first color image and a first black-and-white image respectively, these being the color image and black-and-white image corresponding to the respective virtual viewpoints. For each virtual viewpoint, the perspective image is determined according to the brightness of its first color image and the brightness of its first black-and-white image. Exploiting the high resolution of black-and-white images, a black-and-white image imaging device is provided in the wearable device, and the brightness information of the black-and-white image is fused with that of the color image, improving the resolution of the perspective image and the perspective effect.
Fig. 3 is a flowchart of another image perspective method according to an embodiment of the present invention, where the method includes steps S301 to S303:
Step S301, acquiring a plurality of images shot by the wearable equipment, wherein the images comprise a black-and-white image and at least one color image, and the black-and-white image and the at least one color image are shot simultaneously.
The implementation process of step S301 may refer to step S201, and will not be described herein.
Step S302, for any color image, determining first depth information corresponding to each pixel point in the color image, and converting the color image into the first color image according to the first depth information.
When determining a first color image under the visual angle of human eyes according to a color image shot by a color camera, first depth information of each pixel point in the color image needs to be calculated first so as to calculate the first color image according to the first depth information.
Optionally, before determining the depth information of any image, each imaging device needs to be rectified; a camera is taken as the example here. Each camera has a horizontal direction and a vertical direction, and rectification makes the horizontal directions of the cameras parallel, avoiding inaccurate depth caused by errors in the industrial installation process. Each camera can be rectified according to the parameters calibrated during its installation.
Depth information may be determined from two images: because the two images are shot by cameras at different positions, they differ slightly, and the first depth information can be determined from the pixel values of each pixel point in the two images.
The process of computing depth information may be referred to as stereo matching or binocular depth estimation. For example, when the first depth information of the color image is calculated, a disparity map corresponding to the color image may be computed from the pixel information of the two images. The disparity map consists of the disparity of each pixel point, where the disparity of a point in the three-dimensional scene is the difference between the positions of its corresponding pixels in the two images.
Parallax is the positional difference produced by observing the same object from the two cameras: the farther a scene point is from the cameras, the smaller its parallax, and the closer it is, the larger its parallax. Therefore, after the parallax of any pixel point is determined, its depth information can be derived from the parallax, and the color image can be converted into the first color image under the human-eye viewpoint according to the depth information.
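Under rectified stereo, this inverse relation between parallax and distance is the standard formula Z = f·B/d, with focal length f in pixels, baseline B, and disparity d in pixels. A minimal sketch (names illustrative):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point from its disparity under rectified stereo:
    Z = f * B / d. Larger disparity means a closer point, matching
    the text above."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For example, with a 500-pixel focal length and a 6 cm baseline, a 10-pixel disparity corresponds to a depth of 3 m, while a 20-pixel disparity corresponds to a nearer point at 1.5 m.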
When calculating the first color image, it is necessary to calculate the first depth information of each pixel point according to the positional relationship between the human eye and the camera.
The imaging devices may be arranged on the wearable device in two ways. The first way is to set two color cameras, positioned to correspond to the two eyes, and one black-and-white image imaging device, which may be set between the two color cameras.
In this arrangement, the wearable device comprises a black-and-white image imaging device and two color cameras, wherein the two color cameras respectively correspond to a first eye and a second eye, and the black-and-white image imaging device is arranged between the two color cameras; the color images comprise a color image shot by the color camera corresponding to the first eye and a color image shot by the color camera corresponding to the second eye; the virtual viewpoints comprise a first eye viewpoint and a second eye viewpoint; the first color image comprises a first sub-color image under the first eye viewpoint and a second sub-color image under the second eye viewpoint; and the first black-and-white image comprises a first sub-black-and-white image under the first eye viewpoint and a second sub-black-and-white image under the second eye viewpoint;
Determining first depth information corresponding to each pixel point in the color image, including:
and determining first depth information corresponding to each pixel point in the color image according to the color image and the black-and-white image.
Fig. 4 is a schematic diagram of an apparatus provided with two color cameras and one black-and-white image imaging device. As shown in fig. 4, the wearable device may generate color image 1, a black-and-white image and color image 2. Because images shot by cameras that are close together are relatively similar, the error in calculating depth information between them is smaller. Therefore, a first sub-color image at the first eye viewpoint can be obtained from color image 1 and the black-and-white image, with color image 1 as the reference, and a second sub-color image at the second eye viewpoint can be obtained from color image 2 and the black-and-white image, with color image 2 as the reference.
By calculating the images of the two color images at the virtual viewpoints from the black-and-white images, respectively, the accuracy of the determined images at the virtual viewpoints can be improved.
Step S303, determining second depth information corresponding to each pixel point in the black-and-white image, and converting the black-and-white image into the first black-and-white image according to the second depth information.
Optionally, determining the second depth information corresponding to each pixel point in the black-and-white image includes:
when a first sub-black-and-white image under a first eye viewpoint is determined, determining second depth information corresponding to each pixel point in the black-and-white image according to the black-and-white image and a color image shot by a color camera corresponding to the first eye;
And/or when a second sub-black-and-white image under the second eye viewpoint is determined, determining second depth information corresponding to each pixel point in the black-and-white image according to the black-and-white image and a color image shot by a color camera corresponding to the second eye.
When there are two color cameras, the black-and-white image forming apparatus is disposed between the two color cameras, and at this time, the second depth information of each pixel point in the black-and-white image can be determined by the following method.
As shown in fig. 4, for the black-and-white image, parallax map 1 corresponding to the black-and-white image can be determined from the black-and-white image and color image 1, second depth information 1 of each pixel point in the black-and-white image is determined from it, and the first sub-black-and-white image is then calculated according to second depth information 1. Similarly, parallax map 2 is calculated from the black-and-white image and color image 2, second depth information 2 of each pixel point is determined, and the second sub-black-and-white image is calculated according to second depth information 2.
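The disparity computation underlying these parallax maps can be illustrated with a naive sum-of-absolute-differences block matcher. This is only a sketch under stated assumptions (real systems use more robust stereo matching, and the color image is assumed to have been converted to grayscale first):

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=8, win=3):
    """Naive SAD block matching: for each pixel in `left`, search the
    same row of `right` over shifts in [0, max_disp] and keep the shift
    with the smallest sum of absolute differences."""
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y-r:y+r+1, x-r:x+r+1].astype(np.int64)
            costs = [np.abs(patch - right[y-r:y+r+1, x-d-r:x-d+r+1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Toy example: `left` is `right` shifted two columns to the right,
# so the recovered disparity at interior pixels should be 2.
right = np.tile(np.arange(20), (9, 1))
left = np.roll(right, 2, axis=1)
disp = block_matching_disparity(left, right, max_disp=4, win=3)
```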
When the sub-black-and-white images at the two virtual viewpoints corresponding to the black-and-white image are calculated, each can be determined from the corresponding color image, thereby improving the accuracy of the first sub-black-and-white image and the second sub-black-and-white image at the determined virtual viewpoints.
In the second way of setting cameras in the wearable device, one color camera and one black-and-white image imaging device are set, and the two cameras correspond to the first eye and the second eye respectively.
The wearable device comprises a black-and-white image imaging device and a color camera, wherein the color camera corresponds to a first eye, the black-and-white image imaging device corresponds to a second eye, the color image is a color image shot by the color camera corresponding to the first eye, the black-and-white image is a black-and-white image output by the black-and-white image imaging device corresponding to the second eye, the first color image comprises a first sub-color image at a first eye viewpoint and a second sub-color image at a second eye viewpoint, the first black-and-white image comprises a first sub-black-and-white image at the first eye viewpoint and a second sub-black-and-white image at the second eye viewpoint, and the first pixel point is a pixel point in the second sub-black-and-white image at the second eye viewpoint.
Fig. 5 is a schematic diagram of an apparatus provided with one color camera and one black-and-white image imaging device according to an embodiment of the present invention. As shown in fig. 5, the first sub-color image corresponding to the first eye, the first sub-black-and-white image corresponding to the first eye, and the second sub-color image corresponding to the second eye may each be calculated from the color image and the black-and-white image.
The two camera arrangements each have advantages and disadvantages, and the specific arrangement can be selected according to the actual situation. When two color cameras and one black-and-white image imaging device are arranged, the electronic device can acquire a color image for each eye; in the subsequent algorithm the processing of each color image is the same, the brightness information of the black-and-white image is used to supplement the brightness information of the color images, and the algorithm design is simpler. With one color camera and one black-and-white image imaging device, however, one side has only a black-and-white image, so color information needs to be filled into the black-and-white image, and the processing of the images generated by the two cameras differs slightly in the algorithm design.
Furthermore, a different number of cameras requires a different bandwidth. Illustratively, when two color cameras and one black-and-white image imaging device are employed, if the bandwidth required for one color camera is 3 megabits and the bandwidth required for one black-and-white image imaging device is 1 megabit, the bandwidth required by this solution is 7 megabits. When one black-and-white image imaging device and one color camera are used, the required bandwidth is 4 megabits. It can be seen that the scheme with two color cameras and one black-and-white image imaging device places higher bandwidth requirements on the hardware. In addition, a larger number of cameras also increases the cost. In practice, the specific camera arrangement may be selected according to the actual situation.
Through the above operations, both the arrangement of two color cameras with one black-and-white image imaging device and the arrangement of one color camera with one black-and-white image imaging device yield two images at each of the two virtual viewpoints, namely a first sub-color image and a first sub-black-and-white image, and a second sub-color image and a second sub-black-and-white image. Hereinafter, the two images corresponding to one virtual viewpoint are collectively referred to as the first color image and the first black-and-white image.
Because the brightness information of the first black-and-white image needs to be utilized to improve the brightness information of the first color image, the first color image and the first black-and-white image need to be subjected to pixel alignment, so that the brightness fusion of corresponding pixel points can be performed.
Step S304, determining, according to an optical flow, a pixel correspondence between the first color image and the first black-and-white image for a virtual viewpoint.
For the first color image and the first black-and-white image, since both are images at the same virtual viewpoint, the two images are already coarsely aligned. However, a certain difference remains between the two images after coarse alignment; for example, for a certain position point in the real world, the pixels in the first color image and the first black-and-white image may be misaligned by one pixel or half a pixel. A fine alignment of the first color image and the first black-and-white image is therefore required.
Alternatively, for both images, the optical flow may be used to determine the distance that any pixel in one image moves in the other, so that the first color image and the first black-and-white image may be precisely aligned according to the distance moved.
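A full implementation would compute a dense optical-flow field (per-pixel displacement); as a deliberately simplified stand-in, the sketch below estimates a single horizontal shift between the two coarsely aligned images and applies it:

```python
import numpy as np

def estimate_row_shift(ref, moving, max_shift=3):
    """Estimate one global horizontal shift between two coarsely aligned
    images by minimising the mean absolute difference -- a simplified
    stand-in for a dense optical-flow field."""
    best, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(moving, s, axis=1)
        # Ignore border columns, which are corrupted by the roll wrap-around.
        m = max_shift
        cost = np.abs(ref[:, m:-m] - shifted[:, m:-m]).mean()
        if cost < best_cost:
            best, best_cost = s, cost
    return best

def align(ref, moving, max_shift=3):
    """Apply the estimated shift so `moving` lines up with `ref`."""
    return np.roll(moving, estimate_row_shift(ref, moving, max_shift), axis=1)

# Toy example: `moving` is `ref` shifted one column to the left,
# so the estimated corrective shift should be +1.
ref = np.tile(np.arange(16.0), (8, 1))
moving = np.roll(ref, -1, axis=1)
```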
Because the resolution of the first black-and-white image is higher, the first color image can be aligned onto the first black-and-white image during fine alignment.
Specifically, after fine alignment, the correspondence between each pixel point in the first color image and each pixel point in the first black-and-white image can be obtained. Illustratively, a first pixel in a first color image is aligned with a second pixel in a first black-and-white image, and so on.
Step S305, for any one first pixel point in the first color image, determining a second pixel point corresponding to the first pixel point in the first black-and-white image according to the pixel correspondence.
After the pixel corresponding relation is determined, a second pixel point corresponding to any first pixel point in the first color image in the first black-and-white image can be further determined according to the pixel corresponding relation, and then brightness fusion of the two corresponding pixel points is performed.
Step S306, determining a first luminance value corresponding to the first pixel point and a second luminance value corresponding to the second pixel point, and determining a fusion luminance value of the first pixel point according to the first luminance value and the second luminance value.
After determining the second pixel point in the first black-and-white image corresponding to the first pixel point in the first color image, the second luminance value and the first luminance value can be fused to obtain a fused luminance value, and the fused luminance value is determined as the luminance value of the first pixel point in the first color image.
For example, the first color image may be converted into the HSV color space or the YUV color space. When converting to the HSV color space, the second luminance value of the first black-and-white image is fused with the V value (the first luminance value) of the first color image; when converting to the YUV color space, the second luminance value of the first black-and-white image is fused with the Y value (the first luminance value) of the first color image.
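A minimal sketch of the YUV variant, using BT.601 full-range conversion matrices (one common convention; the blending weight is illustrative):

```python
import numpy as np

# BT.601 full-range RGB -> YUV matrix (one common convention).
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def fuse_luminance(color_rgb, mono_luma, w_mono=0.5):
    """Blend the Y channel of the color image with the black-and-white
    luminance, keep the chroma (U, V), and convert back to RGB."""
    yuv = color_rgb @ RGB2YUV.T
    yuv[..., 0] = (1 - w_mono) * yuv[..., 0] + w_mono * mono_luma
    return yuv @ YUV2RGB.T

# Toy example: a gray pixel (0.5, 0.5, 0.5) whose luminance is fully
# replaced by the black-and-white value 0.8.
color = np.full((1, 1, 3), 0.5)
mono = np.full((1, 1), 0.8)
out = fuse_luminance(color, mono, w_mono=1.0)
```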
Optionally, determining the first luminance value corresponding to the first pixel point and the second luminance value corresponding to the second pixel point includes:
Determining the pyramid window size corresponding to the first pixel point according to the high-low frequency characteristic information of the first pixel point, wherein the high-low frequency characteristic information comprises high frequency or low frequency, and the pyramid window size corresponding to the high frequency is smaller than the pyramid window size corresponding to the low frequency;
And determining a first brightness value corresponding to the first pixel point and a second brightness value corresponding to the second pixel point according to the pyramid window size corresponding to the first pixel point.
When the brightness values corresponding to the two pixel points are fused, a Gaussian pyramid fusion method can be adopted. When the images are fused, the Gaussian kernel can be utilized to respectively decompose the first color image and the first black-and-white image into a multi-scale pyramid image sequence so as to respectively obtain N layers of images, two images of the same layer are fused so as to obtain N fused images, and then the N fused images are reconstructed according to the inverse process of pyramid image sequence generation so as to obtain the fused images.
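The decompose-fuse-reconstruct procedure can be sketched as follows, with a simple box kernel standing in for a true Gaussian kernel and without any adaptive window logic:

```python
import numpy as np

def downsample(img):
    """2x downsample via 2x2 averaging (stand-in for a Gaussian kernel)."""
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour 2x upsample, cropped to the target shape."""
    return np.kron(img, np.ones((2, 2)))[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Decompose an image into a multi-scale pyramid sequence."""
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels - 1):
        small = downsample(cur)
        pyr.append(cur - upsample(small, cur.shape))  # detail band
        cur = small
    pyr.append(cur)                                   # coarsest level
    return pyr

def pyramid_fuse(a, b, levels=3, w=0.5):
    """Fuse two images level by level, then reconstruct by the inverse
    of the pyramid decomposition."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [w * la + (1 - w) * lb for la, lb in zip(pa, pb)]
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = upsample(out, lap.shape) + lap
    return out

# Fusing an image with itself must reconstruct it exactly.
a = np.arange(64, dtype=float).reshape(8, 8)
out = pyramid_fuse(a, a, levels=3)
```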
When two images of the same layer are fused, a pyramid window size exists, and the brightness value of any pixel point is related to the brightness values of a plurality of pixel points corresponding to the pyramid window size. Different pyramid windows can be used at the low frequency and the high frequency of the image through the pyramid fusion method, so that the fused image is smoother.
When pyramid fusion is performed, ghosting appears in the fusion result if the pyramid window is set too large, and truncation artifacts appear if it is set too small. A suitable pyramid window size therefore needs to be set: specifically, the high-low frequency characteristic information of the first pixel point can be determined, a smaller pyramid window size is set when the first pixel point is high frequency, and a larger pyramid window size is set when it is low frequency.
When the high-low frequency characteristic information is determined, the first color image can be subjected to Laplacian transformation to obtain the high-low frequency characteristic information of any first pixel point.
After the pyramid window size corresponding to the first pixel point is determined, the first luminance value of the first pixel point may be determined. The first luminance value is the average luminance of the pixel points contained in a preset area of the first color image, where the center of the preset area is the first pixel point and the size of the preset area is the pyramid window size. For example, when the pyramid window size is determined to be 3×3, the average luminance of the pixel points in the 3×3 preset area centered on the first pixel point may be taken as the first luminance value of the first pixel point; the second luminance value corresponding to the second pixel point may be determined in the same way.
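A sketch of the adaptive-window luminance computation described above (the Laplacian-magnitude threshold and the concrete window sizes are illustrative assumptions):

```python
import numpy as np

def local_mean(img, y, x, win):
    """Mean luminance over a win x win region centred on (y, x),
    clipped at the image borders."""
    r = win // 2
    h, w = img.shape
    patch = img[max(0, y-r):min(h, y+r+1), max(0, x-r):min(w, x+r+1)]
    return patch.mean()

def adaptive_window(laplacian_mag, y, x, thresh=10.0, small=3, large=7):
    """High-frequency pixels (large Laplacian response) get the smaller
    window; flat, low-frequency pixels get the larger one."""
    return small if laplacian_mag[y, x] > thresh else large

# Toy example: centre pixel of a 3x3 ramp, plus one high-frequency pixel.
img = np.arange(9, dtype=float).reshape(3, 3)
m = local_mean(img, 1, 1, 3)      # mean of 0..8
lap = np.zeros((3, 3))
lap[0, 0] = 20.0                  # pretend (0, 0) is high frequency
```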
By determining the pyramid window size according to the high-low frequency characteristic information of the pixel points, different pyramid window sizes can be set on different pixel points, so that brightness values corresponding to the first pixel points and the second pixel points respectively can be accurately obtained, and smoothness of the fused image is improved.
Optionally, determining the fused luminance value of the first pixel according to the first luminance value and the second luminance value includes:
Determining weight values respectively corresponding to the first brightness value and the second brightness value according to first shielding information corresponding to the first pixel point and second shielding information corresponding to the second pixel point for any one first pixel point, wherein the first shielding information represents shielding conditions of the first pixel point on the first color image;
And determining a fusion brightness value of the first pixel point according to the weight value, the first brightness value and the second brightness value.
Since the first color image is obtained by processing a color image captured by a color camera, the first black-and-white image is obtained by processing a black-and-white image from the black-and-white image imaging device, and the positions of the cameras differ from those of the human eyes, the processed first color image contains some occluded parts (parts whose pixel information cannot be determined), and so does the first black-and-white image. When fusion is performed, the weight values used for the first luminance value and the second luminance value can be set according to the shielding information of the first pixel point and the shielding information of the corresponding second pixel point.
Optionally, when the first shielding information corresponding to the first pixel point is shielded and the second shielding information corresponding to the second pixel point is not shielded, the weight value 1 corresponding to the first luminance value may be set smaller than the weight value 2 corresponding to the second luminance value, and the sum of the weight value 1 and the weight value 2 is 1.
For example, when the first shielding information corresponding to the first pixel point is shielded and the second shielding information corresponding to the second pixel point is not shielded, a weight value 1 corresponding to the first luminance value may be set to 0.3, a weight value 2 corresponding to the second luminance value may be set to 0.7, and the fused luminance value of the first pixel point is obtained by weighted summation, that is, a product of the weight value 1 and the first luminance value is calculated, a product of the weight value 2 and the second luminance value is calculated, and the two products are added, thereby obtaining the fused luminance value.
Optionally, when the shielding information corresponding to both the first pixel point and the second pixel point is non-shielding, a first gradient at the first pixel point and a second gradient at the second pixel point are calculated, and whether a detail feature exists at the position point is determined from the magnitude relation of the first gradient and the second gradient. When a detail feature exists, the weight value corresponding to the second luminance value is set larger than the weight value corresponding to the first luminance value; when no detail feature exists, the weight value corresponding to the first luminance value is set larger than the weight value corresponding to the second luminance value.
When determining whether a detail feature exists at the position point, the first gradient and the second gradient can be compared: when the second gradient is larger than the first gradient and the difference exceeds a preset value, a detail feature exists at the position point. In this case, the higher resolution of the black-and-white image at detail positions can be exploited to improve the resolution of the color image, so weight value 2 corresponding to the second luminance value is set larger than weight value 1 corresponding to the first luminance value. Conversely, the position point is flat and no detail feature exists, and the weight value corresponding to the first luminance value can be set larger than the weight value corresponding to the second luminance value.
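The weight-selection rules above can be sketched as follows (the concrete weight values and the detail-detection margin are illustrative, not from the original):

```python
def fusion_weights(occ_color, occ_mono, grad_color, grad_mono,
                   detail_margin=5.0):
    """Return (w_color, w_mono), summing to 1.

    occ_*  : True if the pixel is occluded in that image.
    grad_* : local gradient magnitude at the pixel.
    The weight values (0.3/0.7, 0.6/0.4) are illustrative choices.
    """
    if occ_color and not occ_mono:
        return 0.3, 0.7          # trust the black-and-white pixel more
    if occ_mono and not occ_color:
        return 0.7, 0.3          # trust the color pixel more
    if not occ_color and not occ_mono:
        # Detail feature: the mono gradient clearly dominates.
        if grad_mono - grad_color > detail_margin:
            return 0.4, 0.6
        return 0.6, 0.4
    return 0.5, 0.5              # both occluded: smooth/inpaint afterwards

def fuse(l_color, l_mono, weights):
    """Weighted sum of the two luminance values."""
    w_c, w_m = weights
    return w_c * l_color + w_m * l_mono

w = fusion_weights(True, False, 0.0, 0.0)
fused = fuse(100.0, 200.0, w)    # 0.3 * 100 + 0.7 * 200
```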
Optionally, when the first shielding information corresponding to the first pixel point is shielded and the second shielding information corresponding to the second pixel point is also shielded, the pixel information of the occluded parts may be completed by Gaussian filtering smoothing.
By setting the weight value at the time of fusion according to the shielding condition of the first color image and the first black-and-white image, the accuracy of the fusion brightness value can be improved.
Step S307, determining a perspective image corresponding to the virtual viewpoint according to the fusion brightness value of each first pixel point.
Because the display hardware expects an RGB image, after the fused luminance value of each first pixel point is determined, the resulting HSV image or YUV image can be further converted into an RGB image.
The first color image and the first black-and-white image are subjected to pixel alignment and then brightness fusion, so that the accuracy of a fusion result is improved.
In addition, for the scheme with one color camera and one black-and-white image imaging device, an additional processing step is required when calculating the perspective image.
Optionally, determining the perspective image corresponding to the virtual viewpoint according to the fused brightness value of each first pixel point includes:
For any one first pixel point in a second sub-black-and-white image at the second eye viewpoint, extracting color information of a second pixel point corresponding to the first pixel point from a second sub-color image at the second eye viewpoint, and determining the extracted color information as the color information of the first pixel point;
and determining a perspective image corresponding to the second eye viewpoint according to the fusion brightness value of each first pixel point and the color information.
In the three-camera scheme or the two-camera scheme, the brightness of the first black-and-white image is fused onto the first color image when brightness fusion is performed on the color camera side. In the two-camera scheme, a black-and-white image imaging device is arranged at the second eye, and at the moment, when brightness fusion is carried out, the brightness of the second sub-color image can be fused onto the second sub-black-and-white image. In addition, since color information does not exist in the second sub-black-and-white image, the color information may be extracted from the second sub-color image, so that a perspective image at the second eye viewpoint is obtained according to the extracted color information and the fusion luminance value.
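The chroma-transfer step can be sketched as follows, assuming the second sub-color image has already been pixel-aligned to the second sub-black-and-white image (BT.601 full-range matrices, one common convention):

```python
import numpy as np

def colorize(mono_luma_fused, aligned_color_rgb):
    """Take the chroma (U, V) from the aligned color image, pair it with
    the fused luminance of the black-and-white image, convert to RGB."""
    # BT.601 full-range RGB -> YUV matrix.
    M = np.array([[ 0.299,    0.587,    0.114  ],
                  [-0.14713, -0.28886,  0.436  ],
                  [ 0.615,   -0.51499, -0.10001]])
    yuv = aligned_color_rgb @ M.T
    yuv[..., 0] = mono_luma_fused        # replace Y, keep U and V
    return yuv @ np.linalg.inv(M).T

# Toy example: gray chroma (U = V = 0) with a darker fused luminance
# should come back as a uniform darker gray.
color = np.full((2, 2, 3), 0.5)
luma = np.full((2, 2), 0.25)
out = colorize(luma, color)
```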
In the scheme of the two cameras, when the perspective image at the second eye viewpoint is determined, the second sub-black-and-white image is used as a reference image, the second sub-black-and-white image is closer to the real image at the second eye viewpoint, and the determined perspective image at the second eye viewpoint can be more accurate through extraction of color information.
Fig. 6 is a schematic structural diagram of an image perspective device according to an embodiment of the present invention, where the device is applied to a wearable apparatus, and the device 60 includes:
An acquisition module 601, configured to acquire a plurality of images captured by the wearable device, where the plurality of images include a black-and-white image and at least one color image;
The conversion module 602 is configured to convert the at least one color image and the black-and-white image into a first color image and a first black-and-white image, respectively;
A determining module 603, configured to determine, for a virtual viewpoint, a perspective image corresponding to the virtual viewpoint according to the brightness of the first color image and the brightness of the first black-and-white image corresponding to the virtual viewpoint.
Optionally, the conversion module 602 is specifically configured to:
For any color image, determining first depth information corresponding to each pixel point in the color image, and converting the color image into the first color image according to the first depth information;
And determining second depth information corresponding to each pixel point in the black-and-white image aiming at the black-and-white image, and converting the black-and-white image into the first black-and-white image according to the second depth information.
Optionally, the determining module 603 is specifically configured to:
Determining the pixel correspondence of the first color image and the first black-and-white image through optical flow;
Determining a second pixel point corresponding to the first pixel point in the first black-and-white image according to the pixel corresponding relation aiming at any one first pixel point in the first color image;
Determining a first brightness value corresponding to the first pixel point and a second brightness value corresponding to the second pixel point, and determining a fusion brightness value of the first pixel point according to the first brightness value and the second brightness value;
And determining the perspective image corresponding to the virtual viewpoint according to the fusion brightness value of each first pixel point.
Optionally, when determining the first luminance value corresponding to the first pixel point and the second luminance value corresponding to the second pixel point, the determining module 603 is specifically configured to:
Determining the pyramid window size corresponding to the first pixel point according to the high-low frequency characteristic information of the first pixel point, wherein the high-low frequency characteristic information comprises high frequency or low frequency, and the pyramid window size corresponding to the high frequency is smaller than the pyramid window size corresponding to the low frequency;
And determining a first brightness value corresponding to the first pixel point and a second brightness value corresponding to the second pixel point according to the pyramid window size corresponding to the first pixel point.
Optionally, the determining module 603 is specifically configured to, when determining the blended luminance value of the first pixel point according to the first luminance value and the second luminance value:
Determining weight values respectively corresponding to the first brightness value and the second brightness value according to first shielding information corresponding to the first pixel point and second shielding information corresponding to the second pixel point for any one first pixel point, wherein the first shielding information represents shielding conditions of the first pixel point on the first color image;
And determining a fusion brightness value of the first pixel point according to the weight value, the first brightness value and the second brightness value.
The wearable device comprises a black-and-white image imaging device and a color camera, wherein the two color cameras respectively correspond to a first eye and a second eye, the black-and-white image imaging device is arranged between the two color cameras, the color images comprise color images shot by the color camera corresponding to the first eye and color images shot by the color camera corresponding to the second eye, the virtual view point comprises a first eye view point and a second eye view point, the first color image comprises a first sub-color image under the first eye view point and a second sub-color image under the second eye view point, and the first black-and-white image comprises a first sub-black-and-white image under the first eye view point and a second sub-black-and-white image under the second eye view point;
The conversion module 602 is specifically configured to, when determining the first depth information corresponding to each pixel in the color image:
Determining first depth information corresponding to each pixel point in the color image according to the color image and the black-and-white image;
the conversion module 602 is specifically configured to, when determining the second depth information corresponding to each pixel point in the black-and-white image:
when a first sub-black-and-white image under a first eye viewpoint is determined, determining second depth information corresponding to each pixel point in the black-and-white image according to the black-and-white image and a color image shot by a color camera corresponding to the first eye;
And/or when a second sub-black-and-white image under the second eye viewpoint is determined, determining second depth information corresponding to each pixel point in the black-and-white image according to the black-and-white image and a color image shot by a color camera corresponding to the second eye.
The wearable device comprises a black-and-white image imaging device and a color camera, wherein the color camera corresponds to a first eye, the black-and-white image imaging device corresponds to a second eye, the color image is a color image shot by the color camera corresponding to the first eye, the black-and-white image is a black-and-white image output by the black-and-white image imaging device corresponding to the second eye, the first color image comprises a first sub-color image at a first eye viewpoint and a second sub-color image at a second eye viewpoint, the first black-and-white image comprises a first sub-black-and-white image at the first eye viewpoint and a second sub-black-and-white image at the second eye viewpoint, and the first pixel point is a pixel point in the second sub-black-and-white image at the second eye viewpoint;
The determining module 603 is specifically configured to, when determining the perspective image corresponding to the virtual viewpoint according to the fused luminance value of each first pixel point:
For any one first pixel point in a second sub-black-and-white image at the second eye viewpoint, extracting color information of a second pixel point corresponding to the first pixel point from a second sub-color image at the second eye viewpoint, and determining the extracted color information as the color information of the first pixel point;
and determining a perspective image corresponding to the second eye viewpoint according to the fusion brightness value of each first pixel point and the color information.
The image perspective device provided by the embodiment of the present invention can implement the image perspective method of the embodiment shown in fig. 2 and fig. 3, and its implementation principle and technical effects are similar, and are not repeated here.
Fig. 7 is a schematic hardware structure of an electronic device according to an embodiment of the present invention. As shown in fig. 7, the electronic device provided in this embodiment includes at least one processor 701 and a memory 702. The processor 701 and the memory 702 are connected by a bus 703.
In a specific implementation, at least one processor 701 executes computer-executable instructions stored in a memory 702, so that the at least one processor 701 performs the method in the above-described method embodiments.
The specific implementation process of the processor 701 can be referred to the above method embodiment, and its implementation principle and technical effects are similar, and this embodiment will not be described herein again.
In the embodiment shown in fig. 7, it should be understood that the processor may be a central processing unit (English: Central Processing Unit, abbreviated CPU), another general-purpose processor, a digital signal processor (English: Digital Signal Processor, abbreviated DSP), an application-specific integrated circuit (English: Application Specific Integrated Circuit, abbreviated ASIC), and the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor for execution, or executed by a combination of hardware and software modules in a processor.
The memory may comprise high speed RAM memory or may further comprise non-volatile storage NVM, such as at least one disk memory.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. Buses may be divided into address buses, data buses, control buses, etc. For ease of illustration, the buses in the drawings of the present application are not limited to only one bus or one type of bus.
The invention further provides wearable equipment, which comprises a computing unit, a black-and-white image imaging device and at least one color camera, wherein the black-and-white image imaging device comprises a depth sensor and/or a black-and-white camera, and the computing unit is used for executing the method of the method embodiment.
An embodiment of the invention also provides a computer-readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the method of the above method embodiments is implemented.
The present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the method of the above method embodiments.
The computer readable storage medium described above may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. A readable storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.
An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. In the alternative, the readable storage medium may be integral to the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (ASIC). Alternatively, the processor and the readable storage medium may reside as discrete components in a device.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is preferred. Based on such an understanding, the technical solution of the present application, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, a magnetic disk, or an optical disk) and comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods of the embodiments of the present application.
The foregoing description covers only the preferred embodiments of the present application and is not intended to limit its scope; any equivalent structure or equivalent process derived from the contents of this specification and the drawings, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the present application.
Claims (12)
1. An image perspective method, the method being applied to a wearable device, the method comprising:
acquiring a plurality of images shot by the wearable device, wherein the plurality of images comprise a black-and-white image and at least one color image;
converting the at least one color image and the black-and-white image into a first color image and a first black-and-white image respectively, wherein the first color image and the first black-and-white image are the color image and the black-and-white image corresponding to the corresponding virtual view point;
for one virtual viewpoint, determining a perspective image corresponding to the virtual viewpoint according to the brightness of the first color image and the brightness of the first black-and-white image corresponding to the virtual viewpoint.
2. The method of claim 1, wherein converting the at least one color image and the black-and-white image into a first color image and a first black-and-white image, respectively, comprises:
For any color image, determining first depth information corresponding to each pixel point in the color image, and converting the color image into the first color image according to the first depth information;
And determining second depth information corresponding to each pixel point in the black-and-white image aiming at the black-and-white image, and converting the black-and-white image into the first black-and-white image according to the second depth information.
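The depth-based conversion in claim 2 corresponds to depth-image-based rendering (DIBR): each pixel is reprojected to the virtual viewpoint using its depth. A minimal sketch follows, assuming a purely horizontal camera shift and a pinhole model in which disparity = baseline × focal / depth; the function name and the forward-warping scheme are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def warp_to_virtual_viewpoint(image, depth, baseline, focal):
    """Forward-warp an image to a horizontally shifted virtual viewpoint.

    Each pixel moves by a disparity of baseline * focal / depth pixels,
    a simplified depth-image-based rendering (DIBR) step. Pixels that fall
    outside the frame are discarded; unfilled targets stay zero (holes).
    """
    h, w = depth.shape
    warped = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = int(round(baseline * focal / depth[y, x]))
            nx = x + d
            if 0 <= nx < w:
                warped[y, nx] = image[y, x]
    return warped
```

A practical system would additionally fill the disocclusion holes left by the warp, e.g. by inpainting from the other camera's view.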
3. The method of claim 1, wherein determining a perspective image corresponding to the virtual viewpoint from the luminance of the first color image corresponding to the virtual viewpoint and the luminance of the first black-and-white image comprises:
Determining the pixel correspondence of the first color image and the first black-and-white image through optical flow;
Determining a second pixel point corresponding to the first pixel point in the first black-and-white image according to the pixel corresponding relation aiming at any one first pixel point in the first color image;
Determining a first brightness value corresponding to the first pixel point and a second brightness value corresponding to the second pixel point, and determining a fusion brightness value of the first pixel point according to the first brightness value and the second brightness value;
And determining the perspective image corresponding to the virtual viewpoint according to the fusion brightness value of each first pixel point.
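The fusion step of claim 3 pairs each color-image pixel with its black-and-white counterpart via an optical-flow correspondence and blends the two luminance samples. A minimal sketch, assuming the flow field is already computed and using a plain 50/50 average as the fusion rule (the actual weighting in the patent is occlusion-dependent, per claim 5):

```python
import numpy as np

def fuse_luminance(color_luma, mono_luma, flow):
    """Fuse luminance over a per-pixel correspondence (e.g. from optical flow).

    flow[y, x] = (dy, dx) points from a pixel in the first color image to its
    counterpart in the first black-and-white image; the fused value here is a
    plain average of the two luminance samples, clamped to the image bounds.
    """
    h, w = color_luma.shape
    fused = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            my = min(max(y + dy, 0), h - 1)
            mx = min(max(x + dx, 0), w - 1)
            fused[y, x] = 0.5 * color_luma[y, x] + 0.5 * mono_luma[my, mx]
    return fused
```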
4. A method according to claim 3, wherein determining a first luminance value corresponding to the first pixel point and a second luminance value corresponding to the second pixel point comprises:
Determining the pyramid window size corresponding to the first pixel point according to the high-low frequency characteristic information of the first pixel point, wherein the high-low frequency characteristic information comprises high frequency or low frequency, and the pyramid window size corresponding to the high frequency is smaller than the pyramid window size corresponding to the low frequency;
And determining a first brightness value corresponding to the first pixel point and a second brightness value corresponding to the second pixel point according to the pyramid window size corresponding to the first pixel point.
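Claim 4's rule is that detailed (high-frequency) regions get a smaller pyramid sampling window than smooth (low-frequency) regions. A minimal sketch, in which local variance stands in for the high/low-frequency classification and the window sizes and threshold are hypothetical values:

```python
import numpy as np

def is_high_frequency(patch, threshold=100.0):
    """Classify a local patch as high frequency when its variance exceeds a
    (hypothetical) threshold; smooth patches have near-zero variance."""
    return float(np.var(patch)) > threshold

def pyramid_window_size(high_freq, small=5, large=15):
    """High-frequency regions get the smaller window so fine detail is not
    averaged away; low-frequency regions get the larger, more stable window."""
    return small if high_freq else large
```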
5. A method according to claim 3, wherein determining a blended luminance value for the first pixel from the first luminance value and the second luminance value comprises:
Determining weight values respectively corresponding to the first brightness value and the second brightness value according to first shielding information corresponding to the first pixel point and second shielding information corresponding to the second pixel point for any one first pixel point, wherein the first shielding information represents shielding conditions of the first pixel point on the first color image;
And determining a fusion brightness value of the first pixel point according to the weight value, the first brightness value and the second brightness value.
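Claim 5 weights the two luminance samples by their occlusion status, so that a sample hidden in one view does not corrupt the fused result. A minimal sketch with hypothetical weights (fully suppress an occluded sample, otherwise average):

```python
def occlusion_weights(color_occluded, mono_occluded):
    """Derive fusion weights from per-pixel occlusion flags: a sample that is
    occluded in its source view is down-weighted to zero so the visible
    sample dominates. The concrete weight values are illustrative."""
    if color_occluded and not mono_occluded:
        return 0.0, 1.0
    if mono_occluded and not color_occluded:
        return 1.0, 0.0
    return 0.5, 0.5

def fuse_brightness(luma_color, luma_mono, w_color, w_mono):
    """Weighted combination of the first and second brightness values."""
    return w_color * luma_color + w_mono * luma_mono
```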
6. The method of any of claims 2-5, wherein the wearable device comprises a black-and-white image imaging device and two color cameras corresponding to a first eye and a second eye, respectively, wherein the black-and-white image imaging device is disposed between the two color cameras, wherein the color image comprises a color image captured by the color camera corresponding to the first eye and a color image captured by the color camera corresponding to the second eye, wherein the virtual viewpoint comprises a first eye viewpoint and a second eye viewpoint, wherein the first color image comprises a first sub-color image at the first eye viewpoint and a second sub-color image at the second eye viewpoint, and wherein the first black-and-white image comprises a first sub-black-and-white image at the first eye viewpoint and a second sub-black-and-white image at the second eye viewpoint;
Determining first depth information corresponding to each pixel point in the color image, including:
Determining first depth information corresponding to each pixel point in the color image according to the color image and the black-and-white image;
Determining second depth information corresponding to each pixel point in the black-and-white image comprises the following steps:
when a first sub-black-and-white image under a first eye viewpoint is determined, determining second depth information corresponding to each pixel point in the black-and-white image according to the black-and-white image and a color image shot by a color camera corresponding to the first eye;
And/or when a second sub-black-and-white image under the second eye viewpoint is determined, determining second depth information corresponding to each pixel point in the black-and-white image according to the black-and-white image and a color image shot by a color camera corresponding to the second eye.
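Claim 6 derives the depth information for a view from a pair of images (the black-and-white image and a color image). One standard way to do this, shown here as an illustrative sketch rather than the patented method, is block-matching stereo: find each pixel's horizontal disparity by sum-of-absolute-differences matching, after which depth follows as baseline × focal / disparity.

```python
import numpy as np

def disparity_1d(left_row, right_row, block=3, max_disp=4):
    """Estimate per-pixel disparity along one rectified scanline by
    sum-of-absolute-differences (SAD) block matching. Depth is then
    recovered elsewhere as baseline * focal / disparity."""
    w = len(left_row)
    half = block // 2
    disp = np.zeros(w, dtype=int)
    for x in range(half, w - half):
        best_cost, best_d = float("inf"), 0
        for d in range(0, min(max_disp, x - half) + 1):
            cost = np.abs(
                left_row[x - half:x + half + 1].astype(int)
                - right_row[x - d - half:x - d + half + 1].astype(int)
            ).sum()
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp
```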
7. The method of claim 3, wherein the wearable device comprises a black-and-white image imaging device and one color camera, the color camera corresponds to a first eye, the black-and-white image imaging device corresponds to a second eye, the color image is a color image captured by the color camera corresponding to the first eye, the black-and-white image is a black-and-white image output by the black-and-white image imaging device corresponding to the second eye, the first color image comprises a first sub-color image at a first eye viewpoint and a second sub-color image at a second eye viewpoint, the first black-and-white image comprises a first sub-black-and-white image at the first eye viewpoint and a second sub-black-and-white image at the second eye viewpoint, the first pixel point is a pixel point in the second sub-black-and-white image at the second eye viewpoint, and the second pixel point is a pixel point in the second sub-color image at the second eye viewpoint;
Determining a perspective image corresponding to the virtual viewpoint according to the fusion brightness value of each first pixel point, including:
For any one first pixel point in a second sub-black-and-white image at the second eye viewpoint, extracting color information of a second pixel point corresponding to the first pixel point from a second sub-color image at the second eye viewpoint, and determining the extracted color information as the color information of the first pixel point;
and determining a perspective image corresponding to the second eye viewpoint according to the fusion brightness value of each first pixel point and the color information.
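Claim 7 builds the second-eye perspective image by giving each black-and-white pixel its fused luminance plus the color information of the corresponding pixel in the reprojected color image. A minimal sketch in a luma/chroma (Y, U, V) layout, assuming the correspondence map and fused luminance have already been computed; the array layout is an illustrative choice:

```python
import numpy as np

def colorize_mono_view(fused_luma, color_uv, correspondence):
    """Assemble the second-eye perspective image: each pixel keeps its fused
    luminance (Y) and takes its chroma (U, V) from the corresponding pixel
    of the second sub-color image, given by correspondence[y, x] = (cy, cx)."""
    h, w = fused_luma.shape
    out = np.zeros((h, w, 3), dtype=float)
    for y in range(h):
        for x in range(w):
            cy, cx = correspondence[y, x]
            out[y, x, 0] = fused_luma[y, x]
            out[y, x, 1:] = color_uv[cy, cx]
    return out
```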
8. An image perspective device, the device being applied to a wearable apparatus, the device comprising:
the wearable device comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a plurality of images shot by the wearable device, and the images comprise a black-and-white image and at least one color image;
The conversion module is used for converting the at least one color image and the black-and-white image into a first color image and a first black-and-white image respectively, wherein the first color image and the first black-and-white image are the color image and the black-and-white image corresponding to the corresponding virtual view point;
And the determining module is used for determining a perspective image corresponding to a virtual viewpoint according to the brightness of the first color image and the brightness of the first black-and-white image corresponding to the virtual viewpoint.
9. An electronic device comprising at least one processor and a memory;
The memory stores computer-executable instructions;
The at least one processor executing computer-executable instructions stored in the memory causes the at least one processor to perform the method of any one of claims 1 to 7.
10. A wearable device, characterized in that it comprises a computing unit, a black-and-white image imaging device and at least one color camera, wherein the black-and-white image imaging device comprises a depth sensor and/or a black-and-white camera, and the computing unit is configured to perform the method according to any one of claims 1 to 7.
11. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor implement the method of any of claims 1 to 7.
12. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311236070.2A CN119697353A (en) | 2023-09-22 | 2023-09-22 | Image perspective method, device, electronic device, wearable device and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN119697353A true CN119697353A (en) | 2025-03-25 |
Family
ID=95044065
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311236070.2A Pending CN119697353A (en) | 2023-09-22 | 2023-09-22 | Image perspective method, device, electronic device, wearable device and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119697353A (en) |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| KR102278776B1 (en) | Image processing method, apparatus, and apparatus | |
| US20210365720A1 (en) | Image processing apparatus, image processing method, and storage medium for lighting processing on image using model data | |
| EP3816929A1 (en) | Method and apparatus for restoring image | |
| US20160301868A1 (en) | Automated generation of panning shots | |
| EP2328125A1 (en) | Image splicing method and device | |
| CN106997579B (en) | Image splicing method and device | |
| KR102801383B1 (en) | Image restoration method and device | |
| CN111866523B (en) | Panoramic video synthesis method and device, electronic equipment and computer storage medium | |
| CN107633497A (en) | A kind of image depth rendering intent, system and terminal | |
| CN114418897A (en) | Eye spot image restoration method and device, terminal equipment and storage medium | |
| CN115965531A (en) | Model training method, image generation method, device, equipment and storage medium | |
| CN113159229B (en) | Image fusion method, electronic equipment and related products | |
| CN114401362A (en) | Image display method and device and electronic equipment | |
| CN115004683A (en) | Imaging apparatus, imaging method, and program | |
| CN107633498B (en) | Image dark state enhancement method and device and electronic equipment | |
| CN107295261B (en) | Image dehazing processing method, device, storage medium and mobile terminal | |
| KR20110025083A (en) | 3D image display device and method in 3D image system | |
| CN118521745B (en) | A method, device, electronic device and medium for fusion of three-dimensional virtual and real video streams | |
| CN119815181A (en) | Panoramic image generation method, device, electronic device and storage medium | |
| CN119697353A (en) | Image perspective method, device, electronic device, wearable device and storage medium | |
| CN113706597B (en) | Video frame image processing method and electronic equipment | |
| CN116260927B (en) | Video processing method and related equipment thereof | |
| CN119559039B (en) | Processing method, device, terminal and medium for spherical screen display image | |
| CN120219161B (en) | Video image stitching method and device, terminal equipment and storage medium | |
| CN119295649B (en) | Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and readable storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |