
CN118301471A - Image processing method, device, electronic device and computer-readable storage medium - Google Patents

Image processing method, device, electronic device and computer-readable storage medium Download PDF

Info

Publication number
CN118301471A
CN118301471A
Authority
CN
China
Prior art keywords
image
depth
images
semantic segmentation
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410328949.8A
Other languages
Chinese (zh)
Inventor
王远博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Wingtech Electronic Technology Co Ltd
Original Assignee
Shanghai Wingtech Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Wingtech Electronic Technology Co Ltd filed Critical Shanghai Wingtech Electronic Technology Co Ltd
Priority to CN202410328949.8A priority Critical patent/CN118301471A/en
Publication of CN118301471A publication Critical patent/CN118301471A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/676Bracketing for image capture at varying focusing conditions
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/675Focus control based on electronic image sensor signals comprising setting of focusing regions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present application relates to the field of image processing technology and provides an image processing method, apparatus, electronic device, and computer-readable storage medium. The method comprises: acquiring, for a target scene, a plurality of first images shot with a first aperture at different focus positions; performing focus stacking on the plurality of first images to obtain a wide depth-of-field image; determining the depth information of each object in the wide depth-of-field image based on at least one of the plurality of first images and the wide depth-of-field image; and, based on the depth information, blurring the pixels outside the depth-of-field range in which a target object lies to obtain a post-focus image that takes that depth-of-field range as its focus distance, the target object being the focus object selected by the user. With this method, the picture focus can be adjusted after shooting to obtain a shallow depth-of-field image.

Description

Image processing method, apparatus, electronic device, and computer-readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer readable storage medium.
Background
With the advancement of optical technology, cameras with adjustable apertures, shutters, and even interchangeable lenses are becoming popular, and camera functions are increasingly diversified. To make a subject stand out in a captured image, a shooting technique called shallow depth of field is commonly used: the camera lens is focused on a specific distance range in the scene, so that targets within that range are imaged clearly while targets outside it become progressively blurred. Depth of field (DOF) denotes this distance range over which objects in space appear acceptably sharp.
However, in some scenarios the focus of an image cannot be adjusted after shooting has finished, and capturing an image with a shallow depth-of-field effect may require a large number of shots, making the process too time-consuming or failing to produce a satisfactory result. A method for adjusting the picture focus after shooting, so as to obtain a shallow depth-of-field image, is therefore needed.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image processing method, apparatus, electronic device, and computer-readable storage medium that can adjust the picture focus after shooting to obtain a shallow depth-of-field image.
An embodiment of the present application provides an image processing method, comprising the following steps:
acquiring, for a target scene, a plurality of first images shot with a first aperture at different focus positions;
performing focus stacking on the plurality of first images to obtain a wide depth-of-field image;
determining depth information of each object in the wide depth-of-field image based on at least one of the plurality of first images and the wide depth-of-field image;
and, based on the depth information, blurring pixels outside the depth-of-field range of a target object in the wide depth-of-field image to obtain a post-focus image that takes the depth-of-field range as its focus distance, the target object being a focus object selected by a user.
In one embodiment, acquiring the plurality of first images of the target scene shot with the first aperture at different focus positions includes:
acquiring, for the target scene, a plurality of first images shot with a first aperture at different focus positions and a second image shot with a second aperture, the second aperture being larger than the first aperture;
and determining the depth information of each object in the wide depth-of-field image based on at least one of the plurality of first images and the wide depth-of-field image includes:
determining the depth information based on at least one of the plurality of first images and the wide depth-of-field image, together with the second image.
In one embodiment, determining the depth information based on at least one of the plurality of first images and the wide depth-of-field image, together with the second image, includes:
performing semantic segmentation on the wide depth-of-field image to obtain a first semantic segmentation image;
performing semantic segmentation on the second image to obtain a second semantic segmentation image;
and determining the depth information by comparing the sharpness of each object in the first and second semantic segmentation images.
In one embodiment, the plurality of first images includes a third image having the same focus position as the second image; determining the depth information based on at least one of the plurality of first images and the wide depth-of-field image, together with the second image, includes:
performing semantic segmentation on the second image to obtain a second semantic segmentation image;
performing semantic segmentation on the third image to obtain a third semantic segmentation image;
and determining the depth information by comparing the sharpness of each object in the second and third semantic segmentation images.
In one embodiment, determining the depth information of each object in the wide depth-of-field image based on at least one of the plurality of first images and the wide depth-of-field image includes:
performing semantic segmentation on each of the plurality of first images to obtain a plurality of fourth semantic segmentation images;
and determining the depth information by comparing the sharpness of each object in the plurality of fourth semantic segmentation images.
In one embodiment, blurring the pixels in the wide depth-of-field image outside the depth-of-field range of the target object based on the depth information includes:
determining, based on the depth information, a target blurring degree for a target pixel, the target pixel being any pixel in the wide depth-of-field image outside the depth-of-field range;
and blurring the target pixel according to the target blurring degree.
In one embodiment, determining the target blurring degree for the target pixel based on the depth information includes:
determining a depth value of the target pixel based on the depth information;
and determining the target blurring degree based on the difference between the depth value of the target pixel and the depth-of-field range.
An embodiment of the present application provides an image processing apparatus, including:
an acquisition module, configured to acquire, for a target scene, a plurality of first images shot with a first aperture at different focus positions;
a stacking module, configured to perform focus stacking on the plurality of first images to obtain a wide depth-of-field image;
a determining module, configured to determine depth information of each object in the wide depth-of-field image based on at least one of the plurality of first images and the wide depth-of-field image;
and a blurring module, configured to blur, based on the depth information, pixels outside the depth-of-field range of a target object in the wide depth-of-field image to obtain a post-focus image that takes the depth-of-field range as its focus distance, the target object being a focus object selected by a user.
In one embodiment, the acquisition module is specifically configured to:
acquire, for the target scene, a plurality of first images shot with the first aperture at different focus positions and a second image shot with a second aperture, the second aperture being larger than the first aperture;
and the determining module is specifically configured to:
determine the depth information based on at least one of the plurality of first images and the wide depth-of-field image, together with the second image.
In one embodiment, the determining module is specifically configured to:
perform semantic segmentation on the wide depth-of-field image to obtain a first semantic segmentation image;
perform semantic segmentation on the second image to obtain a second semantic segmentation image;
and determine the depth information by comparing the sharpness of each object in the first and second semantic segmentation images.
In one embodiment, the plurality of first images includes a third image having the same focus position as the second image; the determining module is specifically configured to:
perform semantic segmentation on the second image to obtain a second semantic segmentation image;
perform semantic segmentation on the third image to obtain a third semantic segmentation image;
and determine the depth information by comparing the sharpness of each object in the second and third semantic segmentation images.
In one embodiment, the determining module is specifically configured to:
perform semantic segmentation on each of the plurality of first images to obtain a plurality of fourth semantic segmentation images;
and determine the depth information by comparing the sharpness of each object in the plurality of fourth semantic segmentation images.
In one embodiment, the blurring module is specifically configured to:
determine, based on the depth information, a target blurring degree for a target pixel, the target pixel being any pixel in the wide depth-of-field image outside the depth-of-field range;
and blur the target pixel according to the target blurring degree.
In one embodiment, the blurring module is specifically configured to:
determine a depth value of the target pixel based on the depth information;
and determine the target blurring degree based on the difference between the depth value of the target pixel and the depth-of-field range.
An embodiment of the present application provides an electronic device comprising a memory and a processor, the memory storing a computer program; when executing the computer program, the processor implements the steps of the image processing method provided by any embodiment of the present application.
An embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image processing method provided by any embodiment of the present application.
The embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. A plurality of first images with different focus positions, shot with a first aperture for a target scene, are acquired; focus stacking is performed on the plurality of first images to obtain a wide depth-of-field image; depth information of each object in the wide depth-of-field image is determined based on at least one of the plurality of first images and the wide depth-of-field image; and, based on the depth information, pixels outside the depth-of-field range of a target object (the focus object selected by the user) are blurred to obtain a post-focus image that takes that depth-of-field range as its focus distance. The scheme therefore requires neither a light-field camera to adjust the focus position after shooting nor a bulky, expensive large-aperture lens: by processing a plurality of first images shot with the first aperture at different focus positions, a post-focus image focused on the target object's depth-of-field range can be obtained. The picture focus can thus be adjusted after shooting, the adjusted picture has a shallow depth-of-field effect, the efficiency of obtaining a shallow depth-of-field image is improved, and the cost of obtaining it is reduced.
Drawings
FIG. 1 is a flow chart of an image processing method in one embodiment;
FIG. 2 is a flow chart of an image processing method according to another embodiment;
FIG. 3 is a flow chart of an image processing method according to still another embodiment;
FIG. 4 is a block diagram showing the structure of an image processing apparatus in one embodiment;
FIG. 5 is an internal structure diagram of an electronic device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The electronic device in the embodiments of the present application may be a mobile electronic device or a non-mobile electronic device. The mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like; the non-mobile electronic device may be a personal computer (PC), a television (TV), an automated teller machine, a self-service kiosk, or the like. The embodiments of the present disclosure are not particularly limited in this respect.
Some of the terms and expressions used in the claims and specification of the present application are first explained below.
Camera: a camera (or webcam), also called a computer camera or electronic eye, is a video input device widely used in video conferencing, telemedicine, real-time monitoring, and similar applications. Ordinary users can also communicate over a network with images and sound through a camera, and cameras are used for everyday digital imaging and video processing.
The camera is currently the most widespread component of smart terminals. In operation, a CMOS sensor converts optical signals into electrical signals, which are received and processed by the associated processing algorithms. The whole signal-processing chain forms a stream called a pipeline; the output of the same camera sensor yields different image-quality (IQ) effects through different IQ algorithms.
Feature extraction: extracting, through machine learning, features that carry physical or statistical meaning from raw data that cannot be used directly. Feature extraction can reduce data storage, bandwidth, and redundancy while improving data quality.
Post-focus: the ability for a user to adjust the focus of a photograph after it has been taken.
Focal stack: a technique that extends the depth of field by taking a series of pictures with different focus settings and then combining the in-focus areas of each image.
Aperture: a mechanism that controls the amount of light passing through the lens onto the photosensitive surface inside the camera body.
Depth of field (DOF): the range of distances in front of and behind the subject, measured from the camera lens or other imager, within which a clear image can be obtained. The depth of field is related to the aperture: the smaller the aperture (the larger the f-number; e.g., f/16 is a smaller aperture than f/11), the greater the depth of field; the larger the aperture (the smaller the f-number; e.g., f/2.8 is a larger aperture than f/5.6), the shallower the depth of field.
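This qualitative aperture relationship can be checked numerically with the standard thin-lens depth-of-field approximation. The sketch below is purely illustrative (it is not part of the application); the focal length, circle-of-confusion diameter, and subject distance are arbitrary example numbers.

```python
def depth_of_field(f_mm, n, c_mm, u_mm):
    """Approximate near/far limits of acceptable sharpness for focal length
    f_mm, f-number n, circle of confusion c_mm, and subject distance u_mm,
    using the standard thin-lens approximation via the hyperfocal distance."""
    h = f_mm ** 2 / (n * c_mm) + f_mm  # hyperfocal distance
    near = u_mm * (h - f_mm) / (h + u_mm - 2 * f_mm)
    far = u_mm * (h - f_mm) / (h - u_mm) if u_mm < h else float("inf")
    return near, far

# A smaller aperture (larger f-number) yields a deeper depth of field:
near16, far16 = depth_of_field(50, 16, 0.03, 2000)    # 50 mm lens at f/16
near28, far28 = depth_of_field(50, 2.8, 0.03, 2000)   # same lens at f/2.8
```

For these example values the f/16 shot keeps roughly 1.4 m to 3.2 m acceptably sharp, while the f/2.8 shot keeps only a band of about a quarter metre around the 2 m subject, matching the rule stated above.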
Convolutional neural network (CNN): a network architecture particularly effective for image processing, with two key characteristics: it can effectively reduce high-dimensional image data to a small volume of data, and it preserves image features in a way well suited to picture processing. A typical CNN consists of at least three parts: convolutional layers, pooling layers, and fully connected layers. The convolutional layers extract local features through convolution-kernel filtering; the pooling layers (in essence, downsampling) greatly reduce the amount of computation and help avoid overfitting; the fully connected layers resemble a conventional neural network and output the desired result.
Mobile-phone iris diaphragm: a technology that adjusts the amount of light entering the camera lens by changing the diaphragm size through a group of rotatable or movable blades, allowing the user to take better pictures under different lighting conditions.
Semantic segmentation: a task in the computer vision area of deep learning; related tasks include object detection, image classification, instance segmentation, and pose estimation.
Blurring technique: blurring softens regions outside the depth of field, keeping attention focused on the subject.
The principle of the focal stack technique is to "stack" images taken at different focal points in order to obtain a larger depth of field.
Light-field photography is a technique that enables post-capture adjustment of focus and depth of field by recording the full set of ray information in a photograph.
As an example of an existing approach, the iPhone 15 uses depth sensors and algorithms to perform a depth analysis of each pixel in the photograph and records the photograph's depth data; when the user adjusts the focus after shooting, the phone can then redefine the focus of the photo from that data.
The image processing method provided by the embodiments of the present application may be executed by the above-mentioned electronic device (mobile or non-mobile), or by a functional module and/or functional entity within the electronic device capable of implementing the method; this may be determined according to actual use requirements and is not limited by the embodiments of the present disclosure.
In one embodiment, as shown in FIG. 1, an image processing method is provided. The method may include steps 101 to 104 described below.
Step 101, acquiring a plurality of first images of a target scene, the first images being captured with a first aperture at different focus positions.
It will be understood that, since the plurality of first images are all captured of the target scene, their picture contents should be similar or identical; and since they are all captured with the first aperture at different focus positions, their depth-of-field ranges differ, but the spans of those ranges (which are determined by the aperture) are identical.
In some embodiments of the present application, if the camera of the electronic device has a fixed aperture, the first aperture is that fixed aperture; if the camera has an adjustable (iris) aperture, the first aperture may be any aperture the camera supports. With the aperture set to the first aperture, one first image is captured; the focus position is then changed by adjusting the lens position and at least one further first image is captured, one per lens adjustment, yielding a plurality of first images.
Step 102, performing focal stack processing on the plurality of first images to obtain a wide depth-of-field image.
The wide depth-of-field image, which may also be called a panoramic (all-in-focus) depth-of-field image, is an image in which each object in the picture is sharp.
The focal stack processing may refer to the related art, and is not limited herein.
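As an illustrative sketch only (the application leaves the focus-stacking procedure to the related art), a minimal focus-stack merge can select, at each pixel, the value from whichever source image is locally sharpest. Here sharpness is approximated by a 4-neighbour Laplacian response; all names are assumptions for the example, which assumes grayscale float images of equal size.

```python
import numpy as np

def laplacian_sharpness(img):
    """Per-pixel focus measure: absolute response of a 4-neighbour Laplacian
    (computed with wrap-around at the borders for brevity)."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return np.abs(lap)

def focus_stack(images):
    """Merge grayscale images taken at different focus positions into one
    wide depth-of-field image by keeping, at each pixel, the value from the
    sharpest source image."""
    sharp = np.stack([laplacian_sharpness(im) for im in images])  # (N, H, W)
    best = np.argmax(sharp, axis=0)                               # sharpest index per pixel
    imgs = np.stack(images)                                       # (N, H, W)
    rows, cols = np.indices(best.shape)
    return imgs[best, rows, cols]
```

Production implementations typically smooth the per-pixel decision map and blend across seams; this sketch keeps only the core idea of per-pixel sharpest-source selection.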
Step 103, determining depth information of each object in the wide depth image based on at least one image of the plurality of first images and the wide depth image.
In some embodiments of the present application, the depth information may be determined based on at least two images of the plurality of first images and the wide depth image, or may be determined based on at least one image of the plurality of first images and the wide depth image, and other images, and may be specifically determined according to practical situations, which is not limited herein.
The depth information is illustratively determined based on the sharpness of the same object in at least two of the plurality of first images and the wide depth image.
The depth information is illustratively determined based on the sharpness of at least one of the plurality of first images and the wide depth image, as well as the same object in the other images.
In some embodiments of the present application, the depth information may be determined based on at least one of the plurality of first images and the wide depth image by other methods of calculating the depth information, which is not limited herein. For example, the depth information may be determined based on at least one of the plurality of first images and the wide depth image by a pre-trained depth neural network model.
It should be noted that, in the embodiment of the present application, the depth information of each object in the wide depth image is the relative depth information of each object.
In the embodiment of the present application, the depth information may also be relative depth information of each pixel point in the wide depth image, which is determined based on at least one image in the plurality of first images and the wide depth image.
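A minimal depth-from-focus sketch of this idea (illustrative only; the application does not prescribe a specific algorithm, and all names here are assumptions) takes, for each pixel, the index of the focus position at which it appears sharpest as its relative depth value:

```python
import numpy as np

def focus_measure(img):
    """Absolute 4-neighbour Laplacian response as a simple per-pixel focus
    measure (wrap-around borders, grayscale float input assumed)."""
    return np.abs(-4.0 * img
                  + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
                  + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))

def relative_depth_map(images):
    """Depth-from-focus: for each pixel, the index of the image in the focus
    stack where the pixel is sharpest serves as its relative depth value.
    `images` is ordered by focus distance (near to far)."""
    sharp = np.stack([focus_measure(im) for im in images])  # (N, H, W)
    return np.argmax(sharp, axis=0)                         # (H, W) stack indices
```

Because the focus positions are ordered by distance, the resulting index map is a relative (not metric) depth map, which is all the subsequent blurring step requires.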
Step 104, based on the depth information, blurring pixels outside the depth-of-field range of a target object in the wide depth-of-field image to obtain a post-focus image that takes that depth-of-field range as its focus distance, the target object being the focus object selected by the user.
The target object may be determined by a user's trigger operation on any of the plurality of first images or the wide depth-of-field image, by recognizing a voice command of the user, from received text input of the user, or in other ways, according to the actual situation; this is not limited here.
For the blurring process itself, reference may be made to the related art; it is not described in detail here. The pixels outside the depth-of-field range may all be blurred to the same degree, or to different degrees, which is not limited here.
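A hedged sketch of depth-dependent blurring (illustrative only; the function names and the simple box-blur choice are assumptions, not the application's method) scales the blur radius with how far each pixel's depth value lies outside the in-focus depth range:

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur with edge clamping; grayscale float input assumed."""
    if radius <= 0:
        return img.copy()
    out = img.copy()
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for k in range(-radius, radius + 1):
            idx = np.clip(np.arange(out.shape[axis]) + k, 0, out.shape[axis] - 1)
            acc += np.take(out, idx, axis=axis)
        out = acc / (2 * radius + 1)
    return out

def post_focus(image, depth, dof_lo, dof_hi, strength=1):
    """Blur each pixel in proportion to how far its depth value lies outside
    the in-focus depth range [dof_lo, dof_hi]; pixels inside stay sharp."""
    dist = np.maximum(dof_lo - depth, depth - dof_hi).clip(min=0)
    out = image.copy()
    for r in np.unique(dist):
        if r == 0:
            continue  # pixels inside the depth-of-field range stay untouched
        out[dist == r] = box_blur(image, int(r * strength))[dist == r]
    return out
```

This reproduces the "same degree or different degrees" choice above: a constant `dist` outside the range gives uniform blur, while a graded `dist` gives progressively stronger blur farther from the focus plane.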
In the embodiment of the present application, a plurality of first images with different focus positions are captured with a first aperture for a target scene; focus stacking is then performed on them to obtain a wide depth-of-field image; depth information of each object in the wide depth-of-field image is determined based on at least one of the plurality of first images and the wide depth-of-field image; and, based on the depth information, pixels outside the depth-of-field range of the target object (the focus object selected by the user) are blurred to obtain a post-focus image that takes that depth-of-field range as its focus distance. The scheme thus requires neither a light-field camera for post-capture refocusing nor a bulky, expensive large-aperture lens, yet obtains a post-focus image focused on the target object's depth-of-field range. The picture focus can be adjusted after shooting, the adjusted picture has a shallow depth-of-field effect, the efficiency of obtaining such images is improved, and the cost is reduced.
In one embodiment, as shown in FIG. 2 in conjunction with FIG. 1, step 101 above may be implemented by step 101a below, and step 102 above may be implemented by step 102a below.
Step 101a, acquiring a plurality of first images with different focus positions photographed by a first aperture and a second image photographed by a second aperture for a target scene, wherein the second aperture is larger than the first aperture.
Here the camera of the electronic device has an adjustable (iris) aperture; the first aperture and the second aperture may be determined according to actual conditions and are not limited here.
In some embodiments, with the aperture set to the first aperture, one first image is captured; the focus position is then changed by adjusting the lens position, capturing one first image per adjustment, until a plurality of first images have been obtained; the aperture is then changed from the first aperture to the second aperture and the second image is captured.
In other embodiments, with the aperture set to the second aperture, the second image may be captured after focusing; the aperture is then changed from the second aperture to the first aperture and one first image is captured after focusing; keeping the first aperture, the focus position is changed by adjusting the lens position, capturing one first image per adjustment, to obtain a plurality of first images.
Step 102a, determining the depth information based on at least one of the plurality of first images and the wide depth image, and a second image.
The depth information may be determined based on the sharpness of the same object in at least one of the plurality of first images and the wide depth-of-field image, on the one hand, and in the second image, on the other; it may also be determined from at least one of the plurality of first images and the wide depth-of-field image, together with the second image, by a pre-trained deep neural network model.
In the embodiment of the present application, since the second image is shot with a larger aperture, it has a shallow depth-of-field effect relative to any first image; combining at least one of the plurality of first images and the wide depth-of-field image with the second image therefore allows the depth information to be determined more accurately and quickly, improving the efficiency of obtaining a shallow depth-of-field image.
According to the embodiment of the application, the fact that different apertures yield different depth-of-field ranges is exploited, and a focal stack technique is further combined to obtain a wide depth-of-field image; the relative depth of each object in the image is subsequently determined using depth-of-field information (objects within the depth of field appear sharp, and the farther an object lies outside the depth of field, the lower its sharpness) together with picture semantic segmentation information.
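The focal stack step can be sketched in plain NumPy. This is a minimal illustration under the author's own assumptions (a squared-Laplacian sharpness measure, synthetic grayscale frames, and hypothetical function names, none of which are specified by the embodiment): each pixel is taken from the frame in which its Laplacian response is strongest, and the index of the winning frame doubles as a coarse relative depth cue.

```python
import numpy as np

def laplacian_sharpness(img):
    """Per-pixel sharpness: squared response of a 4-neighbour Laplacian."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap ** 2

def focal_stack(images):
    """Composite a wide depth-of-field image from a focus stack.

    For each pixel, keep the value from the frame in which that pixel
    is sharpest; the index of the winning frame is a coarse relative
    depth proxy (pixels in focus earlier in the stack win earlier indices)."""
    stack = np.stack(images)                                  # (N, H, W)
    sharpness = np.stack([laplacian_sharpness(im) for im in images])
    best = np.argmax(sharpness, axis=0)                       # (H, W) depth proxy
    wide_dof = np.take_along_axis(stack, best[None], axis=0)[0]
    return wide_dof, best
```

Because the frames are ordered by lens position, the returned index map already encodes the ordering that the later depth-determination steps rely on.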
In one embodiment, as shown in fig. 3 in conjunction with fig. 2, the above step 102a may be specifically implemented by the following steps 102a1 to 102a3.
Step 102a1, performing semantic segmentation processing on the wide-depth image to obtain a first semantic segmentation image.
Step 102a2, performing semantic segmentation processing on the second image to obtain a second semantic segmentation image.
Step 102a3, determining the depth information by comparing the sharpness of each object in the first semantically segmented image and the second semantically segmented image.
In the embodiment of the application, the first semantic segmentation image and the second semantic segmentation image are obtained by performing semantic segmentation processing on the wide depth image and the second image respectively; the depth information can then be determined quickly and accurately by comparing the sharpness of each object in the first semantic segmentation image and the second semantic segmentation image, which improves the efficiency of obtaining an image with a shallow depth-of-field effect.
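Steps 102a1 to 102a3 might look like the following NumPy sketch. The label map stands in for the output of whatever segmentation model is used, and the helper names and the mean-squared-Laplacian sharpness measure are the author's illustrative assumptions, not part of the embodiment: for each segmented object, sharpness is measured in the wide depth-of-field image and in the large-aperture second image, and the ratio of the two indicates how far the object lies from the focal plane.

```python
import numpy as np

def region_sharpness(img, mask):
    """Mean squared response of a 4-neighbour Laplacian inside one region."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return float(np.mean(lap[mask] ** 2))

def rank_objects_by_defocus(wide_dof, second_img, labels):
    """Score each labelled object by how much sharpness it keeps in the
    large-aperture (shallow depth-of-field) second image relative to the
    wide depth-of-field image: a ratio near 1 means the object sits inside
    the second image's depth of field, near 0 means it lies far outside."""
    scores = {}
    for obj_id in np.unique(labels):
        mask = labels == obj_id
        s_wide = region_sharpness(wide_dof, mask)
        s_second = region_sharpness(second_img, mask)
        scores[int(obj_id)] = s_second / (s_wide + 1e-12)
    return scores
```

The scores give a relative depth ordering of objects with respect to the second image's focal plane; converting them to metric depth would need additional calibration not covered here.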
In one embodiment, the plurality of first images includes a third image having the same focus position as the second image; the above step 102a may be specifically implemented by the following steps 102a4 to 102a6.
Step 102a4, performing semantic segmentation processing on the second image to obtain a second semantic segmentation image.
Step 102a5, performing semantic segmentation processing on the third image to obtain a third semantic segmentation image.
Step 102a6, determining the depth information by comparing the sharpness of each object in the second semantically segmented image and the third semantically segmented image.
In the embodiment of the application, the third image and the second image have the same focus position, but their depth-of-field ranges differ because their aperture sizes differ, so the sharpness of objects in the third image and the second image also differs. The second image and the third image are subjected to semantic segmentation processing respectively to obtain the second semantic segmentation image and the third semantic segmentation image, and the sharpness of each object in the two segmentation images is then compared; this involves little computation, allows the depth information to be determined quickly and accurately, and thus improves the efficiency of obtaining an image with a shallow depth-of-field effect.
In one embodiment, the above step 102 may be specifically implemented by the following steps 102b and 102c.
Step 102b, performing semantic segmentation processing on the plurality of first images to obtain a plurality of fourth semantic segmentation images.
Step 102c, determining the depth information by comparing the sharpness of each object in the plurality of fourth semantically segmented images.
It will be appreciated that the solution for obtaining depth information in steps 102b and 102c above does not depend on whether the camera of the electronic device has a fixed or a variable aperture, and is therefore applicable to more scenarios.
In the embodiment of the application, the plurality of first images are subjected to semantic segmentation processing to obtain the plurality of fourth semantic segmentation images; the depth information can then be determined quickly and accurately by comparing the sharpness of each object across the plurality of fourth semantic segmentation images, which improves the efficiency of obtaining an image with a shallow depth-of-field effect.
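Steps 102b and 102c amount to a depth-from-focus scheme, which can be sketched as follows (plain NumPy; the argmax rule, the sharpness helper, and all names are illustrative assumptions of this sketch): each segmented object is assigned the index of the first image in which it appears sharpest, and because the lens is stepped through the scene monotonically, those indices order the objects by relative depth.

```python
import numpy as np

def region_sharpness(img, mask):
    """Mean squared response of a 4-neighbour Laplacian inside one region."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return float(np.mean(lap[mask] ** 2))

def depth_from_focus(first_images, labels):
    """Relative depth per object: the index of the focus position at which
    the object is sharpest. Assumes first_images is ordered by lens position."""
    depth = {}
    for obj_id in np.unique(labels):
        mask = labels == obj_id
        depth[int(obj_id)] = int(np.argmax(
            [region_sharpness(im, mask) for im in first_images]))
    return depth
```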
It should be noted that, in the embodiment of the present application, a specific method for performing semantic segmentation processing on an image is not limited, and for example, the semantic segmentation processing may be performed on the image through a convolutional neural network trained in advance.
In one embodiment, the step 104 may be specifically implemented by the following steps 104a and 104b.
Step 104a, determining a target blurring degree corresponding to a target pixel based on the depth information, wherein the target pixel is any pixel outside the depth range in the wide depth image.
Step 104b, blurring the target pixel based on the target blurring degree.
In the embodiment of the application, the target blurring degree corresponding to the target pixel is determined based on the depth information, and blurring processing is then performed on the target pixel based on that target blurring degree. In this way, blurring of different degrees can be applied to pixels in different depth ranges, so that the blurring of pixels in the target image is finer and a target image with a better blurring effect and a shallow depth-of-field effect is obtained.
In one embodiment, the step 104a may be specifically implemented by the following steps 104a1 to 104a2.
Step 104a1, determining a depth value of the target pixel based on the depth information.
Step 104a2, determining the target blurring degree based on the difference value between the depth value of the target pixel and the depth range.
In the embodiment of the application, the depth value of the target pixel is determined based on the depth information, and the target blurring degree is then determined based on the difference between the depth value of the target pixel and the depth-of-field range; in this way, pixels closer to the depth-of-field range in which the user-selected target object is located are blurred more weakly, pixels farther from that range are blurred more strongly, and the blurring effect of the target image is more realistic.
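Steps 104a1, 104a2, and 104b can be sketched as below (plain NumPy; the discrete box blur, the one-depth-unit-per-pixel-of-radius mapping, and all names are assumptions made for illustration): a pixel's blur radius grows with the distance of its depth value from the selected depth-of-field range [near, far], and pixels inside the range are left untouched.

```python
import numpy as np

def box_blur(img, radius):
    """Separable wrap-around box blur; radius 0 returns the image as-is."""
    if radius == 0:
        return img
    k = 2 * radius + 1
    out = np.copy(img)
    for axis in (0, 1):
        out = sum(np.roll(out, s, axis=axis) for s in range(-radius, radius + 1)) / k
    return out

def refocus(wide_dof, depth_map, near, far, max_radius=3):
    """Blur each pixel in proportion to how far its depth value lies
    outside the user-selected depth-of-field range [near, far]."""
    dist = np.clip(np.maximum(near - depth_map, depth_map - far), 0, None)
    radius = np.minimum(dist, max_radius).astype(int)  # the target blurring degree
    out = np.copy(wide_dof)
    for r in range(1, max_radius + 1):
        out = np.where(radius == r, box_blur(wide_dof, r), out)
    return out
```

A production implementation would use a proper bokeh or Gaussian kernel rather than a box filter; the point here is only that the blur strength is a function of the depth difference, as the steps above describe.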
In some embodiments, before the step 104, the image processing method provided by the embodiment of the present application may further include the following steps 105 and 106.
Step 105, displaying a fourth image, wherein the fourth image is any one of the plurality of first images, the second image, and the wide depth image.
Step 106, in response to a triggering operation of the user on the fourth image, determining the object corresponding to the triggering operation as the target object.
In the embodiment of the application, the efficiency of determining the target object can be improved by displaying the fourth image and then determining the target object in response to the triggering operation of the user on the fourth image.
In the embodiment of the application, the plurality of first images, the second image, and the wide depth image may be stored as required, so that the depth information can be used directly later.
It should be understood that, although the steps in the flowcharts of figs. 1-3 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in figs. 1-3 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and the order in which these sub-steps or stages are performed is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4, there is provided an image processing apparatus including: an acquisition module 401, a stack module 402, a determination module 403, and a blurring module 404, wherein:
An acquisition module 401, configured to acquire a plurality of first images with different focus positions captured with a first aperture for a target scene;
a stack module 402, configured to perform focal stack processing on the plurality of first images to obtain a wide depth image;
A determining module 403, configured to determine depth information of each object in the wide depth image based on at least one image of the plurality of first images and the wide depth image;
And the blurring module 404 is configured to perform blurring processing on pixels, which are outside a depth of field range where a target object in the wide depth of field image is located, based on the depth information, to obtain a post-focusing image with the depth of field range as a focusing distance, where the target object is a focusing object selected by a user.
In one embodiment, the acquisition module 401 is specifically configured to:
Acquiring a plurality of first images with different focus positions photographed by a first aperture and a second image photographed by a second aperture for a target scene, the second aperture being larger than the first aperture;
The determining module 403 is specifically configured to:
The depth information is determined based on at least one of the plurality of first images and the wide depth image, and the second image.
In one embodiment, the determining module 403 is specifically configured to:
Carrying out semantic segmentation processing on the wide depth image to obtain a first semantic segmentation image;
Carrying out semantic segmentation processing on the second image to obtain a second semantic segmentation image;
the depth information is determined by comparing the sharpness of each object in the first semantically segmented image and the second semantically segmented image.
In one embodiment, the plurality of first images includes a third image having the same focus position as the second image; the determining module 403 is specifically configured to:
Carrying out semantic segmentation processing on the second image to obtain a second semantic segmentation image;
carrying out semantic segmentation processing on the third image to obtain a third semantic segmentation image;
the depth information is determined by comparing the sharpness of each object in the second and third semantically segmented images.
In one embodiment, the determining module 403 is specifically configured to:
Respectively carrying out semantic segmentation processing on the plurality of first images to obtain a plurality of fourth semantic segmentation images;
the depth information is determined by comparing the sharpness of each object in the plurality of fourth semantically segmented images.
In one embodiment, the blurring module 404 is specifically configured to:
Determining a target blurring degree corresponding to a target pixel based on the depth information, wherein the target pixel is any pixel outside the depth range in the wide depth image;
and blurring the target pixel based on the target blurring degree.
In one embodiment, the blurring module 404 is specifically configured to:
determining a depth value of the target pixel based on the depth information, and determining the target blurring degree based on the difference between the depth value of the target pixel and the depth range.
In the embodiment of the application, a plurality of first images with different focus positions, shot with a first aperture, are acquired for a target scene; focal stack processing is then performed on the plurality of first images to obtain a wide depth-of-field image; depth information of each object in the wide depth image is determined based on at least one image of the plurality of first images and the wide depth image; and, based on the depth information, blurring processing is performed on pixels outside the depth-of-field range in which a target object (the focusing object selected by the user) is located in the wide depth-of-field image, so as to obtain a post-focusing image taking that depth-of-field range as the focusing distance. This scheme therefore requires neither a light field camera to adjust the focus position after shooting nor a bulky and expensive large-aperture lens, yet still obtains a post-focusing image taking the depth-of-field range of the target object as the focusing distance; the picture focus can thus be adjusted after shooting, the adjusted picture has a shallow depth-of-field effect, the efficiency of obtaining an image with a shallow depth-of-field effect is improved, and the cost of obtaining such an image is reduced.
For specific limitations of the image processing apparatus, reference may be made to the above limitations of the image processing method, and no further description is given here. The respective modules in the above-described image processing apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or independent of a processor in the electronic device, or may be stored in software in a memory in the electronic device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, an electronic device is provided, which may be a server, and the internal structure of which may be as shown in fig. 5. The electronic device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the electronic device is for storing image processing data. The network interface of the electronic device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an image processing method.
In one embodiment, an electronic device is provided, which may be a terminal, and whose internal structure may be as shown in fig. 5. The electronic device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The communication interface of the electronic device is used for wired or wireless communication with an external terminal; the wireless communication can be implemented through WIFI, an operator network, Near Field Communication (NFC), or other technologies. The computer program is executed by a processor to implement an image processing method. The display screen of the electronic device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic device may be a touch layer covering the display screen, keys, a trackball, or a touchpad arranged on the housing of the electronic device, or an external keyboard, touchpad, or mouse.
It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the electronic device to which the present inventive arrangements are applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the image processing apparatus provided by the present application may be implemented in the form of a computer program that is executable on an electronic device as shown in fig. 5. The memory of the electronic device may store the various program modules constituting the image processing apparatus, such as the acquisition module, the stack module, the determination module, and the blurring module shown in fig. 4. The computer program constituted by these program modules causes the processor to execute the steps in the image processing method of the embodiments of the present application described in this specification.
For example, the electronic device shown in fig. 5 may acquire, through the acquisition module in the image processing apparatus shown in fig. 4, a plurality of first images with different focus positions shot with the first aperture for the target scene. The electronic device may perform focal stack processing on the plurality of first images through the stack module to obtain a wide depth image. The electronic device may determine, through the determination module, depth information of each object in the wide depth image based on at least one image of the plurality of first images and the wide depth image. The electronic device may perform, through the blurring module, blurring processing on pixels outside a depth-of-field range of a target object in the wide depth-of-field image based on the depth information, to obtain a post-focusing image taking the depth-of-field range as a focusing distance, wherein the target object is a focusing object selected by a user.
In one embodiment, an electronic device is provided comprising a memory storing a computer program and a processor that when executing the computer program performs the steps of: acquiring a plurality of first images with different focus positions, which are shot by a first aperture, aiming at a target scene; carrying out focus stack processing on the plurality of first images to obtain a wide depth-of-field image; determining depth information of each object in the wide depth image based on at least one image of the plurality of first images and the wide depth image; and based on the depth information, blurring pixels outside a depth of field range of a target object in the wide depth of field image to obtain a rear focusing image taking the depth of field range as a focusing distance, wherein the target object is a focusing object selected by a user.
In one embodiment, the processor when executing the computer program further performs the steps of: carrying out semantic segmentation processing on the wide depth image to obtain a first semantic segmentation image; carrying out semantic segmentation processing on the second image to obtain a second semantic segmentation image; the depth information is determined by comparing the sharpness of each object in the first semantically segmented image and the second semantically segmented image.
In one embodiment, the plurality of first images includes a third image having the same focus position as the second image; the processor, when executing the computer program, further performs the steps of: carrying out semantic segmentation processing on the second image to obtain a second semantic segmentation image; carrying out semantic segmentation processing on the third image to obtain a third semantic segmentation image; the depth information is determined by comparing the sharpness of each object in the second and third semantically segmented images.
In one embodiment, the processor when executing the computer program further performs the steps of: respectively carrying out semantic segmentation processing on the plurality of first images to obtain a plurality of fourth semantic segmentation images; the depth information is determined by comparing the sharpness of each object in the plurality of fourth semantically segmented images.
In one embodiment, the processor when executing the computer program further performs the steps of: determining a target blurring degree corresponding to a target pixel based on the depth information, wherein the target pixel is any pixel outside the depth range in the wide depth image; and blurring the target pixel based on the target blurring degree.
In one embodiment, the processor when executing the computer program further performs the steps of: and determining a depth value of the target pixel based on the depth information, and determining the target blurring degree based on a difference value between the depth value of the target pixel and the depth range.
In the embodiment of the application, a light field camera is not needed to adjust the focus position after shooting, nor is a bulky and expensive large-aperture lens needed for shooting; a post-focusing image taking the depth-of-field range of the target object as the focusing distance can still be obtained, so that the picture focus can be adjusted after shooting and the adjusted picture has a shallow depth-of-field effect, which improves the efficiency of obtaining an image with a shallow depth-of-field effect and reduces its cost.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of: acquiring a plurality of first images with different focus positions, which are shot by a first aperture, aiming at a target scene; carrying out focus stack processing on the plurality of first images to obtain a wide depth-of-field image; determining depth information of each object in the wide depth image based on at least one image of the plurality of first images and the wide depth image; and based on the depth information, blurring pixels outside a depth of field range of a target object in the wide depth of field image to obtain a rear focusing image taking the depth of field range as a focusing distance, wherein the target object is a focusing object selected by a user.
In one embodiment, the computer program when executed by a processor performs the steps of: acquiring a plurality of first images with different focus positions photographed by a first aperture and a second image photographed by a second aperture for a target scene, the second aperture being larger than the first aperture; the depth information is determined based on at least one of the plurality of first images and the wide depth image, and the second image.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: carrying out semantic segmentation processing on the wide depth image to obtain a first semantic segmentation image; carrying out semantic segmentation processing on the second image to obtain a second semantic segmentation image; the depth information is determined by comparing the sharpness of each object in the first semantically segmented image and the second semantically segmented image.
In one embodiment, the plurality of first images includes a third image having the same focus position as the second image; the computer program, when executed by the processor, further performs the steps of: carrying out semantic segmentation processing on the second image to obtain a second semantic segmentation image; carrying out semantic segmentation processing on the third image to obtain a third semantic segmentation image; the depth information is determined by comparing the sharpness of each object in the second and third semantically segmented images.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: respectively carrying out semantic segmentation processing on the plurality of first images to obtain a plurality of fourth semantic segmentation images; the depth information is determined by comparing the sharpness of each object in the plurality of fourth semantically segmented images.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: determining a target blurring degree corresponding to a target pixel based on the depth information, wherein the target pixel is any pixel outside the depth range in the wide depth image; and blurring the target pixel based on the target blurring degree.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: determining a depth value of the target pixel based on the depth information, and determining the target blurring degree based on the difference between the depth value of the target pixel and the depth range.
In the embodiment of the application, a light field camera is not needed to adjust the focus position after shooting, nor is a bulky and expensive large-aperture lens needed for shooting; a post-focusing image taking the depth-of-field range of the target object as the focusing distance can still be obtained, so that the picture focus can be adjusted after shooting and the adjusted picture has a shallow depth-of-field effect, which improves the efficiency of obtaining an image with a shallow depth-of-field effect and reduces its cost.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-volatile computer-readable storage medium which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. The volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the application; their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the application. It should be noted that those of ordinary skill in the art may make several variations and modifications without departing from the concept of the application, and these all fall within the protection scope of the application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. An image processing method, the method comprising:
Acquiring a plurality of first images with different focus positions, which are shot by a first aperture, aiming at a target scene;
carrying out focus stack processing on the plurality of first images to obtain a wide depth-of-field image;
determining depth information of each object in the wide depth image based on at least one image of the plurality of first images and the wide depth image;
And carrying out blurring processing on pixels outside a depth-of-field range of a target object in the wide depth-of-field image based on the depth information to obtain a post-focusing image taking the depth-of-field range as a focusing distance, wherein the target object is a focusing object selected by a user.
2. The method of claim 1, wherein the acquiring a plurality of first images of the target scene taken at different focus positions with the first aperture comprises:
Acquiring a plurality of first images with different focal positions photographed with a first aperture and a second image photographed with a second aperture for a target scene, the second aperture being larger than the first aperture;
The determining depth information of each object in the wide depth image based on at least one image of the plurality of first images and the wide depth image includes:
the depth information is determined based on at least one of the plurality of first images and the wide depth image, and the second image.
3. The method of claim 2, wherein the determining the depth information based on at least one of the plurality of first images and the wide depth image, and the second image comprises:
carrying out semantic segmentation processing on the wide depth image to obtain a first semantic segmentation image;
carrying out semantic segmentation processing on the second image to obtain a second semantic segmentation image;
And determining the depth information by comparing the definition degree of each object in the first semantic segmentation image and the second semantic segmentation image.
4. The method of claim 2, wherein the plurality of first images includes a third image of the same focal position as the second image; the determining the depth information based on at least one of the plurality of first images and the wide depth image, and the second image includes:
carrying out semantic segmentation processing on the second image to obtain a second semantic segmentation image;
carrying out semantic segmentation processing on the third image to obtain a third semantic segmentation image;
and determining the depth information by comparing the definition degree of each object in the second semantic segmentation image and the third semantic segmentation image.
5. The method of claim 1, wherein the determining depth information for each object in the wide depth image based on at least one of the plurality of first images and the wide depth image comprises:
respectively carrying out semantic segmentation processing on the plurality of first images to obtain a plurality of fourth semantic segmentation images;
determining the depth information by comparing the sharpness of each object in the plurality of fourth semantically segmented images.
6. The method according to any one of claims 1 to 5, wherein blurring pixels in the wide depth image outside a depth range in which the target object is located based on the depth information includes:
determining a target blurring degree corresponding to a target pixel based on the depth information, wherein the target pixel is any pixel outside the depth range in the wide depth image;
and blurring the target pixel based on the target blurring degree.
7. The method of claim 6, wherein the determining the target blurring degree corresponding to the target pixel based on the depth information comprises:
determining a depth value of the target pixel based on the depth information;
and determining the target blurring degree based on the difference between the depth value of the target pixel and the depth-of-field range.
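Claims 6 and 7 map the distance between a pixel's depth value and the depth-of-field range to a blurring degree. A minimal sketch of one such mapping, assuming a linear ramp with a clamp; the constants and the function name are illustrative and not taken from the patent:

```python
import numpy as np

def target_blur_degree(depth, near, far, max_radius=8.0, per_meter=20.0):
    """Blurring degree for a pixel: 0 inside the depth-of-field range
    [near, far], then growing linearly with the pixel's distance from
    the range, clamped at max_radius."""
    diff = np.maximum(np.maximum(near - depth, depth - far), 0.0)
    return np.minimum(diff * per_meter, max_radius)
```

The function works elementwise, so it can be applied to a whole depth map at once; pixels inside the range keep degree 0 and remain in focus.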
8. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a plurality of first images of a target scene captured with a first aperture at different focus positions;
a stacking module, configured to perform focus stacking on the plurality of first images to obtain a wide depth-of-field image;
a determining module, configured to determine depth information of each object in the wide depth-of-field image based on at least one of the plurality of first images and the wide depth-of-field image;
and a blurring module, configured to blur, based on the depth information, pixels in the wide depth-of-field image outside the depth-of-field range in which a target object is located, to obtain a post-focusing image taking the depth-of-field range as the focusing distance, wherein the target object is a focusing object selected by a user.
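The module pipeline of claim 8 hinges on the focus-stacking step that merges the differently focused first images into one wide depth-of-field image. One common approach is per-pixel selection of the locally sharpest frame; the gradient-based criterion below is a hedged sketch only, since the patent does not fix a particular stacking method:

```python
import numpy as np

def focus_stack(images):
    """All-in-focus composite: for every pixel, keep the value from the
    frame with the strongest local gradient magnitude at that pixel."""
    stack = np.stack(images)                     # shape (n, h, w)
    gy, gx = np.gradient(stack, axis=(1, 2))     # per-frame gradients
    sharpness = gx ** 2 + gy ** 2
    best = np.argmax(sharpness, axis=0)          # per-pixel sharpest frame
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)]
```

Production stackers typically smooth the selection map and blend across frame boundaries to avoid seams; the hard argmax here is the simplest form of the idea.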
9. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
CN202410328949.8A 2024-03-21 2024-03-21 Image processing method, device, electronic device and computer-readable storage medium Pending CN118301471A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410328949.8A CN118301471A (en) 2024-03-21 2024-03-21 Image processing method, device, electronic device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410328949.8A CN118301471A (en) 2024-03-21 2024-03-21 Image processing method, device, electronic device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN118301471A 2024-07-05

Family

ID=91685263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410328949.8A Pending CN118301471A (en) 2024-03-21 2024-03-21 Image processing method, device, electronic device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN118301471A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118784991A (en) * 2024-09-06 2024-10-15 荣耀终端有限公司 Image processing method and related device


Similar Documents

Publication Publication Date Title
CN111669493B (en) A shooting method, device and equipment
CN110493525B (en) Zoom image determination method and device, storage medium and terminal
EP3793188A1 (en) Image processing method, electronic device, and computer readable storage medium
CN108076278B (en) A kind of automatic focusing method, device and electronic equipment
CN112887602B (en) Camera switching method, device, storage medium and electronic device
CN111726521B (en) Terminal photographing method, photographing device and terminal
JP2010525667A (en) Simulating shallow depth of field to maximize privacy in videophones
CN110324532A (en) Image blurring method and device, storage medium and electronic equipment
CN113099122A (en) Shooting method, shooting device, shooting equipment and storage medium
CN106231200B (en) A kind of photographic method and device
CN112634160A (en) Photographing method and device, terminal and storage medium
CN110248101A (en) Focusing method and device, electronic equipment and computer readable storage medium
CN113810590A (en) Image processing method, electronic device, medium and system
CN111968052A (en) Image processing method, image processing apparatus, and storage medium
CN108259767A (en) Image processing method, image processing device, storage medium and electronic equipment
CN113807124B (en) Image processing method, device, storage medium and electronic equipment
CN118301471A (en) Image processing method, device, electronic device and computer-readable storage medium
CN110365897B (en) Image correction method and device, electronic equipment and computer readable storage medium
CN115170581A (en) Portrait segmentation model generation method, portrait segmentation model and portrait segmentation method
CN113507549A (en) Camera, photographing method, terminal and storage medium
CN115393182A (en) Image processing method, device, processor, terminal and storage medium
CN115134532A (en) Image processing method, image processing device, storage medium and electronic equipment
CN115086558B (en) Focusing method, camera equipment, terminal equipment and storage media
KR101567668B1 (en) Smartphones camera apparatus for generating video signal by multi-focus and method thereof
CN115348390A (en) Shooting method and shooting device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 200331 Shanghai City Putuo District Qilian Mountain South Road 2891 Lane 228 No. 1 Building 8th Floor

Applicant after: Shanghai Lixun Electronic Technology Co.,Ltd.

Address before: Room h115, 6th floor, district H (East Block), 666 Beijing East Road, Huangpu District, Shanghai 200001

Applicant before: SHANGHAI WINGTECH ELECTRONICS TECHNOLOGY Co.,Ltd.

Country or region before: China