
CN115690130A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN115690130A
CN115690130A (application number CN202211714136.XA)
Authority
CN
China
Prior art keywords
target
image
picture
target area
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211714136.XA
Other languages
Chinese (zh)
Other versions
CN115690130B (en)
Inventor
Zhang Wei (张伟)
Zhang Jing (张靖)
Cui Tao (崔涛)
He Yang (贺杨)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yongliu Technology Co ltd
Original Assignee
Hangzhou Yongliu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Yongliu Technology Co ltd filed Critical Hangzhou Yongliu Technology Co ltd
Priority to CN202211714136.XA
Publication of CN115690130A
Application granted
Publication of CN115690130B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses an image processing method and device. The image processing method comprises the following steps: performing target area segmentation on the acquired picture to obtain a target area of the picture; extracting target detail texture from the picture to obtain the target detail texture of the target area; superimposing and fusing the target area and the target detail texture to obtain a fused image; and adjusting the transparency of the fused image to obtain a photographing posture template. The scheme provided by the invention helps the user take pictures accurately and quickly.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image recognition technologies, and in particular, to an image processing method and apparatus.
Background
In related application programs, a user needs to take pictures in certain fixed postures so that the validity of the captured images is guaranteed. Human body contour lines are therefore preset to help the user align his or her posture; however, such rough contour lines cannot guarantee that the user's posture coincides perfectly with the preset contour lines, so the user has to adjust the posture and the shooting angle many times.
For the problem in the prior art that photographing efficiency is low because the user's posture cannot be guaranteed to coincide perfectly with the preset contour line, no effective solution has been proposed at present.
Disclosure of Invention
In order to solve the above technical problems, embodiments of the present invention provide an image processing method and an image processing apparatus, so as to at least solve the problem of low photographing efficiency caused by the prior art's inability to guarantee that the user's posture coincides perfectly with the preset contour line.
The technical scheme of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, including: carrying out target area segmentation on the obtained picture to obtain a target area of the picture; extracting target detail texture of the picture to obtain target detail texture of a target area; superposing and fusing the target area and the target detail texture to obtain a fused image; and obtaining a photographing posture template by adjusting the transparency of the fused image.
Optionally, performing target region segmentation on the obtained picture, and obtaining the target region of the picture includes: acquiring a picture in a user terminal under the condition that a target area comprises a human body part; predicting the probability of whether each pixel in the picture is a human body part or not through a segmentation algorithm; and extracting the region where the pixel meeting the probability of the human body part is located to obtain a target region.
Further, optionally, extracting a region where a pixel meeting the probability of the human body part is located, and obtaining the target region includes: extracting the region where the pixel meeting the probability of the human body part is located to obtain a probability map; carrying out gray level conversion on the probability map, and carrying out Gaussian processing on the probability map subjected to gray level conversion to obtain a noise-reduced gray level map; and carrying out binarization threshold processing on the denoised gray level image to obtain a target area.
Optionally, extracting the target detail texture of the picture to obtain the target detail texture of the target region includes: converting the picture into a grey scale map; respectively carrying out derivation on the gray-scale image in the x-axis direction and the y-axis direction through an edge detection algorithm to obtain a derivative image in the x-axis direction and a derivative image in the y-axis direction; superposing and fusing the derivative image in the x-axis direction and the derivative image in the y-axis direction to obtain a fused image; and processing the fused image through a preset algorithm to obtain the target detail texture.
Further, optionally, the overlaying and fusing the target region and the target detail texture to obtain a fused image includes: and overlapping and fusing the target area, the target contour edge line and the target detail texture to obtain a fused image.
Optionally, the overlapping and fusing the target area, the target contour edge line, and the target detail texture to obtain a fused image includes: taking the target area as a mask, and extracting target area information from the target detail texture; performing binarization processing on the target area to obtain a target contour line; and overlapping and fusing the target contour line and the target detail texture to obtain a fused image.
Further, optionally, the obtaining of the photographing posture template by adjusting the transparency of the fused image includes: setting the foreground part pixels of the fused image as white and setting the image transparency as full transparency; setting the background part pixels of the fused image to be zero, and setting the transparency of the image to be semi-transparent; and determining the fused image with the transparency adjusted as a photographing posture template.
Optionally, the method further includes: reducing the picture size of the fused image to a fixed value to obtain a reduced fused image; converting the texture map of the fused image into a map with a first channel number, and converting the map with the first channel number into a map with a second channel number; traversing the image with the second channel number, wherein if the designated pixel in the image with the second channel number is smaller than a preset value, the designated pixel is set as a first value, and the transparency is modified into a second value; if the designated pixel in the graph with the second channel number is larger than the preset value, the pixel value of the designated pixel is unchanged, and the transparency is modified into a third value; and amplifying the traversed image of the second channel number to the size of the fused image.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including: the segmentation module is used for segmenting the target area of the acquired picture to obtain the target area of the picture; the extraction module is used for extracting the target detail texture of the picture to obtain the target detail texture of the target area; the fusion module is used for superposing and fusing the target area and the target detail texture to obtain a fused image; and the adjusting module is used for adjusting the transparency of the fused image to obtain the photographing posture template.
Optionally, the segmentation module includes: the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring pictures in the user terminal under the condition that a target area comprises a human body part; the prediction unit is used for predicting the probability of whether each pixel in the picture is a human body part or not through a segmentation algorithm; and the segmentation unit is used for extracting the region where the pixels meeting the probability of the human body part are located to obtain a target region.
The embodiment of the invention provides an image processing method and device: the target area of the picture is obtained by performing target area segmentation on the acquired picture; the target detail texture of the target area is obtained by extracting the target detail texture from the picture; the target area and the target detail texture are superimposed and fused to obtain a fused image; and the photographing posture template is obtained by adjusting the transparency of the fused image, thereby achieving the technical effect of helping the user take pictures and capture images accurately and quickly.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of overlay fusion in an image processing method according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of another image processing method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an image processing apparatus according to a second embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", and the like in the description and claims of the present invention and the accompanying drawings are used for distinguishing different objects, and are not used for limiting a specific order.
It should be noted that the following embodiments of the present invention may be implemented individually, or may be implemented in combination with each other, and the embodiments of the present invention are not limited in this respect.
Example one
In a first aspect, an embodiment of the present invention provides an image processing method, and fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present invention; as shown in fig. 1, an image processing method provided in an embodiment of the present application includes:
step S102, carrying out target area segmentation on the obtained picture to obtain a target area of the picture;
optionally, in step S102, performing target region segmentation on the acquired picture, and obtaining the target region of the picture includes: acquiring a picture in a user terminal under the condition that a target area comprises a human body part; predicting the probability of whether each pixel in the picture is a human body part or not through a segmentation algorithm; and extracting the region where the pixels meeting the probability of the human body part are located to obtain a target region.
Further, optionally, extracting a region where a pixel meeting the probability of the human body part is located, and obtaining the target region includes: extracting the region where the pixel meeting the probability of the human body part is located to obtain a probability map; carrying out gray level conversion on the probability map, and carrying out Gaussian processing on the probability map subjected to gray level conversion to obtain a noise-reduced gray level map; and carrying out binarization threshold processing on the denoised gray level image to obtain a target area.
Specifically, a portrait is taken as an example for explanation in the embodiment of the present application, where the target area may be a human body area. The user first takes and uploads a picture containing a human body part; the human body area in the picture is extracted as the foreground, and the part outside the human body area is treated as the background. The specific implementation is as follows:
(1) Firstly, the user is required to shoot a picture containing any human body part and upload it as the original picture, from which the human body region is then extracted. Specifically, a segmentation algorithm (DeepLab V3+) is adopted to predict, for each pixel, the probability that it belongs to a human body (the probability map ranges between 0 and 1; the closer a value is to 1, the more likely the pixel is human body, and the closer to 0, the more likely it is non-human body). That is, in the embodiment of the application, the region where the pixels satisfying the human body part probability are located is extracted to obtain a probability map;
(2) The probability map is converted to the range 0-255 and Gaussian blurring is performed (with the Gaussian kernel size set to 5x5) to reduce the influence of noise (that is, in the embodiment of the application, gray scale conversion is carried out on the probability map and Gaussian processing is carried out on the converted map);
(3) Binarization threshold processing is carried out on the denoised gray-scale image, with the threshold set to 125 (corresponding to the probability value of 0.5 in step (1)); the image passing through the binarization threshold separates the person from the background;
(4) The human body area in the picture is reserved as the foreground, and the non-human body area is filtered out as the background.
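The post-processing in steps (2)-(4) can be illustrated with a minimal Python sketch using OpenCV. It assumes that a DeepLab V3+ model has already produced the per-pixel human-body probability map (the model inference itself is not shown), and the function name probability_to_body_mask is illustrative rather than taken from the patent:

    import cv2
    import numpy as np

    def probability_to_body_mask(prob):
        """Turn a [0, 1] human-body probability map into a binary foreground mask."""
        gray = (prob * 255).astype(np.uint8)          # step (2): map probabilities to the 0-255 range
        denoised = cv2.GaussianBlur(gray, (5, 5), 0)  # step (2): 5x5 Gaussian kernel to reduce noise
        # step (3): binarize at 125, which corresponds to a probability of roughly 0.5
        _, mask = cv2.threshold(denoised, 125, 255, cv2.THRESH_BINARY)
        return mask                                   # step (4): 255 = human-body foreground, 0 = background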
Step S104, extracting the target detail texture of the picture to obtain the target detail texture of the target area;
optionally, the step S104 of extracting the target detail texture from the picture to obtain the target detail texture of the target region includes: converting the picture into a grey scale map; respectively carrying out derivation on the gray-scale image in the x-axis direction and the y-axis direction through an edge detection algorithm to obtain a derivative image in the x-axis direction and a derivative image in the y-axis direction; superposing and fusing the derivative image in the x-axis direction and the derivative image in the y-axis direction to obtain a fused image; and processing the fused image through a preset algorithm to obtain the target detail texture.
Specifically, after a human body part is extracted as a foreground, an edge detection algorithm (Sobel) is used to extract an edge of the foreground part, and the extracted edge is used as a detail texture of the human body (namely, a target detail texture in the embodiment of the present application); the concrete implementation is as follows:
(1) Firstly, converting an original image uploaded by a user into a gray scale image;
(2) First-order derivatives of the gray-scale image are computed in the x direction and the y direction respectively using an edge detection algorithm (Sobel); deriving in the x direction and the y direction separately yields two first-order derivative maps;
(3) The two first-order derivative maps are then superimposed and fused;
(4) The fused image is the detail texture map obtained by the Sobel algorithm; that is, the detail texture at this stage still includes both the detail texture of the human body and the texture of the background.
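The Sobel extraction of steps (1)-(4) can likewise be sketched in Python with OpenCV, assuming the original picture has been loaded as a BGR array; the function name sobel_detail_texture and the 3x3 kernel size are illustrative, and the "preset algorithm" of step S104 is represented here only by the conversion back to an 8-bit map:

    import cv2
    import numpy as np

    def sobel_detail_texture(bgr):
        """Extract a detail texture map from the original picture with the Sobel operator."""
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)                  # step (1): convert to gray scale
        dx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)               # step (2): first-order derivative in x
        dy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)               # step (2): first-order derivative in y
        fused = cv2.addWeighted(np.abs(dx), 0.5, np.abs(dy), 0.5, 0)  # step (3): superimpose the two derivative maps
        return cv2.convertScaleAbs(fused)                             # step (4): 8-bit detail texture map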
Step S106, overlapping and fusing the target area and the target detail texture to obtain a fused image;
optionally, the step S106 of performing overlay fusion on the target region and the target detail texture to obtain a fused image includes: and overlapping and fusing the target area, the target contour edge line and the target detail texture to obtain a fused image.
Optionally, the overlapping and fusing the target area, the target contour edge line, and the target detail texture to obtain a fused image includes: taking the target area as a mask, and extracting target area information from the target detail texture; performing binarization processing on the target area to obtain a target contour line; and overlapping and fusing the target contour line and the target detail texture to obtain a fused image.
Specifically, fig. 2 is a schematic diagram of superposition and fusion in an image processing method according to an embodiment of the present invention. As shown in fig. 2, the portrait picture used in fig. 2 is an image from a public database, the information of which is given in table 1:
[Table 1, giving the information of the public database, is provided as an image in the original publication and is not reproduced here.]
It should be noted that the portrait image shown in fig. 2 is only used to illustrate the superimposition and fusion process of the image processing method provided in the embodiment of the present application, and does not constitute a specific limitation.
As shown in fig. 2, the human body region obtained by the segmentation algorithm, the human body contour line (i.e., the target contour line in the embodiment of the present application) and the human body detail texture line obtained by the edge detection algorithm (i.e., the target detail texture in the embodiment of the present application) are superimposed, so that the rough human body contour and the fine human body texture are integrated into a whole; the concrete implementation is as follows:
(1) Starting from the obtained original picture (which may be a color picture), the binarized picture from step S102 is taken as a mask, and the human body region information (i.e., the target region information in the embodiment of the present application) is extracted from the detail texture map of step S104; after this step, only the detail texture of the human body region is retained and the background region is filtered out;
(2) Binarization processing is performed again on the binarized picture of step S102 to obtain the contour of the human body (note that the human body contour here differs from the human body region of step S102: the human body region is the slice of the picture occupied by the human body, whereas the human body contour is only the external contour line of the human body in the image);
(3) The human body contour line (coarse texture) and the human body texture lines (fine texture) are superimposed to obtain the overall texture representation of the human body.
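A sketch of steps (1)-(3), assuming mask is the binarized human body region from step S102 and texture is the Sobel detail texture from step S104 (as in the sketches above); the contour in step (2) is obtained here with cv2.findContours as a stand-in for the second binarization described in the patent, and all names are illustrative:

    import cv2
    import numpy as np

    def fuse_contour_and_texture(mask, texture):
        """Overlay the coarse body contour line and the fine body detail texture."""
        # step (1): use the binarized body mask to keep only the body-region detail texture
        body_texture = cv2.bitwise_and(texture, texture, mask=mask)
        # step (2): derive the outer contour line of the body from the mask (OpenCV 4.x signature)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        contour_img = np.zeros_like(mask)
        cv2.drawContours(contour_img, contours, -1, 255, thickness=2)
        # step (3): superimpose the coarse contour line and the fine detail texture
        return cv2.max(contour_img, body_texture)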
And step S108, obtaining a photographing posture template by adjusting the transparency of the fused image.
Optionally, the step S108 of obtaining the photographing posture template by adjusting the transparency of the fused image includes: setting the foreground part pixels of the fused image as white and setting the image transparency as full transparency; setting the background part pixels of the fused image to be zero, and setting the transparency of the image to be semi-transparent; and determining the fused image with the transparency adjusted as a photographing posture template.
Optionally, the image processing method provided in the embodiment of the present application further includes: reducing the picture size of the fused image to a fixed value to obtain a reduced fused image; converting the texture map of the fused image into a map with a first channel number, and converting the map with the first channel number into a map with a second channel number; traversing the image with the second channel number, wherein if the designated pixel in the image with the second channel number is smaller than a preset value, the designated pixel is set as a first value, and the transparency is modified into a second value; if the designated pixel in the graph with the second channel number is larger than the preset value, the pixel value of the designated pixel is unchanged, and the transparency is modified into a third value; and amplifying the traversed image of the second channel number to the size of the fused image.
Specifically, the foreground part pixels are set to white and the image transparency is set to fully transparent, while the background part pixels are set to zero and set to semi-transparent; the resulting image is used as the user's personal, customized human body outline and serves as the user's posture template during subsequent photographing. The concrete steps are as follows:
(1) The picture size is reduced to a fixed size 480 x 640 (i.e., a fixed value in the embodiment of the present application) to reduce the subsequent pixel traversal time;
(2) The texture map (gray-scale map) fused in step S106 is converted into a 3-channel RGB map (i.e., the map with the first channel number in the embodiment of the present application), and the 3-channel map is then converted into a 4-channel map including a transparency channel (i.e., the map with the second channel number in the embodiment of the present application); the conversion code is image_edge.convert('RGBA'). The 3 channels are the RGB channels and the 4 channels are the RGBA channels; compared with the 3-channel map, the 4-channel map adds the transparency (alpha) channel, and the subsequent image fusion operates on this alpha channel.
(3) Traversing the converted picture:
a. if a certain pixel value is less than 50 (i.e., a preset value in the embodiment of the present application), the pixel value is set to 255 (i.e., a first value in the embodiment of the present application), and the transparency is modified to 125 (i.e., a second value in the embodiment of the present application);
b. if a certain pixel value is greater than 50, the pixel value is kept unchanged, and the transparency is modified to be 255 (i.e., a third value in the embodiment of the present application);
the purpose of traversal in the embodiment of the present application is to operate on the transparency value and the pixel value of each pixel.
(4) After traversal is finished, the transparency map is enlarged back to the original size so that it can be completely overlapped with the original picture, where the original size means the size before compression to 480 x 640.
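A sketch of steps (1)-(4), assuming fused is the 8-bit gray-scale texture map from step S106; the values 480 x 640, 50, 255 and 125 mirror those given above, PIL's convert('RGBA') is the conversion quoted in step (2), and the function name make_pose_template is illustrative:

    import numpy as np
    from PIL import Image

    def make_pose_template(fused):
        """Turn the fused texture map into a semi-transparent photographing posture template."""
        orig_h, orig_w = fused.shape[:2]
        small = Image.fromarray(fused).resize((480, 640))    # step (1): shrink to the fixed 480 x 640 size
        rgba = small.convert('RGB').convert('RGBA')          # step (2): 3-channel map, then add the alpha channel
        px = np.array(rgba)
        dark = px[..., 0] < 50                               # step (3)a: pixels below the preset value 50
        px[dark, :3] = 255                                   # set the pixel value to 255
        px[dark, 3] = 125                                    # modify the transparency to 125
        px[~dark, 3] = 255                                   # step (3)b: keep the pixel value, transparency 255
        return Image.fromarray(px).resize((orig_w, orig_h))  # step (4): enlarge back to the original size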
As shown in fig. 2, the photographing posture template can finally be used in a photographing function (or photographing APP) of the mobile terminal.
In summary, with reference to steps S102 to S108, fig. 3 is a schematic flowchart of another image processing method according to an embodiment of the present invention, and as shown in fig. 3, the image processing method according to the embodiment of the present invention specifically includes the following steps:
s1, photographing and uploading by a user; in the embodiment of the present application, the picture used for image processing may be a picture that is uploaded by a user after the user takes a picture, or may be a picture that is taken by the user in real time. The description is given by taking the image after the user has taken the image and uploaded the image as an example, and the detailed description is omitted for the purpose of implementing the image processing method provided in the embodiment of the present application.
S2, segmenting the human body region based on a deep learning algorithm (DeepLab V3+);
S3, extracting the detail texture of the human body based on an edge detection algorithm (Sobel);
S4, overlapping and fusing the human body contour and the detail texture;
S5, setting the transparency of the image in the human body area to be fully transparent, and setting pixels outside the human body area to be zero and semi-transparent;
and S6, taking the image with the transparency adjusted as the subsequent photographing posture template.
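For illustration, the sketches above can be chained into a single driver covering S1-S6; the segmentation network is passed in as a callable because the patent does not spell out the DeepLab V3+ inference code, and build_pose_template is an assumed name:

    import cv2

    def build_pose_template(picture_path, segment_fn):
        """segment_fn: any callable returning a [0, 1] human-body probability map, e.g. a DeepLab V3+ wrapper."""
        bgr = cv2.imread(picture_path)                    # S1: picture taken and uploaded by the user
        prob = segment_fn(bgr)                            # S2: human-body probability prediction
        mask = probability_to_body_mask(prob)             # S2: segmentation post-processing
        texture = sobel_detail_texture(bgr)               # S3: human detail texture extraction
        fused = fuse_contour_and_texture(mask, texture)   # S4: overlay contour and detail texture
        return make_pose_template(fused)                  # S5/S6: transparency-adjusted posture template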
The image processing method provided by the embodiment of the application is effective not only under natural light but also under specific illumination (such as a Wood's lamp), where it can still correctly segment the human body region and extract the human detail texture. Unlike the approach of presetting a human body outline frame, the human body outline obtained by the image processing method provided by the embodiment of the application is a private, customized outline frame for each user, so the outline frame obtained by this method can coincide 100% with the user's posture; the outline also contains the detail texture information of the human body, making the human body outline more complete and easier for the user to align with. Because the human body outline template of each user is customized, it can be quickly aligned with the user's posture without adjusting the posture and the shooting angle many times. Unlike the prior art, which assists photographing only under natural light, the image processing method provided by the embodiment of the application can assist photographing under any lighting condition and can simultaneously segment the human body contour and extract the human body detail texture, so it is more universal in practical use: it can be used not only under natural light but also for auxiliary photographing under special light (such as a Wood's lamp).
The embodiment of the invention provides an image processing method: the target area of the picture is obtained by performing target area segmentation on the acquired picture; the target detail texture of the target area is obtained by extracting the target detail texture from the picture; the target area and the target detail texture are superimposed and fused to obtain a fused image; and the photographing posture template is obtained by adjusting the transparency of the fused image, thereby achieving the technical effect of helping the user take pictures and capture images accurately and quickly.
Example two
In a second aspect, an embodiment of the present invention provides an image processing apparatus, and fig. 4 is a schematic diagram of an image processing apparatus according to a second embodiment of the present invention; as shown in fig. 4, an image processing apparatus according to an embodiment of the present application includes: the segmentation module 42 is configured to perform target region segmentation on the acquired picture to obtain a target region of the picture; the extraction module 44 is configured to perform target detail texture extraction on the picture to obtain a target detail texture of the target region; a fusion module 46, configured to perform superposition fusion on the target region and the target detail texture to obtain a fused image; and the adjusting module 48 is used for obtaining the photographing posture template by adjusting the transparency of the fused image.
Optionally, the segmentation module 42 includes: the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring pictures in the user terminal under the condition that a target area comprises a human body part; the prediction unit is used for predicting the probability of whether each pixel in the picture is a human body part or not through a segmentation algorithm; and the segmentation unit is used for extracting the region where the pixels meeting the probability of the human body part are located to obtain a target region.
The embodiment of the invention provides an image processing device: the target area of the picture is obtained by performing target area segmentation on the acquired picture; the target detail texture of the target area is obtained by extracting the target detail texture from the picture; the target area and the target detail texture are superimposed and fused to obtain a fused image; and the photographing posture template is obtained by adjusting the transparency of the fused image, thereby achieving the technical effect of helping the user take pictures and capture images accurately and quickly.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (10)

1. An image processing method, comprising:
carrying out target area segmentation on the obtained picture to obtain a target area of the picture;
extracting target detail texture of the picture to obtain target detail texture of the target area;
superposing and fusing the target area and the target detail texture to obtain a fused image;
and obtaining a photographing attitude template by adjusting the transparency of the fused image.
2. The image processing method according to claim 1, wherein the performing target region segmentation on the acquired picture to obtain the target region of the picture comprises:
acquiring a picture in a user terminal under the condition that the target area comprises a human body part;
predicting the probability of whether each pixel in the picture is a human body part or not through a segmentation algorithm;
and extracting the region where the pixel meeting the probability of the human body part is located to obtain the target region.
3. The image processing method according to claim 2, wherein the extracting the region where the pixel satisfying the probability of the human body part is located to obtain the target region comprises:
extracting the region where the pixel meeting the probability of the human body part is located to obtain a probability map;
carrying out gray level conversion on the probability map, and carrying out Gaussian processing on the probability map after the gray level conversion to obtain a gray level map after noise reduction;
and carrying out binarization threshold processing on the denoised grey-scale image to obtain the target area.
4. The image processing method according to any one of claims 1 to 3, wherein the performing target detail texture extraction on the picture to obtain the target detail texture of the target region comprises:
converting the picture into a grey-scale map;
respectively carrying out derivation on the gray-scale image in the x-axis direction and the y-axis direction through an edge detection algorithm to obtain a derivative image in the x-axis direction and a derivative image in the y-axis direction;
superposing and fusing the derivative image in the x-axis direction and the derivative image in the y-axis direction to obtain a fused image;
and processing the fused image through a preset algorithm to obtain the target detail texture.
5. The image processing method according to claim 4, wherein the overlaying and fusing the target region and the target detail texture to obtain a fused image comprises:
and overlapping and fusing the target area, the target contour edge line and the target detail texture to obtain the fused image.
6. The image processing method according to claim 5, wherein the performing the overlay fusion on the target region, the target contour edge line, and the target detail texture to obtain the fused image comprises:
taking the target area as a mask, and extracting target area information from the target detail texture;
performing binarization processing on the target area to obtain the target contour line;
and overlapping and fusing the target contour line and the target detail texture to obtain the fused image.
7. The image processing method according to claim 5, wherein the obtaining of the photographing posture template by adjusting the transparency of the fused image comprises:
setting the foreground part pixels of the fused image to be white, and setting the image transparency to be fully transparent;
setting the background part pixels of the fused image to be zero, and setting the image transparency to be semi-transparent;
and determining the fused image with the transparency adjusted as the photographing posture template.
8. The image processing method according to claim 7, further comprising:
reducing the picture size of the fused image to a fixed value to obtain a reduced fused image;
converting the texture map of the fused image into a map with a first channel number, and converting the map with the first channel number into a map with a second channel number;
traversing the graph with the second channel number, wherein if a specified pixel in the graph with the second channel number is smaller than a preset value, the specified pixel is set as a first value, and the transparency is modified into a second value; if the designated pixel in the image of the second channel number is larger than a preset value, the pixel value of the designated pixel is unchanged, and the transparency is changed into a third value;
and amplifying the traversed image of the second channel number to the size of the fused image.
9. An image processing apparatus characterized by comprising:
the segmentation module is used for carrying out target area segmentation on the obtained picture to obtain a target area of the picture;
the extraction module is used for extracting the target detail texture of the picture to obtain the target detail texture of the target area;
the fusion module is used for superposing and fusing the target area and the target detail texture to obtain a fused image;
and the adjusting module is used for adjusting the transparency of the fused image to obtain a photographing posture template.
10. The image processing apparatus according to claim 9, wherein the segmentation module comprises:
the acquisition unit is used for acquiring a picture in the user terminal under the condition that the target area comprises a human body part;
the prediction unit is used for predicting the probability of whether each pixel in the picture is a human body part or not through a segmentation algorithm;
and the segmentation unit is used for extracting the region where the pixels meeting the probability of the human body part are located to obtain the target region.
CN202211714136.XA 2022-12-30 2022-12-30 Image processing method and device Active CN115690130B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211714136.XA CN115690130B (en) 2022-12-30 2022-12-30 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211714136.XA CN115690130B (en) 2022-12-30 2022-12-30 Image processing method and device

Publications (2)

Publication Number Publication Date
CN115690130A true CN115690130A (en) 2023-02-03
CN115690130B CN115690130B (en) 2023-06-27

Family

ID=85057576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211714136.XA Active CN115690130B (en) 2022-12-30 2022-12-30 Image processing method and device

Country Status (1)

Country Link
CN (1) CN115690130B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117095299A (en) * 2023-10-18 2023-11-21 浙江省测绘科学技术研究院 Grain crop extraction method, system, equipment and medium for crushing cultivation area

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080088718A1 (en) * 2006-10-17 2008-04-17 Cazier Robert P Template Creator For Digital Cameras
US20080285887A1 (en) * 2007-05-16 2008-11-20 Canon Kabushiki Kaisha Image processing method and image processing apparatus
CN101841642A (en) * 2010-04-22 2010-09-22 南京航空航天大学 Edge detection method based on fractional-order signal processing
CN104184934A (en) * 2013-05-23 2014-12-03 北京千橡网景科技发展有限公司 Method and apparatus for providing auxiliary reference for shooting
CN106202086A (en) * 2015-05-04 2016-12-07 阿里巴巴集团控股有限公司 A kind of picture processing, acquisition methods, Apparatus and system
CN107358593A (en) * 2017-06-16 2017-11-17 广东欧珀移动通信有限公司 Imaging method and device
CN107742311A (en) * 2017-09-29 2018-02-27 北京易达图灵科技有限公司 A kind of method and device of vision positioning
CN109842757A (en) * 2017-11-29 2019-06-04 中兴通讯股份有限公司 Image processing method and device
CN110335277A (en) * 2019-05-07 2019-10-15 腾讯科技(深圳)有限公司 Image processing method, device, computer readable storage medium and computer equipment
CN110544258A (en) * 2019-08-30 2019-12-06 北京海益同展信息科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN111862110A (en) * 2020-06-30 2020-10-30 辽宁向日葵教育科技有限公司 Green curtain image matting method, system, equipment and readable storage medium
CN112330688A (en) * 2020-11-02 2021-02-05 腾讯科技(深圳)有限公司 Image processing method and device based on artificial intelligence and computer equipment
CN112866577A (en) * 2021-01-20 2021-05-28 腾讯科技(深圳)有限公司 Image processing method and device, computer readable medium and electronic equipment
CN113393430A (en) * 2021-06-09 2021-09-14 东方电气集团科学技术研究院有限公司 Thermal imaging image enhancement training method and device for fan blade defect detection
CN113569790A (en) * 2019-07-30 2021-10-29 北京市商汤科技开发有限公司 Image processing method and device, processor, electronic device and storage medium
US20220130025A1 (en) * 2019-01-15 2022-04-28 Kandao Technology Co., Ltd. Picture optimization method device, terminal and corresponding storage medium

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080088718A1 (en) * 2006-10-17 2008-04-17 Cazier Robert P Template Creator For Digital Cameras
US20080285887A1 (en) * 2007-05-16 2008-11-20 Canon Kabushiki Kaisha Image processing method and image processing apparatus
CN101841642A (en) * 2010-04-22 2010-09-22 南京航空航天大学 Edge detection method based on fractional-order signal processing
CN104184934A (en) * 2013-05-23 2014-12-03 北京千橡网景科技发展有限公司 Method and apparatus for providing auxiliary reference for shooting
CN106202086A (en) * 2015-05-04 2016-12-07 阿里巴巴集团控股有限公司 A kind of picture processing, acquisition methods, Apparatus and system
CN107358593A (en) * 2017-06-16 2017-11-17 广东欧珀移动通信有限公司 Imaging method and device
CN107742311A (en) * 2017-09-29 2018-02-27 北京易达图灵科技有限公司 A kind of method and device of vision positioning
CN109842757A (en) * 2017-11-29 2019-06-04 中兴通讯股份有限公司 Image processing method and device
US20220130025A1 (en) * 2019-01-15 2022-04-28 Kandao Technology Co., Ltd. Picture optimization method device, terminal and corresponding storage medium
CN110335277A (en) * 2019-05-07 2019-10-15 腾讯科技(深圳)有限公司 Image processing method, device, computer readable storage medium and computer equipment
WO2020224424A1 (en) * 2019-05-07 2020-11-12 腾讯科技(深圳)有限公司 Image processing method and apparatus, computer readable storage medium, and computer device
CN113569790A (en) * 2019-07-30 2021-10-29 北京市商汤科技开发有限公司 Image processing method and device, processor, electronic device and storage medium
CN110544258A (en) * 2019-08-30 2019-12-06 北京海益同展信息科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN111862110A (en) * 2020-06-30 2020-10-30 辽宁向日葵教育科技有限公司 Green curtain image matting method, system, equipment and readable storage medium
CN112330688A (en) * 2020-11-02 2021-02-05 腾讯科技(深圳)有限公司 Image processing method and device based on artificial intelligence and computer equipment
CN112866577A (en) * 2021-01-20 2021-05-28 腾讯科技(深圳)有限公司 Image processing method and device, computer readable medium and electronic equipment
CN113393430A (en) * 2021-06-09 2021-09-14 东方电气集团科学技术研究院有限公司 Thermal imaging image enhancement training method and device for fan blade defect detection

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117095299A (en) * 2023-10-18 2023-11-21 浙江省测绘科学技术研究院 Grain crop extraction method, system, equipment and medium for crushing cultivation area
CN117095299B (en) * 2023-10-18 2024-01-26 浙江省测绘科学技术研究院 Grain crop extraction method, system, equipment and medium for crushing cultivation area

Also Published As

Publication number Publication date
CN115690130B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
US11156564B2 (en) Dirt detection on screen
CN113888437A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN106447638A (en) Beauty treatment method and device thereof
CN107862663A (en) Image processing method, device, readable storage medium and computer equipment
CN107172354B (en) Video processing method, device, electronic device and storage medium
CN107368806B (en) Image rectification method, image rectification device, computer-readable storage medium and computer equipment
CN113658197B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN113808027B (en) Human body image processing method and device, electronic equipment and storage medium
CN106846271B (en) Method for removing reticulate pattern in identity card photo
CN115690130B (en) Image processing method and device
CN107977648B (en) A method and system for identifying the clarity of an ID card based on face recognition
Zheng et al. Windowing decomposition convolutional neural network for image enhancement
CN110930358A (en) Solar panel image processing method based on self-adaptive algorithm
Han et al. Locally adaptive contrast enhancement using convolutional neural network
CN112581481A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN113298829A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
JP5969105B1 (en) Imaging apparatus and imaging method
CN113837020B (en) Cosmetic progress detection method, device, equipment and storage medium
CN117475421A (en) License plate exposure compensation method and device and electronic equipment
CN113744141B (en) Image enhancement method and device and automatic driving control method and device
CN116309133A (en) Low-light image enhancement method for local and global self-adaptive contrast correction
CN108629329A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN113436106A (en) Underwater image enhancement method and device and computer storage medium
JP2000268172A (en) Picture normalizing device
CN112164012B (en) Method and system for realizing portrait color relief effect

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhang Wei

Inventor after: Zhang Jing

Inventor after: Cui Tao

Inventor after: He Yang

Inventor before: Zhang Wei

Inventor before: Zhang Jing

Inventor before: Cui Tao

Inventor before: He Yang

GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: An image processing method and device

Granted publication date: 20230627

Pledgee: Hangzhou branch of Bank of Nanjing Co.,Ltd.

Pledgor: Hangzhou Yongliu Technology Co.,Ltd.

Registration number: Y2024980015252