CN110992283A - Image processing method, image processing apparatus, electronic device, and readable storage medium - Google Patents

Info

Publication number
CN110992283A
CN110992283A (application CN201911205189.7A)
Authority
CN
China
Prior art keywords
image
preset
face
definition
processing method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911205189.7A
Other languages
Chinese (zh)
Inventor
黄杰文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911205189.7A priority Critical patent/CN110992283A/en
Publication of CN110992283A publication Critical patent/CN110992283A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T5/00 Image enhancement or restoration
                    • G06T5/77 Retouching; Inpainting; Scratch removal
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/10 Image acquisition modality
                        • G06T2207/10004 Still image; Photographic image
                        • G06T2207/10024 Color image
                    • G06T2207/20 Special algorithmic details
                        • G06T2207/20081 Training; Learning
                        • G06T2207/20084 Artificial neural networks [ANN]
                    • G06T2207/30 Subject of image; Context of image processing
                        • G06T2207/30196 Human being; Person
                            • G06T2207/30201 Face

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The image processing method includes the following steps: when the definition of a face in an image is smaller than a first preset definition, cutting out the face to obtain a face image; acquiring a preset user portrait and a preset standard portrait, both of whose definitions are greater than a second preset definition; when the similarity between the face and the face in the preset user portrait is greater than a preset similarity, taking the preset user portrait as a reference image; when the similarity is smaller than the preset similarity, taking the preset standard portrait as a reference image; and processing the face image according to the reference image to obtain a repaired image. The image processing method, the image processing apparatus, the electronic device, and the computer-readable storage medium process a blurred captured face image using a preset user portrait and a preset standard portrait whose definitions are higher than the second preset definition, so that a relatively clear face image is obtained.

Description

Image processing method, image processing apparatus, electronic device, and readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
When an image is shot, the captured face image may be blurred due to factors such as camera motion and subject motion. How to solve the problem of face image blurring has become a technical problem in the field.
Disclosure of Invention
Embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
The image processing method of the embodiment of the application includes the following steps: when the definition of a face in an image to be processed is smaller than a first preset definition, cutting out the face to obtain a face image; acquiring a preset user portrait and a preset standard portrait, wherein the definitions of the preset user portrait and the preset standard portrait are both greater than a second preset definition; when the similarity between the face and the face in the preset user portrait is greater than a preset similarity, taking the preset user portrait as a reference image, and processing the face image according to the reference image to obtain a repaired image; and when the similarity between the face and the face in the preset user portrait is smaller than the preset similarity, taking the preset standard portrait as a reference image, and processing the face image according to the reference image to obtain a repaired image.
The image processing apparatus of the embodiment of the application comprises a first processing module, a first obtaining module, a second processing module, and a third processing module. The first processing module is configured to cut out the face to obtain a face image when the definition of the face in the image to be processed is smaller than a first preset definition. The first obtaining module is configured to acquire a preset user portrait and a preset standard portrait, the definitions of which are both greater than a second preset definition. The second processing module is configured to, when the similarity between the face and the face in the preset user portrait is greater than a preset similarity, take the preset user portrait as a reference image and process the face image according to the reference image to obtain a repaired image. The third processing module is configured to, when the similarity between the face and the face in the preset user portrait is smaller than the preset similarity, take the preset standard portrait as a reference image and process the face image according to the reference image to obtain a repaired image.
The electronic device of the embodiment of the application comprises a housing, an imaging device, and a processor, wherein the imaging device and the processor are mounted on the housing, and the imaging device is configured to capture images. The processor is configured to: when the definition of a face in an image to be processed is smaller than a first preset definition, cut out the face to obtain a face image; acquire a preset user portrait and a preset standard portrait, wherein the definitions of the preset user portrait and the preset standard portrait are both greater than a second preset definition; when the similarity between the face and the face in the preset user portrait is greater than a preset similarity, take the preset user portrait as a reference image, and process the face image according to the reference image to obtain a repaired image; and when the similarity between the face and the face in the preset user portrait is smaller than the preset similarity, take the preset standard portrait as a reference image, and process the face image according to the reference image to obtain a repaired image.
The computer-readable storage medium of an embodiment of the present application stores a computer program that, when executed by a processor, implements: when the definition of a face in an image to be processed is smaller than a first preset definition, cutting out the face to obtain a face image; acquiring a preset user portrait and a preset standard portrait, wherein the definitions of the preset user portrait and the preset standard portrait are both greater than a second preset definition; when the similarity between the face and the face in the preset user portrait is greater than a preset similarity, taking the preset user portrait as a reference image, and processing the face image according to the reference image to obtain a repaired image; and when the similarity between the face and the face in the preset user portrait is smaller than the preset similarity, taking the preset standard portrait as a reference image, and processing the face image according to the reference image to obtain a repaired image.
The image processing method, the image processing apparatus, the electronic device, and the computer-readable storage medium of the embodiment of the application process a blurred captured face image using a preset user portrait and a preset standard portrait, both of whose definitions are higher than the second preset definition, so that a relatively clear face image is obtained.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application.
FIG. 2 is a schematic diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 3 is a schematic view of an electronic device of some embodiments of the present application.
FIG. 4 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 5 is a schematic diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 6 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 7 is a schematic diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 8 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 9 is a schematic diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 10 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 11 is a schematic diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 12 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 13 is a schematic diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 14 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 15 is a schematic diagram of a second processing module of an image processing apparatus according to some embodiments of the present application.
FIG. 16 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 17 is a schematic diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 18 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application.
FIG. 19 is a schematic diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 20 is a schematic view of a scene of an image processing method according to some embodiments of the present application.
FIG. 21 is a schematic diagram of a connection between a computer-readable storage medium and an electronic device according to some embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
Referring to fig. 1, an image processing method according to an embodiment of the present application includes:
012: when the definition of the face in the image to be processed is smaller than a first preset definition, cutting the face to obtain a face image;
014: acquiring a preset user portrait and a preset standard portrait, wherein the definition of the preset user portrait and the definition of the preset standard portrait are both greater than a second preset definition;
016: when the similarity between the human face and the human face in the preset user portrait is larger than the preset similarity, taking the preset user portrait as a reference image, and processing the human face image according to the reference image to obtain a repaired image;
018: and when the similarity between the human face and the human face in the preset user portrait is smaller than the preset similarity, taking the preset standard portrait as a reference image, and processing the human face image according to the reference image to obtain a repaired image.
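The selection logic of steps 012 through 018 can be sketched as follows. This is a minimal illustration only: the function names (`sharpness`, `face_similarity`, `crop_face`, `restore`) are hypothetical placeholders injected as parameters, not the patent's actual implementation, which the document describes only at this decision level.

```python
def process_image(to_process, user_portrait, standard_portrait,
                  first_definition, preset_similarity, *,
                  sharpness, face_similarity, crop_face, restore):
    """Sketch of steps 012-018: pick a reference image and repair a blurred face."""
    if sharpness(to_process) >= first_definition:
        return to_process                        # face already sharp enough
    face_image = crop_face(to_process)           # step 012: crop out the face
    # Step 014 is implicit here: user_portrait and standard_portrait are
    # assumed to be pre-screened so their definition exceeds the second
    # preset definition.
    if face_similarity(face_image, user_portrait) > preset_similarity:
        reference = user_portrait                # step 016: treated as the same person
    else:
        reference = standard_portrait            # step 018: treated as a different person
    return restore(face_image, reference)
```

The dependency-injection style keeps the sketch runnable with stub functions while leaving the actual sharpness metric, similarity measure, and restoration network unspecified, as the patent itself does at this point.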
Referring to fig. 2, an image processing apparatus 100 according to an embodiment of the present disclosure includes a first processing module 12, a first obtaining module 14, a second processing module 16, and a third processing module 18. The image processing method according to the embodiment of the present application can be implemented by the image processing apparatus 100 according to the embodiment of the present application, wherein step 012 can be implemented by the first processing module 12, step 014 can be implemented by the first acquiring module 14, step 016 can be implemented by the second processing module 16, and step 018 can be implemented by the third processing module 18. That is, the first processing module 12 may be configured to cut the face to obtain the face image when the definition of the face in the image to be processed is smaller than a first preset definition; the first obtaining module 14 may be configured to obtain a preset user portrait and a preset standard portrait, where the definitions of the preset user portrait and the preset standard portrait are both greater than a second preset definition; the second processing module 16 may be configured to, when the similarity between the face and a face in the preset user portrait is greater than the preset similarity, use the preset user portrait as a reference image, and process the face image according to the reference image to obtain a restored image; the third processing module 18 may be configured to, when the similarity between the face and the face in the preset user portrait is smaller than the preset similarity, use the preset standard portrait as a reference image, and process the face image according to the reference image to obtain a restored image.
Referring to fig. 3, an electronic device 1000 according to an embodiment of the present disclosure includes a housing 200, an imaging device 300, and a processor 400, where the imaging device 300 and the processor 400 are both mounted on the housing 200, and the imaging device 300 is used to capture an image to be processed. The image processing method according to this embodiment may be implemented by the electronic device 1000 according to this embodiment, where step 012, step 014, step 016 and step 018 may all be implemented by the processor 400, that is, the processor 400 may be configured to: when the definition of the face in the image to be processed is smaller than a first preset definition, cutting the face to obtain a face image; acquiring a preset user portrait and a preset standard portrait, wherein the definition of the preset user portrait and the definition of the preset standard portrait are both greater than a second preset definition; when the similarity between the human face and the human face in the preset user portrait is larger than the preset similarity, taking the preset user portrait as a reference image, and processing the human face image according to the reference image to obtain a repaired image; and when the similarity between the human face and the human face in the preset user portrait is smaller than the preset similarity, taking the preset standard portrait as a reference image, and processing the human face image according to the reference image to obtain a repaired image.
The image processing method, the image processing apparatus 100, and the electronic device 1000 according to the embodiment of the application process a blurred captured face image using a preset user portrait and a preset standard portrait, both of whose definitions are higher than the second preset definition, so as to obtain a clearer face image.
In the related art, the problem of face image blurring is usually addressed by adding hardware, for example an OIS (optical image stabilization) module, which increases the volume of the imaging apparatus and is unfavorable for its miniaturization. The image processing method, the image processing apparatus 100, and the electronic device 1000 according to the embodiment of the present application process the face image without adding any new device, which is beneficial to the miniaturization of the imaging apparatus.
The imaging device 300 according to the embodiment of the present disclosure may refer to a camera, for example, a front camera, a rear camera, or a front camera and a rear camera, and the number of the cameras may be one or more, and is not limited specifically herein. The electronic device 1000 may include a cell phone, a computer, a camera, etc.
Referring to fig. 4, in some embodiments, an image processing method is used in the imaging apparatus 300, and the image processing method further includes:
022: controlling the imaging device 300 to shoot the portrait of the user when the exposure time meets a preset exposure condition, the light sensitivity meets a preset light sensitivity condition, and the face frame area meets a preset face frame area condition;
024: and when the definition of the user portrait is greater than the second preset definition, saving the user portrait as a preset user portrait.
Referring to fig. 5, in some embodiments, the image processing apparatus 100 is used in an imaging apparatus 300, and the image processing apparatus 100 further includes a first control module 22 and a saving module 24. Step 022 may be implemented by the first control module 22, and step 024 may be implemented by the saving module 24. That is, the first control module 22 is configured to control the imaging device 300 to capture the user portrait when the exposure duration satisfies the preset exposure condition, the sensitivity satisfies the preset sensitivity condition, and the face frame area satisfies the preset face frame area condition. The saving module 24 may be configured to save the user portrait as the preset user portrait when the definition of the user portrait is greater than the second preset definition.
Referring again to fig. 3, in some embodiments, both step 022 and step 024 may be implemented by processor 400, that is, processor 400 may also be configured to: controlling the imaging device 300 to shoot the portrait of the user when the exposure time meets a preset exposure condition, the light sensitivity meets a preset light sensitivity condition, and the face frame area meets a preset face frame area condition; and when the definition of the user portrait is greater than the second preset definition, saving the user portrait as a preset user portrait.
The preset user portrait may be a user portrait, photographed under preset conditions, whose definition is greater than the second preset definition. Specifically, the preset conditions may include a preset exposure condition, a preset sensitivity condition, and a preset face frame area condition. Here, sensitivity refers to the light sensitivity of the light-sensing element (e.g., an image sensor) in the imaging apparatus 300, expressed as ISO. Sensitivity may be inversely related to ambient brightness: the lower the ambient brightness, the greater the sensitivity; the higher the ambient brightness, the lower the sensitivity. In one example, the sensitivity may be acquired as follows: ambient brightness is detected by an ambient brightness detecting element (e.g., a light sensor); each ambient brightness may correspond to a sensitivity; a comparison table of ambient brightness and sensitivity may be stored in a memory unit, and the corresponding sensitivity is read out from the memory unit according to the detected ambient brightness. In another example, the sensitivity may be acquired by direct recording: when the imaging apparatus 300 captures the image to be processed with a certain sensitivity, that sensitivity is recorded.
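The first acquisition route above, reading a sensitivity from a stored brightness/sensitivity comparison table, might look like the sketch below. The table's lux thresholds and ISO values are made-up illustrations; the patent does not specify concrete values.

```python
# Illustrative ambient-brightness (lux) -> ISO comparison table, ordered from
# brightest to darkest. The threshold/ISO pairs are assumptions for the sketch.
BRIGHTNESS_TO_ISO = [
    (10_000, 100),   # bright outdoor scene -> low sensitivity
    (1_000, 200),
    (100, 400),
    (10, 800),
    (0, 1600),       # very dark scene -> high sensitivity
]

def sensitivity_for(ambient_lux):
    """Return the ISO for the first brightness band the reading falls into,
    mirroring a lookup in the stored comparison table."""
    for threshold, iso in BRIGHTNESS_TO_ISO:
        if ambient_lux >= threshold:
            return iso
    return BRIGHTNESS_TO_ISO[-1][1]  # darker than every band: maximum ISO
```

Note the inverse relation the text describes: a dimmer reading walks further down the table and returns a higher ISO.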
The preset exposure condition and the preset sensitivity condition together characterize the ambient brightness: when the exposure time meets the preset exposure condition and the sensitivity meets the preset sensitivity condition, the ambient brightness is considered suitable, and a user portrait shot under suitable ambient brightness has higher quality. The preset sensitivity condition may be a preset sensitivity used to determine whether the shooting environment is a low-brightness environment or a non-low-brightness environment (including high-brightness environments and environments of moderate brightness). For example, when the sensitivity is greater than the preset sensitivity, the brightness of the shooting environment can be judged to be low, and the current shooting environment belongs to the low-brightness category; when the sensitivity is less than the preset sensitivity, the brightness is judged not to be low, and the environment belongs to the non-low-brightness category; when the sensitivity is equal to the preset sensitivity, the shooting environment is at the critical point between the two and may be classified as either.
The face frame area may refer to the area of the minimum circumscribed rectangle of the face. The preset face frame area condition characterizes the shooting distance between the user and the imaging device: when the face frame area meets the preset face frame area condition, the shooting distance is considered moderate, and a user portrait shot at a moderate distance has higher quality. For example, when a front camera is used, since front cameras generally cannot autofocus, a long shooting distance easily leaves the face out of focus; at a moderate shooting distance, the face falls within the focusing range of the front camera, guaranteeing the quality of the captured user portrait. The preset conditions therefore improve the image quality of the preset user portrait, and using a high-quality preset user portrait as the reference image in turn yields a higher-quality repaired image. The preset exposure condition, the preset sensitivity condition, and the preset face frame area condition may be obtained empirically through experiments, or set by the user according to personal habits and needs; they are not specifically limited here. In one embodiment, the preset exposure condition may be that the exposure time is less than 33 milliseconds, the preset sensitivity condition may be that the sensitivity is less than 400, and the preset face frame area condition may be that the face frame area is greater than one sixteenth of the area of the whole image to be processed.
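The example embodiment's three thresholds (exposure under 33 ms, ISO under 400, face frame covering more than one sixteenth of the image) can be checked as below. The function name and the corner-coordinate box representation are assumptions for this sketch; only the threshold values come from the text.

```python
def meets_capture_conditions(exposure_ms, iso, face_box, image_size):
    """Check the example embodiment's preset conditions for shooting a preset
    user portrait: exposure time < 33 ms, sensitivity < 400, and the face
    frame (minimum circumscribed rectangle, given here as (x0, y0, x1, y1))
    covering more than one sixteenth of the whole image."""
    img_w, img_h = image_size
    x0, y0, x1, y1 = face_box
    face_area = (x1 - x0) * (y1 - y0)
    return (exposure_ms < 33
            and iso < 400
            and face_area > (img_w * img_h) / 16)
```

All three checks must pass; failing any one corresponds to the "do not shoot the user portrait" branch described later in the text.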
In some embodiments, whether the definition of the user portrait is greater than the second preset definition is determined; when it is, the user portrait may be stored directly in the local storage unit of the imaging device 300 or the electronic device 1000 as the preset user portrait, so that it can be read conveniently; alternatively, the preset user portrait may be uploaded to the cloud for storage, reducing its occupation of local storage space.
When the exposure time does not meet the preset exposure condition, the sensitivity does not meet the preset sensitivity condition, or the face frame area does not meet the preset face frame area condition, the user portrait is not shot; and when the definition of the user portrait is less than the second preset definition, the user portrait is not stored. In this way, user portraits of poor quality are prevented from becoming the preset user portrait.
Referring to fig. 6, in some embodiments, the image processing method further includes:
026: when the imaging apparatus 300 is turned on for the first time, a prompt message is issued to prompt the shooting of a preset user portrait.
Referring to fig. 7, in some embodiments, the image processing apparatus 100 further includes a prompt module 26. Step 026 may be implemented by prompting module 26, that is, prompting module 26 may be configured to send a prompt message to prompt to capture a preset user portrait when imaging device 300 is turned on for the first time.
Referring again to fig. 3, in some embodiments, step 026 can be implemented by processor 400, that is, processor 400 can be further configured to send a prompt message to prompt the user to take a predetermined portrait when imaging device 300 is turned on for the first time.
Therefore, when the imaging device 300 is turned on for the first time, prompt information is issued to prompt the user to shoot the preset user portrait, so that the user can choose a scene with suitable ambient brightness and a moderate shooting distance and shoot a preset user portrait of higher definition. When there are multiple imaging apparatuses 300, the one with the highest resolution may be used to shoot the preset user portrait. For example, if the imaging device 300 includes a front camera and a rear camera, since the resolution of the rear camera is generally higher than that of the front camera, the preset user portrait can be shot with the rear camera. It should be noted that a preset user portrait obtained with one imaging device 300 is applicable to any imaging device 300; for example, a preset user portrait shot with the rear camera may be used to process the face image of an image to be processed shot with either the rear camera or the front camera.
Referring to fig. 8, in some embodiments, the image processing method further includes:
028: and controlling the imaging device 300 to shoot a preset user portrait when the input signal is a preset input signal.
Referring to fig. 9, in some embodiments, the image processing apparatus 100 further includes a second control module 28. Step 028 may be implemented by the second control module 28, that is, the second control module 28 may be configured to control the imaging apparatus 300 to capture the preset user portrait when the input signal is the preset input signal.
Referring again to fig. 3, in some embodiments, step 028 can be implemented by the processor 400, that is, the processor 400 can be further configured to control the imaging apparatus 300 to capture the preset user portrait when the input signal is the preset input signal.
Therefore, the preset user portrait can be updated at any time according to the user's needs. Specifically, when the user wants to update the preset user portrait, the user may find a scene with suitable ambient brightness and a moderate shooting distance and then generate an input signal through a touch screen, a key, voice input, or the like; when the input signal is the preset input signal, a new preset user portrait is obtained by controlling the imaging device 300 to shoot, and the old preset user portrait is replaced with the new one to complete the update.
The preset standard portrait may be any high-definition portrait whose definition is greater than the second preset definition, for example a high-definition poster. It may be stored in advance in the local storage unit of the imaging apparatus 300 or the electronic device 1000, or downloaded from the cloud. There may be one or more preset standard portraits. When the similarity between the face and the face in the preset user portrait is smaller than the preset similarity and there is a single preset standard portrait, that portrait is the reference image. When there are multiple preset standard portraits, one can be selected as the reference image according to information about the face of the image to be processed and/or the shooting area, for example according to at least one of gender, age, skin color, and shooting area. In one embodiment with multiple preset standard portraits, when the similarity is smaller than the preset similarity and analysis of the face of the image to be processed determines that the subject is male, a preset standard portrait of a male can be selected as the reference image. In another such embodiment, analysis determines that the subject is an 18-year-old female with yellow skin and that the shooting area is Guangzhou, China, so a preset standard portrait matching these attributes can be selected as the reference image.
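The attribute-based choice among multiple preset standard portraits might be sketched as a simple scoring match. The attribute keys and the scoring function are hypothetical illustrations; the patent names the attributes (gender, age, skin color, shooting area) but not how they are weighed.

```python
def pick_standard_portrait(face_attrs, portraits):
    """Pick the preset standard portrait whose attributes best match those
    estimated from the face. `face_attrs` and each portrait are dicts with
    assumed keys 'gender', 'age', 'skin', 'region'."""
    def score(portrait):
        s = 0
        for key in ("gender", "skin", "region"):   # exact-match attributes
            if portrait.get(key) == face_attrs.get(key):
                s += 1
        if "age" in portrait and "age" in face_attrs:
            s -= abs(portrait["age"] - face_attrs["age"]) / 100  # small age penalty
        return s
    return max(portraits, key=score)
```

A production system would more likely use a learned embedding over these attributes, but the dictionary form keeps the selection criterion visible.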
When the similarity between the face of the image to be processed and the face in the preset user portrait is greater than the preset similarity, the face of the image to be processed and the face in the preset user portrait can be regarded as the same person, and at the moment, the face image is processed by using the preset user portrait as a reference image, so that the definition and the authenticity of the processed restored image can be improved to a greater extent. When the similarity between the face of the image to be processed and the face in the preset user portrait is smaller than the preset similarity, the face of the image to be processed and the face in the preset user portrait can be considered as different persons, the face image is processed by using the preset standard portrait as a reference image, the shooting requirements of different persons can be met to a greater extent, and the definition of the processed restored image can be improved.
In some embodiments, the similarity between the face of the image to be processed and the face in the preset user portrait may be obtained as follows: first obtain the face feature points in the image to be processed and the face feature points in the preset user portrait respectively, and then compare the face feature points of the two images to obtain their similarity.
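As a rough illustration of the feature-point comparison described above, the sketch below scores two landmark sets by their mean point-to-point distance. The function name `landmark_similarity`, the normalized coordinates, and the 1/(1 + distance) mapping are all illustrative assumptions, not the patent's actual formula.

```python
import math

def landmark_similarity(points_a, points_b):
    """Similarity of two face landmark sets, as a stand-in for the
    feature-point comparison described above.

    Each argument is a list of (x, y) feature points in a common
    normalized coordinate frame (e.g. relative to the face bounding
    box).  Returns a value in (0, 1]; identical landmarks give 1.0.
    """
    if len(points_a) != len(points_b) or not points_a:
        raise ValueError("landmark sets must be non-empty and equal length")
    mean_dist = sum(
        math.dist(a, b) for a, b in zip(points_a, points_b)
    ) / len(points_a)
    return 1.0 / (1.0 + mean_dist)

# Same person: landmarks coincide, so similarity is 1.0.
same = landmark_similarity([(0.3, 0.4), (0.7, 0.4)], [(0.3, 0.4), (0.7, 0.4)])
# Different person: landmarks shifted, so similarity drops below 1.0.
diff = landmark_similarity([(0.3, 0.4), (0.7, 0.4)], [(0.2, 0.5), (0.8, 0.6)])
```

The resulting score would then be compared against the preset similarity to decide between the preset user portrait and the preset standard portrait.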
In some embodiments, when only the preset user portrait exists and no preset standard portrait exists, the preset user portrait may serve as both the preset user portrait and the preset standard portrait. In this case, the similarity between the face of the image to be processed and the face in the preset user portrait need not be compared with the preset similarity; that is, the preset user portrait is used as the reference image regardless of whether the similarity is greater than or smaller than the preset similarity. Likewise, when only the preset standard portrait exists and no preset user portrait exists, the preset standard portrait may serve as both the preset user portrait and the preset standard portrait, the similarity comparison may again be skipped, and the preset standard portrait is used as the reference image regardless of the similarity.
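The selection rules above can be condensed into one decision function. The sketch below is an assumption-laden paraphrase: the function name `choose_reference`, the dict-based portrait records, and the attribute-matching score are illustrative, not from the patent.

```python
def choose_reference(similarity, preset_similarity,
                     user_portrait, standard_portraits, face_attrs=None):
    """Pick the reference image following the rules described above.

    `standard_portraits` is a list of dicts such as
    {"gender": "male", "image": ...}; `face_attrs` holds attributes
    detected on the image to be processed (gender, age, skin color,
    shooting area).  All names here are illustrative.
    """
    # Only one kind of preset portrait present: it serves as both,
    # and the similarity comparison is skipped.
    if user_portrait is not None and not standard_portraits:
        return user_portrait
    if user_portrait is None and standard_portraits:
        pass  # fall through to standard-portrait selection
    elif similarity > preset_similarity:
        return user_portrait  # same person: use the preset user portrait
    if len(standard_portraits) == 1:
        return standard_portraits[0]["image"]
    # Multiple standard portraits: pick the one matching the most
    # detected attributes.
    face_attrs = face_attrs or {}
    def score(portrait):
        return sum(1 for k, v in face_attrs.items() if portrait.get(k) == v)
    return max(standard_portraits, key=score)["image"]

user = "user.png"
stds = [{"gender": "male", "image": "m.png"},
        {"gender": "female", "image": "f.png"}]
```

With a similarity above the threshold the user portrait wins; below it, the best-matching standard portrait is chosen.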
Referring to fig. 10, in some embodiments, the image processing method further includes:
032: acquiring the gradient of the pixel value of a face in an image to be processed;
034: and judging whether the definition of the face in the image to be processed is smaller than a first preset definition or not according to the gradient.
Referring to fig. 11, in some embodiments, the image processing apparatus 100 further includes a second obtaining module 32 and a determining module 34. Step 032 may be implemented by the second obtaining module 32, and step 034 may be implemented by the determining module 34. That is, the second obtaining module 32 may be configured to obtain a gradient of pixel values of a face in the image to be processed. The determining module 34 may be configured to determine whether the sharpness of the face in the image to be processed is smaller than a first preset sharpness according to the gradient.
Referring again to fig. 3, in some embodiments, step 032 and step 034 may both be implemented by the processor 400, that is, the processor 400 may further be configured to: acquiring the gradient of the pixel value of a face in an image to be processed; and judging whether the definition of the face in the image to be processed is smaller than a first preset definition or not according to the gradient.
Specifically, the gradient of the pixel values may be obtained by calculating the deviation between the pixel value of the current pixel and the pixel values of the surrounding pixels. For example, if the pixel value of the current pixel is 110 and the pixel value of the pixel to its right is 120, the gradient of the pixel value of the current pixel may be |110 − 120| = 10; as another example, with the same pixel values, the gradient may instead be taken as the squared deviation |110 − 120|² = 100. Then, the average value of the gradients of all pixels of the face, or the sum of the gradients of all pixels, may be taken as the gradient of the whole face, and it is judged whether the gradient of the whole face is smaller than a first preset gradient. If the gradient of the whole face is smaller than the first preset gradient, the definition of the face is considered smaller than the first preset definition; if the gradient of the whole face is greater than the first preset gradient, the definition of the face is considered greater than the first preset definition. In this way, whether the definition of the face is smaller than the first preset definition can be judged quickly and accurately through the gradient of the pixel values of the face.
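A minimal sketch of this gradient test follows, using the absolute difference to the right-hand neighbour and the mean over the face as the whole-face gradient. The function name, the grey-level 2-D list layout, and the threshold value are illustrative assumptions.

```python
def face_is_blurry(pixels, first_preset_gradient):
    """Judge sharpness from pixel-value gradients, as described above.

    `pixels` is a 2-D list of grey-level values for the face region.
    The gradient of each pixel is the absolute difference to its
    right-hand neighbour; the mean over all such gradients stands in
    for the gradient of the whole face.
    """
    grads = []
    for row in pixels:
        for x in range(len(row) - 1):
            grads.append(abs(row[x] - row[x + 1]))
    face_gradient = sum(grads) / len(grads)
    # A whole-face gradient below the first preset gradient means the
    # definition is taken to be below the first preset definition.
    return face_gradient < first_preset_gradient

sharp = [[0, 255, 0, 255], [255, 0, 255, 0]]            # strong edges
blurry = [[100, 102, 104, 106], [101, 103, 105, 107]]   # gentle ramp
```

A high-contrast patch yields a large mean gradient and passes the sharpness test; a gentle ramp does not.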
Referring to fig. 12, in some embodiments, the image processing method further includes:
036: performing binary classification on the image to be processed by using a deep learning network, so as to judge whether the definition of the face in the image to be processed is smaller than a first preset definition.
Referring to fig. 13, in some embodiments, the image processing apparatus 100 further includes a fourth processing module 36. Step 036 may be implemented by the fourth processing module 36; that is, the fourth processing module 36 may be configured to perform binary classification on the image to be processed by using a deep learning network to judge whether the definition of the face in the image to be processed is smaller than the first preset definition.
Referring again to fig. 3, in some embodiments, step 036 may be implemented by the processor 400, that is, the processor 400 may be further configured to perform binary classification on the image to be processed by using a deep learning network to judge whether the definition of the face in the image to be processed is smaller than the first preset definition.
Specifically, the deep learning network may be generated by training with training images, where the training images may include training images calibrated as having a definition smaller than the first preset definition and training images calibrated as having a definition not smaller than the first preset definition. Through training on these images, the deep learning network can learn what features an image with definition smaller than the first preset definition has, and what features an image with definition not smaller than the first preset definition has. Therefore, after the image to be processed is input into the deep learning network, the network can perform binary classification on it according to its feature information, that is, obtain the judgment result: the image to be processed either has a definition smaller than the first preset definition or a definition not smaller than the first preset definition. In this way, whether the definition of the face is smaller than the first preset definition can be judged accurately and quickly through the deep learning network.
In some embodiments, the ratio of the number of pixels of high-frequency information in the face to the total number of pixels of the whole face may be obtained, and the definition of the face may be characterized by this ratio: the higher the ratio, the higher the definition of the image, and when the ratio is smaller than a predetermined ratio, the definition of the face is judged to be smaller than the first preset definition. In one example, the face is first processed by low-pass filtering to obtain a filtered image. Then, high-frequency information is obtained from the face and the filtered image, specifically by subtracting the filtered image from the face. Finally, the proportion of the number of pixels of the high-frequency information among all the pixels of the face is counted. For example, if the number of pixels of the high-frequency information in the face is 20% of the total number of pixels of the face, the definition of the face may be represented as 20%.
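The subtract-the-filtered-image procedure above can be sketched as follows. A 3x3 box blur stands in for the low-pass filter, and a pixel counts as "high-frequency" when its difference from the blurred copy exceeds a threshold; the function name, the blur choice, and the threshold are illustrative assumptions.

```python
def highfreq_ratio(pixels, hf_threshold=10):
    """Definition as the share of high-frequency pixels, as above.

    `pixels` is a 2-D list of grey levels.  Each pixel is compared
    with its 3x3 box-blurred value (edges use a smaller window); the
    difference is the "high-frequency information" for that pixel.
    """
    h, w = len(pixels), len(pixels[0])

    def blurred(y, x):
        # Mean over the (clipped) 3x3 neighbourhood: a crude low-pass filter.
        vals = [pixels[j][i]
                for j in range(max(0, y - 1), min(h, y + 2))
                for i in range(max(0, x - 1), min(w, x + 2))]
        return sum(vals) / len(vals)

    hf_count = sum(
        1 for y in range(h) for x in range(w)
        if abs(pixels[y][x] - blurred(y, x)) > hf_threshold
    )
    return hf_count / (h * w)
```

A flat patch has no high-frequency pixels (ratio 0), while a checkerboard is all high-frequency (ratio 1); the ratio would then be compared against the predetermined ratio.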
The method for determining whether the definition of the user portrait is greater than the second preset definition may likewise adopt at least one of the gradient, the deep learning network, and the pixel-number ratio of high-frequency information described in the above embodiments, and details are not repeated here.
Referring to fig. 14, in some embodiments, the processing of the face image according to the reference image in steps 016 and 018 to obtain a repaired image includes:
0161: acquiring a first feature map of a face image after up-sampling;
0162: acquiring a second feature map of the reference image after up-sampling and down-sampling;
0163: acquiring a third feature map of the reference image without up-sampling and down-sampling;
0164: acquiring, as a reference feature, a feature in the second feature map whose similarity with the first feature map exceeds a first preset similarity;
0165: acquiring a feature in the third feature map whose similarity with the reference feature exceeds a second preset similarity, to obtain an exchange feature map;
0166: merging the exchange feature map and the first feature map to obtain a fourth feature map;
0167: amplifying the fourth feature map by a preset multiple to obtain a fifth feature map;
0168: taking the fifth feature map as the face image and performing the above steps in a loop until the fifth feature map reaches the target magnification factor, and taking the fifth feature map at the target magnification factor as the repaired image.
Referring to fig. 15, in some embodiments, the second processing module 16 and the third processing module 18 may each include a first obtaining unit 161, a second obtaining unit 162, a third obtaining unit 163, a fourth obtaining unit 164, a fifth obtaining unit 165, a first processing unit 166, a second processing unit 167, and a third processing unit 168. Step 0161 may be implemented by first acquiring unit 161, step 0162 may be implemented by second acquiring unit 162, step 0163 may be implemented by third acquiring unit 163, step 0164 may be implemented by fourth acquiring unit 164, step 0165 may be implemented by fifth acquiring unit 165, step 0166 may be implemented by first processing unit 166, step 0167 may be implemented by second processing unit 167, and step 0168 may be implemented by third processing unit 168. That is, the first obtaining unit 161 may be configured to obtain the first feature map of the face image after upsampling. The second obtaining unit 162 may be configured to obtain a second feature map of the reference image after performing up-sampling and down-sampling. The third obtaining unit 163 may be configured to obtain a third feature map of the reference image without performing upsampling and downsampling. The fourth obtaining unit 164 is configured to obtain, as a reference feature, a feature in the second feature map, which has a similarity exceeding a first preset similarity with respect to the first feature map. The fifth obtaining unit 165 may be configured to obtain a feature of the third feature map, where the similarity with the reference feature exceeds a second preset similarity, to obtain an exchange feature map. The first processing unit 166 may be configured to combine the exchange feature map and the first feature map to obtain a fourth feature map. The second processing unit 167 may be configured to amplify the fourth feature map by a predetermined factor to obtain a fifth feature map. 
The third processing unit 168 may be configured to take the fifth feature map as the face image and perform the above steps in a loop until the fifth feature map reaches the target magnification factor, with the fifth feature map at the target magnification factor serving as the repaired image.
Referring again to fig. 3, in some embodiments, step 0161, step 0162, step 0163, step 0164, step 0165, step 0166, step 0167, and step 0168 may be implemented by processor 400. That is, processor 400 may be configured to: acquire a first feature map of the face image after up-sampling; acquire a second feature map of the reference image after up-sampling and down-sampling; acquire a third feature map of the reference image without up-sampling and down-sampling; acquire, as a reference feature, a feature in the second feature map whose similarity with the first feature map exceeds a first preset similarity; acquire a feature in the third feature map whose similarity with the reference feature exceeds a second preset similarity to obtain an exchange feature map; merge the exchange feature map and the first feature map to obtain a fourth feature map; amplify the fourth feature map by a preset multiple to obtain a fifth feature map; and take the fifth feature map as the face image and perform the above steps in a loop until the fifth feature map reaches the target magnification factor, the fifth feature map at the target magnification factor being taken as the repaired image.
Specifically, the up-sampling may be understood as performing an enlargement process on the face image or the reference image, and the down-sampling may be understood as performing a reduction process on the reference image.
More specifically, step 0161 may include: up-sampling the face image; and inputting the face image subjected to the up-sampling into a convolutional neural network for feature extraction to obtain a first feature map. Step 0162 may include: down-sampling the reference image; up-sampling the down-sampled reference image; and inputting the up-sampled reference image into a convolutional neural network for feature extraction to obtain a second feature map. Step 0163 may include: and inputting the reference image into a convolutional neural network for feature extraction to obtain a third feature map.
By up-sampling (enlarging) the face image, the number of pixels of the face image becomes close or equal to that of the reference image, which facilitates the subsequent similarity comparison used to acquire the reference features. The up-sampled face image is input into a convolutional neural network for feature extraction to obtain the first feature map, which can be understood as a feature map extracted from the enlarged face image and which contains various features of the face image, such as the five sense organs, skin, hair, and contours. Because the enlarged face image has low definition while the reference image has high definition, the reference image needs to be down-sampled (reduced) first and then up-sampled to blur it, thereby improving the similarity between the second feature map and the first feature map. The second feature map may likewise include features such as the facial features, skin, hair, and contours. The reference image is also input directly into the convolutional neural network for feature extraction to obtain the third feature map. It should be noted that the convolutional neural network is a trained deep-learning network and can perform feature extraction on an input image with high accuracy.
More specifically, the features in the second feature map are compared with the features in the first feature map and their similarity is judged; the similarity is then compared with the first preset similarity, and if it is greater than or equal to the first preset similarity, the feature in the second feature map is similar to the corresponding feature in the first feature map and can therefore be used as a reference feature. The third feature map is then compared with the reference features, the similarity between them is judged and compared with the second preset similarity, and if it is greater than or equal to the second preset similarity, the corresponding exchange feature map is obtained. The exchange feature map and the first feature map are merged to obtain the fourth feature map, and the fourth feature map is amplified by the preset multiple to obtain the fifth feature map. The magnification of the fifth feature map is then judged, and if it equals the target magnification, the fifth feature map is taken as the repaired image.
By processing the face image through steps 0161 to 0168, the texture information of the face in the reference image can be migrated into the face image, so that the processed restored image has clearer facial texture information than the face image before processing.
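The control flow of steps 0161 to 0168 can be sketched structurally. In the sketch below the "feature maps" are the raw pixel grids themselves rather than CNN features, the feature swap is a per-pixel blend wherever the blurred reference resembles the upsampled face, and the reference is assumed to start at twice the face's resolution; only the loop structure mirrors the patent, not its mathematics.

```python
def upsample2x(img):
    """Nearest-neighbour 2x enlargement of a 2-D grey image (list of rows)."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def downsample2x(img):
    """Keep every second pixel: a crude 2x reduction."""
    return [row[::2] for row in img[::2]]

def restore(face, reference, target_scale):
    """Structural sketch of the loop of steps 0161-0168 (illustrative)."""
    scale = 1
    while scale < target_scale:
        first = upsample2x(face)                      # 0161: upsampled face "features"
        second = upsample2x(downsample2x(reference))  # 0162: blurred reference "features"
        third = reference                             # 0163: full-detail reference "features"
        # 0164/0165: where the blurred reference is close to the upsampled
        # face, the matching full-detail reference values are swapped in;
        # 0166/0167: the result is merged back with the first feature map.
        face = [
            [(f + t) / 2 if abs(f - s) < 32 else f
             for f, s, t in zip(frow, srow, trow)]
            for frow, srow, trow in zip(first, second, third)
        ]
        reference = upsample2x(reference)  # keep the reference one level larger
        scale *= 2                         # 0168: repeat until the target scale
    return face
```

Each pass doubles the working resolution, and the loop exits once the target magnification is reached, mirroring step 0168.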
Referring to fig. 16, in some embodiments, the image processing method further includes:
038: calculating the areas of all human faces in each image to be processed;
step 012 includes:
0122: and when the definition of the face in the image to be processed is smaller than a first preset definition and the area of the face is larger than a preset area, cutting the face to obtain a face image.
Referring to fig. 17, in some embodiments, the image processing apparatus 100 further includes a calculation module 38. Step 038 may be implemented by the calculation module 38 and step 0122 may be implemented by the first processing module 12. That is, the calculation module 38 may be used to calculate the area of all faces in each image to be processed. The first processing module 12 may be configured to cut the human face to obtain the human face image when the definition of the human face in the image to be processed is smaller than a first preset definition and the area of the human face is larger than a predetermined area.
Referring again to fig. 3, in some embodiments, both step 038 and step 0122 can be implemented by processor 400. That is, processor 400 may be configured to: calculating the areas of all human faces in each image to be processed; and when the definition of the face in the image to be processed is smaller than a first preset definition and the area of the face is larger than a preset area, cutting the face to obtain a face image.
Specifically, the area of the face may be detected first, and whether to detect the definition of the face is then determined according to the size of the face area. When the area of the face is smaller than the predetermined area, the corresponding face is not a main subject of the shot, for example a background face, so the definition of such a face need not be obtained and the face need not be processed, that is, the face does not need to be cut out. When the area of the face is larger than the predetermined area, whether the definition of the face in the image to be processed is smaller than the first preset definition may be further detected. When the definition of the face is smaller than the first preset definition and the area of the face is larger than the predetermined area, the face of the main subject in the image to be processed is not clear, so the face may be cut out to obtain a face image, and the face image may then be repaired.
In some embodiments, the proportion of the face in the image to be processed can be calculated according to the area of the face, and when the proportion of the face in the image to be processed is greater than the predetermined proportion, the area of the face is determined to be greater than the predetermined area; and when the ratio of the face in the image to be processed is smaller than the preset ratio, determining that the area of the face is smaller than the preset area.
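The area-then-definition gate above reduces to a short predicate. The function name `face_needs_repair` and both threshold values are illustrative assumptions; the face's share of the whole image stands in for its area, as in the embodiment just described.

```python
def face_needs_repair(sharpness, face_area, image_area,
                      first_preset_sharpness=0.5, predetermined_ratio=0.05):
    """Decide whether to cut out and repair a face, per the rules above."""
    # Small faces (e.g. background faces) are left untouched, so their
    # definition is never even measured.
    if face_area / image_area <= predetermined_ratio:
        return False
    # A large but blurry face is cropped for repair.
    return sharpness < first_preset_sharpness
```

A tiny background face is skipped regardless of blur, while a large blurry face triggers the crop-and-repair path.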
In some embodiments, when the definition of the face in the image to be processed is greater than the first preset definition, it indicates that the face of the main object in the image to be processed is relatively clear, and therefore the image to be processed may not be processed.
Referring to fig. 18, in some embodiments, the image processing method further includes:
042: acquiring background images except for human faces in an image to be processed;
044: and fusing the background image and the restored image to obtain a target image.
Referring to fig. 19, in some embodiments, the image processing apparatus 100 further includes a third obtaining module 42 and a fifth processing module 44. Step 042 may be implemented by the third obtaining module 42, and step 044 may be implemented by the fifth processing module 44. That is, the third obtaining module 42 may be configured to obtain a background image except for the human face in the image to be processed. The fifth processing module 44 may be configured to fuse the background image and the restored image to obtain the target image.
Referring again to fig. 3, in some embodiments, step 042 and step 044 may be implemented by processor 400. That is, processor 400 may be configured to: acquiring background images except for human faces in an image to be processed; and fusing the background image and the restored image to obtain a target image.
Specifically, the image to be processed is cut into a face image and a background image, the face image is processed to obtain a restored image, and the restored image and the background image are then fused into a complete image serving as the target image. The fusion may be performed by directly splicing the restored image and the background image together. To avoid an unnatural transition between the processed restored image and the background image, feathering may be performed on the boundary portion of the restored image.
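To illustrate the feathering just mentioned, the sketch below blends one row of the repaired region into the background with a linear alpha ramp at its trailing edge. The function name, the one-dimensional simplification, and the 3-pixel feather width are illustrative assumptions; a real implementation feathers the whole 2-D boundary of the restored image.

```python
def feather_row(restored_row, background_row, feather=3):
    """Blend one row of the repaired face into the background row.

    The restored pixels replace the background, but over the last
    `feather` pixels the two are linearly mixed so the transition is
    gradual rather than a hard seam.
    """
    n = len(restored_row)
    out = list(background_row)
    for x in range(n):
        # alpha falls from 1 deep inside the face to 0 at its edge
        dist_to_edge = n - 1 - x
        alpha = min(1.0, dist_to_edge / feather)
        out[x] = alpha * restored_row[x] + (1 - alpha) * background_row[x]
    return out
```

Inside the face the restored pixels dominate; across the feather band the values ramp smoothly down to the background.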
Referring to fig. 20, in an embodiment, the to-be-processed image I1 is obtained by shooting, and after the to-be-processed image I1 is processed, it is determined that the sharpness of the face in the to-be-processed image I1 is less than the first preset sharpness, so that the face is cut out to obtain a face image I2 and obtain a background image I3. The face image I2 is processed to increase the texture detail of the face image I2 and obtain a restored image I4, so that the sharpness of the restored image I4 is high. The background image I3 and the repaired image I4 are fused together, and the target image I5 with higher definition can be obtained.
Referring to fig. 21, a computer readable storage medium 500 of the present application stores a computer program 510 thereon, and the computer program 510 is executed by the processor 400 to implement the image processing method of any one of the above embodiments.
For example, the computer program 510, when executed by the processor 400, implements the steps of the following image processing method:
012: when the definition of the face in the image to be processed is smaller than a first preset definition, cutting the face to obtain a face image;
014: acquiring a preset user portrait and a preset standard portrait, wherein the definition of the preset user portrait and the definition of the preset standard portrait are both greater than a second preset definition;
016: when the similarity between the human face and the human face in the preset user portrait is larger than the preset similarity, taking the preset user portrait as a reference image, and processing the human face image according to the reference image to obtain a repaired image;
018: and when the similarity between the human face and the human face in the preset user portrait is smaller than the preset similarity, taking the preset standard portrait as a reference image, and processing the human face image according to the reference image to obtain a repaired image.
The computer-readable storage medium 500 may be disposed in the image processing apparatus 100 or the electronic device 1000, or disposed in the cloud server, and at this time, the image processing apparatus 100 or the electronic device 1000 can communicate with the cloud server to obtain the corresponding computer program 510.
It will be appreciated that the computer program 510 comprises computer program code. The computer program code may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium 500 may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), a software distribution medium, and the like.
Processor 400 may be referred to as a driver board. The driver board may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like.
In this specification, the schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the various embodiments or examples described in this specification, and the features of those embodiments or examples, may be combined by those skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two or three, unless specifically limited otherwise.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (13)

1. An image processing method, characterized in that the image processing method comprises:
when the definition of a face in an image to be processed is smaller than a first preset definition, cutting the face to obtain a face image;
acquiring a preset user portrait and a preset standard portrait, wherein the definition of the preset user portrait and the definition of the preset standard portrait are both greater than a second preset definition;
when the similarity between the human face and the human face in the preset user portrait is greater than a preset similarity, taking the preset user portrait as a reference image, and processing the human face image according to the reference image to obtain a repaired image;
and when the similarity between the human face and the human face in the preset user portrait is smaller than the preset similarity, taking the preset standard portrait as a reference image, and processing the human face image according to the reference image to obtain a repaired image.
2. The image processing method according to claim 1, wherein the image processing method is used for an imaging apparatus, the image processing method further comprising:
when the exposure time meets a preset exposure condition, the light sensitivity meets a preset light sensitivity condition, and the face frame area meets a preset face frame area condition, controlling the imaging device to shoot the user portrait;
and when the definition of the user portrait is greater than the second preset definition, saving the user portrait as the preset user portrait.
3. The image processing method according to claim 2, characterized in that the image processing method further comprises:
and sending prompt information to prompt the shooting of the portrait of the preset user when the imaging device is opened for the first time.
4. The image processing method according to claim 2, characterized in that the image processing method further comprises:
and controlling the imaging device to shoot the preset user portrait when the input signal is a preset input signal.
5. The image processing method according to any one of claims 1 to 4, characterized in that the image processing method further comprises:
acquiring the gradient of the pixel value of the face in the image to be processed;
and judging whether the definition of the face in the image to be processed is smaller than the first preset definition or not according to the gradient.
6. The image processing method according to any one of claims 1 to 4, characterized in that the image processing method further comprises:
and performing binary classification on the image to be processed by using a deep learning network so as to judge whether the definition of the face in the image to be processed is smaller than the first preset definition.
7. The image processing method according to any one of claims 1 to 4, wherein the processing the face image according to the reference image to obtain a restored image comprises:
acquiring a first feature map of the face image after up-sampling;
acquiring a second feature map of the reference image after up-sampling and down-sampling;
acquiring a third feature map of the reference image without up-sampling and down-sampling;
acquiring, as a reference feature, a feature in the second feature map whose similarity with the first feature map exceeds a first preset similarity;
acquiring a feature in the third feature map whose similarity with the reference feature exceeds a second preset similarity, to obtain an exchange feature map;
merging the exchange feature map and the first feature map to obtain a fourth feature map;
amplifying the fourth feature map by a preset multiple to obtain a fifth feature map;
and taking the fifth feature map as the face image and performing the above steps in a loop until the fifth feature map reaches a target magnification factor, and taking the fifth feature map at the target magnification factor as the repaired image.
8. The image processing method according to any one of claims 1 to 4, characterized in that the image processing method further comprises:
and calculating the area of each human face in the image to be processed.
9. The image processing method according to claim 8, wherein, when the definition of the face in the image to be processed is smaller than the first preset definition, cutting out the face to obtain the face image comprises:
and when the definition of the face in the image to be processed is smaller than the first preset definition and the area of the face is larger than a preset area, cutting out the face to obtain the face image.
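Claims 8 and 9 together gate the restoration on both definition and face area; an illustrative sketch (the names and the (x, y, w, h) rectangle convention are assumptions):

```python
import numpy as np

def should_crop(definition: float, area: float,
                first_preset_definition: float, preset_area: float) -> bool:
    """Claims 8-9 gate: restore a face only when it is blurry (definition
    below the first preset) AND large enough to be worth restoring."""
    return definition < first_preset_definition and area > preset_area

def crop_face(image: np.ndarray, box: tuple) -> np.ndarray:
    """Cut the face rectangle (x, y, w, h) out of the image."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]
```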
10. The image processing method according to any one of claims 1 to 4, characterized in that the image processing method further comprises:
acquiring a background image except the face in the image to be processed;
and fusing the background image and the restored image to obtain a target image.
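The background/restored-image fusion of claim 10 can be illustrated with a simple mask blend (assumed; the patent does not specify the fusion operator):

```python
import numpy as np

def fuse(background: np.ndarray, restored_face: np.ndarray,
         mask: np.ndarray) -> np.ndarray:
    """Paste the restored face over the background using a 0..1 face mask
    (1 inside the face region), yielding the target image of claim 10."""
    return mask * restored_face + (1.0 - mask) * background
```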
11. An image processing apparatus characterized by comprising:
the first processing module is used for cutting out the face to obtain a face image when the definition of the face in the image to be processed is smaller than a first preset definition;
the first acquisition module is used for acquiring a preset user portrait and a preset standard portrait, wherein the definition of the preset user portrait and the definition of the preset standard portrait are both greater than a second preset definition;
the second processing module is used for taking the preset user portrait as a reference image when the similarity between the human face and the human face in the preset user portrait is greater than a preset similarity, and processing the face image according to the reference image to obtain a restored image;
and the third processing module is used for taking the preset standard portrait as the reference image when the similarity between the human face and the human face in the preset user portrait is smaller than the preset similarity, and processing the face image according to the reference image to obtain the restored image.
12. An electronic device, characterized in that the electronic device comprises a housing, an imaging device and a processor, wherein the imaging device and the processor are both mounted on the housing, the imaging device is used for capturing an image to be processed, and the processor is used for implementing the image processing method according to any one of claims 1 to 10.
13. A computer-readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implementing the image processing method according to any one of claims 1 to 10.
CN201911205189.7A 2019-11-29 2019-11-29 Image processing method, image processing apparatus, electronic device, and readable storage medium Pending CN110992283A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911205189.7A CN110992283A (en) 2019-11-29 2019-11-29 Image processing method, image processing apparatus, electronic device, and readable storage medium

Publications (1)

Publication Number Publication Date
CN110992283A (en) 2020-04-10

Family

ID=70088587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911205189.7A Pending CN110992283A (en) 2019-11-29 2019-11-29 Image processing method, image processing apparatus, electronic device, and readable storage medium

Country Status (1)

Country Link
CN (1) CN110992283A (en)


Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040218676A1 (en) * 2003-05-01 2004-11-04 Samsung Electronics Co., Ltd. Method of determining reference picture, method of compensating for motion and apparatus therefor
US20110235939A1 (en) * 2010-03-23 2011-09-29 Raytheon Company System and Method for Enhancing Registered Images Using Edge Overlays
CN104462381A (en) * 2014-12-11 2015-03-25 北京中细软移动互联科技有限公司 Trademark image retrieval method
CN104915351A (en) * 2014-03-12 2015-09-16 华为技术有限公司 Picture sorting method and terminal
CN105117724A (en) * 2015-07-30 2015-12-02 北京邮电大学 License plate positioning method and apparatus
US20170142439A1 (en) * 2015-11-18 2017-05-18 Canon Kabushiki Kaisha Encoding apparatus, encoding method, and storage medium
CN107680128A (en) * 2017-10-31 2018-02-09 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN107945107A (en) * 2017-11-30 2018-04-20 广东欧珀移动通信有限公司 Image processing method, device, computer-readable storage medium, and electronic device
CN108009999A (en) * 2017-11-30 2018-05-08 广东欧珀移动通信有限公司 Image processing method, device, computer-readable storage medium, and electronic device
CN108022207A (en) * 2017-11-30 2018-05-11 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108109115A (en) * 2017-12-07 2018-06-01 深圳大学 Enhancement method, apparatus, device and storage medium for person images
CN108133695A (en) * 2018-01-02 2018-06-08 京东方科技集团股份有限公司 Image display method, apparatus, device and medium
CN108197250A (en) * 2017-12-29 2018-06-22 深圳云天励飞技术有限公司 Picture retrieval method, electronic equipment and storage medium
CN108230255A (en) * 2017-09-19 2018-06-29 北京市商汤科技开发有限公司 Method, apparatus and electronic device for implementing image enhancement
CN108446692A (en) * 2018-06-08 2018-08-24 南京擎华信息科技有限公司 Face comparison method, device and system
CN108734126A (en) * 2018-05-21 2018-11-02 深圳市梦网科技发展有限公司 Face beautification method, face beautification apparatus and terminal device
CN108921806A (en) * 2018-08-07 2018-11-30 Oppo广东移动通信有限公司 Image processing method, image processing device and terminal equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112099217A (en) * 2020-08-18 2020-12-18 宁波永新光学股份有限公司 Automatic focusing method for microscope
CN112598580A (en) * 2020-12-29 2021-04-02 广州光锥元信息科技有限公司 Method and device for improving definition of portrait photo
CN112598580B (en) * 2020-12-29 2023-07-25 广州光锥元信息科技有限公司 Method and device for improving definition of portrait photo

Similar Documents

Publication Publication Date Title
CN110910330B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN108335279B (en) Image fusion and HDR imaging
EP1800259B1 (en) Image segmentation method and system
US7903168B2 (en) Camera and method with additional evaluation image capture based on scene brightness changes
US7995116B2 (en) Varying camera self-determination based on subject motion
CN113888437A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
US20070237514A1 (en) Varying camera self-determination based on subject motion
CN107358593B (en) Image forming method and apparatus
CN108230333B (en) Image processing method, image processing apparatus, computer program, storage medium, and electronic device
CN111028170B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN107172354B (en) Video processing method, device, electronic device and storage medium
CN111031241B (en) Image processing method and device, terminal and computer readable storage medium
Kinoshita et al. Automatic exposure compensation using an image segmentation method for single-image-based multi-exposure fusion
CN105635575A (en) Imaging method, imaging device and terminal
CN112446241B (en) Method, device and electronic device for obtaining characteristic information of target object
CN111105370B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN110992283A (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN111062904B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN110992284A (en) Image processing method, image processing apparatus, electronic device, and computer-readable storage medium
CN111242843A (en) Image blurring method, image blurring device, image blurring equipment and storage device
CN111083359B (en) Image processing method and apparatus, electronic device and computer-readable storage medium
CN111105369B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN111010509B (en) Image processing method, terminal, image processing system, and computer-readable storage medium
CN111080543B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN113870300B (en) Image processing method, device, electronic device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20240802