
WO2023043458A1 - Artifacts corrections in images - Google Patents


Info

Publication number
WO2023043458A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
artifact
electronic device
processor
eyeglasses
Prior art date
Application number
PCT/US2021/050969
Other languages
French (fr)
Inventor
King Sui KEI
Pei Hsuan Li
Yun David TANG
Yi Hsien Lin
Guoxing Yang
Alan Man Pan TAM
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P.
Priority to PCT/US2021/050969
Publication of WO2023043458A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features

Definitions

  • Electronic devices such as desktops, laptops, notebooks, tablets, and smartphones include image sensors that enable the electronic devices to capture and transmit images.
  • Images captured by an image sensor may include artifacts that partially or fully obscure objects within the image.
  • An artifact, as used herein, is a distortion of features of an image.
  • The artifact is a result of a light source in the physical environment of the image sensor, for instance.
  • The light source may be a directional light source, a display device, sunlight, or a combination thereof, for instance.
  • The light source may hinder the image sensor from capturing features of an object located in proximity to the light source and within the field of view of the image sensor.
  • FIG. 1 is a schematic diagram depicting an electronic device for correcting artifacts in images, in accordance with various examples.
  • FIG. 2 is a flow diagram depicting a method for an electronic device to correct artifacts in images, in accordance with various examples.
  • FIG. 3 is a schematic diagram depicting an electronic device for correcting artifacts in images, in accordance with various examples.
  • FIGS. 4A and 4B are examples showing an electronic device correcting artifacts in images, in accordance with various examples.
  • FIG. 5 is a flow diagram depicting a method for an electronic device to correct artifacts in images, in accordance with various examples.
  • FIG. 6 is a schematic diagram depicting an electronic device for correcting artifacts in images, in accordance with various examples.
  • FIG. 7 is a flow diagram depicting a method for an electronic device to correct artifacts in images, in accordance with various examples.
  • FIG. 8 is a flow diagram depicting a method for an electronic device to correct artifacts in images, in accordance with various examples.
  • FIG. 9 is a schematic diagram depicting an electronic device for correcting artifacts in images, in accordance with various examples.
  • Electronic devices include image sensors that enable the electronic devices to capture and transmit images.
  • An image is captured and transmitted by an electronic device during a virtual meeting that enables a user of the electronic device to interact with an audience, for instance.
  • The image may include an artifact (e.g., a glare, a reflection, or a combination thereof) that obscures a feature of an object within the image captured by the image sensor.
  • The user of the electronic device wears a pair of eyeglasses, and light striking the pair of eyeglasses results in an artifact that obscures a facial feature of the user within the image captured by the image sensor.
  • The artifact may distract the user, the audience, or a combination thereof.
  • The artifact may detract from the user's appearance in the image, reduce the user's confidence, and interfere with communication between the user and the audience by creating a perceived barrier that blocks an appearance of eye-to-eye contact.
  • The user may attempt to remove or reduce the artifact by rearranging elements (e.g., the electronic device, a light source, the image sensor, the pair of eyeglasses) of a physical environment of the image sensor.
  • The user's attempts may be disruptive to the virtual meeting and interfere with the user participating in the virtual meeting, thereby impacting user productivity.
  • This description describes examples of an electronic device to detect and correct artifacts that are located on a pair of eyeglasses of a user in an image.
  • The image may be a frame of a video signal captured via an image sensor.
  • The electronic device corrects the image by removing the artifacts and enhancing a visibility of facial features of the user.
  • The electronic device receives a video signal via the image sensor.
  • The electronic device analyzes an image of the video signal to determine whether the user wears the pair of eyeglasses. Responsive to a determination that the user wears the pair of eyeglasses, the electronic device analyzes the image to detect an eye landmark and an artifact.
  • The eye landmark is a facial feature that indicates a location of an eye within the image.
  • The eye landmark may include an eyebrow, an upper eye lid, a lower eye lid, an iris, a pupil, an inside corner of the eye, an outside corner of the eye, or a combination thereof.
  • The electronic device determines a severity of the artifact.
  • The severity quantifies an impact of the artifact on a visibility of an object within an area of the artifact.
  • The electronic device determines the severity by analyzing, in relation to the eye landmark, a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact, or a combination thereof.
  • Responsive to the severity of the artifact exceeding a severity threshold, the electronic device corrects the image utilizing image processing techniques.
  • The image processing techniques remove the artifact, reduce a severity of the artifact, enhance an area of the image, or a combination thereof.
  • The area of the image enhanced is defined by the artifact, the eye landmark, the pair of eyeglasses, or a combination thereof.
  • The electronic device determines a type of the artifact (e.g., glare, reflection) to determine an image processing technique to utilize to generate the corrected image.
  • The electronic device determines an image quality of the corrected image.
  • Responsive to the image quality exceeding a quality threshold, the electronic device causes the corrected image to be displayed, transmitted, or a combination thereof.
  • The electronic device utilizes a machine learning technique to determine whether the user wears the pair of eyeglasses, to detect the eye landmark, to detect the artifact, to determine the type of the artifact, to correct the image, to determine the image quality, or a combination thereof.
  • By correcting the artifacts in the video signal and transmitting the corrected video signal, the electronic device enhances a visibility of the eyes of the user without the user taking corrective actions. Additionally, the user and audience experiences are enhanced by removing the awkwardness that occurs while the user takes the corrective action and by removing the perceived barrier that blocks the appearance of eye-to-eye contact. Automatically correcting the artifacts and enhancing the user experience reduces non-productive time of the user, thereby enhancing user productivity.
  • An electronic device includes an image sensor and a processor to determine that a pair of eyeglasses is in an image received via the image sensor. In response to the determination, the processor is to identify an artifact in the image. In response to identifying that the artifact satisfies a criterion, the processor is to generate a corrected image. The corrected image includes a mitigated appearance of the artifact. The processor is to cause a display device to display the corrected image, a network interface to transmit the corrected image, or a combination thereof.
  • An electronic device includes an image sensor and a processor to determine that a pair of eyeglasses is in an image received via the image sensor. In response to the determination, the processor is to identify an eye landmark and an artifact in the image. In response to identifying that the artifact overlaps the eye landmark, the processor is to generate a corrected image. The corrected image includes a mitigated appearance of the artifact. The processor is to cause a display device to display the corrected image, a network interface to transmit the corrected image, or a combination thereof.
  • A non-transitory machine-readable medium stores machine-readable instructions.
  • When executed by a processor, the machine-readable instructions cause the processor to, utilizing a first machine learning technique, monitor a video signal for an image that includes a pair of eyeglasses. The video signal is received via an image sensor.
  • The machine-readable instructions, when executed by the processor, cause the processor to identify an artifact in the image.
  • The machine-readable instructions, when executed by the processor, cause the processor to determine a type of the artifact.
  • In response to a determination that the type of the artifact is indicative of a reflection, the machine-readable instructions, when executed by the processor, cause the processor to generate a corrected image utilizing a second machine learning technique. In response to a determination that the type of the artifact is indicative of a glare, the machine-readable instructions, when executed by the processor, cause the processor to generate the corrected image utilizing an image processing technique. The corrected image includes a mitigated appearance of the reflection.
  • The machine-readable instructions, when executed by the processor, cause the processor to cause a display device to display the corrected image, a network interface to transmit the corrected image, or a combination thereof.
  • In FIG. 1, a schematic diagram depicting an electronic device 104 for correcting artifacts in images is provided, in accordance with various examples.
  • a user 100 wearing a pair of eyeglasses 102 faces the electronic device 104.
  • The electronic device 104 includes a chassis 106, a display device 108, and an image sensor 110.
  • The electronic device 104 is a desktop, a laptop, a notebook, a tablet, a smartphone, or any other suitable computing device for receiving and processing images.
  • The chassis 106 houses components of the electronic device 104.
  • The components include the display device 108 and the image sensor 110.
  • The display device 108 is a liquid crystal display (LCD), a light-emitting diode (LED) display, a plasma display, a quantum dot (QD) LED display, or any suitable device for displaying data of the electronic device 104 for viewing.
  • The image sensor 110 is an internal camera, an external camera, or any other suitable device for capturing an image, recording a video signal, or a combination thereof.
  • The chassis 106 houses network interfaces, video adapters, sound cards, local buses, input/output devices (e.g., a keyboard, a mouse, a touchpad, a speaker, a microphone), storage devices, wireless transceivers, connectors, or a combination thereof.
  • Although the image sensor 110 is shown as an integrated image sensor of the electronic device 104, in other examples, the image sensor 110 couples to the electronic device 104 via any suitable connection for enabling communications between the electronic device 104 and the image sensor 110.
  • The connection may be via a wired connection (e.g., a Universal Serial Bus (USB)) or via a wireless connection (e.g., BLUETOOTH®, WI-FI®).
  • Although the display device 108 is shown as an integrated display device of the electronic device 104, in other examples, the display device 108 is coupled to the electronic device 104 via a wired connection (e.g., USB, Video Graphics Array (VGA), Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI)) or is a stand-alone display device coupled to the electronic device 104 via a wireless connection (e.g., BLUETOOTH®, WI-FI®).
  • The electronic device 104 utilizes a facial detection technique to detect the user 100 in the image of the video signal.
  • The facial detection technique determines whether a face of the user 100 is in the image.
  • The facial detection technique may be an appearance-based model that utilizes statistics, machine learning techniques, or a combination thereof; a knowledge-based model that utilizes a set of rules; a feature-based model that extracts features of the image; a template-based model that correlates features of the image to templates of faces; or a combination thereof.
  • Responsive to a determination that the face of the user 100 is in the image, the electronic device 104 analyzes the image to determine whether the user 100 wears the pair of eyeglasses, as described below with respect to FIG. 2. Responsive to a determination that the user 100 is not wearing the pair of eyeglasses, the electronic device 104 causes the display device 108 to display the video signal received via the image sensor 110, a network interface (not explicitly shown) to transmit the video signal, or a combination thereof. In some examples, the electronic device 104 monitors the video signal to determine whether the user 100 wears the pair of eyeglasses in subsequent images of the video signal.
  • Responsive to a determination that the user 100 is wearing the pair of eyeglasses, the electronic device 104 corrects the image of the video signal as described below with respect to FIGS. 2 - 10 and causes the display device 108 to display the corrected image, the network interface (not explicitly shown) to transmit the corrected image, or a combination thereof. (For an example of the electronic device 104 causing the display of a corrected image, refer to FIG. 4 below.) In some examples, as described below with respect to FIGS. 9 - 10, the electronic device 104 corrects subsequent images of the video signal and causes the display device 108 to display the corrected subsequent images of the video signal, the network interface (not explicitly shown) to transmit the corrected subsequent images of the video signal, or a combination thereof.
  • The method 200 includes a start point 202 during which the electronic device starts processing an image.
  • The image may be captured by an image sensor (e.g., the image sensor 110) of the electronic device.
  • At a decision point 204 of the method 200, the electronic device determines whether the image includes a pair of eyeglasses (e.g., the pair of eyeglasses 102). Responsive to a determination that the image includes the pair of eyeglasses, the electronic device analyzes the image to detect an eye landmark at a detect process 206 of the method 200.
  • At a detect process 208 of the method 200, the electronic device analyzes the image to detect an artifact.
  • At a decision point 210 of the method 200, the electronic device determines whether the artifact is severe. Responsive to a determination that the artifact is not severe, the electronic device returns to the start point 202 to start processing another image. Responsive to a determination that the artifact is severe, the electronic device corrects the artifact at a correct process 212 of the method 200.
  • The electronic device determines whether the correction introduces another artifact. Responsive to a determination that the correction introduces another artifact, the electronic device returns to the correct process 212 to correct the another artifact. Responsive to a determination that the correction did not introduce another artifact, the electronic device causes a display of the corrected image during a display process 216.
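The detect-correct-recheck loop of method 200 can be sketched in Python. The helper callables (`detect_eyeglasses`, `detect_artifacts`, `severity`, `correct`, `display`) are hypothetical stand-ins for the detection and correction steps described here, not part of the patent:

```python
def process_image(image, severity_threshold, detect_eyeglasses,
                  detect_artifacts, severity, correct, display):
    """One pass of the method-200 loop for a single frame; helpers are injected."""
    if not detect_eyeglasses(image):
        display(image)  # no eyeglasses detected: show the frame unchanged
        return image
    artifacts = detect_artifacts(image)
    # Correct while any artifact is severe; re-detect after each correction,
    # because a correction may itself introduce another artifact.
    while any(severity(a) > severity_threshold for a in artifacts):
        image = correct(image, artifacts)
        artifacts = detect_artifacts(image)
    display(image)
    return image
```

The loop terminates once no remaining artifact exceeds the severity threshold, matching the return path to the display process 216.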
  • The electronic device may store the image from the image sensor to a storage device of the electronic device.
  • The electronic device analyzes the image to determine whether the image includes a feature of the pair of eyeglasses at the decision point 204.
  • The feature of the pair of eyeglasses may be a frame, an arm, a lens, a rim, a nose pad, a bridge, or a combination thereof. Responsive to a determination that the image includes the feature of the pair of eyeglasses, the electronic device determines that the image includes the pair of eyeglasses.
  • The electronic device analyzes the image utilizing a computer vision technique, a machine learning technique, or a combination thereof.
  • The computer vision technique identifies a feature of the image, classifies the feature, compares the feature to multiple templates (e.g., images of pairs of eyeglasses), or a combination thereof.
  • For example, the computer vision technique identifies an H-shaped feature of the image, classifies the H-shaped feature as a bridge of a pair of eyeglasses, compares the H-shaped feature to multiple templates of pairs of eyeglasses in different perspectives within a field of view of the image sensor, or a combination thereof. Responsive to a determination that the H-shaped feature indicates the pair of eyeglasses, the electronic device determines that the image includes the pair of eyeglasses.
  • The electronic device utilizes a machine learning technique to determine whether a feature or a combination of features indicates a pair of eyeglasses.
  • The machine learning technique compares the feature or the combination of features to multiple templates to determine that the feature or the combination of features indicates that the image includes the pair of eyeglasses.
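A minimal sketch of the template comparison, under the assumption (not specified in the patent) that features and templates are reduced to fixed-length numeric vectors and matched by Euclidean distance:

```python
import math

def matches_eyeglasses(feature, templates, max_distance):
    """Return True if `feature` lies within `max_distance` of any template.

    `feature` and each entry of `templates` are equal-length vectors
    (e.g., edge-orientation histograms of a candidate eyeglasses region).
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return any(dist(feature, t) <= max_distance for t in templates)
```

A trained model would learn the representation and threshold; this nearest-template check only illustrates the comparison itself.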
  • The electronic device utilizes a machine learning technique that implements a convolutional neural network (CNN) to determine whether the image includes the pair of eyeglasses.
  • The electronic device may utilize the CNN trained with a training set that includes multiple images of multiple users. A subset of the multiple images may include users wearing pairs of eyeglasses and another subset of the multiple images may include users not wearing pairs of eyeglasses.
  • The electronic device identifies multiple features of the image, classifies the features, and determines whether the image includes the pair of eyeglasses.
  • The CNN implements a Visual Geometry Group (VGG) network, a Residual Network (ResNet), a SqueezeNet network, or an AlexNet network.
  • Responsive to a determination that the image includes the pair of eyeglasses and a determination that a user (e.g., the user 100) is not wearing the pair of eyeglasses, the electronic device causes the display of the image. For example, the electronic device causes a display device (e.g., the display device 108) to display the image.
  • Responsive to a determination that the image includes the pair of eyeglasses, the electronic device analyzes the image to detect an eye landmark during the detect process 206 and an artifact during the detect process 208.
  • The eye landmark may include an eyebrow, an upper eye lid, a lower eye lid, an iris, a pupil, an inside corner of the eye, an outside corner of the eye, or a combination thereof.
  • The electronic device utilizes an eye detection technique. For example, during the detect process 206, the electronic device analyzes a region of the image that includes the pair of eyeglasses for the eye landmark. The electronic device determines that the region includes multiple eye landmarks.
  • The electronic device determines that the artifact is located in the region based on an inability to locate an eye landmark. For example, the electronic device determines that the region includes two eyebrows, one upper eye lid, one lower eye lid, an iris, and a pupil and determines, during the detect process 208, that, based on a presence of the two eyebrows, an artifact or multiple artifacts obscure another upper eye lid, another lower eye lid, another iris, and another pupil. For another example, refer to FIG. 4A below.
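The inference from missing landmarks can be sketched as a count comparison. The landmark names and expected counts below are illustrative assumptions, not taken from the patent:

```python
# Landmark counts expected when both eyes are fully visible (illustrative).
EXPECTED = {"eyebrow": 2, "upper_lid": 2, "lower_lid": 2, "iris": 2, "pupil": 2}

def obscured_landmarks(detected_counts):
    """Infer which landmarks an artifact likely obscures.

    If some landmarks (e.g., two eyebrows) were detected but fewer of the
    others were found, the shortfall is attributed to an obscuring artifact.
    Returns a mapping of landmark name to the number missing.
    """
    missing = {}
    for name, expected in EXPECTED.items():
        found = detected_counts.get(name, 0)
        if found < expected:
            missing[name] = expected - found
    return missing
```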
  • The electronic device utilizes a machine learning technique to determine whether the region of the image that includes the pair of eyeglasses includes the eye landmark. For instance, the machine learning technique compares a feature of the region of the image that includes the pair of eyeglasses to multiple templates (e.g., images of users wearing pairs of eyeglasses) to determine that the feature is the eye landmark.
  • The electronic device utilizes a CNN to perform the detect process 206 and the detect process 208 simultaneously to determine whether the region of the image that includes the pair of eyeglasses includes the eye landmark and the artifact.
  • The electronic device utilizes the CNN trained with a training set that includes multiple images of multiple users. The multiple images may include different perspectives of users wearing pairs of eyeglasses.
  • The electronic device identifies multiple features of the region of the image that includes the pair of eyeglasses, classifies the multiple features, and determines that a feature of the multiple features is an eye landmark and that another feature of the multiple features is an artifact.
  • The electronic device determines that the multiple features include multiple eye landmarks, multiple artifacts, or a combination thereof.
  • The electronic device detects the artifact by analyzing the region of the image that includes the pair of eyeglasses. In various examples, the electronic device detects the artifact by analyzing a sub-region of the region, where the sub-region includes the eye landmark. To detect the artifact, the electronic device utilizes an image processing technique such as histogram calculation, image thresholding, edge detection, or a combination thereof. For example, the electronic device performs a histogram calculation for pixels of the region or sub-region.
  • Responsive to a determination that an intensity of a pixel of the pixels exceeds an intensity threshold, the electronic device determines that the pixel is an artifact.
  • The electronic device determines that an intensity of another subset of the pixels that is contiguous with the pixel is within another intensity threshold of the intensity of the pixel and determines that the artifact includes the another subset of the pixels.
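A sketch of this intensity check, assuming the region is given as a list of rows of 0-255 grayscale values: pixels above a seed threshold mark an artifact, which then grows into contiguous pixels whose intensity is within a second threshold of the seed:

```python
from collections import deque

def detect_artifact_pixels(gray, seed_threshold, grow_delta):
    """Return the set of (row, col) coordinates belonging to detected artifacts.

    Pixels with intensity >= seed_threshold seed an artifact; each seed is
    grown (4-connected flood fill) into neighbors whose intensity is within
    grow_delta of the seed intensity.
    """
    rows, cols = len(gray), len(gray[0])
    artifact = set()
    for r in range(rows):
        for c in range(cols):
            if gray[r][c] >= seed_threshold and (r, c) not in artifact:
                seed = gray[r][c]
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    if (y, x) in artifact:
                        continue
                    artifact.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and (ny, nx) not in artifact
                                and abs(gray[ny][nx] - seed) <= grow_delta):
                            queue.append((ny, nx))
    return artifact
```

The thresholds correspond to the two intensity thresholds described above; their values would be tuned empirically.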
  • The electronic device performs image thresholding to convert the region or the sub-region to grayscale.
  • The electronic device determines that the artifact is a pixel or multiple pixels of the region or the sub-region that have a highest tonal value (e.g., white).
  • The electronic device performs edge detection on the image to detect an area of the region or the sub-region where a discontinuity of the image occurs.
  • The discontinuity may include a boundary between objects of the image, where an object of the objects is the eye landmark and another object of the objects may be the artifact.
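The edge-detection step can be sketched as a crude gradient check. This stands in for a proper edge detector (e.g., Sobel or Canny), which the patent does not name:

```python
def edge_map(gray, grad_threshold):
    """Mark pixels where the horizontal or vertical intensity step to the
    previous pixel exceeds grad_threshold, i.e., where a discontinuity occurs.

    `gray` is a row-major list of lists of intensities; returns a set of
    (row, col) coordinates lying on detected discontinuities.
    """
    rows, cols = len(gray), len(gray[0])
    edges = set()
    for r in range(rows):
        for c in range(cols):
            gx = abs(gray[r][c] - gray[r][c - 1]) if c > 0 else 0
            gy = abs(gray[r][c] - gray[r - 1][c]) if r > 0 else 0
            if max(gx, gy) > grad_threshold:
                edges.add((r, c))
    return edges
```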
  • The electronic device determines a severity of the artifact by analyzing, in relation to the eye landmark, a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact, or a combination thereof. For example, the electronic device compares the color of the artifact to a color of another object that is contiguous to the artifact. Responsive to a determination that the color of the artifact differs from the color of the another object by more than a color threshold, the electronic device determines that the artifact is severe. In another example, the electronic device compares the brightness of the artifact to a brightness of the another object.
  • Responsive to a determination that the brightness of the artifact differs from the brightness of the another object by more than a brightness threshold, the electronic device determines that the artifact is severe.
  • The electronic device determines that the size of the artifact exceeds a size threshold that indicates the artifact is severe.
  • The electronic device determines a distance in pixels from the artifact to the eye landmark. Responsive to a determination that the distance is less than a distance threshold, the electronic device determines that the artifact is severe.
  • The electronic device calculates a weight based on the color of the artifact, the brightness of the artifact, the size of the artifact, the location of the artifact in relation to a center of the eye landmark, or a combination thereof. Responsive to a determination that the weight is below a weight threshold, the electronic device determines that the artifact is not severe. In some examples, responsive to a determination that the artifact is not severe, the electronic device causes the display device to display the video signal.
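One way to combine the factors into the weight described above is a linear score. The normalization of each factor to [0, 1] and the weight values themselves are illustrative assumptions, not taken from the patent:

```python
def artifact_severity(brightness_delta, color_delta, size, dist_to_eye,
                      weights=(0.3, 0.2, 0.3, 0.2)):
    """Combine normalized factors into a single severity weight in [0, 1].

    All inputs are assumed pre-normalized to [0, 1] by the caller. Proximity
    to the eye landmark raises severity, so distance enters inverted.
    """
    wb, wc, ws, wd = weights
    return (wb * brightness_delta + wc * color_delta
            + ws * size + wd * (1.0 - dist_to_eye))

def is_severe(severity, threshold=0.5):
    """Compare the weight against a severity threshold (value illustrative)."""
    return severity > threshold
```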
  • Responsive to a determination that the artifact is severe at the decision point 210, the electronic device corrects the image utilizing image processing techniques.
  • The image processing techniques remove the artifact, reduce a severity of the artifact, enhance an area of the image, or a combination thereof.
  • The area of the image enhanced is defined by the artifact, the eye landmark, the pair of eyeglasses, or a combination thereof.
  • The electronic device removes the artifact or reduces the severity of the artifact utilizing the techniques described below with respect to FIGS. 5 - 9.
  • The electronic device performs a tone mapping technique.
  • The tone mapping technique adjusts a tonal value of a pixel so that the tonal values are between one and 255.
  • The electronic device performs the tone mapping technique on the image or on the area of the image. For example, the electronic device enhances the area of the image defined by the eye landmark. In other examples, as described below with respect to FIGS. 7 - 9, the electronic device performs other image processing techniques to enhance the area of the image.
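The tone mapping described above can be sketched as a global linear rescale into the 1-255 range. Practical tone-mapping operators are usually nonlinear, so this is only a minimal illustration:

```python
def tone_map(gray):
    """Linearly rescale a grayscale image so tonal values span 1..255.

    `gray` is a list of rows of intensities. A flat (single-intensity)
    image is returned unchanged, since it has no tonal range to stretch.
    """
    lo = min(min(row) for row in gray)
    hi = max(max(row) for row in gray)
    if hi == lo:
        return [row[:] for row in gray]
    scale = 254 / (hi - lo)
    return [[round(1 + (v - lo) * scale) for v in row] for row in gray]
```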
  • The electronic device determines whether the corrected image includes another artifact introduced by the image processing technique. Responsive to a determination that the corrected image does include the another artifact, the electronic device returns to the decision point 210 to determine a severity of the another artifact, in some examples. Responsive to a determination that the severity of the another artifact does not exceed the severity threshold, the electronic device causes the display of the corrected image during the display process 216. Does not exceed, as used herein, indicates that a value is equivalent to a threshold or below the threshold. Responsive to a determination that the severity of the another artifact does exceed the severity threshold, the electronic device corrects the another artifact during the correct process 212. In some examples, responsive to a determination that the correction does not introduce yet another artifact, the electronic device causes the display device to display the corrected image.
  • In some examples, the electronic device performs the decision point 204 and the detect processes 206, 208 sequentially; in other examples, the electronic device performs the decision point 204 and the detect processes 206, 208 simultaneously. In other examples, the electronic device performs the detect processes 206, 208 simultaneously after the decision point 204.
  • The electronic device 300 may be the electronic device 104.
  • The electronic device 300 includes a processor 302, an image sensor 304, a network interface 306, a display device 308, and a storage device 310.
  • The processor 302 is a microprocessor, a microcomputer, a microcontroller, or another suitable processor or controller for managing operations of the electronic device 300.
  • The processor 302 is a central processing unit (CPU), a graphics processing unit (GPU), a system on a chip (SoC), an image signal processor (ISP), or a field programmable gate array (FPGA), for example.
  • The image sensor 304 may be the image sensor 110.
  • The network interface 306 enables communication over a network.
  • The network interface 306 may include a wired connection (e.g., Ethernet, USB) or a wireless connection (e.g., WI-FI®, BLUETOOTH®).
  • The display device 308 may be the display device 108.
  • The storage device 310 may include a hard drive, a solid state drive (SSD), flash memory, random access memory (RAM), or other suitable memory for storing data or executable code of the electronic device 300.
  • The processor 302 couples to the image sensor 304, the network interface 306, the display device 308, and the storage device 310.
  • The storage device 310 stores machine-readable instructions which, when executed by the processor 302, cause the processor 302 to perform some or all of the actions attributed herein to the processor 302.
  • The machine-readable instructions are the machine-readable instructions 312, 314, 316, 318.
  • The machine-readable instructions 312, 314, 316, 318, when executed by the processor 302, cause the processor 302 to correct artifacts of images.
  • the machine-readable instruction 312, when executed by the processor 302, causes the processor 302 to detect a pair of eyeglasses in an image received via the image sensor 304.
  • the machine-readable instruction 314, when executed by the processor 302, causes the processor 302 to identify an artifact in the image.
  • the machine-readable instruction 316 when executed by the processor 302, causes the processor 302 to generate a corrected image.
  • the corrected image includes a mitigated appearance of the artifact.
  • the mitigated appearance includes a reduction in a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact in relation to a center of a lens of the pair of eyeglasses, or a combination thereof, a removal of the artifact from the image, an enhancement of objects located behind the pair of eyeglasses in the image, or a combination thereof.
  • the machine-readable instruction 318 when executed by the processor 302, causes the processor 302 to cause the display device 308 to display the corrected image, the network interface 306 to transmit the corrected image, or a combination thereof.
  • the machine-readable instruction 312 when executed by the processor 302, causes the processor 302 to detect a user in the image and determine that the user wears the pair of eyeglasses.
  • the machine-readable instruction 312, when executed by the processor 302, causes the processor 302 to analyze the image utilizing a computer vision technique, a machine learning technique, or a combination thereof.
  • the processor 302 analyzes the image utilizing a computer vision technique to identify a feature of the pair of eyeglasses, to classify the feature, to compare the feature to multiple templates, or a combination thereof.
  • the machine-readable instruction 314, when executed by the processor 302, causes the processor 302 to analyze a region of the image that includes the pair of eyeglasses to identify the artifact.
  • the processor 302 determines that the region includes the artifact by determining that an eye landmark is obscured, for example.
  • a machine-readable instruction when executed by the processor 302, causes the processor 302 to analyze a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact in relation to a center of a lens of the pair of eyeglasses, or a combination thereof, as described above with respect to FIG. 2.
  • the processor 302 utilizes image processing techniques to remove the artifact, reduce a severity of the artifact, enhance an area of the image, or a combination thereof.
  • the processor 302 may remove the artifact, reduce a severity of the artifact, or a combination thereof, as described below with respect to FIGS. 5 - 9.
  • the processor 302 may enhance the area of the image utilizing tone mapping, as described above with respect to FIG. 2, or utilizing other image processing techniques as described below with respect to FIGS. 7 - 9.
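The tone-mapping enhancement mentioned above can be illustrated with a minimal sketch. This is not the patent's implementation; it is a hedged example of Reinhard-style global tone mapping over a flat list of luminance values, with the function name and `key` parameter chosen for illustration:

```python
import math

def tone_map(luminance, key=0.18):
    """Reinhard-style global tone mapping: compress a list of
    luminance values (0.0 and up) into the displayable range [0, 1)."""
    # Scale each value by the key relative to the geometric-mean luminance.
    log_mean = math.exp(sum(math.log(max(l, 1e-6)) for l in luminance)
                        / len(luminance))
    scaled = [key * l / log_mean for l in luminance]
    # Compress: L / (1 + L) maps [0, inf) into [0, 1).
    return [s / (1.0 + s) for s in scaled]

# Brighten shadowed pixels (e.g., eyes behind the eyeglass lenses)
# while preserving the relative ordering of brighter pixels.
mapped = tone_map([0.02, 0.05, 0.4, 0.9])
```

Because the operator is monotonic, dark regions are lifted without inverting contrast around brighter regions of the face.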
  • In FIGS. 4A and 4B, examples of an electronic device (e.g., the electronic device 104, 300) correcting artifacts 412, 414 in images 400, 416 are provided, in accordance with various examples.
  • FIG. 4A shows the image 400.
  • the image 400 includes a pair of eyeglasses 402, an eyebrow 404, an outer corner of an eye 406, an iris 408, a pupil 410, and artifacts 412, 414.
  • the eyebrow 404, the eye 406, the iris 408, the pupil 410, or a combination thereof are referred to as eye landmarks herein.
  • FIG. 4B shows the image 416.
  • the image 416 includes a pair of eyeglasses 418, an eyebrow 420, an outer corner of an eye 422, an iris 424, and a pupil 426.
  • the image 400 is an image before processing by the electronic device and the image 416 is the image after processing by the electronic device.
  • the pair of eyeglasses 402 may be the pair of eyeglasses 418.
  • the eyebrow 404 may be the eyebrow 420.
  • the outer corner of the eye 406 may be the outer corner of the eye 422.
  • the image 400 includes the artifacts 412, 414. After processing by the electronic device, the artifacts 412, 414 are removed, as illustrated by the image 416.
  • a processor (e.g., the processor 302) of the electronic device detects the pair of eyeglasses 402 in the image 400 utilizing a computer vision technique, a machine learning technique, or a combination thereof.
  • the processor detects the eyebrow 404, outer corner of the eye 406, the iris 408, and the pupil 410.
  • the processor detects the artifact 412 by determining that a pupil is obscured and detects the artifact 414 by determining that an inner corner of an eye is obscured.
  • the processor determines that a severity of the artifacts 412, 414, respectively, exceeds a severity threshold.
  • responsive to the determination that the severity of the artifacts 412, 414, respectively, exceeds the severity threshold, the processor corrects the image 400 to generate the image 416.
  • the image 416 shows the artifacts 412, 414 removed and an enhancement of the eyes located behind the pair of eyeglasses 418. For example, the iris 424 and the pupil 426 are fully visible in the image 416.
  • by correcting the artifacts 412, 414 in the video signal and transmitting the corrected video signal, the electronic device enhances a visibility of the eyes of the user without the user taking corrective actions. Additionally, the user and audience experiences are enhanced by removing awkwardness that occurs while the user takes the corrective action and by removing the perceived barrier that blocks the appearance of eye-to-eye contact. Automatically correcting the artifacts and enhancing the user experience reduces non-productive time of the user, thereby enhancing user productivity.
  • in FIG. 5, a flow diagram depicting a method 500 for an electronic device (e.g., the electronic device 104, 300) to correct artifacts (e.g., the artifacts 412, 414) in images (e.g., the images 400, 416) is provided, in accordance with various examples.
  • the method 500 includes a start point 502 during which the electronic device starts processing an image.
  • the electronic device receives an image in real-time.
  • Real-time is a time at which the image is captured by an image sensor (e.g., the image sensor 110, 304).
  • the electronic device determines whether the image includes a pair of eyeglasses (e.g., the pair of eyeglasses 402, 418). Responsive to a determination that the image does not include the pair of eyeglasses, the electronic device returns to the receive process 504 to receive another image. Responsive to a determination that the image includes the pair of eyeglasses, the electronic device analyzes the image to detect an eye landmark (e.g., the eyebrow 404, 420, the outer corner of the eye 406, 422, the iris 408, 424, the pupil 410, 426) during a detect process 508 of the method 500 and to detect an artifact during a detect process 510 of the method 500.
  • the electronic device determines whether the artifact overlaps the eye landmark. Responsive to a determination that the artifact overlaps the eye landmark, the electronic device determines whether the overlap is severe at a decision point 514 of the method 500. Responsive to a determination that the overlap is severe, the electronic device determines whether the artifact is a reflection at a decision point 516 of the method 500 and determines whether the artifact is a glare at a decision point 520. Responsive to a determination that the artifact is a reflection, the electronic device corrects the reflection at a correct process 518 of the method 500.
  • responsive to a determination that the artifact is a glare, the electronic device corrects the glare at a correct process 522 of the method 500. At a decision point 524 of the method 500, the electronic device determines whether a quality of the corrected image is satisfactory. Responsive to a determination that the quality of the corrected image is satisfactory, the electronic device causes a display of the corrected image during a display process 526 of the method 500.
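The decision flow of method 500 can be sketched as a routing function. This is a hedged illustration only: the stage functions are injected as hypothetical callables (the patent does not name them), and the corrections are stubbed.

```python
def process_frame(frame, detect_glasses, detect_artifact, is_severe,
                  classify, correct_reflection, correct_glare, is_quality_ok):
    """Route one frame through the correction pipeline (method 500 sketch).
    All stages are injected callables so each can be swapped out."""
    if not detect_glasses(frame):
        return frame                      # no eyeglasses: nothing to correct
    artifact = detect_artifact(frame)
    if artifact is None or not is_severe(artifact):
        return frame                      # display the frame as-is
    kind = classify(artifact)             # "reflection" or "glare"
    corrected = (correct_reflection(frame) if kind == "reflection"
                 else correct_glare(frame))
    # Fall back to the original frame if correction degraded quality.
    return corrected if is_quality_ok(corrected) else frame

# Stub usage: a "frame" is just a string in this toy example.
out = process_frame(
    "raw",
    detect_glasses=lambda f: True,
    detect_artifact=lambda f: {"kind": "glare"},
    is_severe=lambda a: True,
    classify=lambda a: a["kind"],
    correct_reflection=lambda f: f + "+deref",
    correct_glare=lambda f: f + "+deglare",
    is_quality_ok=lambda f: True)
```

Injecting the stages mirrors how the decision points 506, 512, 514, 516, 520, and 524 gate the two correct processes.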
  • the electronic device may not be able to detect the eye landmark.
  • the electronic device may determine that the pair of eyeglasses have tinted lenses and block a view of the eye landmark. Responsive to a determination that the pair of eyeglasses have tinted lenses, the electronic device causes the display of the image and returns to the receive process 504 to receive another image.
  • the electronic device determines a location of the eye landmark based on a location of a feature of the pair of eyeglasses.
  • the electronic device determines a location of a pupil based on a location of a frame of the pair of eyeglasses. In various examples, at the decision point 514, the electronic device determines that the artifact detected during the detect process 510 obscures the unlocated eye landmark and that the overlap is severe based on the inability to locate the eye landmark.
  • the electronic device may not be able to detect the artifact.
  • the electronic device causes the display of the image and returns to the receive process 504 to receive another image.
  • the electronic device displays the image and returns to the receive process 504 to receive another image.
  • the electronic device determines a severity of the overlap by analyzing, in relation to the eye landmark, a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact, or a combination thereof.
  • the electronic device may calculate a weight based on the color of the artifact, the brightness of the artifact, the size of the artifact, the location of the artifact, or a combination thereof. Responsive to a determination that the weight is below a weight threshold, the electronic device determines that the overlap is not severe. Responsive to the severity exceeding an overlap threshold, the electronic device determines that the overlap is severe.
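The weight calculation described above can be sketched as a weighted sum of normalized artifact attributes. The attribute normalization, the equal weights, and the threshold value are all hypothetical choices for illustration, not values from the disclosure:

```python
def overlap_severity(artifact, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted severity score from artifact attributes: color deviation,
    brightness, size, and proximity to the eye landmark, each assumed
    to be pre-normalized into [0, 1]."""
    attrs = (artifact["color"], artifact["brightness"],
             artifact["size"], artifact["proximity"])
    return sum(w * a for w, a in zip(weights, attrs))

SEVERITY_THRESHOLD = 0.5  # hypothetical tuning value

faint = {"color": 0.1, "brightness": 0.2, "size": 0.1, "proximity": 0.2}
harsh = {"color": 0.9, "brightness": 0.8, "size": 0.6, "proximity": 0.9}
severe = overlap_severity(harsh) > SEVERITY_THRESHOLD   # True
mild = overlap_severity(faint) > SEVERITY_THRESHOLD     # False
```

A small artifact far from the pupil scores low and is passed through; a bright artifact over the pupil trips the threshold and triggers correction.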
  • the electronic device utilizes a two or a three-layer machine learning technique to determine whether an artifact is a reflection or a glare.
  • the machine learning technique may implement a support-vector machine (SVM), a logistic regression, or a combination thereof to classify the artifact.
  • SVM is a supervised machine learning technique that analyzes training sets to determine classifications and utilizes regression analysis to determine a class of the artifact.
  • Logistic regression is a statistical technique that predicts a likelihood that the artifact belongs to a first class or to a second class.
  • the electronic device calculates a weight to determine whether the artifact is a reflection or a glare.
  • the weight may be based on the color of the artifact, the brightness of the artifact, the size of the artifact, the location of the artifact, or a combination thereof.
  • the electronic device performs the decision points 516, 520 sequentially. For example, the electronic device determines whether the artifact is a reflection at the decision point 516. Responsive to a determination that the artifact is not a reflection, the electronic device determines whether the artifact is a glare at the decision point 520. Responsive to a determination that the artifact is a glare, the electronic device corrects the glare at the correct process 522. The electronic device may correct the glare utilizing the image processing technique described below with respect to FIG. 8.
  • responsive to a determination that the artifact is a reflection, the electronic device corrects the reflection at the correct process 518 and determines whether the quality of the corrected image is satisfactory.
  • the electronic device corrects the reflection utilizing the image processing technique described below with respect to FIG. 7, for example. Responsive to a determination that the quality of the corrected image is satisfactory, the electronic device determines whether the corrected image includes a portion of the artifact. Responsive to a determination that the corrected image includes the portion of the artifact, the electronic device determines whether the portion of the artifact is a glare at the decision point 520.
  • the electronic device determines whether the artifact is a reflection or a glare by analyzing the image to determine an angle, an intensity, or a combination thereof of a light scattered by the artifact. For example, responsive to a determination that the angle is greater than a threshold angle, the intensity is greater than a threshold intensity, or a combination thereof, the electronic device determines that the artifact is a reflection. In another example, responsive to a determination that the angle is less than or equivalent to the threshold angle, the intensity is less than or equivalent to the threshold intensity, or a combination thereof, the electronic device determines that the artifact is a glare.
  • the electronic device determines whether the artifact is a reflection or a glare by analyzing a contrast of the artifact to another object within a region of the pair of eyeglasses such as an eye landmark, a feature of the pair of eyeglasses, a skin tone, or a combination thereof.
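The angle- and intensity-threshold test described above can be sketched as a small classifier. The threshold values here are hypothetical placeholders, not values from the disclosure:

```python
def classify_artifact(scatter_angle_deg, intensity,
                      angle_threshold=30.0, intensity_threshold=0.7):
    """Classify an artifact as a reflection or a glare from the angle
    and the normalized intensity of the light it scatters.
    Reflection: angle OR intensity above threshold (sharp, directional).
    Glare: both at or below threshold (diffuse, lower-intensity)."""
    if scatter_angle_deg > angle_threshold or intensity > intensity_threshold:
        return "reflection"
    return "glare"

kind_a = classify_artifact(scatter_angle_deg=45.0, intensity=0.4)  # "reflection"
kind_b = classify_artifact(scatter_angle_deg=10.0, intensity=0.3)  # "glare"
```

In practice the disclosure contemplates an SVM or logistic regression making this call; the threshold rule above is just the simplest stand-in for that decision boundary.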
  • prior to determining whether the quality of the corrected image is satisfactory at the decision point 524, the electronic device fuses a first corrected image including the reflection corrected during the correct process 518 and a second corrected image including the glare corrected during the correct process 522. For example, responsive to a determination that the image includes a reflection and a glare, the electronic device performs the correct processes 518, 522 concurrently. The electronic device detects an eye landmark in the image corrected for the reflection and the eye landmark in the image corrected for the glare. The electronic device aligns the eye landmark in the corrected images and fuses the aligned images. In various examples, the electronic device fuses a portion of the aligned images. For example, the electronic device fuses a region of the aligned images defined by the pair of eyeglasses. In some examples, the electronic device determines whether the fused image includes another artifact, as described above with respect to FIG. 2.
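Region-limited fusion of two already-aligned corrected images can be sketched as a per-pixel average inside the eyeglasses region. This toy example assumes the landmark alignment step has already been done and uses plain nested lists as grayscale images:

```python
def fuse_aligned(img_a, img_b, region):
    """Fuse two equally sized, pre-aligned grayscale images (lists of
    rows) by averaging pixels inside region = (top, left, bottom, right);
    outside the region, img_a's pixels are kept unchanged."""
    top, left, bottom, right = region
    fused = [row[:] for row in img_a]          # copy so img_a is untouched
    for y in range(top, bottom):
        for x in range(left, right):
            fused[y][x] = (img_a[y][x] + img_b[y][x]) // 2
    return fused

a = [[100, 100], [100, 100]]   # e.g., reflection-corrected image
b = [[200, 200], [200, 200]]   # e.g., glare-corrected image
# Fuse only the top row (the "eyeglasses" region in this toy example).
out = fuse_aligned(a, b, region=(0, 0, 1, 2))  # [[150, 150], [100, 100]]
```

Restricting the blend to the eyeglasses region keeps the rest of the frame pixel-identical to one source, which limits fusion artifacts outside the lenses.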
  • the electronic device determines whether colors of the image satisfy a criterion, whether a noise ratio of the image satisfies a criterion, whether a range of brightness of the image satisfies a criterion, whether a range of contrast of the image satisfies a criterion, or a combination thereof.
  • the criterions are settings of an executable code, for example.
  • the colors of the image satisfy the criterion responsive to the colors of the image having values within lower and upper color settings.
  • the noise ratio of the image satisfies the criterion responsive to the noise ratio having a value that does not exceed a noise setting.
  • the range of brightness of the image satisfies the criterion responsive to the range of brightness having values within lower and upper brightness settings.
  • the range of contrast of the image satisfies the criterion responsive to the range of contrast having values within lower and upper contrast settings.
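The four quality criteria above reduce to bounds checks against stored settings. A hedged sketch, with all statistic names and setting values invented for illustration:

```python
def quality_ok(stats, settings):
    """Check corrected-image statistics against configurable settings:
    colors and brightness/contrast ranges must fall within lower and
    upper bounds, and the noise ratio must not exceed its setting."""
    return (settings["color_lo"] <= stats["color"] <= settings["color_hi"]
            and stats["noise"] <= settings["noise_max"]
            and settings["bright_lo"] <= stats["brightness"] <= settings["bright_hi"]
            and settings["contrast_lo"] <= stats["contrast"] <= settings["contrast_hi"])

settings = {"color_lo": 0.2, "color_hi": 0.8, "noise_max": 0.1,
            "bright_lo": 0.3, "bright_hi": 0.9,
            "contrast_lo": 0.2, "contrast_hi": 0.9}
good = quality_ok({"color": 0.5, "noise": 0.05,
                   "brightness": 0.6, "contrast": 0.5}, settings)   # True
noisy = quality_ok({"color": 0.5, "noise": 0.4,
                    "brightness": 0.6, "contrast": 0.5}, settings)  # False
```

A failed check sends the pipeline back to display the uncorrected image rather than a degraded correction.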
  • in FIG. 6, a schematic diagram depicting an electronic device 600 for correcting artifacts in images is provided, in accordance with various examples. The electronic device 600 may be the electronic device 104, 300.
  • the electronic device 600 includes a processor 602, an image sensor 604, a network interface 606, a display device 608, and a storage device 610.
  • the processor 602 may be the processor 302.
  • the image sensor 604 may be the image sensor 110, 304.
  • the network interface 606 may be the network interface 306.
  • the display device 608 may be the display device 108, 308.
  • the storage device 610 may be the storage device 310.
  • the processor 602 couples to the image sensor 604, the network interface 606, the display device 608, and the storage device 610.
  • the storage device 610 stores machine-readable instructions which, when executed by the processor 602, cause the processor 602 to perform some or all of the actions attributed herein to the processor 602.
  • the machine-readable instructions are the machine-readable instructions 612, 614, 616, 618.
  • the machine-readable instructions 612, 614, 616, 618 when executed by the processor 602, cause the processor 602 to correct artifacts of images.
  • the machine-readable instruction 612 when executed by the processor 602, causes the processor 602 to detect a pair of eyeglasses (e.g., the pair of eyeglasses 402, 418) in an image received via the image sensor 604.
  • the machine-readable instruction 614 when executed by the processor 602, causes the processor 602 to identify an eye landmark (e.g., the eyebrow 404, 420, the outer corner of the eye 406, 422, the iris 408, 424, the pupil 410, 426) and an artifact in the image.
  • the machine-readable instruction 616 when executed by the processor 602, causes the processor 602 to generate a corrected image.
  • the machine-readable instruction 618 when executed by the processor 602, causes the processor 602 to cause the display device 608 to display the corrected image, the network interface 606 to transmit the corrected image, or a combination thereof.
  • the machine-readable instruction 612 when executed by the processor 602, causes the processor 602 to detect a user in the image and determine that the user wears the pair of eyeglasses.
  • the machine-readable instruction 612 when executed by the processor 602, causes the processor 602 to analyze the image utilizing a computer vision technique, a machine learning technique, or a combination thereof.
  • a machine-readable instruction when executed by the processor 602, causes the processor 602 to analyze, in relation to the eye landmark, a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact, or a combination thereof, as described above with respect to FIGS. 2 and 3.
  • a machine-readable instruction, when executed by the processor 602, causes the processor 602 to determine a severity of the overlap by analyzing, in relation to the eye landmark, a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact, or a combination thereof. Responsive to a determination that the severity of the overlap exceeds the overlap threshold, the processor 602 is to generate the corrected image.
  • the machine-readable instruction 616 when executed by the processor 602, causes the processor 602 to utilize image processing techniques to remove the artifact, reduce a severity of the artifact, enhance an area of the image, or a combination thereof.
  • the processor 602 reduces a color of the artifact, a brightness of the artifact, a size of the artifact, or a combination thereof.
  • the processor 602 removes the artifact, reduces the severity of the artifact, or a combination thereof, utilizing the image processing techniques described above with respect to FIG. 5 or below with respect to FIGS. 7 - 9.
  • the processor 602 enhances the area of the image utilizing tone mapping, as described above with respect to FIG. 2, or utilizing other image processing techniques as described below with respect to FIGS. 7 - 9.
  • in FIG. 7, a flow diagram depicting a method 700 for an electronic device (e.g., the electronic device 104, 300, 600) to correct artifacts (e.g., the artifacts 412, 414) in images (e.g., the images 400, 416) is provided, in accordance with various examples.
  • the electronic device performs the method 700 to mitigate a reflection within an image, for example.
  • the electronic device receives the image.
  • the electronic device may receive the image from an image sensor (e.g., the image sensor 110, 304, 604).
  • the electronic device stores the image during a store process 704 of the method 700.
  • the electronic device stores the image to a storage device (e.g., the storage device 310, 610).
  • the electronic device uses a neural network to isolate the reflection.
  • the electronic device generates a corrected image during a generate process 708 of the method 700.
  • the electronic device utilizes a neural network to decompose the image to separate a transmission layer and a reflection layer.
  • the neural network may implement ReflectNet, a Siamese Dense Network (SDN), or a combination thereof to decompose the image.
  • the transmission layer includes objects hidden by the reflection and the reflection layer includes objects reflected by the reflection.
  • the electronic device generates the corrected image utilizing the transmission layer during the generate process 708.
  • the neural network is a neural network trained to reduce a loss function between the transmission layer and the reflection layer so that a noise of the corrected image is below a noise threshold.
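The layer-decomposition constraint behind this training objective is that the input image should be explained as transmission plus reflection. The sketch below is not ReflectNet or SDN; it only illustrates, on flat pixel lists, the reconstruction term a separation network is trained to minimize:

```python
def reconstruction_loss(image, transmission, reflection):
    """Mean absolute error between the input image and the sum of the
    decomposed transmission and reflection layers (flat pixel lists).
    A separation network is trained to drive this toward zero while
    other loss terms keep the two layers distinct."""
    n = len(image)
    return sum(abs(i - (t + r))
               for i, t, r in zip(image, transmission, reflection)) / n

img = [0.9, 0.5, 0.7]
# A decomposition that exactly explains the input: loss is effectively 0.
perfect = reconstruction_loss(img, [0.6, 0.4, 0.5], [0.3, 0.1, 0.2])
# Dropping the reflection layer leaves unexplained residual energy.
off = reconstruction_loss(img, [0.6, 0.4, 0.5], [0.0, 0.0, 0.0])
```

After training, the corrected image is taken from the transmission layer alone, discarding the reflection layer.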
  • in FIG. 8, a flow diagram depicting a method 800 for an electronic device (e.g., the electronic device 104, 300, 600) to correct artifacts in images (e.g., the images 400, 416) is provided, in accordance with various examples.
  • the electronic device performs the method 800 to mitigate a glare within an image, for example.
  • the electronic device receives the image.
  • the electronic device may receive the image from an image sensor (e.g., the image sensor 110, 304, 604).
  • the electronic device stores the image during a store process 804 of the method 800.
  • the electronic device stores the image to a storage device (e.g., the storage device 310, 610).
  • the electronic device may lower an exposure of the image sensor.
  • the electronic device fuses another image captured by the image sensor utilizing the lower exposure and the stored image.
  • the electronic device determines whether the glare is reduced in the fused image during a decision point 810 of the method 800. Responsive to a determination that the glare is reduced, the electronic device enhances the fused image during an enhance process 812 of the method 800.
  • the electronic device utilizes the enhanced fused image as a corrected image.
  • the electronic device detects an eye landmark in the image captured by the image sensor utilizing the lower exposure.
  • the electronic device aligns the eye landmark with the eye landmark of the image stored during the store process 804.
  • the electronic device fuses the aligned images.
  • the electronic device fuses a portion of the aligned images.
  • the electronic device fuses a region of the aligned images defined by the pair of eyeglasses.
  • the electronic device utilizes a neural network, as described above with respect to FIG. 7, to remove a reflection layer of the fused image.
  • the electronic device calculates a weight based on the color of the artifact, the brightness of the artifact, the size of the artifact, the location of the artifact in relation to a center of the eye landmark, or a combination thereof. Responsive to a determination that the weight is below a weight threshold, the electronic device determines that the glare is reduced. Responsive to a determination that the glare is not reduced, the electronic device may repeat the method 800.
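The low-exposure fusion in method 800 can be sketched as replacing near-saturated pixels of the stored frame with rescaled pixels from the darker capture. The saturation cutoff, the exposure `scale`, and the use of flat 0-255 pixel lists are all illustrative assumptions:

```python
def fuse_exposures(normal, low_exposure, scale=2.0):
    """Reduce blown-out glare by fusing a normally exposed frame with a
    lower-exposure capture of the same, aligned scene (flat pixel lists,
    0-255). Near-saturated pixels are replaced by rescaled low-exposure
    values; scale compensates for the exposure difference."""
    fused = []
    for n, lo in zip(normal, low_exposure):
        if n >= 250:                           # near-saturated: likely glare
            fused.append(min(255, int(lo * scale)))
        else:
            fused.append(n)                    # keep the well-exposed pixel
    return fused

normal = [120, 255, 255, 90]       # two glare-saturated pixels
low = [60, 70, 100, 45]            # same scene at lower exposure
out = fuse_exposures(normal, low)  # [120, 140, 200, 90]
```

The recovered detail (140 and 200 instead of clipped 255s) is what the subsequent glare-reduced weight check at the decision point 810 would evaluate.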
  • in FIG. 9, a schematic diagram depicting an electronic device 900 for correcting artifacts (e.g., the artifacts 412, 414) in images (e.g., the images 400, 416) is provided, in accordance with various examples.
  • the electronic device 900 may be the electronic device 104, 300, 600.
  • the electronic device 900 comprises a processor 902 and a non-transitory machine-readable medium 904.
  • the processor 902 may be the processor 302, 602.
  • the non-transitory machine-readable medium 904 may be the storage device 310, 610.
  • the term “non-transitory” does not encompass transitory propagating signals.
  • the electronic device 900 comprises the processor 902 coupled to the non-transitory machine-readable medium 904.
  • the non-transitory machine-readable medium 904 stores machine-readable instructions.
  • the machine-readable instructions are the machine-readable instructions 906, 908, 910, 912, 914.
  • the machine-readable instructions 906, 908, 910, 912, 914 when executed by the processor 902, cause the processor 902 to perform some or all of the actions attributed herein to the processor 902.
  • the machine-readable instructions 906, 908, 910, 912, 914, when executed by the processor 902, cause the processor 902 to correct artifacts in images.
  • the machine-readable instruction 906 causes the processor 902 to monitor a video signal for an image that includes a pair of eyeglasses.
  • the video signal may be received via an image sensor (e.g., the image sensor 110, 304, 604).
  • the machine-readable instruction 908 causes the processor 902 to identify an artifact in the image that includes a pair of eyeglasses (e.g., the pair of eyeglasses 402, 418).
  • the machine-readable instruction 910 causes the processor 902 to determine a type of the artifact.
  • the machine-readable instruction 912 causes the processor 902 to generate a corrected image.
  • the machine-readable instruction 914 causes the processor 902 to cause a display device (e.g., the display device 108, 308, 608) to display the corrected image, a network interface (e.g., the network interface 306, 606) to transmit the corrected image, or a combination thereof.
  • the processor 902 may store a number of images of the video signal for processing.
  • the number of images that the processor 902 stores may be a multiplier of a refresh rate of the display device. For example, responsive to the refresh rate of 60 Hertz (Hz), the processor 902 stores 20 to 30 images for processing.
  • by storing the number of images that is a multiplier of the refresh rate of the display device, the processor 902 reduces a delay of the display, the transmission, or a combination thereof of the number of images to a rate that is below what a user perceives. For example, to process the 20 to 30 images takes less than half of a second.
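The buffer-sizing arithmetic above can be sketched directly; the half-second perceptibility budget is the example figure from the text, and the function name is hypothetical:

```python
def frames_to_buffer(refresh_rate_hz, budget_seconds=0.5):
    """Number of frames that can be buffered and processed within a
    latency budget the user will not perceive; e.g., at 60 Hz, a
    half-second budget covers up to 30 frames."""
    return int(refresh_rate_hz * budget_seconds)

n = frames_to_buffer(60)   # 30
```

Halving the budget at a higher refresh rate yields a comparable buffer, e.g. `frames_to_buffer(120, 0.25)` also gives 30 frames.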
  • the machine-readable instruction 906, when executed by the processor 902, causes the processor 902 to, utilizing a machine learning technique, monitor the video signal for the image that includes the pair of eyeglasses.
  • the machine learning technique may be a machine learning technique described above with respect to FIGS. 1 , 3, and 6.
  • the machine-readable instruction 908, when executed by the processor 902, causes the processor 902 to utilize an image processing technique, a machine learning technique, or a combination thereof, as described above with respect to FIGS. 2 and 3.
  • the machine-readable instruction 912 in response to a determination that the type of the artifact is indicative of a reflection, causes the processor 902 to generate a corrected image utilizing a second machine learning technique.
  • the processor 902 utilizes the technique described above with respect to FIG. 7, for example.
  • the machine-readable instruction 912, when executed by the processor 902, causes the processor 902 to generate the corrected image utilizing an image processing technique.
  • the processor 902 utilizes the techniques described above with respect to FIG. 8, for example.
  • a machine-readable instruction when executed by the processor 902, causes the processor 902 to cause a display device to display the corrected image, a network interface to transmit the corrected image, or a combination thereof responsive to an image quality of the corrected image exceeding a quality threshold.
  • the processor 902 determines the image quality utilizing the techniques described above with respect to FIG. 5, for example.
  • the processor 902 determines whether colors of the corrected image satisfy a criterion, whether a noise ratio of the corrected image satisfies a criterion, whether a range of brightness of the corrected image satisfies a criterion, whether a range of contrast of the corrected image satisfies a criterion, or a combination thereof.
  • the method 200, 500, 700, 800 is implemented by machine-readable instructions stored to a storage device (e.g., the storage device 310, 610, the non-transitory machine-readable medium 904) of an electronic device (e.g., the electronic device 104, 300, 600, 900).
  • a processor (e.g., the processor 302, 602, 902) of the electronic device executes the machine-readable instructions to perform the method 200, 500, 700, 800, for example.
  • a process refers to operations performed by execution of machine-readable instructions by the processor.
  • a decision point, as used herein, refers to operations performed by execution of machine-readable instructions by the processor.
  • some or all of the blocks (e.g., process, decision point) of the method 200, 500, 700, 800 may be performed concurrently or in different sequences.
  • the processor performs a block that occurs responsive to a command sequential to the block describing the command.
  • the processor performs a block that depends upon a state of a component after the state of the component is enabled.
  • initial values for the thresholds and settings are determined during a manufacture process.
  • an executable code may provide a GUI to enable a user of an electronic device (e.g., the electronic device 104, 300, 600, 900) to adjust the thresholds and settings.
  • the thresholds and settings may be stored to a storage device (e.g., the storage device 310, 610, the non-transitory machine-readable medium 904) of the electronic device.
  • the term “comprising” is used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to.”
  • the term “couple” or “couples” is intended to be broad enough to encompass both direct and indirect connections. Thus, if a first device couples to a second device, that connection may be through a direct connection or through an indirect connection via other devices, components, and connections.
  • the word “or” is used in an inclusive manner. For example, “A or B” means any of the following: “A” alone, “B” alone, or both “A” and “B.”

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

In some examples in accordance with the present description, an electronic device is provided. The example electronic device includes an image sensor and a processor to determine that a pair of eyeglasses is in an image received via the image sensor. In response to the determination, the processor is to identify an artifact in the image. In response to identifying that the artifact satisfies a criterion, the processor is to generate a corrected image. The corrected image includes a mitigated appearance of the artifact. The processor is to cause a display device to display the corrected image, a network interface to transmit the corrected image, or a combination thereof.

Description

ARTIFACTS CORRECTIONS IN IMAGES
BACKGROUND
[0001] Electronic devices such as desktops, laptops, notebooks, tablets, and smartphones include image sensors that enable the electronic devices to capture and transmit images. Images captured by an image sensor may include artifacts that partially or fully obscure objects within the image. An artifact, as used herein, is a distortion of features of an image. The artifact is a result of a light source of a physical environment of the image sensor, for instance. The light source may be a directional light source, a display device, sunlight, or a combination thereof, for instance. The light source may hinder the image sensor from capturing features of an object located within a proximity of the light source and within the field of view of the image sensor.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Various examples are described below referring to the following figures.
[0003] FIG. 1 is a schematic diagram depicting an electronic device for correcting artifacts in images, in accordance with various examples.
[0004] FIG. 2 is a flow diagram depicting a method for an electronic device to correct artifacts in images, in accordance with various examples.
[0005] FIG. 3 is a schematic diagram depicting an electronic device for correcting artifacts in images, in accordance with various examples.
[0006] FIGS. 4A and 4B are examples showing an electronic device correcting artifacts in images, in accordance with various examples.
[0007] FIG. 5 is a flow diagram depicting a method for an electronic device to correct artifacts in images, in accordance with various examples.
[0008] FIG. 6 is a schematic diagram depicting an electronic device for correcting artifacts in images, in accordance with various examples.
[0009] FIG. 7 is a flow diagram depicting a method for an electronic device to correct artifacts in images, in accordance with various examples.
[0010] FIG. 8 is a flow diagram depicting a method for an electronic device to correct artifacts in images, in accordance with various examples.
[0011] FIG. 9 is a schematic diagram depicting an electronic device for correcting artifacts in images, in accordance with various examples.
DETAILED DESCRIPTION
[0012] As described above, electronic devices include image sensors that enable the electronic devices to capture and transmit images. An image is captured and transmitted by an electronic device during a virtual meeting that enables a user of the electronic device to interact with an audience, for instance. The image may include an artifact (e.g., a glare, a reflection, or a combination thereof) that obscures a feature of an object within the image captured by the image sensor. For instance, the user of the electronic device wears a pair of eyeglasses, and light striking the pair of eyeglasses results in an artifact that obscures a facial feature of the user within the image captured by the image sensor.
[0013] The artifact may distract the user, the audience, or a combination thereof. The artifact may detract from the user's appearance in the image, reduce the user's confidence, and interfere with communication between the user and the audience by creating a perceived barrier that blocks an appearance of eye-to-eye contact. The user may attempt to remove or reduce the artifact by rearranging elements (e.g., the electronic device, a light source, the image sensor, the pair of eyeglasses) of a physical environment of the image sensor. However, the user's attempts may be disruptive to the virtual meeting and interfere with the user participating in the virtual meeting, thereby impacting user productivity.
[0014] This description describes examples of an electronic device to detect and correct artifacts that are located on a pair of eyeglasses of a user in an image. The image may be a frame of a video signal captured via an image sensor. The electronic device corrects the image by removing the artifacts and enhancing a visibility of facial features of the user. During a virtual meeting, the electronic device receives a video signal via the image sensor. The electronic device analyzes an image of the video signal to determine whether the user wears the pair of eyeglasses. Responsive to a determination that the user wears the pair of eyeglasses, the electronic device analyzes the image to detect an eye landmark and an artifact. The eye landmark, as used herein, is a facial feature that indicates a location of an eye within the image. The eye landmark may include an eyebrow, an upper eye lid, a lower eye lid, an iris, a pupil, an inside corner of the eye, an outside corner of the eye, or a combination thereof. The electronic device determines a severity of the artifact. The severity, as used herein, quantifies an impact of the artifact on a visibility of an object within an area of the artifact. The electronic device determines the severity by analyzing, in relation to the eye landmark, a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact, or a combination thereof.
[0015] Responsive to the severity of the artifact exceeding a severity threshold, the electronic device corrects the image utilizing image processing techniques. The image processing techniques remove the artifact, reduce a severity of the artifact, enhance an area of the image, or a combination thereof. The area of the image enhanced is defined by the artifact, the eye landmark, the pair of eyeglasses, or a combination thereof. In some examples, the electronic device determines a type of the artifact (e.g., glare, reflection) to determine an image processing technique to utilize to generate the corrected image. In some examples, the electronic device determines an image quality of the corrected image. Responsive to the image quality exceeding a quality threshold, the electronic device causes the corrected image to be displayed, transmitted, or a combination thereof. In various examples, the electronic device utilizes a machine learning technique to determine whether the user wears the pair of eyeglasses, to detect the eye landmark, to detect the artifact, to determine the type of the artifact, to correct the image, to determine the image quality, or a combination thereof.
[0016] By correcting the artifacts in the video signal and transmitting the corrected video signal, the electronic device enhances a visibility of the eyes of the user without the user taking corrective actions. Additionally, the user and audience experiences are enhanced by removing awkwardness that occurs while the user takes the corrective action and by removing the perceived barrier that blocks the appearance of eye-to-eye contact. Automatically correcting the artifacts and enhancing the user experience reduces non-productive time of the user, thereby enhancing user productivity.
[0017] In examples in accordance with the present description, an electronic device is provided. The electronic device includes an image sensor and a processor to determine that a pair of eyeglasses is in an image received via the image sensor. In response to the determination, the processor is to identify an artifact in the image. In response to identifying that the artifact satisfies a criterion, the processor is to generate a corrected image. The corrected image includes a mitigated appearance of the artifact. The processor is to cause a display device to display the corrected image, a network interface to transmit the corrected image, or a combination thereof.
[0018] In some examples in accordance with the present description, an electronic device is provided. The electronic device includes an image sensor and a processor to determine that a pair of eyeglasses is in an image received via the image sensor. In response to the determination, the processor is to identify an eye landmark and an artifact in the image. In response to identifying that the artifact overlaps the eye landmark, the processor is to generate a corrected image. The corrected image includes a mitigated appearance of the artifact. The processor is to cause a display device to display the corrected image, a network interface to transmit the corrected image, or a combination thereof.
[0019] In other examples in accordance with the present description, a non-transitory machine-readable medium storing machine-readable instructions is provided. When executed by a processor, the machine-readable instructions cause the processor to, utilizing a first machine learning technique, monitor a video signal for an image that includes a pair of eyeglasses. The video signal is received via an image sensor. In response to receiving the image that includes the pair of eyeglasses, the machine-readable instructions, when executed by the processor, cause the processor to identify an artifact in the image. In response to identifying that the artifact satisfies a criterion, the machine-readable instructions, when executed by the processor, cause the processor to determine a type of the artifact. In response to a determination that the type of the artifact is indicative of a reflection, the machine-readable instructions, when executed by the processor, cause the processor to generate a corrected image utilizing a second machine learning technique. In response to a determination that the type of the artifact is indicative of a glare, the machine-readable instructions, when executed by the processor, cause the processor to generate the corrected image utilizing an image processing technique. The corrected image includes a mitigated appearance of the reflection. The machine-readable instructions, when executed by the processor, cause the processor to cause a display device to display the corrected image, a network interface to transmit the corrected image, or a combination thereof.
[0020] Referring now to FIG. 1 , a schematic diagram depicting an electronic device 104 for correcting artifacts in images is provided, in accordance with various examples. A user 100 wearing a pair of eyeglasses 102 faces the electronic device 104. The electronic device 104 includes a chassis 106, a display device 108, and an image sensor 110. The electronic device 104 is a desktop, a laptop, a notebook, a tablet, a smartphone, or any other suitable computing device for receiving and processing images. The chassis 106 houses components of the electronic device 104. The components include the display device 108 and the image sensor 110. The display device 108 is a liquid crystal display (LCD), a light-emitting diode (LED) display, a plasma display, a quantum dot (QD) LED display, or any suitable device for displaying data of the electronic device 104 for viewing. The image sensor 110 is an internal camera, an external camera, or any other suitable device for capturing an image, recording a video signal, or a combination thereof.
[0021] While not explicitly shown, the chassis 106 houses network interfaces, video adapters, sound cards, local buses, input/output devices (e.g., a keyboard, a mouse, a touchpad, a speaker, a microphone), storage devices, wireless transceivers, connectors, or a combination thereof. While the image sensor 110 is shown as an integrated image sensor of the electronic device 104, in other examples, the image sensor 110 couples to any suitable connection for enabling communications between the electronic device 104 and the image sensor 110. The connection may be via a wired connection (e.g., a Universal Serial Bus (USB)) or via a wireless connection (e.g., BLUETOOTH®, WI-FI®). While the display device 108 is shown as an integrated display device of the electronic device 104, in other examples, the display device 108 is coupled to the electronic device 104 via a wired connection (e.g., USB, Video Graphics Array (VGA), Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI)) or is a stand-alone display device coupled to the electronic device 104 via a wireless connection (e.g., BLUETOOTH®, WI-FI®).
[0022] As described above, during a virtual meeting, the electronic device 104 receives a video signal via the image sensor 110. In various examples, to determine whether an image of the video signal includes a pair of eyeglasses, the electronic device 104 utilizes a facial detection technique to detect the user 100 in the image of the video signal. The facial detection technique determines whether a face of the user 100 is in the image. The facial detection technique may be an appearance-based model that utilizes statistics, machine learning techniques, or a combination thereof, a knowledge-based model that utilizes a set of rules, a feature-based model that extracts features of the image, a template-based model that correlates features of the image to templates of faces, or a combination thereof.
[0023] Responsive to a determination that the face of the user 100 is in the image, the electronic device 104 analyzes the image to determine whether the user 100 wears the pair of eyeglasses 102, as described below with respect to FIG. 2. Responsive to a determination that the user 100 is not wearing the pair of eyeglasses 102, the electronic device 104 causes the display device 108 to display the video signal received via the image sensor 110, a network interface (not explicitly shown) to transmit the video signal, or a combination thereof. In some examples, the electronic device 104 monitors the video signal to determine whether the user 100 wears the pair of eyeglasses 102 in subsequent images of the video signal.
[0024] Responsive to a determination that the user 100 is wearing the pair of eyeglasses 102, the electronic device 104 corrects the image of the video signal as described below with respect to FIGS. 2 - 10 and causes the display device 108 to display the corrected image, the network interface (not explicitly shown) to transmit the corrected image, or a combination thereof. (For an example of the electronic device 104 causing the display of a corrected image, refer to FIG. 4 below.) In some examples, as described below with respect to FIGS. 9 - 10, the electronic device 104 corrects subsequent images of the video signal and causes the display device 108 to display the corrected subsequent images of the video signal, the network interface (not explicitly shown) to transmit the corrected subsequent images of the video signal, or a combination thereof.
[0025] Referring now to FIG. 2, a flow diagram depicting a method 200 for an electronic device (e.g., the electronic device 104) to correct artifacts in images is provided, in accordance with various examples. The method 200 includes a start point 202 during which the electronic device starts processing an image. The image may be captured by an image sensor (e.g., the image sensor 110) of the electronic device. During a decision point 204 of the method 200, the electronic device determines whether the image includes a pair of eyeglasses (e.g., the pair of eyeglasses 102). Responsive to a determination that the image includes the pair of eyeglasses, the electronic device analyzes the image to detect an eye landmark at a detect process 206 of the method 200. During a detect process 208 of the method 200, the electronic device analyzes the image to detect an artifact. During a decision point 210 of the method 200, the electronic device determines whether the artifact is severe.
Responsive to a determination that the artifact is not severe, the electronic device returns to the start point 202 to start processing another image. Responsive to a determination that the artifact is severe, the electronic device corrects the artifact at a correct process 212 of the method 200. During a decision point 214 of the method 200, the electronic device determines whether the correction introduces another artifact. Responsive to a determination that the correction introduces another artifact, the electronic device returns to the correct process 212 to correct the another artifact. Responsive to a determination that the correction does not introduce another artifact, the electronic device causes a display of the corrected image during a display process 216.
[0026] During the start point 202, the electronic device may store the image from the image sensor to a storage device of the electronic device. In some examples, the electronic device analyzes the image to determine whether the image includes a feature of the pair of eyeglasses at the decision point 204. The feature of the pair of eyeglasses may be a frame, an arm, a lens, a rim, a nose pad, a bridge, or a combination thereof. Responsive to a determination that the image includes the feature of the pair of eyeglasses, the electronic device determines that the image includes the pair of eyeglasses. In other examples, to determine whether the image includes the pair of eyeglasses at the decision point 204, the electronic device analyzes the image utilizing a computer vision technique, a machine learning technique, or a combination thereof. The computer vision technique identifies a feature of the image, classifies the feature, compares the feature to multiple templates (e.g., images of pairs of eyeglasses), or a combination thereof. For example, the computer vision technique identifies an H-shaped feature of the image, classifies the H-shaped feature as a bridge of a pair of eyeglasses, compares the H-shaped feature to multiple templates of pairs of eyeglasses in different perspectives within a field of view of the image sensor, or a combination thereof. Responsive to a determination that the H-shaped feature indicates the pair of eyeglasses, the electronic device determines that the image includes the pair of eyeglasses.
[0027] In other examples, the electronic device utilizes a machine learning technique to determine whether a feature or a combination of features indicates a pair of eyeglasses. The machine learning technique compares the feature or the combination of features to multiple templates to determine that the feature or the combination of features indicates that the image includes the pair of eyeglasses. In various examples, the electronic device utilizes a machine learning technique that implements a convolutional neural network (CNN) to determine whether the image includes the pair of eyeglasses. The electronic device may utilize the CNN trained with a training set that includes multiple images of multiple users. A subset of the multiple images may include users wearing pairs of eyeglasses and another subset of the multiple images may include users not wearing pairs of eyeglasses. Utilizing the trained CNN, the electronic device identifies multiple features of the image, classifies the features, and determines whether the image includes the pair of eyeglasses. In some examples, the CNN implements a Visual Geometry Group (VGG) network, a Residual Network (ResNet) network, a SqueezeNet network, or an AlexNet network. In various examples, responsive to a determination that the image includes the pair of eyeglasses and a determination that a user (e.g., the user 100) is not wearing the pair of eyeglasses, the electronic device causes the display of the image. For example, the electronic device causes a display device (e.g., the display device 108) to display the image.
[0028] Responsive to a determination that the image includes the pair of eyeglasses, the electronic device analyzes the image to detect an eye landmark during the detect process 206 and an artifact during the detect process 208. As described above, the eye landmark may include an eyebrow, an upper eye lid, a lower eye lid, an iris, a pupil, an inside corner of the eye, an outside corner of the eye, or a combination thereof. In some examples, to detect the eye landmark, the artifact, or a combination thereof, the electronic device utilizes an eye detection technique. For example, during the detect process 206, the electronic device analyzes a region of the image that includes the pair of eyeglasses for the eye landmark. The electronic device determines that the region includes multiple eye landmarks. In various examples, the electronic device determines that the artifact is located in the region based on an inability to locate an eye landmark. For example, the electronic device determines that the region includes two eyebrows, one upper eye lid, one lower eye lid, an iris, and a pupil and determines, during the detect process 208, that, based on a presence of the two eyebrows, an artifact or multiple artifacts obscure another upper eye lid, another lower eye lid, another iris, and another pupil. For another example, refer to FIG. 4A below.
[0029] In other examples, to detect the eye landmark during the detect process 206, the electronic device utilizes a machine learning technique to determine whether the region of the image that includes the pair of eyeglasses includes the eye landmark. For instance, the machine learning technique compares a feature of the region of the image that includes the pair of eyeglasses to multiple templates (e.g., images of users wearing pairs of eyeglasses) to determine that the feature is the eye landmark.
[0030] In various examples, the electronic device utilizes a CNN to perform the detect process 206 and the detect process 208 simultaneously to determine whether the region of the image that includes the pair of eyeglasses includes the eye landmark and the artifact. For example, the electronic device utilizes the CNN trained with a training set that includes multiple images of multiple users. The multiple images may include different perspectives of users wearing pairs of eyeglasses. Utilizing the trained CNN, the electronic device identifies multiple features of the region of the image that includes the pair of eyeglasses, classifies the multiple features, and determines that a feature of the multiple features is an eye landmark and that another feature of the multiple features is an artifact. In some examples, the electronic device determines that the multiple features include multiple eye landmarks, multiple artifacts, or a combination thereof.
[0031] In some examples, during the detect process 208, the electronic device detects the artifact by analyzing the region of the image that includes the pair of eyeglasses. In various examples, the electronic device detects the artifact by analyzing a sub-region of the region, where the sub-region includes the eye landmark. To detect the artifact, the electronic device utilizes an image processing technique such as histogram calculation, image thresholding, edge detection, or a combination thereof. For example, the electronic device performs a histogram calculation for pixels of the region or sub-region. Responsive to a determination that an intensity of a pixel of the pixels differs from an intensity of a subset of pixels that are contiguous with the pixel by more than an intensity threshold, the electronic device determines that the pixel is an artifact. The electronic device determines that an intensity of another subset of pixels of the pixels that are contiguous with the pixel is within another intensity threshold of the intensity of the pixel and determines that the artifact includes the another subset of the pixels.
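The intensity-comparison step described in the preceding paragraph can be sketched in pure Python. This is an illustrative sketch only, not an implementation from the description: the function names, the 8-connected neighborhood, and the grid-of-lists grayscale representation are assumptions made for clarity.

```python
def neighbor_mean(pixels, r, c):
    """Mean intensity of the pixels contiguous with (r, c)."""
    rows, cols = len(pixels), len(pixels[0])
    vals = [pixels[i][j]
            for i in range(max(0, r - 1), min(rows, r + 2))
            for j in range(max(0, c - 1), min(cols, c + 2))
            if (i, j) != (r, c)]
    return sum(vals) / len(vals)

def detect_artifact_pixels(pixels, intensity_threshold):
    """Flag pixels whose intensity departs from their contiguous
    neighbors by more than the intensity threshold."""
    return [(r, c)
            for r in range(len(pixels))
            for c in range(len(pixels[0]))
            if abs(pixels[r][c] - neighbor_mean(pixels, r, c))
            > intensity_threshold]
```

For instance, a bright pixel surrounded by dark pixels is flagged, while uniform regions are not.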
[0032] In another example, the electronic device performs image thresholding to convert the region or the sub-region to a grayscale. The electronic device determines that the artifact is a pixel or multiple pixels of the region or the sub-region that have a highest tonal value (e.g., white). In yet another example, the electronic device performs edge detection on the image to detect an area of the region or the sub-region where a discontinuity of the image occurs. The discontinuity may include a boundary between objects of the image, where an object of the objects is the eye landmark and another object of the objects may be the artifact.
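The grayscale-conversion and highest-tonal-value selection described above might look as follows. This sketch is illustrative, not from the description: the ITU-R BT.601 luma weights are one common grayscale convention, and the function names are hypothetical.

```python
def to_grayscale(rgb):
    """Convert an (r, g, b) tuple to a tonal value using BT.601 luma
    weights (an assumed convention, not specified in the description)."""
    r, g, b = rgb
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def brightest_pixels(region):
    """Return coordinates of the pixels with the highest tonal value,
    which the description treats as candidate artifact pixels."""
    gray = [[to_grayscale(p) for p in row] for row in region]
    peak = max(max(row) for row in gray)
    return [(r, c) for r, row in enumerate(gray)
            for c, v in enumerate(row) if v == peak]
```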
[0033] As described above, during the decision point 210, the electronic device determines a severity of the artifact by analyzing, in relation to the eye landmark, a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact, or a combination thereof. For example, the electronic device compares the color of the artifact to a color of another object that is contiguous to the artifact. Responsive to a determination that the color of the artifact differs from the color of the another object by more than a color threshold, the electronic device determines that the artifact is severe. In another example, the electronic device compares the brightness of the artifact to a brightness of the another object. Responsive to a determination that the brightness of the artifact differs from the brightness of the another object by more than a brightness threshold, the electronic device determines that the artifact is severe. In yet another example, the electronic device determines that the size of the artifact exceeds a size threshold that indicates the artifact is severe. In another example, the electronic device determines a distance in pixels from the artifact to the eye landmark. Responsive to a determination that the distance is less than a distance threshold, the electronic device determines that the artifact is severe.
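The per-criterion severity checks enumerated above could be combined as in the following sketch. Every threshold value, field name, and the dictionary representation here is a hypothetical placeholder, since the description does not fix concrete numbers or data structures.

```python
def artifact_is_severe(artifact, neighbor, eye_center,
                       color_threshold=60, brightness_threshold=80,
                       size_threshold=100, distance_threshold=40):
    """An artifact is severe if any one criterion trips: color or
    brightness differs too much from a contiguous object, the size
    exceeds a threshold, or the artifact is too close to the eye."""
    color_diff = abs(artifact["color"] - neighbor["color"])
    brightness_diff = abs(artifact["brightness"] - neighbor["brightness"])
    dx = artifact["center"][0] - eye_center[0]
    dy = artifact["center"][1] - eye_center[1]
    distance = (dx * dx + dy * dy) ** 0.5  # distance in pixels
    return (color_diff > color_threshold
            or brightness_diff > brightness_threshold
            or artifact["size"] > size_threshold
            or distance < distance_threshold)
```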
[0034] In other examples, the electronic device calculates a weight based on the color of the artifact, the brightness of the artifact, the size of the artifact, the location of the artifact in relation to a center of the eye landmark, or a combination thereof. Responsive to a determination that the weight is below a weight threshold, the electronic device determines that the artifact is not severe. In some examples, responsive to a determination that the artifact is not severe, the electronic device causes the display device to display the video signal.
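One way to realize the weight calculation described above is a normalized weighted sum, as in this sketch. The normalization constants and the individual weights are invented for illustration; the description only states that the weight combines the listed factors and is compared against a weight threshold.

```python
def severity_weight(color_diff, brightness_diff, size, distance,
                    weights=(0.3, 0.3, 0.2, 0.2)):
    """Combine the four factors into a single weight in [0, 1].
    Each factor is normalized to [0, 1]; closeness of the artifact
    to the center of the eye landmark raises the weight."""
    wc, wb, ws, wd = weights
    return (wc * min(color_diff / 255, 1.0)
            + wb * min(brightness_diff / 255, 1.0)
            + ws * min(size / 400, 1.0)
            + wd * max(0.0, 1.0 - distance / 100))
```

A weight below the weight threshold would then indicate the artifact is not severe.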
[0035] Responsive to a determination that the artifact is severe at the decision point 210, the electronic device corrects the image utilizing image processing techniques. As described above, the image processing techniques remove the artifact, reduce a severity of the artifact, enhance an area of the image, or a combination thereof. The area of the image enhanced is defined by the artifact, the eye landmark, the pair of eyeglasses, or a combination thereof. The electronic device removes the artifact or reduces the severity of the artifact utilizing the techniques described below with respect to FIGS. 5 - 9. In some examples, to enhance the area of the image, the electronic device performs a tone mapping technique. The tone mapping technique adjusts a tonal value of a pixel so that the tonal values are between one and 255. The electronic device performs the tone mapping technique on the image or on the area of the image. For example, the electronic device enhances the area of the image defined by the eye landmark. In other examples, as described below with respect to FIGS. 7 - 9, the electronic device performs other image processing techniques to enhance the area of the image.
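A minimal tone-mapping sketch consistent with the one-to-255 range mentioned above follows. The linear rescaling is an assumption made for illustration, as the description does not specify the mapping curve, and the function name is hypothetical.

```python
def tone_map(pixels, lo=1, hi=255):
    """Linearly rescale tonal values of a pixel grid into [lo, hi]."""
    flat = [v for row in pixels for v in row]
    vmin, vmax = min(flat), max(flat)
    if vmax == vmin:
        # A uniform area has no contrast to stretch; clamp to the floor.
        return [[lo for _ in row] for row in pixels]
    scale = (hi - lo) / (vmax - vmin)
    return [[round(lo + (v - vmin) * scale) for v in row] for row in pixels]
```

Applied to a low-contrast area, the darkest pixel maps to 1 and the brightest to 255, enhancing visibility of the area defined by the eye landmark.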
[0036] During the decision point 214, the electronic device determines whether the corrected image includes another artifact introduced by the image processing technique. Responsive to a determination that the corrected image does include the another artifact, the electronic device returns to the decision point 210 to determine a severity of the another artifact, in some examples. Responsive to a determination that the severity of the another artifact does not exceed the severity threshold, the electronic device causes the display of the corrected image during the display process 216. Does not exceed, as used herein, indicates that a value is equivalent to a threshold or below the threshold. Responsive to a determination that the severity of the another artifact does exceed the severity threshold, the electronic device corrects the another artifact during the correct process 212. In some examples, responsive to a determination that the correction does not introduce yet another artifact, the electronic device causes the display device to display the corrected image.
[0037] While in the examples described above the electronic device performs the decision point 204 and the detect processes 206, 208 sequentially, in other examples, the electronic device performs the decision point 204 and the detect processes 206, 208 simultaneously. In other examples, the electronic device performs the detect processes 206, 208 simultaneously after the decision point 204.
[0038] Referring now to FIG. 3, a schematic diagram depicting an electronic device 300 for correcting artifacts in images is provided, in accordance with various examples. The electronic device 300 may be the electronic device 104. The electronic device 300 includes a processor 302, an image sensor 304, a network interface 306, a display device 308, and a storage device 310. The processor 302 is a microprocessor, a microcomputer, a microcontroller, or another suitable processor or controller for managing operations of the electronic device 300. The processor 302 is a central processing unit (CPU), graphics processing unit (GPU), system on a chip (SoC), image signal processor (ISP), or a field programmable gate array (FPGA), for example. The image sensor 304 may be the image sensor 110. The network interface 306 enables communication over a network. The network interface 306 may include a wired connection (e.g., Ethernet, USB) or a wireless connection (e.g., WI-FI®, BLUETOOTH®). The display device 308 may be the display device 108. The storage device 310 may include a hard drive, solid state drive (SSD), flash memory, random access memory (RAM), or other suitable memory for storing data or executable code of the electronic device 300.
[0039] In some examples, the processor 302 couples to the image sensor 304, the network interface 306, the display device 308, and the storage device 310. The storage device 310 stores machine-readable instructions which, when executed by the processor 302, cause the processor 302 to perform some or all of the actions attributed herein to the processor 302. The machine-readable instructions are the machine-readable instructions 312, 314, 316, 318.
[0040] In various examples, the machine-readable instructions 312, 314, 316, 318, when executed by the processor 302, cause the processor 302 to correct artifacts of images. The machine-readable instruction 312, when executed by the processor 302, causes the processor 302 to detect a pair of eyeglasses in an image received via the image sensor 304. The machine-readable instruction 314, when executed by the processor 302, causes the processor 302 to identify an artifact in the image. Responsive to the artifact satisfying a criterion, the machine-readable instruction 316, when executed by the processor 302, causes the processor 302 to generate a corrected image. The corrected image includes a mitigated appearance of the artifact. The mitigated appearance, as used herein, includes a reduction in a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact in relation to a center of a lens of the pair of eyeglasses, or a combination thereof, a removal of the artifact from the image, an enhancement of objects located behind the pair of eyeglasses in the image, or a combination thereof. The machine-readable instruction 318, when executed by the processor 302, causes the processor 302 to cause the display device 308 to display the corrected image, the network interface 306 to transmit the corrected image, or a combination thereof.
[0041] As described above with respect to FIG. 1 , in some examples, to determine that the pair of eyeglasses is in the image, the machine-readable instruction 312, when executed by the processor 302, causes the processor 302 to detect a user in the image and determine that the user wears the pair of eyeglasses. In various examples, as described above with respect to FIG. 2, to determine that the pair of eyeglasses is in the image, the machine-readable instruction 312, when executed by the processor 302, causes the processor 302 to analyze the image utilizing a computer vision technique, a machine learning technique, or a combination thereof. For example, the processor 302 analyzes the image utilizing a computer vision technique to identify a feature of the pair of eyeglasses, to classify the feature, to compare the feature to multiple templates, or a combination thereof.
[0042] As described above with respect to FIG. 2, the machine-readable instruction 314, when executed by the processor 302, causes the processor 302 to analyze a region of the image that includes the pair of eyeglasses to identify the artifact. The processor 302 determines that the region includes the artifact by determining that an eye landmark is obscured, for example. In some examples, to identify that the artifact satisfies the criterion, a machine-readable instruction (not explicitly shown), when executed by the processor 302, causes the processor 302 to analyze a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact in relation to a center of a lens of the pair of eyeglasses, or a combination thereof, as described above with respect to FIG. 2.
[0043] To generate the corrected image, the processor 302 utilizes image processing techniques to remove the artifact, reduce a severity of the artifact, enhance an area of the image, or a combination thereof. The processor 302 may remove the artifact, reduce a severity of the artifact, or a combination thereof, as described below with respect to FIGS. 5 - 9. The processor 302 may enhance the area of the image utilizing tone mapping, as described above with respect to FIG. 2, or utilizing other image processing techniques as described below with respect to FIGS. 7 - 9.
[0044] Referring now to FIGS. 4A and 4B, examples of an electronic device (e.g., the electronic device 104, 300) correcting artifacts 412, 414 in images 400, 416 are provided, in accordance with various examples. FIG. 4A shows the image 400. The image 400 includes a pair of eyeglasses 402, an eyebrow 404, an outer corner of an eye 406, an iris 408, a pupil 410, and artifacts 412, 414. The eyebrow 404, the eye 406, the iris 408, the pupil 410, or a combination thereof are referred to as eye landmarks herein. FIG. 4B shows the image 416. The image 416 includes a pair of eyeglasses 418, an eyebrow 420, an outer corner of an eye 422, an iris 424, and a pupil 426.
[0045] In some examples, the image 400 is an image before processing by the electronic device and the image 416 is the image after processing by the electronic device. The pair of eyeglasses 402 may be the pair of eyeglasses 418. The eyebrow 404 may be the eyebrow 420. The outer corner of the eye 406 may be the outer corner of the eye 422. Before processing by the electronic device, the image 400 includes the artifacts 412, 414. After processing by the electronic device, the artifacts 412, 414 are removed, as illustrated by the image 416.
[0046] For example, a processor (e.g., the processor 302) of the electronic device detects the pair of eyeglasses 402 in the image 400 utilizing a computer vision technique, a machine learning technique, or a combination thereof. Utilizing an eye detection technique, the processor detects the eyebrow 404, the outer corner of the eye 406, the iris 408, and the pupil 410. The processor detects the artifact 412 by determining that a pupil is obscured and detects the artifact 414 by determining that an inner corner of an eye is obscured. The processor determines that a severity of the artifacts 412, 414, respectively, exceeds a severity threshold. Responsive to the determination that the severity of the artifacts 412, 414, respectively, exceeds the severity threshold, the processor corrects the image 400 to generate the image 416. The image 416 shows the artifacts 412, 414 removed and an enhancement of the eyes located behind the pair of eyeglasses 418. For example, the iris 424 and the pupil 426 are fully visible in the image 416.
[0047] By correcting the artifacts 412, 414 in the video signal and transmitting the corrected video signal, the electronic device enhances a visibility of the eyes of the user without the user taking corrective actions. Additionally, the user and audience experiences are enhanced by removing awkwardness that occurs while the user takes the corrective action and by removing the perceived barrier that blocks the appearance of eye-to-eye contact. Automatically correcting the artifacts and enhancing the user experience reduces non-productive time of the user, thereby enhancing user productivity.
[0048] Referring now to FIG. 5, a flow diagram depicting a method 500 for an electronic device (e.g., the electronic device 104, 300) to correct artifacts (e.g., the artifacts 412, 414) in images (e.g., the image 400, 416) is provided, in accordance with various examples. The method 500 includes a start point 502 during which the electronic device starts processing an image. During a receive process 504 of the method 500, the electronic device receives an image in real-time. Real-time, as used herein, is a time at which the image is captured by an image sensor (e.g., the image sensor 110, 304). During a decision point 506 of the method 500, the electronic device determines whether the image includes a pair of eyeglasses (e.g., the pair of eyeglasses 402, 418). Responsive to a determination that the image does not include the pair of eyeglasses, the electronic device returns to the receive process 504 to receive another image. Responsive to a determination that the image includes the pair of eyeglasses, the electronic device analyzes the image to detect an eye landmark (e.g., the eyebrow 404, 420, the outer corner of the eye 406, 422, the iris 408, 424, the pupil 410, 426) during a detect process 508 of the method 500 and to detect an artifact during a detect process 510 of the method 500. During a decision point 512 of the method 500, the electronic device determines whether the artifact overlaps the eye landmark. Responsive to a determination that the artifact overlaps the eye landmark, the electronic device determines whether the overlap is severe at a decision point 514 of the method 500. Responsive to a determination that the overlap is severe, the electronic device determines whether the artifact is a reflection at a decision point 516 of the method 500 and determines whether the artifact is a glare at a decision point 520. 
Responsive to a determination that the artifact is a reflection, the electronic device corrects the reflection at a correct process 518 of the method 500. Responsive to a determination that the artifact is a glare, the electronic device corrects the glare at a correct process 522 of the method 500. At a decision point 524 of the method 500, the electronic device determines whether a quality of the corrected image is satisfactory. Responsive to a determination that the quality of the corrected image is satisfactory, the electronic device causes a display of the corrected image during a display process 526 of the method 500.
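The decision flow of the method 500 can be sketched in code. The sketch below is illustrative only: the callables passed in (detectors, classifiers, and correctors) are hypothetical stand-ins for the processes and decision points described above, not functions disclosed in this description.

```python
def process_frame(image, detect_glasses, detect_landmark, detect_artifact,
                  overlaps, is_severe, is_reflection, is_glare,
                  correct_reflection, correct_glare, quality_ok):
    """Sketch of the method-500 control flow. Every callable is a
    hypothetical stand-in for a process or decision point of FIG. 5."""
    if not detect_glasses(image):
        return image  # decision point 506: no eyeglasses, pass frame through
    landmark = detect_landmark(image)   # detect process 508
    artifact = detect_artifact(image)   # detect process 510
    if artifact is None or not overlaps(artifact, landmark):
        return image  # decision point 512
    if not is_severe(artifact, landmark):
        return image  # decision point 514
    corrected = image
    if is_reflection(artifact):         # decision point 516
        corrected = correct_reflection(corrected)  # correct process 518
    if is_glare(artifact):              # decision point 520
        corrected = correct_glare(corrected)       # correct process 522
    if quality_ok(corrected):           # decision point 524
        return corrected                # display process 526
    return image
```

A frame with a severe glare, for example, passes through the glare branch only and is returned corrected when the quality check succeeds.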
[0049] In various examples, during the detect process 508, the electronic device may not be able to detect the eye landmark. Utilizing a computer vision technique, a machine learning technique, or a combination thereof, the electronic device may determine that the pair of eyeglasses has tinted lenses and blocks a view of the eye landmark. Responsive to a determination that the pair of eyeglasses has tinted lenses, the electronic device causes the display of the image and returns to the receive process 504 to receive another image. In other examples, responsive to the electronic device not locating the eye landmark, the electronic device determines a location of the eye landmark based on a location of a feature of the pair of eyeglasses. For example, the electronic device determines a location of a pupil based on a location of a frame of the pair of eyeglasses. In various examples, at the decision point 514, the electronic device determines that the artifact detected during the detect process 510 obscures the unlocated eye landmark and that the overlap is severe based on the inability to locate the eye landmark.
[0050] In other examples, during the detect process 510, the electronic device may not be able to detect the artifact. The electronic device causes the display of the image and returns to the receive process 504 to receive another image. In some examples, responsive to a determination that the artifact does not overlap the eye landmark at the decision point 512, the electronic device displays the image and returns to the receive process 504 to receive another image. In various examples, responsive to a determination that the overlap is not severe at the decision point 514, the electronic device displays the image and returns to the receive process 504 to receive another image.
[0051] As described above with respect to FIG. 2, during the decision point 514, the electronic device determines a severity of the overlap by analyzing, in relation to the eye landmark, a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact, or a combination thereof. The electronic device may calculate a weight based on the color of the artifact, the brightness of the artifact, the size of the artifact, the location of the artifact, or a combination thereof. Responsive to a determination that the weight is below a weight threshold, the electronic device determines that the overlap is not severe. Responsive to the severity exceeding an overlap threshold, the electronic device determines that the overlap is severe. In various examples, responsive to a determination that the overlap is severe at the decision point 514, the electronic device utilizes a two- or three-layer machine learning technique to determine whether an artifact is a reflection or a glare. The machine learning technique may implement a support-vector machine (SVM), a logistic regression, or a combination thereof to classify the artifact. The SVM is a supervised machine learning technique that analyzes training sets to determine classifications and utilizes regression analysis to determine a class of the artifact. Logistic regression is a statistical technique that predicts a likelihood that the artifact belongs to a first class or to a second class. In other examples, responsive to a determination that the overlap is severe at the decision point 514, the electronic device calculates a weight to determine whether the artifact is a reflection or a glare. The weight may be based on the color of the artifact, the brightness of the artifact, the size of the artifact, the location of the artifact, or a combination thereof.
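The weighted severity check of the decision point 514 can be sketched as follows. The individual weights, the assumption that each factor is pre-normalized to the range [0, 1], and the threshold value are all illustrative choices, not values given in this description.

```python
def overlap_severity(color_dist, brightness, size_frac, center_dist,
                     weights=(0.25, 0.25, 0.25, 0.25)):
    """Hypothetical weighted severity score in [0, 1]. Each input factor
    (color distance to the landmark, brightness, size fraction, and
    distance from the landmark center) is assumed pre-normalized to
    [0, 1]; the equal weights are an illustrative assumption."""
    # Artifacts closer to the landmark center are treated as more severe.
    factors = (color_dist, brightness, size_frac, 1.0 - center_dist)
    return sum(w * f for w, f in zip(weights, factors))

def is_severe(score, threshold=0.5):
    # The overlap is treated as severe when the score exceeds the threshold.
    return score > threshold
```

A small, dim artifact far from the landmark center scores low and is passed through uncorrected; a bright artifact centered on the pupil scores high and triggers correction.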
[0052] In some examples, the electronic device performs the decision points 516, 520 sequentially. For example, the electronic device determines whether the artifact is a reflection at the decision point 516. Responsive to a determination that the artifact is not a reflection, the electronic device determines whether the artifact is a glare at the decision point 520. Responsive to a determination that the artifact is a glare, the electronic device corrects the glare at the correct process 522. The electronic device may correct the glare utilizing the image processing technique described below with respect to FIG. 8. In another example, responsive to a determination that the artifact is a reflection, the electronic device corrects the reflection at the correct process 518 and determines whether the quality of the corrected image is satisfactory. The electronic device corrects the reflection utilizing the image processing technique described below with respect to FIG. 7, for example. Responsive to a determination that the quality of the corrected image is satisfactory, the electronic device determines whether the corrected image includes a portion of the artifact. Responsive to a determination that the corrected image includes the portion of the artifact, the electronic device determines whether the portion of the artifact is a glare at the decision point 520.
[0053] The electronic device determines whether the artifact is a reflection or a glare by analyzing the image to determine an angle, an intensity, or a combination thereof of a light scattered by the artifact. For example, responsive to a determination that the angle is greater than a threshold angle, the intensity is greater than a threshold intensity, or a combination thereof, the electronic device determines that the artifact is a reflection. In another example, responsive to a determination that the angle is less than or equivalent to the threshold angle, the intensity is less than or equivalent to the threshold intensity, or a combination thereof, the electronic device determines that the artifact is a glare. In other examples, the electronic device determines whether the artifact is a reflection or a glare by analyzing a contrast of the artifact to another object within a region of the pair of eyeglasses such as an eye landmark, a feature of the pair of eyeglasses, a skin tone, or a combination thereof.
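The angle/intensity rule for distinguishing a reflection from a glare can be sketched as a simple threshold test. The threshold values below are illustrative assumptions; the description does not specify them.

```python
def classify_artifact(angle_deg, intensity, angle_threshold=45.0,
                      intensity_threshold=0.7):
    """Sketch of the reflection-versus-glare rule: an artifact whose
    scattered light exceeds the angle threshold or the intensity
    threshold is treated as a reflection; otherwise it is treated as a
    glare. Both threshold values are illustrative assumptions."""
    if angle_deg > angle_threshold or intensity > intensity_threshold:
        return "reflection"
    return "glare"
```

In practice the electronic device might combine this rule with the contrast-based analysis described in the same paragraph; the sketch shows only the threshold comparison.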
[0054] In some examples, prior to determining whether the quality of the corrected image is satisfactory at the decision point 524, the electronic device fuses a first corrected image including the reflection corrected during the correct process 518 and a second corrected image including the glare corrected during the correct process 522. For example, responsive to a determination that the image includes a reflection and a glare, the electronic device performs the correct processes 518, 522 concurrently. The electronic device detects an eye landmark in the image corrected for the reflection and the eye landmark in the image corrected for the glare. The electronic device aligns the eye landmark in the corrected images and fuses the aligned images. In various examples, the electronic device fuses a portion of the aligned images. For example, the electronic device fuses a region of the aligned images defined by the pair of eyeglasses. In some examples, the electronic device determines whether the fused image includes another artifact, as described above with respect to FIG. 2.
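The align-then-fuse step can be sketched with a translational alignment followed by pixel averaging. This is a minimal sketch, assuming grayscale images represented as lists of rows and landmarks given as (row, column) coordinates; a real implementation would use a proper image library and a more robust alignment.

```python
def fuse_corrected(img_a, img_b, landmark_a, landmark_b):
    """Minimal fusion sketch: shift img_b so its eye landmark coincides
    with img_a's landmark, then average the two images pixel by pixel.
    Pixels shifted in from outside the frame are treated as 0.0."""
    dr = landmark_a[0] - landmark_b[0]  # row offset between landmarks
    dc = landmark_a[1] - landmark_b[1]  # column offset between landmarks
    h, w = len(img_a), len(img_a[0])
    fused = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            src_r, src_c = r - dr, c - dc
            in_bounds = 0 <= src_r < h and 0 <= src_c < w
            b_val = img_b[src_r][src_c] if in_bounds else 0.0
            fused[r][c] = (img_a[r][c] + b_val) / 2.0
    return fused
```

Fusing only the region defined by the pair of eyeglasses, as the paragraph describes, would restrict the same loop to that region's bounding box.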
[0055] In various examples, to determine whether the image quality is satisfactory at the decision point 524, the electronic device determines whether colors of the image satisfy a criterion, whether a noise ratio of the image satisfies a criterion, whether a range of brightness of the image satisfies a criterion, whether a range of contrast of the image satisfies a criterion, or a combination thereof. The criteria are settings of an executable code, for example. In various examples, the colors of the image satisfy the criterion responsive to the colors of the image having values within lower and upper color settings. In some examples, the noise ratio of the image satisfies the criterion responsive to the noise ratio having a value that does not exceed a noise setting. In other examples, the range of brightness of the image satisfies the criterion responsive to the range of brightness having values within lower and upper brightness settings. In various examples, the range of contrast of the image satisfies the criterion responsive to the range of contrast having values within lower and upper contrast settings.
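The quality check of the decision point 524 reduces to range tests against stored settings. The sketch below is illustrative; the dictionary keys and the particular setting values are assumptions, not names from the description.

```python
def quality_satisfactory(stats, settings):
    """Sketch of the decision-point-524 quality check: colors, noise
    ratio, brightness, and contrast must each fall within their
    configured settings. Both dictionaries are hypothetical; the key
    names are illustrative assumptions."""
    return (settings["color_lo"] <= stats["color"] <= settings["color_hi"]
            and stats["noise_ratio"] <= settings["noise_max"]
            and settings["bright_lo"] <= stats["brightness"] <= settings["bright_hi"]
            and settings["contrast_lo"] <= stats["contrast"] <= settings["contrast_hi"])
```

A corrected image whose noise ratio exceeds the noise setting fails the check even when its color, brightness, and contrast are all in range.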
[0056] Referring now to FIG. 6, a schematic diagram depicting an electronic device 600 for correcting artifacts (e.g., the artifacts 412, 414) in images (e.g., the images 400, 416) is provided, in accordance with various examples. The electronic device 600 may be the electronic device 104, 300. The electronic device 600 includes a processor 602, an image sensor 604, a network interface 606, a display device 608, and a storage device 610. The processor 602 may be the processor 302. The image sensor 604 may be the image sensor 110, 304. The network interface 606 may be the network interface 306. The display device 608 may be the display device 108, 308. The storage device 610 may be the storage device 310.
[0057] In some examples, the processor 602 couples to the image sensor 604, the network interface 606, the display device 608, and the storage device 610. The storage device 610 stores machine-readable instructions which, when executed by the processor 602, cause the processor 602 to perform some or all of the actions attributed herein to the processor 602. The machine-readable instructions are the machine-readable instructions 612, 614, 616, 618.
[0058] In various examples, the machine-readable instructions 612, 614, 616, 618, when executed by the processor 602, cause the processor 602 to correct artifacts of images. The machine-readable instruction 612, when executed by the processor 602, causes the processor 602 to detect a pair of eyeglasses (e.g., the pair of eyeglasses 402, 418) in an image received via the image sensor 604. The machine-readable instruction 614, when executed by the processor 602, causes the processor 602 to identify an eye landmark (e.g., the eyebrow 404, 420, the outer corner of the eye 406, 422, the iris 408, 424, the pupil 410, 426) and an artifact in the image. Responsive to the artifact overlapping the eye landmark, the machine-readable instruction 616, when executed by the processor 602, causes the processor 602 to generate a corrected image. The machine-readable instruction 618, when executed by the processor 602, causes the processor 602 to cause the display device 608 to display the corrected image, the network interface 606 to transmit the corrected image, or a combination thereof.
[0059] As described above with respect to FIGS. 1 and 3, in some examples, to determine that the pair of eyeglasses is in the image, the machine-readable instruction 612, when executed by the processor 602, causes the processor 602 to detect a user in the image and determine that the user wears the pair of eyeglasses. In various examples, as described above with respect to FIGS. 2 and 3, to determine that the pair of eyeglasses is in the image, the machine-readable instruction 612, when executed by the processor 602, causes the processor 602 to analyze the image utilizing a computer vision technique, a machine learning technique, or a combination thereof. In some examples, to determine that the artifact overlaps the eye landmark, a machine-readable instruction (not explicitly shown), when executed by the processor 602, causes the processor 602 to analyze, in relation to the eye landmark, a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact, or a combination thereof, as described above with respect to FIGS. 2 and 3.
[0060] In some examples, a machine-readable instruction (not explicitly shown), when executed by the processor 602, causes the processor 602 to determine a severity of the overlap by analyzing, in relation to the eye landmark, a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact, or a combination thereof. Responsive to a determination that the severity of the overlap exceeds an overlap threshold, the processor 602 is to generate the corrected image.
[0061] To generate the corrected image, the machine-readable instruction 616, when executed by the processor 602, causes the processor 602 to utilize image processing techniques to remove the artifact, reduce a severity of the artifact, enhance an area of the image, or a combination thereof. In some examples, to reduce the severity of the artifact, the processor 602 reduces a color of the artifact, a brightness of the artifact, a size of the artifact, or a combination thereof. In other examples, the processor 602 removes the artifact, reduces the severity of the artifact, or a combination thereof, utilizing the image processing techniques described above with respect to FIG. 5 or below with respect to FIGS. 7 - 9. The processor 602 enhances the area of the image utilizing tone mapping, as described above with respect to FIG. 2, or utilizing other image processing techniques as described below with respect to FIGS. 7 - 9.
[0062] Referring now to FIG. 7, a flow diagram depicting a method 700 for an electronic device (e.g., the electronic device 104, 300, 600) to correct artifacts (e.g., the artifacts 412, 414) in images (e.g., the images 400, 416) is provided, in accordance with various examples. The electronic device performs the method 700 to mitigate a reflection within an image, for example. At a start point 702 of the method 700, the electronic device receives the image. The electronic device may receive the image from an image sensor (e.g., the image sensor 110, 304, 604). The electronic device stores the image during a store process 704 of the method 700. The electronic device stores the image to a storage device (e.g., the storage device 310, 610). During an isolate process 706 of the method 700, the electronic device uses a neural network to isolate the reflection. The electronic device generates a corrected image during a generate process 708 of the method 700.
[0063] In various examples, during the isolate process 706, the electronic device utilizes a neural network to decompose the image to separate a transmission layer and a reflection layer. The neural network may implement ReflectNet, a Siamese Dense Network (SDN), or a combination thereof to decompose the image. The transmission layer includes objects hidden by the reflection and the reflection layer includes objects reflected by the reflection. The electronic device generates the corrected image utilizing the transmission layer during the generate process 708. In some examples, the neural network is a neural network trained to reduce a loss function between the transmission layer and the reflection layer so that a noise of the corrected image is below a noise threshold.
[0064] Referring now to FIG. 8, a flow diagram depicting a method 800 for an electronic device (e.g., the electronic device 104, 300, 600) to correct artifacts in images (e.g., the images 400, 416) is provided, in accordance with various examples. The electronic device performs the method 800 to mitigate a glare within an image, for example. At a start point 802 of the method 800, the electronic device receives the image. The electronic device may receive the image from an image sensor (e.g., the image sensor 110, 304, 604). The electronic device stores the image during a store process 804 of the method 800. The electronic device stores the image to a storage device (e.g., the storage device 310, 610). During an adjust process 806 of the method 800, the electronic device may lower an exposure of the image sensor. During a fuse process 808 of the method 800, the electronic device fuses another image captured by the image sensor utilizing the lower exposure and the stored image. The electronic device determines whether the glare is reduced in the fused image during a decision point 810 of the method 800.
Responsive to a determination that the glare is reduced, the electronic device enhances the fused image during an enhance process 812 of the method 800. During a generate process 814 of the method 800, the electronic device utilizes the enhanced fused image as a corrected image.
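The exposure-bracketing loop of the method 800 can be sketched as follows. All of the callables are hypothetical stand-ins for the capture, fuse, glare-check, and enhance steps described above, and the exposure step values are illustrative assumptions.

```python
def mitigate_glare(capture, stored_image, fuse, glare_reduced, enhance,
                   exposure_steps=(0.5, 0.25)):
    """Sketch of method 800: re-capture the scene at progressively lower
    exposures, fuse each capture with the stored frame, and stop once
    the glare check passes. Every callable is a hypothetical stand-in
    for a process or decision point of FIG. 8."""
    for exposure in exposure_steps:        # adjust process 806
        low_exposure_image = capture(exposure)
        fused = fuse(stored_image, low_exposure_image)  # fuse process 808
        if glare_reduced(fused):           # decision point 810
            return enhance(fused)          # enhance process 812 / generate 814
    return stored_image  # fall back to the original frame
```

Repeating the method on a glare that is not reduced, as paragraph [0066] describes, corresponds to the loop trying the next, lower exposure step.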
[0065] In some examples, during the fuse process 808, the electronic device detects an eye landmark in the image captured by the image sensor utilizing the lower exposure. The electronic device aligns the eye landmark with the eye landmark of the image stored during the store process 804. The electronic device fuses the aligned images. In various examples, the electronic device fuses a portion of the aligned images. For example, the electronic device fuses a region of the aligned images defined by the pair of eyeglasses. After fusing the images to generate a fused image, the electronic device utilizes a neural network, as described above with respect to FIG. 7, to remove a reflection layer of the fused image.
[0066] In various examples, during the decision point 810, the electronic device calculates a weight based on the color of the artifact, the brightness of the artifact, the size of the artifact, the location of the artifact in relation to a center of the eye landmark, or a combination thereof. Responsive to a determination that the weight is below a weight threshold, the electronic device determines that the glare is reduced. Responsive to a determination that the glare is not reduced, the electronic device may repeat the method 800.
[0067] Referring now to FIG. 9, a schematic diagram depicting an electronic device 900 for correcting artifacts (e.g., the artifacts 412, 414) in images (e.g., the images 400, 416) is provided, in accordance with various examples. The electronic device 900 may be the electronic device 104, 300, 600. The electronic device 900 comprises a processor 902 and a non-transitory machine-readable medium 904. The processor 902 may be the processor 302, 602. The non-transitory machine-readable medium 904 may be the storage device 310, 610. The term “non-transitory” does not encompass transitory propagating signals.
[0068] In various examples, the electronic device 900 comprises the processor 902 coupled to the non-transitory machine-readable medium 904. The non-transitory machine-readable medium 904 stores machine-readable instructions. The machine-readable instructions are the machine-readable instructions 906, 908, 910, 912, 914. The machine-readable instructions 906, 908, 910, 912, 914, when executed by the processor 902, cause the processor 902 to perform some or all of the actions attributed herein to the processor 902.
[0069] In various examples, when executed by the processor 902, the machine-readable instructions 906, 908, 910, 912, 914 cause the processor 902 to correct artifacts in images. The machine-readable instruction 906 causes the processor 902 to monitor a video signal for an image that includes a pair of eyeglasses. The video signal may be received via an image sensor (e.g., the image sensor 110, 304, 604). The machine-readable instruction 908 causes the processor 902 to identify an artifact in the image that includes a pair of eyeglasses (e.g., the pair of eyeglasses 402, 418). The machine-readable instruction 910 causes the processor 902 to determine a type of the artifact. Based on the type of the artifact, the machine-readable instruction 912 causes the processor 902 to generate a corrected image. The machine-readable instruction 914 causes the processor 902 to cause a display device (e.g., the display device 108, 308, 608) to display the corrected image, a network interface (e.g., the network interface 306, 606) to transmit the corrected image, or a combination thereof.
[0070] During the monitor process, the processor 902 may store a number of images of the video signal for processing. The number of images that the processor 902 stores may be a multiplier of a refresh rate of the display device. For example, responsive to the refresh rate of 60 Hertz (Hz), the processor 902 stores 20 to 30 images for processing. By storing the number of images that is a multiplier of the refresh rate of the display device, the processor 902 reduces a delay of the display, the transmission, or a combination thereof of the number of images to a rate that is below what a user perceives. For example, to process the 20 to 30 images takes less than half of a second.
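The refresh-rate-based buffer sizing of paragraph [0070] amounts to simple arithmetic, sketched below. The multiplier value is an assumption chosen to be consistent with the 60 Hz / 20-to-30-frame example in the description.

```python
def buffer_size(refresh_rate_hz, multiplier=0.5):
    """Illustrative sketch: size the frame buffer as a fraction of the
    display refresh rate so that processing the buffered frames stays
    under roughly half a second. The default multiplier is an
    assumption, not a value from the description."""
    return int(refresh_rate_hz * multiplier)
```

At 60 Hz a multiplier between roughly 0.33 and 0.5 yields the 20 to 30 stored frames the description mentions.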
[0071] In some examples, when executed by the processor 902, the machine- readable instruction 906 causes the processor 902 to, utilizing a machine learning technique, monitor the video signal for the image that includes the pair of eyeglasses. The machine learning technique may be a machine learning technique described above with respect to FIGS. 1 , 3, and 6. In various examples, to determine that the image includes the artifact, the machine-readable instruction 908, when executed by the processor 902, causes the processor 902 to utilize an image processing technique, a machine learning technique, or a combination thereof, as described above with respect to FIGS. 2 and 3.
[0072] In various examples, in response to a determination that the type of the artifact is indicative of a reflection, the machine-readable instruction 912, when executed by the processor 902, causes the processor 902 to generate a corrected image utilizing a second machine learning technique. The processor 902 utilizes the technique described above with respect to FIG. 7, for example. In response to a determination that the type of the artifact is indicative of a glare, the machine-readable instruction 912, when executed by the processor 902, causes the processor 902 to generate the corrected image utilizing an image processing technique. The processor 902 utilizes the techniques described above with respect to FIG. 8, for example.
[0073] In some examples, a machine-readable instruction (not explicitly shown), when executed by the processor 902, causes the processor 902 to cause a display device to display the corrected image, a network interface to transmit the corrected image, or a combination thereof responsive to an image quality of the corrected image exceeding a quality threshold. The processor 902 determines the image quality utilizing the techniques described above with respect to FIG. 5, for example. The processor 902 determines whether colors of the corrected image satisfy a criterion, whether a noise ratio of the corrected image satisfies a criterion, whether a range of brightness of the corrected image satisfies a criterion, whether a range of contrast of the corrected image satisfies a criterion, or a combination thereof.
[0074] In various examples, the method 200, 500, 700, 800 is implemented by machine-readable instructions stored to a storage device (e.g., the storage device 310, 610, the non-transitory machine-readable medium 904) of an electronic device (e.g., the electronic device 104, 300, 600, 900). A processor (e.g., the processor 302, 602, 902) of the electronic device executes the machine-readable instructions to perform the method 200, 500, 700, 800, for example. A process, as used herein, refers to operations performed by execution of machine-readable instructions by the processor. A decision point, as used herein, refers to operations performed by execution of machine-readable instructions by the processor. Unless infeasible, some or all of the blocks (e.g., process, decision point) of the method 200, 500, 700, 800 may be performed concurrently or in different sequences. For example, the processor performs a block that occurs responsive to a command sequential to the block describing the command. In another example, the processor performs a block that depends upon a state of a component after the state of the component is enabled.
[0075] In various examples describing thresholds and settings, initial values for the thresholds and settings are determined during a manufacture process. As described above, an executable code may provide a GUI to enable a user of an electronic device (e.g., the electronic device 104, 300, 600, 900) to adjust the thresholds and settings. The thresholds and settings may be stored to a storage device (e.g., the storage device 310, 610, the non-transitory machine-readable medium 904) of the electronic device.
[0076] The above description is meant to be illustrative of the principles and various examples of the present description. Numerous variations and modifications become apparent to those skilled in the art once the above description is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
[0077] In the figures, certain features and components disclosed herein may be shown in exaggerated scale or in somewhat schematic form, and some details of certain elements may not be shown in the interest of clarity and conciseness. In some of the figures, in order to improve clarity and conciseness, a component or an aspect of a component may be omitted.
[0078] In the above description and in the claims, the term “comprising” is used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to... .” Also, the term “couple” or “couples” is intended to be broad enough to encompass both direct and indirect connections. Thus, if a first device couples to a second device, that connection may be through a direct connection or through an indirect connection via other devices, components, and connections. Additionally, the word “or” is used in an inclusive manner. For example, “A or B” means any of the following: “A” alone, “B” alone, or both “A” and “B.”

Claims

What is claimed is:
1. An electronic device, comprising:
an image sensor; and
a processor to:
determine that a pair of eyeglasses is in an image received via the image sensor;
in response to the determination, identify an artifact in the image;
in response to identifying that the artifact satisfies a criterion, generate a corrected image, the corrected image comprising a mitigated appearance of the artifact; and
cause a display device to display the corrected image, a network interface to transmit the corrected image, or a combination thereof.
2. The electronic device of claim 1 , wherein to determine that the pair of eyeglasses is in the image, the processor is to: detect a user in the image; and determine that the user wears the pair of eyeglasses.
3. The electronic device of claim 1 , wherein to determine that the pair of eyeglasses is in the image, the processor is to analyze the image utilizing a computer vision technique to identify a feature of the pair of eyeglasses, to classify the feature, to compare the feature to multiple templates, or a combination thereof.
4. The electronic device of claim 1, wherein to identify the artifact in the image, the processor is to analyze a region of the image that includes the pair of eyeglasses.
5. The electronic device of claim 1, wherein to identify that the artifact satisfies the criterion, the processor is to analyze a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact in relation to a center of a lens of the pair of eyeglasses, or a combination thereof.
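The criterion of claim 5 weighs the artifact's brightness, size, and position relative to the lens center. A minimal sketch, with all names and threshold values assumed rather than taken from the patent:

```python
# Hedged sketch of the claim-5 criterion: an artifact is flagged when it is
# bright, large, or near the lens center. All thresholds are hypothetical.

def artifact_satisfies_criterion(artifact, lens_center,
                                 brightness_min=200,   # 0-255 scale, assumed
                                 area_min=50,          # pixels, assumed
                                 center_radius=40):    # pixels, assumed
    """Return True if the artifact is severe enough to warrant correction."""
    bright = artifact["mean_brightness"] >= brightness_min
    large = artifact["area_px"] >= area_min
    dx = artifact["centroid"][0] - lens_center[0]
    dy = artifact["centroid"][1] - lens_center[1]
    near_center = (dx * dx + dy * dy) ** 0.5 <= center_radius
    # Any single factor (or a combination) can satisfy the criterion,
    # mirroring the claim's "or a combination thereof" language.
    return bright or large or near_center

glare = {"mean_brightness": 240, "area_px": 120, "centroid": (105, 98)}
print(artifact_satisfies_criterion(glare, lens_center=(100, 100)))  # True
```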
6. An electronic device, comprising:
    an image sensor; and
    a processor to:
        determine that a pair of eyeglasses is in an image received via the image sensor;
        in response to the determination, identify an eye landmark and an artifact in the image;
        in response to identifying that the artifact overlaps the eye landmark, generate a corrected image, the corrected image comprising a mitigated appearance of the artifact; and
        cause a display device to display the corrected image, a network interface to transmit the corrected image, or a combination thereof.
7. The electronic device of claim 6, wherein the eye landmark includes an eyebrow, an upper eye lid, a lower eye lid, an iris, a pupil, an inside corner of the eye, an outside corner of the eye, or a combination thereof.
8. The electronic device of claim 6, wherein to identify that the artifact overlaps the eye landmark, the processor is to determine a severity of the overlap by analyzing, in relation to the eye landmark, a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact, or a combination thereof.
9. The electronic device of claim 8, wherein the processor is to generate the corrected image responsive to a determination that the severity of the overlap exceeds a threshold.
10. The electronic device of claim 6, wherein to generate the corrected image, the processor is to reduce a severity of the artifact, remove the artifact from the image, enhance the eye landmark, or a combination thereof.
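Claims 8 and 9 describe scoring the severity of the artifact's overlap with an eye landmark and correcting only when that score exceeds a threshold. The weighting below is purely illustrative; the patent does not define a specific severity formula:

```python
def overlap_severity(artifact, eye_landmark_bbox):
    """Combine landmark coverage and artifact brightness into a severity
    score in [0, 1]. Weights (0.6 / 0.4) are assumptions, not patent text."""
    ax0, ay0, ax1, ay1 = artifact["bbox"]
    ex0, ey0, ex1, ey1 = eye_landmark_bbox
    # Fraction of the eye landmark covered by the artifact.
    ix = max(0, min(ax1, ex1) - max(ax0, ex0))
    iy = max(0, min(ay1, ey1) - max(ay0, ey0))
    landmark_area = (ex1 - ex0) * (ey1 - ey0)
    coverage = (ix * iy) / landmark_area if landmark_area else 0.0
    brightness = artifact["mean_brightness"] / 255.0
    return 0.6 * coverage + 0.4 * brightness

SEVERITY_THRESHOLD = 0.5  # hypothetical value

artifact = {"bbox": (10, 10, 30, 30), "mean_brightness": 230}
eye = (15, 15, 35, 35)  # bounding box of, e.g., the iris landmark
if overlap_severity(artifact, eye) > SEVERITY_THRESHOLD:
    print("severity exceeds threshold: generate corrected image")
```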
11. The electronic device of claim 6, wherein the mitigated appearance includes a reduction in a color of the artifact, a brightness of the artifact, a size of the artifact, a location of the artifact, or a combination thereof.
12. A non-transitory machine-readable medium storing machine-readable instructions which, when executed by a processor, cause the processor to:
    utilizing a first machine learning technique, monitor a video signal for an image that includes a pair of eyeglasses, the video signal received via an image sensor;
    in response to receiving the image that includes the pair of eyeglasses, identify an artifact in the image;
    in response to identifying that the artifact satisfies a criterion, determine a type of the artifact;
    in response to a determination that the type of the artifact is indicative of a reflection, generate a corrected image utilizing a second machine learning technique, the corrected image comprising a mitigated appearance of the reflection;
    in response to a determination that the type of the artifact is indicative of a glare, generate the corrected image utilizing an image processing technique, the corrected image comprising a mitigated appearance of the glare; and
    cause a display device to display the corrected image, a network interface to transmit the corrected image, or a combination thereof.
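One way to read claim 12's branching is as a dispatcher that routes reflections to a learned model and glare to a conventional image-processing step. The sketch below is an assumption-laden illustration: `attenuate_glare` is a trivial stand-in for an unspecified image processing technique, and the reflection branch merely shows where a trained model would plug in.

```python
import numpy as np

def attenuate_glare(region, gain=0.6):
    """Simple image-processing glare mitigation: scale down over-bright
    pixels. A stand-in for whatever technique an implementation would use."""
    out = region.astype(np.float32)
    mask = out > 220            # assumed highlight threshold
    out[mask] *= gain
    return np.clip(out, 0, 255).astype(np.uint8)

def correct_artifact(region, artifact_type, reflection_model=None):
    """Dispatch per claim 12: ML for reflections, image processing for glare."""
    if artifact_type == "reflection":
        # The patent leaves the model unspecified; any learned
        # reflection-removal network could slot in here.
        return reflection_model(region) if reflection_model else region
    if artifact_type == "glare":
        return attenuate_glare(region)
    return region

patch = np.full((4, 4), 250, dtype=np.uint8)  # a saturated glare patch
corrected = correct_artifact(patch, "glare")
print(corrected.max())  # 150
```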
13. The non-transitory machine-readable medium of claim 12, wherein the machine-readable instructions, when executed by the processor, cause the processor to store a number of images of the video signal for processing.
14. The non-transitory machine-readable medium of claim 13, wherein the number of images is a multiple of a refresh rate of the display device.
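Claims 13 and 14 tie the number of buffered frames to the display's refresh rate. A sketch of that sizing rule (the multiplier value is an assumption, since the claims do not fix one):

```python
def frame_buffer_size(refresh_rate_hz, multiplier=2):
    """Number of video frames to hold for processing, expressed as a
    multiple of the display refresh rate per claim 14."""
    return refresh_rate_hz * multiplier

print(frame_buffer_size(60))  # 120 frames buffered for a 60 Hz display
```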
15. The non-transitory machine-readable medium of claim 12, wherein the machine-readable instructions, when executed by the processor, cause the processor to cause the display device to display the corrected image, the network interface to transmit the corrected image, or a combination thereof, responsive to an image quality of the corrected image exceeding a quality threshold.
Application PCT/US2021/050969, priority date 2021-09-17, filing date 2021-09-17: Artifacts corrections in images, published as WO2023043458A1 (en)

Priority Applications (1)

Application Number                   Priority Date  Filing Date  Title
PCT/US2021/050969 (WO2023043458A1)   2021-09-17     2021-09-17   Artifacts corrections in images


Publications (1)

Publication Number    Publication Date
WO2023043458A1 (A1)   2023-03-23

Family ID: 85603366

Family Applications (1)

Application Number                   Title                            Priority Date  Filing Date
PCT/US2021/050969 (WO2023043458A1)  Artifacts corrections in images  2021-09-17     2021-09-17

Country Status (1)

Country  Link
WO       WO2023043458A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150379348A1 (en) * 2014-06-25 2015-12-31 Kodak Alaris Inc. Adaptable eye artifact identification and correction system
US20190102872A1 (en) * 2017-09-29 2019-04-04 Apple Inc. Glare Reduction in Captured Images
US20200388208A1 (en) * 2019-06-10 2020-12-10 Ati Technologies Ulc Frame replay for variable rate refresh display


Similar Documents

Publication     Title
US11250241B2 (en) Face image processing methods and apparatuses, and electronic devices
CN105930821B (en) Human eye identification and tracking method and human eye identification and tracking device device applied to naked eye 3D display
US8224035B2 (en) Device, method and program for detecting eye
WO2019137038A1 (en) Method for determining point of gaze, contrast adjustment method and device, virtual reality apparatus, and storage medium
US8913005B2 (en) Methods and systems for ergonomic feedback using an image analysis module
WO2021004138A1 (en) Screen display method, terminal device, and storage medium
CN112384127B (en) Eyelid ptosis detection method and system
EP3710983B1 (en) Pose correction
CN105095885A (en) Human eye state detection method and detection device
JP2021077265A (en) Line-of-sight detection method, line-of-sight detection device, and control program
JP2021077333A (en) Line-of-sight detection method, line-of-sight detection device, and control program
US20190355325A1 (en) Image Rendering Method and Apparatus, and VR Device
US11573633B1 (en) Active areas of display devices
TWI683302B (en) Electronic system and electronic device for performing viewing-angle enhancement regarding display panel
US9934583B2 (en) Expectation maximization to determine position of ambient glints
WO2023043458A1 (en) Artifacts corrections in images
TWI466070B (en) Method for searching eyes, and eyes condition determining device and eyes searching device using the method
US11854260B2 (en) Situation-sensitive safety glasses
US10083675B2 (en) Display control method and display control apparatus
US20230245494A1 (en) Automatic face and human subject enhancement algorithm for digital images
TWI673034B (en) Methods and system for detecting blepharoptosis
US20220142473A1 (en) Method and system for automatic pupil detection
US20240370081A1 (en) Visibility of Frames
US20250208814A1 (en) Display devices focus indicators
WO2023048731A1 (en) Active image sensors

Legal Events

Code  Description
121   EP: the EPO has been informed by WIPO that EP was designated in this application
      (Ref document number: 21957697; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase
      (Ref country code: DE)
122   EP: PCT application non-entry in European phase
      (Ref document number: 21957697; Country of ref document: EP; Kind code of ref document: A1)