
EP1436781A1 - Method and system for generating an avatar animation transform using a neutral face image - Google Patents

Method and system for generating an avatar animation transform using a neutral face image

Info

Publication number
EP1436781A1
Authority
EP
European Patent Office
Prior art keywords
avatar
generating
head image
animation transform
transform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP01959816A
Other languages
German (de)
French (fr)
Inventor
Luciano Pasquale Agostino Nocera
Hartmut Neven
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Eyematic Interfaces Inc
Original Assignee
Eyematic Interfaces Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eyematic Interfaces Inc filed Critical Eyematic Interfaces Inc
Publication of EP1436781A1 publication Critical patent/EP1436781A1/en
Withdrawn legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • The present invention relates to avatar animation and, more particularly, to the generation of an animation transform using a neutral face image.
  • Virtual spaces filled with avatars are an attractive way to allow for the experience of a shared environment.
  • Manual creation of a photo-realistic avatar is time consuming, and automated avatar creation is prone to artifacts and feature distortion.
  • The present invention is embodied in a method, and related system, for generating an avatar animation transform using a neutral face image.
  • The method may include providing a neutral-face front head image and a side head image for generating an avatar and automatically finding head feature locations on the front head image and the side head image using elastic bunch graph matching. Nodes are automatically positioned at feature locations on the front head image and the side head image. The node positions are manually reviewed and corrected to remove artifacts and minimize distorted features in the avatar generated based on the node positions.
  • The method may further include generating an animation transform based on the corrected node positions for the neutral face.
  • The method also may include applying the animation transform to expression face avatar meshes for generating the avatar.
  • FIG. 1 is a flow diagram for illustrating a method for generating an avatar animation transform using a neutral face image, according to the present invention.
  • FIG. 2 is an image of an avatar editor for generating an avatar, according to the present invention.
  • FIG. 3 is an image of a rear view of an avatar generated using anchor points provided by the avatar editor of FIG. 2.
  • FIG. 4 is an image of an avatar editor for generating an avatar using anchor point positions corrected to remove artifacts and distortions from the avatar image, according to the present invention.
  • FIG. 5 is an image of a rear view of an avatar generated using the corrected anchor point positions shown in FIG. 4, according to the present invention.
  • FIG. 6 is a graph of facial expression features versus avatar mesh for linear regression mapping of sensed facial features to an avatar mesh.
  • The present invention is embodied in a method, shown in FIG. 1, and a system for generating an animation transform using a neutral face image.
  • An avatar editor uses a frontal head image and a side head image of a neutral face model for generating an avatar (block 12).
  • The avatar is generated by automatically finding head feature locations on the front and side head images using elastic bunch graph matching (block 14). Locating features in an image using elastic bunch graph matching is described in U.S. patent application serial number 09/188,079.
  • An image is transformed into Gabor space using a wavelet transformation based on Gabor wavelets.
  • The transformed image is represented by complex wavelet component values associated with each pixel of the original image.
  • Elastic bunch graph matching automatically places node graphs having anchor points on the front and side head images, respectively.
  • The anchor points are placed at the general location of facial features found using the matching process (block 16).
  • An avatar editor window 26, shown in FIG. 2, allows a user to generate an avatar that looks and appears similar to a model.
  • A new avatar 28 is generated based on the front head image 30 and a side head image 32 of the model.
  • An existing avatar may be edited to the satisfaction of the user.
  • The front and side images are mapped onto an avatar mesh.
  • The avatar may be animated or driven by moving drive control points on the mesh. The motion of the drive control points may be directed by facial feature tracking.
  • The avatar editor window 26 includes a wizard (not shown) that leads the user through a sequence of steps for allowing the user to improve the accuracy of tracking of an avatar tracker.
  • The avatar wizard may include a tutor face that prompts the user to make a number of expressions and varying head poses. An image is taken for each expression or pose and facial features are automatically located for each face image. However, certain artifacts of the image may cause the feature process to place feature nodes at erroneous locations. In addition, correct node locations may generate artifacts that detract from a photorealistic avatar. Accordingly, the user has the opportunity to manually correct the positions of the automatically located features (block 18).
  • The front and side head images, 30 and 32, shown in FIG. 2, have a shadow outline that is erroneously detected as the profile outline of the side head image 32.
  • Certain features, such as the model's ears, have numerous patterns which may cause erroneous node placement.
  • The avatar 28 may have artificial eye and teeth inserts that are "exposed" while the eyes and/or the mouth are open. Accordingly, although the matching process is able to correctly locate the nodes, the resulting avatar may have distracting features.
  • Empirical adjustment of the node locations may result in a more photo-realistic avatar.
  • A rear view of the avatar 28, shown in FIG. 3, is generated using the node locations shown in the avatar editor window 26 of FIG. 2.
  • A particularly distracting artifact is a white patch 34 on the rear of the head.
  • The white patch appears because the automatically placed node locations cause a portion of the white background of the side head image 32 to be patched onto the rear of the avatar.
  • The incorrectly placed nodes may be manually adjusted, as shown in FIG. 4, for more accurate placement of the nodes at the corresponding features.
  • Generic head models, 36 and 38, have the node locations indicated so that a user may correctly place the node locations on the front and side head images.
  • A node is moved by clicking a pointer, such as a mouse, on the node and dragging the node to the desired position.
  • The avatar based on the corrected node positions is more photo-realistic.
  • The node locations at the back of the head on the side head image are adjusted to eliminate the distracting white patch, as shown in FIG. 5.
  • The model images shown in FIGS. 2-5 are of a neutral face.
  • Using several avatar meshes corresponding to a variety of facial expressions allows for more accurate depiction of sensed facial expressions.
  • Meshes for different expressions may be referred to as morph targets.
  • One avatar mesh M_SMILE may be generated using features f_SMILE from smiling face images.
  • Another avatar mesh M_EXCL may be generated using facial features f_EXCL from face images showing surprise or exclamation.
  • The neutral facial features f_NEUTRAL correspond to the avatar mesh M_NEUTRAL.
  • Sensed facial features f_SENSED may be mapped to a corresponding avatar mesh M_SENSED using linear regression.
  • A photo-realistic avatar may require as many as 14 to 18 expression-based avatar meshes.
  • The animation transform for the neutral face features may be applied to the other facial expression avatar meshes to improve the quality of the resulting avatars (block 22).
  • The avatar mesh associated with a smile may be transformed by the neutral-face animation transform p, as indicated in Equation 2.
  • M′_SMILE = p · M_SMILE (Equation 2)
  • The neutral-face-based animation transform provides significant improvement to the facial expression head models without the substantial editing time incurred by generating a particular animation transform for each particular facial expression (and/or pose).

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The present invention is embodied in a method and system for generating an animation transform using a neutral face image. An avatar editor uses a front head image and a side head image of a neutral face model for generating an avatar. The avatar is generated by automatically finding head feature locations on the front and side head images using elastic bunch graph matching. Significant time savings may be accomplished by generating an animation transform relating an avatar mesh derived from the neutral face features to a generic avatar mesh. The animation transform for the neutral face features may be applied to the other facial expression avatar meshes to improve the quality of the resulting avatars. The neutral-face-based animation transform provides significant improvement to the facial expression head models without the substantial editing time incurred by generating a particular animation transform for each particular facial expression (and/or pose).

Description

METHOD AND SYSTEM FOR GENERATING AN AVATAR ANIMATION TRANSFORM USING A NEUTRAL FACE IMAGE
CROSS-REFERENCE TO RELATED APPLICATIONS This application claims priority under 35 U.S.C. §119(e)(1) and 37 C.F.R.
§ 1.78(a)(4) to U.S. provisional application serial number 60/220,330, entitled METHOD AND SYSTEM FOR GENERATING AN AVATAR ANIMATION TRANSFORM USING A NEUTRAL FACE IMAGE and filed July 24, 2000; and claims priority under 35 U.S.C. § 120 and 37 C.F.R. § 1.78(a)(2) as a continuation-in-part to U.S. patent application serial number 09/188,079, entitled WAVELET-BASED FACIAL MOTION CAPTURE FOR AVATAR ANIMATION and filed November 6, 1998. The entire disclosure of U.S. patent application serial number 09/188,079 is incorporated herein by reference.
BACKGROUND OF THE INVENTION
The present invention relates to avatar animation, and more particularly, to generation of an animation transform using a neutral face image.
Virtual spaces filled with avatars are an attractive way to allow for the experience of a shared environment. However, manual creation of a photo-realistic avatar is time consuming, and automated avatar creation is prone to artifacts and feature distortion.
Accordingly, there exists a significant need for an avatar editor for quickly and reliably generating an avatar head model. The present invention satisfies this need.
SUMMARY OF THE INVENTION
The present invention is embodied in a method, and related system, for generating an avatar animation transform using a neutral face image. The method may include providing a neutral-face front head image and a side head image for generating an avatar and automatically finding head feature locations on the front head image and the side head image using elastic bunch graph matching. Nodes are automatically positioned at feature locations on the front head image and the side head image. The node positions are manually reviewed and corrected to remove artifacts and minimize distorted features in the avatar generated based on the node positions.
The method may further include generating an animation transform based on the corrected node positions for the neutral face. The method also may include applying the animation transform to expression face avatar meshes for generating the avatar.
Other features and advantages of the present invention should be apparent from the following description of the preferred embodiments taken in conjunction with the accompanying drawings, which illustrate, by way of example, the principles of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flow diagram for illustrating a method for generating an avatar animation transform using a neutral face image, according to the present invention.
FIG. 2 is an image of an avatar editor for generating an avatar, according to the present invention.
FIG. 3 is an image of a rear view of an avatar generated using anchor points provided by the avatar editor of FIG. 2.
FIG. 4 is an image of an avatar editor for generating an avatar using anchor point positions corrected to remove artifacts and distortions from the avatar image, according to the present invention.
FIG. 5 is an image of a rear view of an avatar generated using the corrected anchor point positions shown in FIG. 4, according to the present invention.
FIG. 6 is a graph of facial expression features versus avatar mesh for linear regression mapping of sensed facial features to an avatar mesh.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention is embodied in a method, shown in FIG. 1, and a system for generating an animation transform using a neutral face image. An avatar editor uses a frontal head image and a side head image of a neutral face model for generating an avatar (block 12). The avatar is generated by automatically finding head feature locations on the front and side head images using elastic bunch graph matching (block 14). Locating features in an image using elastic bunch graph matching is described in U.S. patent application serial number 09/188,079. In the elastic graph matching technique, an image is transformed into Gabor space using a wavelet transformation based on Gabor wavelets. The transformed image is represented by complex wavelet component values associated with each pixel of the original image. Elastic bunch graph matching automatically places node graphs having anchor points on the front and side head images, respectively. The anchor points are placed at the general location of facial features found using the matching process (block 16).
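The Gabor-space representation can be sketched compactly. The following is a minimal, illustrative Python sketch of the transform the description refers to: the image is convolved with a bank of complex Gabor kernels so that each pixel carries a vector of complex wavelet component values (a "jet"). The kernel formula follows the standard Gabor-jet literature; the specific scales, orientations, sigma, and kernel size are assumptions for illustration, not parameters taken from this patent or from application 09/188,079.

```python
# Minimal illustrative sketch (not the patented implementation) of the
# Gabor-space transform: convolve an image with a bank of complex Gabor
# kernels so each pixel is described by a vector of complex wavelet
# component values (a "jet"). All parameters below are assumptions.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(k_mag, k_angle, sigma=2.0 * np.pi, size=33):
    """Complex Gabor kernel with wave vector of magnitude k_mag and angle k_angle."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    kx, ky = k_mag * np.cos(k_angle), k_mag * np.sin(k_angle)
    envelope = (k_mag ** 2 / sigma ** 2) * np.exp(
        -(x ** 2 + y ** 2) * k_mag ** 2 / (2.0 * sigma ** 2))
    # DC-free complex carrier, as in standard Gabor-jet formulations.
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2.0)
    return envelope * carrier

def gabor_jets(image, n_scales=3, n_orientations=4):
    """Return an (H, W, n_scales * n_orientations) array of complex responses."""
    responses = []
    for s in range(n_scales):
        k_mag = (np.pi / 2.0) * 2.0 ** (-s)          # assumed frequency ladder
        for o in range(n_orientations):
            kern = gabor_kernel(k_mag, o * np.pi / n_orientations)
            responses.append(fftconvolve(image, kern, mode="same"))
    return np.stack(responses, axis=-1)

# The jet at a candidate node location (row, col) is jets[row, col, :];
# bunch graph matching compares such jets against a "bunch" of stored
# example jets to score candidate node placements.
image = np.random.rand(128, 128)                     # stand-in for a head image
jets = gabor_jets(image)
print(jets.shape)                                    # (128, 128, 12)
```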
An avatar editor window 26, shown in FIG. 2, allows a user to generate an avatar that looks and appears similar to a model. A new avatar 28 is generated based on the front head image 30 and a side head image 32 of the model. Alternatively, an existing avatar may be edited to the satisfaction of the user. The front and side images are mapped onto an avatar mesh. The avatar may be animated or driven by moving drive control points on the mesh. The motion of the drive control points may be directed by facial feature tracking.
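As a rough illustration of driving an avatar through drive control points, the sketch below displaces mesh vertices near each control point by the control point's tracked motion, with a smooth distance-based falloff. The Gaussian weighting and its radius are assumptions for illustration; the patent does not specify how control-point motion propagates to the mesh.

```python
# Minimal illustrative sketch of animating an avatar via drive control
# points: each tracked facial feature displaces one control point, and
# nearby mesh vertices follow with a smooth distance-based weight. The
# Gaussian falloff and its radius are assumptions, not patent details.
import numpy as np

def drive_mesh(vertices, control_rest, control_now, radius=0.15):
    """Deform mesh vertices by the displacement of drive control points."""
    deformed = vertices.copy()
    for rest, now in zip(control_rest, control_now):
        delta = now - rest                            # tracked control-point motion
        d2 = np.sum((vertices - rest) ** 2, axis=1)   # squared distance to control
        weight = np.exp(-d2 / (2.0 * radius ** 2))    # smooth local falloff
        deformed += weight[:, None] * delta
    return deformed

vertices = np.random.rand(500, 3)                # stand-in avatar mesh vertices
rest = np.array([[0.5, 0.5, 0.5]])               # one control point (e.g. a mouth corner)
now = rest + np.array([[0.02, -0.01, 0.0]])      # displacement from feature tracking
print(drive_mesh(vertices, rest, now).shape)     # (500, 3)
```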
Initially, the avatar editor window 26 includes a wizard (not shown) that leads the user through a sequence of steps for allowing the user to improve the accuracy of tracking of an avatar tracker. The avatar wizard may include a tutor face that prompts the user to make a number of expressions and varying head poses. An image is taken for each expression or pose and facial features are automatically located for each face image. However, certain artifacts of the image may cause the feature process to place feature nodes at erroneous locations. In addition, correct node locations may generate artifacts that detract from a photorealistic avatar. Accordingly, the user has the opportunity to manually correct the positions of the automatically located features (block 18).
For example, the front and side head images, 30 and 32, shown in FIG. 2, have a shadow outline that is erroneously detected as the profile outline of the side head image 32. Also, certain features, such as the model's ears, have numerous patterns which may cause erroneous node placement. Of particular importance is proper placement of the nodes for the eyes and for the mouth. The avatar 28 may have artificial eye and teeth inserts that are "exposed" while the eyes and/or the mouth are open. Accordingly, although the matching process is able to correctly locate the nodes, the resulting avatar may have distracting features.
Empirical adjustment of the node locations may result in a more photo-realistic avatar. As an example, a rear view of the avatar 28, shown in FIG. 3, is generated using the node locations shown in the avatar editor window 26 of FIG. 2. A particularly distracting artifact is a white patch 34 on the rear of the head.
The white patch appears because the automatically placed node locations cause a portion of the white background of the side head image 32 to be patched onto the rear of the avatar.
The incorrectly placed nodes may be manually adjusted, as shown in FIG. 4, for more accurate placement of the nodes at the corresponding features. Generic head models, 36 and 38, have the node locations indicated so that a user may correctly place the node locations on the front and side head images. A node is moved by clicking a pointer, such as a mouse, on the node and dragging the node to the desired position. As seen by the front view of the avatar 28', the avatar based on the corrected node positions is more photo-realistic. Further, the node locations at the back of the head on the side head image are adjusted to eliminate the distracting white patch, as shown in FIG. 5.

The model images shown in FIGS. 2-5 are of a neutral face. As discussed above, images for a variety of facial expressions and poses are captured using training facial expressions. As shown in FIG. 6, facial expression features f are sensed and the resulting parameters may be mapped to corresponding avatar meshes M by a transform T (M = T(f)). Using several avatar meshes corresponding to a variety of facial expressions allows for more accurate depiction of sensed facial expressions. Meshes for different expressions may be referred to as morph targets. For example, one avatar mesh M_SMILE may be generated using features f_SMILE from smiling face images. Another avatar mesh M_EXCL may be generated using facial features f_EXCL from face images showing surprise or exclamation. Likewise, the neutral facial features f_NEUTRAL correspond to the avatar mesh M_NEUTRAL. Sensed facial features f_SENSED may be mapped to a corresponding avatar mesh M_SENSED using linear regression.
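A minimal sketch of the FIG. 6 mapping M = T(f) follows: training pairs of expression feature vectors and mesh-vertex vectors (e.g. neutral, smile, exclamation) are stacked, T is fit by linear regression, and a sensed feature vector f_SENSED is then mapped to an estimated mesh M_SENSED. The vector dimensions, the number of training expressions, and the use of ordinary least squares are illustrative assumptions.

```python
# Minimal illustrative sketch of the FIG. 6 mapping M = T(f): stack training
# pairs of expression feature vectors and mesh-vertex vectors, fit T by
# least squares, and map sensed features to an estimated mesh. The vector
# sizes and number of training expressions are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_mesh = 20, 300        # assumed: tracked features, mesh coordinates

# Training rows, one per captured expression (neutral, smile, exclamation, ...).
F = rng.standard_normal((8, n_features))              # stand-in feature vectors
T_synth = rng.standard_normal((n_features, n_mesh))   # used only to synthesize data
M = F @ T_synth                                       # corresponding mesh vectors

# Linear regression: choose T to minimize ||F T - M|| in the least-squares sense.
T_fit, *_ = np.linalg.lstsq(F, M, rcond=None)

# Map sensed facial features to an avatar mesh: M_SENSED = T(f_SENSED).
f_sensed = rng.standard_normal(n_features)
m_sensed = f_sensed @ T_fit
print(m_sensed.shape)               # (300,)
```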
For a more photo-realistic effect, the node positions for each expression should be manually reviewed, and artifacts and distortions addressed, for each head model. However, empirical experience has shown that correction for each avatar head model may take several minutes of editing time. A photo-realistic avatar may require as many as 14 to 18 expression-based avatar meshes.
Significant time savings may be accomplished by generating an animation transform p using the neutral face features f_NEUTRAL (block 20 - FIG. 1). The resulting avatar mesh M′_NEUTRAL is related to a generic avatar mesh M_NEUTRAL by the animation transform as indicated in Equation 1.
M′_NEUTRAL = p · M_NEUTRAL (Equation 1)
The animation transform for the neutral face features may be applied to the other facial expression avatar meshes to improve the quality of the resulting avatars (block 22). For example, the avatar mesh associated with a smile may be transformed by the neutral-face animation transform p as indicated in Equation 2.

M′_SMILE = p · M_SMILE (Equation 2)
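Equations 1 and 2 can be illustrated in code. In the sketch below, the transform p is assumed to be an affine map fitted by least squares from the generic neutral mesh M_NEUTRAL to the edited neutral mesh M′_NEUTRAL, and the same p is then reused on an expression mesh such as M_SMILE. The patent does not specify the form of p, so the affine representation is an assumption.

```python
# Minimal illustrative sketch of Equations 1 and 2: fit the neutral-face
# animation transform p as an affine map (an assumption; the patent does not
# specify the form of p) taking the generic neutral mesh M_NEUTRAL to the
# edited neutral mesh M'_NEUTRAL, then reuse p on expression meshes.
import numpy as np

def fit_affine(generic, edited):
    """Least-squares affine map A with homogeneous(generic) @ A ~ edited."""
    G = np.hstack([generic, np.ones((generic.shape[0], 1))])
    A, *_ = np.linalg.lstsq(G, edited, rcond=None)
    return A                                          # shape (4, 3)

def apply_affine(A, mesh):
    """Apply the fitted affine map to every vertex of a mesh."""
    G = np.hstack([mesh, np.ones((mesh.shape[0], 1))])
    return G @ A

rng = np.random.default_rng(1)
m_neutral = rng.standard_normal((300, 3))             # generic neutral mesh
m_neutral_edited = m_neutral * 1.1 + 0.05             # stand-in for the edited mesh
p = fit_affine(m_neutral, m_neutral_edited)           # Equation 1: M'_NEUTRAL = p(M_NEUTRAL)

# Equation 2: transform an expression mesh with the same neutral-face p.
m_smile = rng.standard_normal((300, 3))
m_smile_prime = apply_affine(p, m_smile)
print(m_smile_prime.shape)                            # (300, 3)
```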
The neutral-face-based animation transform provides significant improvement to the facial expression head models without the substantial editing time incurred by generating a particular animation transform for each particular facial expression (and/or pose).
Although the foregoing discloses the preferred embodiments of the present invention, it is understood that those skilled in the art may make various changes to the preferred embodiments without departing from the scope of the invention. The invention is defined only by the following claims.
WE CLAIM:

Claims

1. A method for generating an avatar animation transform, comprising:
providing a neutral-face front head image and a side head image for generating an avatar;
automatically finding head feature locations on the front head image and the side head image using elastic bunch graph matching;
automatically positioning nodes at feature locations on the front head image and the side head image; and
manually reviewing and correcting the node positions to remove artifacts and minimize distorted features in the avatar generated based on the node positions.
2. A method for generating an avatar animation transform as defined in claim 1, further comprising generating an animation transform based on the corrected node positions for the neutral face.
3. A method for generating an avatar animation transform as defined in claim 2, further comprising applying the animation transform to expression face avatar meshes for generating the avatar.
4. A method for generating an avatar animation transform as defined in claim 2, further comprising applying the animation transform to morph targets.
5. A system for generating an avatar animation transform, comprising:
means for providing a neutral-face front head image and a side head image for generating an avatar;
means for automatically finding head feature locations on the front head image and the side head image using elastic bunch graph matching;
means for automatically positioning nodes at feature locations on the front head image and the side head image; and
means for manually reviewing and correcting the node positions to remove artifacts and minimize distorted features in the avatar generated based on the node positions.
6. A system for generating an avatar animation transform as defined in claim 5, further comprising means for generating an animation transform based on the corrected node positions for the neutral face.
7. A system for generating an avatar animation transform as defined in claim 6, further comprising means for applying the animation transform to expression face avatar meshes for generating the avatar.
8. A system for generating an avatar animation transform as defined in claim 6, further comprising means for applying the animation transform to morph targets.
9. A method for generating an avatar animation transform, comprising:
providing a neutral-face front head image and a side head image for generating an avatar;
automatically finding head feature locations on the front head image and the side head image using image analysis based on wavelet component values generated from wavelet transformations of the respective neutral-face front head image and the side head image;
automatically positioning nodes at feature locations on the front head image and the side head image; and
manually reviewing and correcting the node positions to remove artifacts and minimize distorted features in the avatar generated based on the node positions.
10. A method for generating an avatar animation transform as defined in claim 9, further comprising generating an animation transform based on the corrected node positions for the neutral face.
11. A method for generating an avatar animation transform as defined in claim 10, further comprising applying the animation transform to expression face avatar meshes for generating the avatar.
12. A method for generating an avatar animation transform as defined in claim 10, further comprising applying the animation transform to morph targets.
13. A method for generating an avatar animation transform as defined in claim 9, wherein the wavelet transformations use Gabor wavelets.
EP01959816A 2000-07-24 2001-07-24 Method and system for generating an avatar animation transform using a neutral face image Withdrawn EP1436781A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US22033000P 2000-07-24 2000-07-24
PCT/US2001/041397 WO2002009040A1 (en) 2000-07-24 2001-07-24 Method and system for generating an avatar animation transform using a neutral face image

Publications (1)

Publication Number Publication Date
EP1436781A1 true EP1436781A1 (en) 2004-07-14

Family

ID=22823129

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01959816A Withdrawn EP1436781A1 (en) 2000-07-24 2001-07-24 Method and system for generating an avatar animation transform using a neutral face image

Country Status (5)

Country Link
EP (1) EP1436781A1 (en)
JP (1) JP2004509391A (en)
KR (1) KR20030029638A (en)
AU (1) AU2001281335A1 (en)
WO (1) WO2002009040A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070199096A1 (en) 2005-11-14 2007-08-23 E.I. Du Pont De Nemours And Company Compositions and Methods for Altering Alpha- and Beta-Tocotrienol Content
EP2730656A1 (en) 2007-05-24 2014-05-14 E. I. du Pont de Nemours and Company Soybean meal from beans having DGAT genes from yarrowia lipolytica for increased seed storage lipid production and altered fatty acid profiles in soybean
EP2288710B1 (en) 2008-05-23 2014-06-18 E. I. du Pont de Nemours and Company DGAT genes from oleaginous organisms for increased seed storage lipid production and altered fatty acid profiles in in oilseed plants
CA2780527C (en) 2009-11-23 2020-12-01 E. I. Du Pont De Nemours And Company Sucrose transporter genes for increasing plant seed lipids
WO2013152453A1 (en) 2012-04-09 2013-10-17 Intel Corporation Communication using interactive avatars
EP3410399A1 (en) 2014-12-23 2018-12-05 Intel Corporation Facial gesture driven animation of non-facial features
US9830728B2 (en) 2014-12-23 2017-11-28 Intel Corporation Augmented facial animation
US9824502B2 (en) 2014-12-23 2017-11-21 Intel Corporation Sketch selection for rendering 3D model avatar
US10475225B2 (en) 2015-12-18 2019-11-12 Intel Corporation Avatar animation system
KR102565755B1 (en) 2018-02-23 2023-08-11 삼성전자주식회사 Electronic device for displaying an avatar performed a motion according to a movement of a feature point of a face and method of operating the same
WO2019177870A1 (en) 2018-03-15 2019-09-19 Magic Leap, Inc. Animating virtual avatar facial movements
KR102665643B1 (en) 2019-02-20 2024-05-20 삼성전자 주식회사 Method for controlling avatar display and electronic device thereof
WO2025038916A1 (en) * 2023-08-16 2025-02-20 Roblox Corporation Automatic personalized avatar generation from 2d images

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU3452697A (en) * 1996-07-05 1998-02-02 British Telecommunications Public Limited Company Image processing
BR9909611B1 (en) * 1998-04-13 2012-08-07 Method and apparatus for detecting facial features in a sequence of image frames comprising an image of a face.

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO0209040A1 *

Also Published As

Publication number Publication date
JP2004509391A (en) 2004-03-25
AU2001281335A1 (en) 2002-02-05
WO2002009040A1 (en) 2002-01-31
KR20030029638A (en) 2003-04-14

Similar Documents

Publication Publication Date Title
US20020067362A1 (en) Method and system generating an avatar animation transform using a neutral face image
EP2043049B1 (en) Facial animation using motion capture data
US7733346B2 (en) FACS solving in motion capture
EP1436781A1 (en) Method and system for generating an avatar animation transform using a neutral face image
US11189084B2 (en) Systems and methods for executing improved iterative optimization processes to personify blendshape rigs
US9978175B2 (en) Real time concurrent design of shape, texture, and motion for 3D character animation
US8659596B2 (en) Real time generation of animation-ready 3D character models
US6714661B2 (en) Method and system for customizing facial feature tracking using precise landmark finding on a neutral face image
US7567251B2 (en) Techniques for creating facial animation using a face mesh
EP2718903B1 (en) Controlling objects in a virtual environment
US6097396A (en) Method and apparatus for creating lifelike digital representation of hair and other fine-grained images
WO2010060113A1 (en) Real time generation of animation-ready 3d character models
US7961947B2 (en) FACS cleaning in motion capture
EP2615583B1 (en) Method and arrangement for 3D model morphing
CN107230250B (en) Forming method for direct three-dimensional modeling by referring to solid specimen
EP2047429B1 (en) Facs solving in motion capture
GB2632743A (en) Techniques for re-aging faces in images and video frames
CN118864736A (en) Method and device for molding oral prosthesis model
US20240221307A1 (en) Capture guidance for video of patient dentition
AU2001277148B2 (en) Method and system for customizing facial feature tracking using precise landmark finding on a neutral face image
CN119169154B (en) Video display method, device, electronic device and storage medium
US20240402827A1 (en) Method for controlling at least one characteristic of a controllable object, a related system and related device
US20240185518A1 (en) Augmented video generation with dental modifications
JP2022152058A (en) Information processing system, information processing method and information processing program
Wang et al. Bright-NeRF: Brightening Neural Radiance Field with Color Restoration from Low-light Raw Images

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20030219

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20040803