WO2023113823A1 - Generating a three-dimensional representation of an object - Google Patents
- Publication number
- WO2023113823A1 (PCT/US2021/064120)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- scan
- target object
- dimensional representation
- scan data
- optical
- Prior art date
- Legal status (assumed; not a legal conclusion)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2004—Aligning objects, relative positioning of parts
Definitions
- Additional objects may be added to the scan environment, in the field of view of the optical imaging device, which move in conjunction with the scanned object.
- the scanned object and the additional objects may be stationary whilst the imaging device moves about the target object.
- These additional objects can help with the registration of a plurality of scans comprising multiple views of the target object.
- two- or three-dimensional fiducial markers can be used in order to facilitate registration of multiple views of a target object.
- These fiducial markers can be attached to the background of the scan environment or to the object itself.
- the visible fiducial markers in each overlapping view of the target object can be used to constrain and register these views.
- a 2D fiducial marker may comprise a marker placed on a surface and identified independently in each image.
- a 3D fiducial marker may comprise a geometric construct such as a sphere that may be identified and accurately located (i.e., its centre determined) in 3D mesh data. In each case the fiducial marker may be representable by a single coordinate position in the 3D space.
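As an illustration of reducing a spherical fiducial to a single coordinate, the sketch below estimates the centre as the centroid of its surface samples. This is a simplifying assumption, not the method of the disclosure: the centroid coincides with the centre only when the samples cover the sphere symmetrically, and a real one-sided scan would call for a least-squares sphere fit instead. All names are illustrative.

```python
def sphere_center_estimate(points):
    """Estimate a sphere fiducial's centre as the centroid of its surface
    samples. Assumes roughly symmetric coverage of the sphere; a partial
    (one-sided) scan would instead use a least-squares sphere fit."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

# Synthetic sphere of radius 2 centred at (1.0, -2.0, 0.5), sampled
# symmetrically along each axis so the centroid equals the true centre.
centre = (1.0, -2.0, 0.5)
samples = [
    (centre[0] + dx, centre[1] + dy, centre[2] + dz)
    for dx, dy, dz in [(2, 0, 0), (-2, 0, 0), (0, 2, 0),
                       (0, -2, 0), (0, 0, 2), (0, 0, -2)]
]
est = sphere_center_estimate(samples)
```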
- Attaching fiducial markers to the target object can be impractical or a burden to the ease of use of the scanning system. Furthermore, the scanned target object may obscure the number of fiducial markers visible in each view, which can compromise the subsequent accuracy of the registration of the available fiducial markers, leading to measurements with reduced accuracy.
- a method of improving the accuracy of a scan of a target object may be provided, by generating a 3D representation of the scan environment in the absence of a target object, for example based on first optical scan data.
- This 3D representation of the scan environment can then be used as a reference against which scan data including the target object is registered which may improve the registration accuracy whilst substantially reducing the number of scan views of the target object used to generate the 3D representation.
- fiducial markers or other identifiable features in the target object scan data may be registered against corresponding fiducial markers or other identifiable features in the 3D representation of the scan environment.
- a first plurality of scans is obtained, the first plurality of scans comprising multiple views of the scan environment in the absence of a target object.
- the scan environment may include any of 2D or 3D fiducial markers, support structures, platforms, or similar.
- the 3D representation of the scan environment may then be generated based on the first plurality of scans by identifying and registering a common feature(s) between multiple views of the scan environment. For example, registration may be based on 2D or 3D fiducials, 3D scan data of a support structure, an extracted feature (for example an edge or vertex) of a support structure, etc., or a combination thereof.
- the 3D representation of the scan environment can then be generated by combining the aligned multiple views to obtain an accurate representation of the scan environment.
- Registration and combination of the first plurality of scans may be done by compositing the scan data from the aligned views.
- the scan data from the aligned views may be fused into a single mesh.
- Positions of fiducial markers that are aligned may also be combined to generate a more accurate location. Combination can be achieved by simply averaging the positions of the aligned fiducial markers or other common feature(s), or by using a more complex statistical process taking into account the error distribution of the individual measurements.
- This process may be performed iteratively, wherein each set of fiducial markers for each view is aligned to a current best combined estimate of the fiducial marker positions. From this alignment, a new best combined estimate may be derived before further rounds of alignment and combination.
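The iterative align-and-combine process above can be sketched as follows. This is a deliberately reduced illustration: the per-view alignment here solves only for a translation (difference of centroids), whereas a full implementation would also solve for rotation; the function names are hypothetical.

```python
def align_translation(view, reference):
    """Translation aligning `view` onto `reference` (difference of
    centroids), for lists of corresponding 3D marker positions."""
    n = len(view)
    return tuple(
        sum(r[i] for r in reference) / n - sum(v[i] for v in view) / n
        for i in range(3)
    )

def combine_marker_estimates(views, iterations=3):
    """Iteratively align each view's marker positions to the current best
    combined estimate, then re-average to derive a new best estimate
    (translation-only sketch of the align/combine rounds)."""
    estimate = views[0]
    for _ in range(iterations):
        aligned = []
        for view in views:
            t = align_translation(view, estimate)
            aligned.append([tuple(p[i] + t[i] for i in range(3)) for p in view])
        # New best combined estimate: average aligned positions per marker.
        estimate = [
            tuple(sum(a[k][i] for a in aligned) / len(aligned) for i in range(3))
            for k in range(len(estimate))
        ]
    return estimate

# Two observations of the same three markers; the second view is the
# first shifted by (0.1, 0.2, 0), as if expressed in a different frame.
view_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
view_b = [(0.1, 0.2, 0.0), (1.1, 0.2, 0.0), (0.1, 1.2, 0.0)]
combined = combine_marker_estimates([view_a, view_b])
```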
- a single scan of the scan environment may be used to generate the 3D representation of the scan environment.
- generation of a 3D representation of the target object is achieved by registering a second plurality of scans with the 3D representation of the scan environment, the second plurality of scans comprising multiple views of the target object within the scan environment.
- Registering the second plurality of scans with the 3D representation of the scan environment may comprise identifying a feature(s) in a scan of the second plurality of scans and registering said feature with a corresponding feature in the 3D representation of the scan environment.
- the registration may be based on 2D or 3D fiducials, an edge or vertex of a support structure, or a combination thereof.
- the 3D representation of the target object may then be generated based on the registered second plurality of scans, by combining each registered scan of the registered second plurality of scans to generate a composite 3D representation of the target object.
- the registered second plurality of scans may be used to generate an intermediate 3D representation of the scan environment including the target object.
- Objects identified in the 3D representation of the scan environment may then be removed from the intermediate 3D representation in order to generate a 3D representation of the target object that does not include those objects or features present in the scan environment and not related to the target object. Removal of these objects may be done by subtraction of the 3D representation of the scan environment from the intermediate 3D representation. For example, subtracting the 3D representation of the scan environment from the intermediate 3D representation of the target object may remove any fiducial markers or support structures from the intermediate 3D representation, resulting in a 3D representation of the target object which includes the target object without other objects or features which are not of interest.
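A minimal point-level sketch of this subtraction, assuming both representations are already registered in a common frame: any point of the intermediate representation lying within a threshold of the environment representation is removed. A mesh-based implementation would additionally rebuild faces, which this sketch omits; the names are illustrative.

```python
def subtract_environment(intermediate, environment, threshold=0.05):
    """Remove from `intermediate` every point within `threshold` of an
    environment point; the remainder is treated as the target object."""
    def near(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) <= threshold ** 2
    return [p for p in intermediate if not any(near(p, q) for q in environment)]

# Environment points (e.g. a platform and fiducials) plus two points
# belonging to the target object.
environment = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
intermediate = environment + [(0.5, 0.5, 0.5), (1.5, 0.6, 0.4)]
target_only = subtract_environment(intermediate, environment)
```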
- removing a support structure from the intermediate 3D representation may comprise defining a region around the support structure, and, once the second plurality of scans have been registered or aligned, deleting any objects within the defined region from the intermediate 3D representation.
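Region-based deletion can be sketched with an axis-aligned bounding box around the support structure; once the scans are aligned, points falling inside the box are dropped. The box extents and names here are illustrative assumptions.

```python
def delete_region(points, box_min, box_max):
    """Delete any point inside the axis-aligned region [box_min, box_max],
    e.g. a region defined around a known support structure."""
    def inside(p):
        return all(lo <= c <= hi for c, lo, hi in zip(p, box_min, box_max))
    return [p for p in points if not inside(p)]

# A pillar-like support occupies x, y in [-0.1, 0.1] up to z = 0.8;
# the target object sits outside that region.
points = [(0.0, 0.0, 0.2), (0.05, -0.02, 0.5), (0.8, 0.3, 1.0), (0.9, 0.1, 1.2)]
kept = delete_region(points, (-0.1, -0.1, 0.0), (0.1, 0.1, 0.8))
```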
- generation of a 3D representation of the target object is achieved by registering a first scan of the second plurality of scans with a second scan of the second plurality of scans, in order to generate an intermediate 3D representation of the target object based on the second plurality of scans.
- This intermediate 3D representation may be generated based on the second plurality of scans by identifying and aligning a common feature(s) between multiple views of the target object.
- the 3D representation of the scan environment may then be subtracted from the intermediate 3D representation of the target object in order to generate a 3D representation of the target object that does not include objects or features present relating to the scan environment itself, i.e. , without other objects or features which are not of interest.
- Scanning the scan environment in the absence of the target object, and generating a 3D representation of the scan environment based on said scan(s), may allow for fewer scan views of the target object to be used whilst maintaining accurate registration of the target object within the scan environment, and accurate registration between multiple scan views of the target object.
- By registering the target object scan data to the 3D representation of the scan environment, multiple views of a target object may be accurately registered even when there is little or no overlap between the multiple views. Accordingly, accurate registration may be possible with fewer overall scan views of the target object. Additionally, registration of the target object scans to the 3D representation of the scan environment may be less computationally demanding compared to registration of the target object scans to other scans within the set of target object scans.
- a single set of first scans may be used to generate a 3D representation of the scan environment which can then be used as a reference against which to register scan data relating to multiple different target objects. Accordingly, through reuse of a common scan environment representation, the overall number of scan views for sequentially scanning multiple target objects may be reduced.
- Figures 1a and 1b show example scan environments 100, 101 comprising a target object 102.
- Figure 1a shows an example scan environment 100 comprising a target object 102, a platform 104, and one or more fiducial markers 106a-106h.
- the target object 102 is a simple cuboid object, but it will be appreciated that the method herein is also applicable to more complex shapes.
- the fiducial markers 106 of Figure 1a are illustrated as 2D fiducial markers, but examples may use 3D fiducial markers or a combination of 2D and 3D fiducial markers.
- the fiducial markers 106 may be positioned randomly within the scan environment.
- Figure 1b shows the example scan environment 101 including a target object 102 and a support structure 108.
- the support structure 108 may be any of a platform, a turntable, a clamp, a robotic arm, a pillar, or any similar structure suitable for supporting the target object in the scan environment within the field of view of an optical imaging device.
- Figures 2a and 2b show an example of scans of a scan environment 200 including a target object 202 taken from multiple views.
- the example of Figures 2a and 2b shows approximately 180 degrees of rotation in the plane of platform 204 between the view of Figure 2a and the view of Figure 2b.
- a plurality of fiducial markers 206a-206h are positioned within scan environment 200.
- Fiducial markers 206a, 206b, 206c and 206d are visible in both the view of Figure 2a and the view of Figure 2b.
- Fiducial markers 206e and 206f are visible in the view of Figure 2a, but are visually obstructed by the target object 202 in the view of Figure 2b.
- fiducial markers 206g and 206h are visible in the view of Figure 2b, but are visually obstructed by the target object 202 in the view of Figure 2a.
- Figure 2c shows an example of a 3D representation of a scan of the scan environment 200 in the absence of target object 202.
- the example of Figure 2c is a view from the same angle as Figure 2a, but with target object 202 not present. With the target object 202 absent, fiducial markers 206g and 206h, which were previously obscured by target object 202, are now visible to the imaging device.
- the 3D representation of Figure 2c can be used to provide a baseline or reference point against which to compare or register the scans of the scan environment 200 including the target object 202.
- the scan(s) of the environment in the absence of a target object may be taken from a different view than any of the scans of the scan environment including the target object.
- the fiducial markers visible in both views of Figures 2a and 2b may be used to align the views including the target object 202 with each other.
- a 3D representation of the scan environment 200 will include all fiducial markers, as none of the fiducial markers will be visually obstructed by the target object 202, because the target object 202 is not present in the 3D representation of the scan environment 200.
- all fiducial markers visible in a scan comprising the target object 202 may be used to register the target object scan data with the 3D representation of the 3D environment, even if they are visible in one view of the scan environment 200 including the target object 202 but not in other views.
- by aligning based on more fiducial markers, as opposed to just those common to the target object scan views, the accuracy of registration and alignment of the second (target object) scan data may be improved.
- Figure 3 shows a flowchart of an example method 300 of generating a three-dimensional representation of a target object, for example target object 102 of Figure 1a or 1b, or target object 202 of Figure 2a or 2b.
- first scan data is obtained, the first scan data corresponding to the scan environment without the target object included.
- a 3D representation of the scan environment is generated based on the first scan data.
- second scan data is obtained, the second scan data corresponding to the scan environment with the target object included.
- a 3D representation of the target object is generated based on the first and second scan data.
- a 3D representation of the target object may be generated by compositing aligned scans.
- this alignment can be achieved based on the 3D positions of fiducial markers which may be used to calculate a transformation that best aligns the fiducial markers between multiple views.
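As an illustration of computing a transformation that best aligns corresponding fiducial positions, the sketch below solves the planar (2D) case, where the least-squares rotation has a closed form; the 3D case is typically solved with SVD or quaternion methods such as those compared by Lorusso et al. The function and variable names are illustrative, not from the disclosure.

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rotation + translation mapping 2D points `src` onto
    corresponding points `dst` (planar illustration of rigid alignment)."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    # Accumulate cross- and dot-product terms of the centred point pairs.
    s_cross = s_dot = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        s_cross += xs * yd - ys * xd
        s_dot += xs * xd + ys * yd
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, (tx, ty)

# Markers seen in view 1, and the same markers in view 2, which is
# view 1 rotated by +90 degrees and then shifted by (3, 1).
view1 = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
view2 = [(3.0, 1.0), (3.0, 2.0), (1.0, 1.0)]
theta, t = fit_rigid_2d(view1, view2)
```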
- the alignment can be based on calculating an alignment which best aligns 3D mesh data (or a point cloud extracted from the 3D mesh data).
- alignment can be based on a combination of fiducial marker and mesh or point cloud data.
- Accurate alignment of 3D mesh data or point clouds may comprise roughly aligning the 3D mesh data or point clouds, and then applying a refinement process based on an iterative closest point (ICP) approach to improve the accuracy of the alignment.
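The refinement step can be sketched as a stripped-down ICP loop. This sketch estimates only a translation update per iteration; a full ICP, as in the cited approaches, would also estimate rotation and reject poor correspondences. All names are illustrative.

```python
def icp_translation(scan, model, iterations=10):
    """Minimal ICP-style refinement: match each scan point to its nearest
    model point, shift the scan by the mean residual, and repeat
    (translation-only; rotation and outlier rejection are omitted)."""
    def nearest(p):
        return min(model, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))
    total = (0.0, 0.0, 0.0)
    pts = list(scan)
    for _ in range(iterations):
        pairs = [(p, nearest(p)) for p in pts]
        step = tuple(
            sum(q[i] - p[i] for p, q in pairs) / len(pairs) for i in range(3)
        )
        pts = [tuple(p[i] + step[i] for i in range(3)) for p in pts]
        total = tuple(total[i] + step[i] for i in range(3))
    return total

# The scan is the model shifted by (0.2, 0.1, 0); the refinement should
# recover the inverse translation that maps the scan back onto the model.
model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
scan = [(0.2, 0.1, 0.0), (1.2, 0.1, 0.0), (0.2, 1.1, 0.0)]
offset = icp_translation(scan, model)
```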
- Example techniques for alignment of mesh data or point clouds are described in Winkelbach, S., Molkenstruck, S., and Wahl, F. M. (2006), ‘Low-cost laser range scanner and fast surface registration approach’, Pattern Recognition, pages 718-728; and in Azhar, F., Pollard, S., and Adams, G.
- Example techniques for computing a 3D transformation based on a set of corresponding points are described in Lorusso, A., Eggert, D., and Fisher, R. (1995), ‘A comparison of four algorithms for estimating 3-D rigid transformations’, BMVC, which describes techniques using a singular value decomposition of a matrix, orthonormal matrices, unit quaternions, and dual quaternions.
- the composited scan data may be combined in order to reduce multiple overlapping meshes to a single mesh.
- Techniques for combining the composited scan data are described in Kazhdan, M., Hoppe, H. (2012), ‘Screened Poisson Surface Reconstruction’, ACM Transactions on Graphics (ToG) 32, no. 3, pages 1-13, which describes techniques to explicitly incorporate oriented point sets as interpolation constraints. This combination may also comprise applying smoothing, hole filling, or similar techniques to the 3D mesh data.
- the first scan data may be obtained prior to the second scan data.
- the second scan data may be obtained prior to the first scan data.
- Figure 4 shows a flowchart of an example method 400 of generating a three-dimensional representation of a target object, for example target object 102 of Figure 1a or 1b, or target object 202 of Figure 2a or 2b.
- first scan data is obtained, the first scan data corresponding to the scan environment without the target object included.
- a 3D representation of the scan environment is generated based on the first scan data.
- second scan data is obtained, the second scan data corresponding to the scan environment with the target object included.
- the second scan data is registered with the 3D representation of the scan environment to create registered second scan data.
- a 3D representation of the target object is generated based on the registered second scan data.
- Figure 5 shows an example 500 of a device comprising a computer-readable storage medium 530 coupled to a processor 520.
- Processors suitable for the execution of computer program code include, by way of example, both general and special purpose microprocessors, application specific integrated circuits (ASIC) or field programmable gate arrays (FPGA) operable to retrieve and act on instructions and/or data from the computer-readable storage medium 530.
- the computer-readable storage medium 530 may be any media that can contain, store, or maintain programs and data for use by or in connection with an instruction execution system (e.g., non-transitory computer readable media).
- Computer-readable media can comprise any one of many physical media such as, for example, electronic, magnetic, optical, electromagnetic, or semiconductor media. More specific examples of suitable machine-readable media include, but are not limited to, a hard drive, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory, or a portable disc.
- the computer-readable storage medium comprises program code to, when executed on a computing device: obtain 502 first scan data of a scan environment in the absence of a target object, generate 504 a 3D representation of the scan environment based on the first scan data, obtain 506 second scan data of the scan environment within which the target object is present, and generate 508 a 3D representation of the target object based on the first and second scan data.
- the computer-readable storage medium 530 may comprise program code to perform any of the methods, or parts thereof, illustrated in Figures 3 and 4, and discussed above.
- All of the features disclosed in this specification may be combined in any combination, except combinations where some of such features are mutually exclusive.
- Each feature disclosed in this specification, including any accompanying claims, abstract, and drawings may be replaced by alternative features serving the same, equivalent, or similar purpose, unless expressly stated otherwise.
- each feature disclosed is one example of a generic series of equivalent or similar features.
- a method of generating a three-dimensional representation of a target object, comprising: obtaining first optical scan data of a scan volume comprising a scan environment in the absence of the target object; generating a three-dimensional representation of the scan environment based on the first optical scan data; obtaining second optical scan data of the scan volume comprising the scan environment and the target object; and generating a three-dimensional representation of the target object based on the first and second optical scan data.
- generating the three-dimensional representation of the target object may comprise: registering the second optical scan data with the three-dimensional representation of the scan environment; and generating a three-dimensional representation of the target object based on the registered second optical scan data.
- the first optical scan data may comprise a first plurality of scans, wherein each scan of the first plurality of scans is from a different view of the scan volume.
- the second optical scan data may comprise a second plurality of scans, wherein each scan of the second plurality of scans is from a different view of the scan volume.
- the second plurality of scans may comprise fewer scans than the first plurality of scans.
- generating the three-dimensional representation of the scan environment may comprise identifying and aligning a feature in the first optical scan data.
- the feature in the first optical scan data may comprise a fiducial marker, a support structure, or a combination thereof.
- registering the second optical scan data may comprise: identifying a feature in the second optical scan data; and aligning the identified feature with a corresponding feature in the three-dimensional representation of the scan environment.
- the feature in the second optical scan data may comprise a fiducial marker, a support structure, or a combination thereof.
- generating the three-dimensional representation of the target object may comprise: generating an intermediate three-dimensional representation based on the second optical scan data; and subtracting the three-dimensional representation of the scan environment from the intermediate three-dimensional representation based on the second optical scan data.
- generating the three-dimensional representation of the target object may further comprise: generating an intermediate three-dimensional representation based on the registered second optical scan data; and subtracting the three-dimensional representation of the scan environment from the intermediate three-dimensional representation based on the registered second optical scan data.
- a non-transitory computer-readable storage medium comprising instructions that when executed cause a processor of a computing device to: obtain first optical scan data of a scan volume comprising a scan environment in the absence of a target object; generate a three-dimensional representation of the scan environment based on the first optical scan data; obtain second optical scan data of the scan volume comprising the scan environment and the target object; and generate a three-dimensional representation of the target object based on the first and second optical scan data.
- generating a three-dimensional representation of the target object may comprise: registering the second optical scan data with the three-dimensional representation of the scan environment; and generating a three-dimensional representation of the target object based on the registered second optical scan data.
- the first optical scan data may comprise a first plurality of images
- the second optical scan data may comprise a second plurality of images, the second plurality of images comprising fewer images than the first plurality of images.
- registering the second optical scan data may comprise: identifying a feature in the second optical scan data; and aligning the identified feature with a corresponding feature in the three-dimensional representation of the scan environment.
- a system for generating a three-dimensional representation of a target object, comprising: an optical imaging device; a memory; and a processor, the processor programmed to: receive, from the optical imaging device, first optical scan data of a scan volume comprising a scan environment in the absence of the target object; generate, based on the first optical scan data, a three-dimensional representation of the scan environment; receive, from the optical imaging device, second optical scan data of the scan volume comprising the scan environment and the target object; and generate, based on the first and second optical scan data, a three-dimensional representation of the target object.
- the optical imaging device may be a three-dimensional capture device.
- the optical imaging device may further comprise a projector arranged to project a structured light pattern on the scan environment.
- the optical imaging device may comprise a camera.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Architecture (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
According to aspects of the present disclosure, there is provided a method of generating a three-dimensional representation of a target object, comprising: obtaining first optical scan data of a scan volume comprising a scan environment in the absence of the target object; generating a three-dimensional representation of the scan environment based on the first optical scan data; obtaining second optical scan data of the scan volume comprising the scan environment and the target object; and generating a three-dimensional representation of the target object based on the first and second optical scan data.
Description
GENERATING A THREE-DIMENSIONAL REPRESENTATION OF AN OBJECT
BACKGROUND
[0001] Three-dimensional (3D) representations of objects can be used for a range of purposes, including obtaining a model for 3D printing, computer aided design applications, production of movies and video games, or virtual reality environments. For many applications, highly accurate 3D models of real-world objects are desirable, where a manually designed 3D model using computer aided design software may not provide enough accuracy, or the time or effort to produce an accurate model this way may be prohibitive. 3D optical scanning systems can be used to generate a 3D representation of a target object without the need to manually create a 3D model using computer aided design software or similar, or can determine how a 3D printed model deviates from an intended design.
BRIEF INTRODUCTION OF THE DRAWINGS
[0002] Examples of the disclosure are further described hereinafter with reference to the accompanying drawings, in which:
[0003] Figures 1a and 1b illustrate an example scan environment;
[0004] Figures 2a and 2b illustrate an example of scans taken from multiple views of a scan environment including a target object;
[0005] Figure 2c illustrates an example of a scan of a scan environment in the absence of a target object;
[0006] Figure 3 is a flowchart illustrating an example method of generating a three-dimensional representation of a target object;
[0007] Figure 4 is a flowchart illustrating a further example method of generating a three-dimensional representation of a target object;
[0008] Figure 5 is an example of a device comprising a computer-readable storage medium coupled to a processor.
DETAILED DESCRIPTION
[0009] In the following description, for purposes of explanation, numerous specific details of certain examples are set forth. Reference in the specification to "an example" or similar language means that a particular feature, structure, or characteristic described in connection with the example is included in at least that one example, but not necessarily in other examples.
[0010] Further, reference in the specification to “a”, “an” or similar language in relation to a particular feature, structure or characteristic described means a single feature/structure/characteristic or at least one feature/structure/characteristic. Thus, this wording should not be construed as limiting in its use.
[0011] Three-dimensional (3D) optical scanning systems, such as a structured light scanning system, can be used to generate a 3D representation of a target object. A 3D representation of the target object may comprise a mathematical coordinate-based model of a surface of the target object. The 3D representation may comprise a collection of points in 3D space, connected by various geometric entities such as triangles, lines, curved surfaces, polygons, or similar, for example to represent a physical body. In examples, the 3D representation may be generated using one or more photogrammetry techniques. Example file formats for a 3D representation may include stereolithography format (.stl), Wavefront OBJ format (.obj), 3D Manufacturing Format (.3mf), or similar. The 3D representation may comprise a Computer Aided Design (CAD) model. In examples, the 3D representation may be represented using polygonal modelling, curve modelling, or digital sculpting techniques. In examples, a collection of points may be converted into a polygon mesh by connecting each point to its respective nearest neighbours using straight lines. The density of these points can vary, with a larger number of points improving the ability to reconstruct fine features of a target object. A file for the 3D representation may comprise a 3D position for each of a plurality of vertices, a normal direction for each of the vertices, and, for each face, a list of the vertices that form said face.
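As an illustration of the file structure described above (vertex positions, per-vertex normals, and per-face vertex lists), the following Python sketch serialises a small mesh to Wavefront OBJ text. The helper name and triangle data are illustrative only, not taken from any particular scanner's output:

```python
def to_obj(vertices, normals, faces):
    """Serialise vertex positions, per-vertex normals, and faces (given as
    0-based vertex index tuples) to Wavefront OBJ text (OBJ is 1-based)."""
    lines = []
    for x, y, z in vertices:
        lines.append(f"v {x} {y} {z}")
    for nx, ny, nz in normals:
        lines.append(f"vn {nx} {ny} {nz}")
    for face in faces:
        # The 'v//vn' form references a vertex and its normal (no texture).
        lines.append("f " + " ".join(f"{i + 1}//{i + 1}" for i in face))
    return "\n".join(lines) + "\n"

# A single triangle in the z = 0 plane, with a shared upward normal.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
normals = [(0.0, 0.0, 1.0)] * 3
faces = [(0, 1, 2)]
obj_text = to_obj(vertices, normals, faces)
```

The same vertex/normal/face structure maps onto the other formats named above (.stl stores per-facet normals instead of per-vertex ones).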
[0012] A structured light scanning system uses a projected light pattern and a camera, or similar imaging device, in order to obtain 3D data relating to a target object. The projected light pattern may use visible or non-visible light. Alternative methods of 3D scanning may include modulated light scanning, laser scanning, or similar 3D imaging techniques.
[0013] Some 3D scanning systems may use a plurality of scans comprising multiple views of a target object to obtain data relating to, for example, both a front face and a back face of the target object, in order to accurately measure a dimension of the target object, for example a thickness of a part. The multiple views may be transformed to a common coordinate system using registration or alignment techniques, for example the multiple views may be registered by using the overlapping 3D data as a key, by identifying common features within one or more of the multiple views, or by registering data using photogrammetry techniques from a 2D image to a generated 3D data set. Therefore, many views of an object, with sufficient overlap for registering common features, may be used to obtain an accurate 360-degree representation of the target object. However, for some target objects, even though the scans of multiple views may overlap, there may be insufficient geometric detail present in the scans to accurately register the multiple views. This may result in slippage of the registration along the length or width of the multiple views of the target object. For example, when trying to measure the thickness of a lamina plate, the registration of the front and back views of the plate may be difficult or not possible from the scanned data alone.
[0014] Additional objects may be added to the scan environment, in the field of view of the optical imaging device, which move in conjunction with the scanned object. In some examples, the scanned object and the additional objects may be stationary whilst the imaging device moves about the target object. These additional objects can help with the registration of a plurality of scans comprising multiple views of the target object. In particular, two- or three-dimensional fiducial markers can be used in order to facilitate registration of multiple views of a target object. These fiducial markers can be attached to the background of the scan environment or to the object itself. The fiducial markers visible in each overlapping view of the target object can be used to constrain and register these views. A 2D fiducial marker may comprise a marker placed on a surface and identified independently in each image. Multiple 2D fiducial markers may then be matched and used to reconstruct respective points in the world. A 3D fiducial marker may comprise a geometric construct, such as a sphere, that may be identified and accurately located (e.g. by its centre) in 3D mesh data. In each case, the fiducial marker may be representable by a single coordinate position in 3D space.
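Reducing a spherical 3D fiducial marker to a single coordinate can be illustrated with a least-squares sphere fit. The sketch below (the `fit_sphere` helper and the synthetic surface points are illustrative, standing in for real mesh data) recovers a sphere's centre and radius by rewriting the sphere equation as a linear system:

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: ||p - c||^2 = r^2 expands to the linear
    system ||p||^2 = 2 c.p + (r^2 - ||c||^2), solvable with one lstsq call."""
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])   # unknowns: cx, cy, cz, k
    b = (p ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = x[:3]
    radius = np.sqrt(x[3] + centre @ centre)          # k = r^2 - ||c||^2
    return centre, radius

# Synthetic points on a sphere of radius 2 centred at (1, -1, 3).
rng = np.random.default_rng(0)
d = rng.normal(size=(200, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = np.array([1.0, -1.0, 3.0]) + 2.0 * d
centre, radius = fit_sphere(pts)
```

In a real pipeline the input points would be the mesh vertices segmented as belonging to the marker, and the fitted centre would serve as the marker's single coordinate position.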
[0015] Attaching fiducial markers to the target object can be impractical or a burden to the ease of use of the scanning system. Furthermore, the scanned target object may obscure the number of fiducial markers visible in each view, which can compromise the subsequent accuracy of the registration of the available fiducial markers, leading to measurements with reduced accuracy.
[0016] In examples, a method of improving the accuracy of a scan of a target object may be provided, by generating a 3D representation of the scan environment in the absence of a target object, for example based on first optical scan data. This 3D representation of the scan environment can then be used as a reference against which scan data including the target object is registered which may improve the registration accuracy whilst substantially reducing the number of scan views of the target object used to generate the 3D representation. This is to say, fiducial markers or other identifiable features in the target object scan data may be registered against corresponding fiducial markers or other identifiable features in the 3D representation of the scan environment.
[0017] In order to generate a 3D representation of the scan environment, a first plurality of scans is obtained, the first plurality of scans comprising multiple views of the scan environment in the absence of a target object. For example, the scan environment may include any of 2D or 3D fiducial markers, support structures, platforms, or similar. The 3D representation of the scan environment may then be generated based on the first plurality of scans by identifying and registering a common feature(s) between multiple views of the scan environment. For example, registration may be based on 2D or 3D fiducials, 3D scan data of a support structure, an extracted feature (for example an edge or vertex) of a support structure, etc., or a combination thereof. The 3D representation of the scan environment can then be generated by combining the aligned multiple views to obtain an accurate representation of the scan environment. Registration and combination of the first plurality of scans may be done by compositing the scan data from the aligned views. In some examples, the scan data from the aligned views may be fused into a single mesh. Positions of fiducial markers that are aligned may also be combined to generate a more accurate location. Combination can be achieved by simply averaging the positions of the aligned fiducial markers or other common feature(s), or by using a more complex statistical process taking into account the error distribution of the individual measurements. This process may be performed iteratively, wherein each set of fiducial markers for each view is aligned to a current best combined estimate of the fiducial marker positions. From this alignment, a new best combined estimate may be derived before further rounds of alignment and combination. In some examples, a single scan of the scan environment may be used to generate the 3D representation of the scan environment.
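The iterative align-and-combine process can be sketched as follows. For brevity this illustration solves only for a per-view translation when aligning each view to the current best combined estimate; a full implementation would also solve for rotation (for example with an SVD-based method). The marker positions are synthetic:

```python
import numpy as np

def combine_fiducials(views, iterations=5):
    """Iteratively estimate combined fiducial positions from several views.
    Each view is an (N, 3) array of the same N markers, here assumed to
    differ from the true positions by a per-view translation plus noise."""
    estimate = np.mean(views, axis=0)                 # initial combined estimate
    for _ in range(iterations):
        aligned = []
        for v in views:
            # Best translation aligning this view's markers to the estimate.
            offset = (estimate - v).mean(axis=0)
            aligned.append(v + offset)
        estimate = np.mean(aligned, axis=0)           # new best combined estimate
    return estimate

truth = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
views = [truth + np.array([0.3, -0.2, 0.1]),          # one view shifted one way
         truth + np.array([-0.3, 0.2, -0.1])]        # another shifted oppositely
combined = combine_fiducials(views)
```

With the opposing shifts used here the combined estimate recovers the underlying marker layout; in general the result is defined only up to the common coordinate frame chosen for the combination.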
[0018] In examples, generation of a 3D representation of the target object is achieved by registering a second plurality of scans with the 3D representation of the scan environment, the second plurality of scans comprising multiple views of the target object within the scan environment. Registering the second plurality of scans with the 3D representation of the scan environment may comprise identifying a feature(s) in a scan of the second plurality of scans and registering said feature with a corresponding feature in the 3D representation of the scan environment. For example, the registration may be based on 2D or 3D fiducials, an edge or vertex of a support structure, or a combination thereof. The 3D representation of the target object may then be generated based on the registered second plurality of scans, by combining each registered scan of the registered second plurality of scans to generate a composite 3D representation of the target object.
[0019] In examples, the registered second plurality of scans may be used to generate an intermediate 3D representation of the scan environment including the target object. Objects identified in the 3D representation of the scan environment may then be removed from the intermediate 3D representation in order to generate a 3D representation of the target object that does not include those objects or features present in the scan environment and not related to the target object. Removal of these objects may be done by subtraction of the 3D representation of the scan environment from the intermediate 3D representation. For example, subtracting the 3D representation of the scan environment from the intermediate 3D representation of the target object may remove any fiducial markers or support structures from the intermediate 3D representation, resulting in a 3D representation of the target object which includes the target object without other objects or features which are not of interest. In some examples, removing a support structure from the intermediate 3D representation may comprise defining a region around the support structure, and, once the second plurality of scans have been registered or aligned, deleting any objects within the defined region from the intermediate 3D representation.
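Removal by subtraction can be illustrated with point data. The sketch below (a brute-force distance check over synthetic points; a real system would use a spatial index and operate on meshes) discards every point of the intermediate representation that lies within a threshold of the scan environment representation, leaving only target-object points:

```python
import numpy as np

def subtract_environment(intermediate, environment, threshold=0.05):
    """Remove from the intermediate point set any point lying within
    `threshold` of a point in the environment representation."""
    inter = np.asarray(intermediate, dtype=float)
    env = np.asarray(environment, dtype=float)
    # (M, N) matrix of pairwise distances between the two point sets.
    dists = np.linalg.norm(inter[:, None, :] - env[None, :, :], axis=2)
    keep = dists.min(axis=1) > threshold
    return inter[keep]

environment = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # e.g. platform points
intermediate = np.array([[0.0, 0.0, 0.0],                    # environment point
                         [1.001, 0.0, 0.0],                  # near-environment point
                         [0.5, 0.5, 0.5]])                   # target object point
target_only = subtract_environment(intermediate, environment)
```

The same thresholding idea implements the "defined region around the support structure" variant: the region is simply the set of points within the threshold distance of the support structure's representation.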
[0020] In examples, generation of a 3D representation of the target object is achieved by registering a first scan of the second plurality of scans with a second scan of the second plurality of scans, in order to generate an intermediate 3D representation of the target object based on the second plurality of scans. This intermediate 3D representation may be generated based on the second plurality of scans by identifying and aligning a common feature(s) between multiple views of the target object. The 3D representation of the scan environment may then be subtracted from the intermediate 3D representation of the target object in order to generate a 3D representation of the target object that does not include objects or features relating to the scan environment itself, i.e., without other objects or features which are not of interest.
[0021] Scanning the scan environment in the absence of the target object, and generating a 3D representation of the scan environment based on said scan(s), may allow for fewer scan views of the target object to be used whilst maintaining accurate registration of the target object within the scan environment, and accurate registration between multiple scan views of the target object.
[0022] By registering the target object scan data to the 3D representation of the scan environment, multiple views of a target object may be accurately registered even when there is little or no overlap between the multiple views. Accordingly, accurate registration may be possible with fewer overall scan views of the target object. Additionally, registration of the target object scans to the 3D representation of the scan environment may be less computationally demanding compared to registration of the target object scans to other scans within the set of target object scans.
[0023] A single set of first scans may be used to generate a 3D representation of the scan environment which can then be used as a reference against which to register scan data relating to multiple different target objects. Accordingly, through reuse of a common scan environment representation, the overall number of scan views for sequentially scanning multiple target objects may be reduced.
[0024] Figures 1a and 1b show example scan environments 100, 101 comprising a target object 102. Figure 1a shows an example scan environment 100 comprising a target object 102, a platform 104, and one or more fiducial markers 106a-106h. In the example of Figure 1a, the target object 102 is a simple cuboid object, but it will be appreciated that the method herein is also applicable to more complex shapes. The fiducial markers 106 of Figure 1a are illustrated as 2D fiducial markers, but examples may use 3D fiducial markers or a combination of 2D and 3D fiducial markers. The fiducial markers 106 may be positioned randomly within the scan environment.
[0025] Figure 1b shows the example scan environment 101 including a target object 102 and a support structure 108. In examples, the support structure 108 may be any of a platform, a turntable, a clamp, a robotic arm, a pillar, or any similar structure suitable for supporting the target object in the scan environment within the field of view of an optical imaging device.
[0026] The scan environments illustrated in Figures 1a and 1b are examples, and it will be appreciated that other scan environments comprising other identifiable alignment or reference features may be used.
[0027] Figures 2a and 2b show an example of scans of a scan environment 200 including a target object 202 taken from multiple views. The example of Figures 2a and 2b shows approximately 180 degrees of rotation in the plane of platform 204 between the view of Figure 2a and the view of Figure 2b. A plurality of fiducial markers 206a-206h are positioned within scan environment 200. Fiducial markers 206a, 206b, 206c and 206d are visible in both the view of Figure 2a and the view of Figure 2b. Fiducial markers 206e and 206f are visible in the view of Figure 2a, but are visually obstructed by the target object 202 in the view of Figure 2b. Similarly, fiducial markers 206g and 206h are visible in the view of Figure 2b, but are visually obstructed by the target object 202 in the view of Figure 2a.
[0028] Figure 2c shows an example of a 3D representation of a scan of the scan environment 200 in the absence of target object 202. The example of Figure 2c is a view from the same angle as Figure 2a, but with target object 202 not present. With the target object 202 absent, fiducial markers 206g and 206h, which were previously obscured by target object 202, are now visible to the imaging device. The 3D representation of Figure 2c can be used to provide a baseline or reference point against which to compare or register the scans of the scan environment 200 including the target object 202. In some examples, the scan(s) of the environment in the absence of a target object may be taken from a different view to any of the scans of the scan environment including the target object.
[0029] Without a 3D representation of the scan environment 200, the fiducial markers visible in both views of Figures 2a and 2b, for example fiducial markers 206a, 206b, 206c and 206d, may be used to align the views including the target object 202 with each other. However, a 3D representation of the scan environment 200 will include all fiducial markers, as none of the fiducial markers will be visually obstructed by the target object 202, because the target object 202 is not present in the 3D representation of the scan environment 200. Therefore, all fiducial markers visible in a scan comprising the target object 202 may be used to register the target object scan data with the 3D representation of the scan environment, even if they are visible in one view of the scan environment 200 including the target object 202 but not in other views. By aligning based on more fiducial markers, as opposed to just those common to the target object scan views, the accuracy of registration and alignment of the second (target object) scan data may be improved.
[0030] Figure 3 shows a flowchart of an example method 300 of generating a three-dimensional representation of a target object, for example target object 102 of Figure 1a or 1b, or target object 202 of Figure 2a or 2b. At block 302, first scan data is obtained, the first scan data corresponding to the scan environment without the target object included. At block 304, a 3D representation of the scan environment is generated based on the first scan data. At block 306, second scan data is obtained, the second scan data corresponding to the scan environment with the target object included. At block 308, a 3D representation of the target object is generated based on the first and second scan data.
[0031] A 3D representation of the target object may be generated by compositing aligned scans. In examples, this alignment can be achieved based on the 3D positions of fiducial markers which may be used to calculate a transformation that best aligns the fiducial markers between multiple views. In further examples, the alignment can be based on calculating an alignment which best aligns 3D mesh data (or a point cloud extracted from the 3D mesh data). In some examples, alignment can be based on a combination of fiducial marker and mesh or point cloud data.
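A transformation that best aligns corresponding fiducial markers between two views can be computed with the SVD-based (Kabsch) method. The sketch below, on synthetic marker positions, recovers a known rotation and translation; the helper name is illustrative:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping points `src` onto `dst`
    using the SVD-based (Kabsch) method."""
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)             # cross-covariance of the two sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# Four markers in one view, and the same markers seen from a view rotated
# 90 degrees about z and shifted.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
dst = src @ Rz.T + np.array([2.0, 0.0, -1.0])
R, t = rigid_transform(src, dst)
```

At least three non-collinear correspondences are needed for the rotation to be determined; the reflection guard matters when the marker set is nearly planar.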
[0032] Accurate alignment of 3D mesh data or point clouds may comprise roughly aligning the 3D mesh data or point clouds, and then applying a refinement process based on an iterative closest point (ICP) approach to improve the accuracy of the alignment.
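A minimal point-to-point ICP refinement loop might look as follows. This is a sketch on synthetic points with brute-force nearest-neighbour search; production systems use spatial indices (e.g. k-d trees), outlier rejection, and convergence tests:

```python
import numpy as np

def best_rigid(src, dst):
    # SVD-based least-squares rigid transform between corresponding points.
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

def icp(src, dst, iterations=10):
    """Repeatedly match each source point to its nearest destination point,
    then apply the best rigid transform for that matching."""
    cur = np.asarray(src, dtype=float).copy()
    dst = np.asarray(dst, dtype=float)
    for _ in range(iterations):
        nn = np.linalg.norm(cur[:, None] - dst[None], axis=2).argmin(axis=1)
        R, t = best_rigid(cur, dst[nn])
        cur = cur @ R.T + t
    return cur

# Destination: a 3x3x3 grid of points; source: the same grid slightly rotated
# about z and translated, so the initial nearest neighbours are all correct.
g = np.array([-1.0, 0.0, 1.0])
dst_pts = np.array([[x, y, z] for x in g for y in g for z in g])
a = 0.05
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a), np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
src_pts = dst_pts @ Rz.T + np.array([0.02, -0.01, 0.03])
aligned = icp(src_pts, dst_pts)
```

This illustrates why a rough initial alignment matters: ICP converges here because the perturbation is smaller than the point spacing, so the first nearest-neighbour matching is already correct.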
[0033] Example techniques for alignment of mesh data or point clouds are described in Winkelbach, S., Molkenstruck, S., and Wahl, F. M. (2006), ‘Low-cost laser range scanner and fast surface registration approach’, Pattern Recognition, pages 718-728; and in Azhar, F., Pollard, S., and Adams, G. (2019) ‘Gaussian Curvature Criterion based Random Sample Matching for Improved 3D Registration’, VISIGRAPP 2019, which uses a Gaussian Curvature based criterion to discard false point correspondences within a random sample matching framework to improve 3D registration.
[0034] Example techniques for computing a 3D transformation based on a set of corresponding points are described in Lorusso, A., Eggert, D., and Fisher, R. (1995), ‘A comparison of four algorithms for estimating 3-D rigid transformations’, BMVC, which describes techniques using a singular value decomposition of a matrix, orthonormal matrices, unit quaternions, and dual quaternions.
[0035] In examples, the composited scan data may be combined in order to reduce multiple overlapping meshes to a single mesh. Techniques for combining the composited scan data are described in Kazhdan, M., and Hoppe, H. (2012), ‘Screened Poisson Surface Reconstruction’, ACM Transactions on Graphics (ToG) 32, no. 3, pages 1-13, which describes techniques to explicitly incorporate oriented point sets as interpolation constraints. This combination may also comprise applying smoothing, hole filling, or similar techniques to the 3D mesh data.
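The smoothing step mentioned above can be illustrated with simple Laplacian smoothing, one common mesh clean-up technique (not the Poisson method of the cited work). The sketch uses a tiny synthetic two-triangle patch with one vertex lifted out of the plane:

```python
import numpy as np

def laplacian_smooth(vertices, faces, iterations=10, lam=0.5):
    """Move each vertex a fraction `lam` towards the average of its
    edge-connected neighbours, repeated for several iterations."""
    v = np.asarray(vertices, dtype=float).copy()
    # Build vertex adjacency from triangle faces.
    neighbours = [set() for _ in range(len(v))]
    for a, b, c in faces:
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    for _ in range(iterations):
        avg = np.array([v[list(n)].mean(axis=0) if n else v[i]
                        for i, n in enumerate(neighbours)])
        v = v + lam * (avg - v)
    return v

# A two-triangle patch with one vertex lifted out of the z = 0 plane.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [1.0, 1.0, 0.5]])
faces = [(0, 1, 3), (0, 3, 2)]
smoothed = laplacian_smooth(verts, faces)
```

Each smoothing step is a convex combination of neighbouring positions, so the spread of the out-of-plane coordinate shrinks; unchecked, this also shrinks genuine features, which is why practical pipelines limit the iteration count or use feature-preserving variants.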
[0036] In some examples, the first scan data may be obtained prior to the second scan data. Alternatively, in some examples, the second scan data may be obtained prior to the first scan data.
[0037] Figure 4 shows a flowchart of an example method 400 of generating a three-dimensional representation of a target object, for example target object 102 of Figure 1a or 1b, or target object 202 of Figure 2a or 2b. At block 402, first scan data is obtained, the first scan data corresponding to the scan environment without the target object included. At block 404, a 3D representation of the scan environment is generated based on the first scan data. At block 406, second scan data is obtained, the second scan data corresponding to the scan environment with the target object included. At block 408, the second scan data is registered with the 3D representation of the scan environment to create registered second scan data. At block 410, a 3D representation of the target object is generated based on the registered second scan data.
[0038] Certain methods and systems as described herein may be implemented by a processor that processes program code that is retrieved from a non-transitory storage medium. As used herein, the term “non-transitory” does not encompass transitory propagating signals. In particular, all or part of the methods illustrated in Figures 3 or 4 may be implemented in the form of computer program code stored on computer readable media and executable by a processor to perform the described methods. Figure 5 shows an example 500 of a device comprising a computer-readable storage medium 530 coupled to a processor 520.
[0039] Processors suitable for the execution of computer program code include, by way of example, both general and special purpose microprocessors, application specific integrated circuits (ASIC) or field programmable gate arrays (FPGA) operable to retrieve and act on instructions and/or data from the computer-readable storage medium 530.
[0040] The computer-readable storage medium 530 may be any media that can contain, store, or maintain programs and data for use by or in connection with an instruction execution system (e.g., non-transitory computer readable media). Computer-readable media can comprise any one of many physical media such as, for example, electronic, magnetic, optical, electromagnetic, or semiconductor media. More specific examples of suitable machine-readable media include, but are not limited to, a hard drive, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory, or a portable disc.
[0041] In Figure 5, the computer-readable storage medium comprises program code to, when executed on a computing device: obtain 502 first scan data of a scan environment in the absence of a target object, generate 504 a 3D representation of the scan environment based on the first scan data, obtain 506 second scan data of the scan environment within which the target object is present, and generate 508 a 3D representation of the target object based on the first and second scan data.
[0042] In other examples, the computer-readable storage medium 530 may comprise program code to perform any of the methods, or parts thereof, illustrated in Figures 3 and 4, and discussed above.
[0043] All of the features disclosed in this specification (including any accompanying claims, abstract, and drawings) may be combined in any combination, except combinations where some of such features are mutually exclusive. Each feature disclosed in this specification, including any accompanying claims, abstract, and drawings, may be replaced by alternative features serving the same, equivalent, or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example of a generic series of equivalent or similar features.
[0044] The present teachings are not restricted to the details of any foregoing examples. Any novel combination of the features disclosed in this specification (including any accompanying claims, abstract, and drawings) may be envisaged. The claims should not be construed to cover merely the foregoing examples, but also any variants that fall within the scope of the claims.
[0045] According to an example, there is provided a method of generating a three-dimensional representation of a target object, comprising: obtaining first optical scan data of a scan volume comprising a scan environment in the absence of the target object; generating a three-dimensional representation of the scan environment based on the first optical scan data; obtaining second optical scan data of the scan volume comprising the scan environment and the target object; and generating a three-dimensional representation of the target object based on the first and second optical scan data.
[0046] In an example, generating the three-dimensional representation of the target object may comprise: registering the second optical scan data with the three-dimensional representation of the scan environment; and generating a three-dimensional representation of the target object based on the registered second optical scan data.
[0047] In an example, the first optical scan data may comprise a first plurality of scans, wherein each scan of the first plurality of scans is from a different view of the scan volume.
[0048] In an example, the second optical scan data may comprise a second plurality of scans, wherein each scan of the second plurality of scans is from a different view of the scan volume.
[0049] In an example, the second plurality of scans may comprise fewer scans than the first plurality of scans.
[0050] In an example, generating the three-dimensional representation of the scan environment may comprise identifying and aligning a feature in the first optical scan data.
[0051] In an example, the feature in the first optical scan data may comprise a fiducial marker, a support structure, or a combination thereof.
[0052] In an example, registering the second optical scan data may comprise: identifying a feature in the second optical scan data; and aligning the identified feature with a corresponding feature in the three-dimensional representation of the scan environment.
[0053] In an example, the feature in the second optical scan data may comprise a fiducial marker, a support structure, or a combination thereof.
[0054] In an example, generating the three-dimensional representation of the target object may comprise: generating an intermediate three-dimensional representation based on the second optical scan data; and subtracting the three-dimensional representation of the scan environment from the intermediate three-dimensional representation based on the second optical scan data.
[0055] In an example, generating the three-dimensional representation of the target object may further comprise: generating an intermediate three-dimensional representation based on the registered second optical scan data; and subtracting the three-dimensional representation of the scan environment from the intermediate three-dimensional representation based on the registered second optical scan data.
[0056] According to an example, there is provided a non-transitory computer-readable storage medium comprising instructions that when executed cause a processor of a computing device to: obtain first optical scan data of a scan volume comprising a scan environment in the absence of a target object; generate a three-dimensional representation of the scan environment based on the first optical scan data; obtain second optical scan data of the scan volume comprising the scan environment and the target object; and generate a three-dimensional representation of the target object based on the first and second optical scan data.
[0057] In an example, generating a three-dimensional representation of the target object may comprise: registering the second optical scan data with the three-dimensional representation of the scan environment; and generating a three-dimensional representation of the target object based on the registered second optical scan data.
[0058] In an example, the first optical scan data may comprise a first plurality of images, and the second optical scan data may comprise a second plurality of images, the second plurality of images being less than the first plurality of images.
[0059] In an example, registering the second optical scan data may comprise: identifying a feature in the second optical scan data; and aligning the identified feature with a corresponding feature in the three-dimensional representation of the scan environment.
[0060] According to an example, there is provided a system for generating a three-dimensional representation of a target object, comprising: an optical imaging device; a memory; and a processor, the processor programmed to: receive, from the optical imaging device, first optical scan data of a scan volume comprising a scan environment in the absence of the target object; generate, based on the first optical scan data, a three-dimensional representation of the scan environment; receive, from the optical imaging device, second optical scan data of the scan volume comprising the scan environment and the target object; and generate, based on the first and second optical scan data, a three-dimensional representation of the target object.
[0061] In an example, the optical imaging device may be a three-dimensional capture device.
[0062] In an example, the optical imaging device may further comprise a projector arranged to project a structured light pattern on the scan environment.
[0063] In an example, the optical imaging device may comprise a camera.
Claims
1. A method of generating a three-dimensional representation of a target object, comprising: obtaining first optical scan data of a scan volume comprising a scan environment in the absence of the target object; generating a three-dimensional representation of the scan environment based on the first optical scan data; obtaining second optical scan data of the scan volume comprising the scan environment and the target object; and generating a three-dimensional representation of the target object based on the first and second optical scan data.
2. The method of claim 1, wherein generating the three-dimensional representation of the target object comprises: registering the second optical scan data with the three-dimensional representation of the scan environment; and generating a three-dimensional representation of the target object based on the registered second optical scan data.
3. The method of claim 1, wherein the first optical scan data comprises a first plurality of scans, wherein each scan of the first plurality of scans is from a different view of the scan volume.
4. The method of claim 2, wherein the second optical scan data comprises a second plurality of scans, the second plurality of scans being less than the first plurality of scans, wherein each scan of the second plurality of scans is from a different view of the scan volume.
5. The method of claim 3, wherein generating the three-dimensional representation of the scan environment comprises identifying and aligning a feature in the first optical scan data.
6. The method of claim 5, wherein the feature comprises a fiducial marker.
7. The method of claim 2, wherein registering the second optical scan data comprises: identifying a feature in the second optical scan data; and aligning the identified feature with a corresponding feature in the three-dimensional representation of the scan environment.
8. The method of claim 7, wherein the feature comprises a fiducial marker or a support structure.
9. The method of claim 1, wherein generating the three-dimensional representation of the target object comprises: generating an intermediate three-dimensional representation based on the second optical scan data; and subtracting the three-dimensional representation of the scan environment from the intermediate three-dimensional representation based on the second optical scan data.
10. The method of claim 2, wherein generating the three-dimensional representation of the target object further comprises: generating an intermediate three-dimensional representation based on the registered second optical scan data; and subtracting the three-dimensional representation of the scan environment from the intermediate three-dimensional representation based on the registered second optical scan data.
11. A non-transitory computer-readable storage medium comprising instructions that when executed cause a processor of a computing device to: obtain first optical scan data of a scan volume comprising a scan environment in the absence of a target object; generate a three-dimensional representation of the scan environment based on the first optical scan data; obtain second optical scan data of the scan volume comprising the scan environment and the target object; and generate a three-dimensional representation of the target object based on the first and second optical scan data.
12. The storage medium of claim 11, wherein generating a three-dimensional representation of the target object comprises: registering the second optical scan data with the three-dimensional representation of the scan environment; and generating a three-dimensional representation of the target object based on the registered second optical scan data.
13. The storage medium of claim 11, wherein the first optical scan data comprises a first plurality of images, and the second optical scan data comprises a second plurality of images, the second plurality of images comprising fewer images than the first plurality of images.
14. The storage medium of claim 12, wherein registering the second optical scan data comprises: identifying a feature in the second optical scan data; and aligning the identified feature with a corresponding feature in the three-dimensional representation of the scan environment.
15. A system for generating a three-dimensional representation of a target object, comprising: an optical imaging device; a memory; and a processor, the processor programmed to: receive, from the optical imaging device, first optical scan data of a scan volume comprising a scan environment in the absence of the target object; generate, based on the first optical scan data, a three-dimensional representation of the scan environment; receive, from the optical imaging device, second optical scan data of the scan volume comprising the scan environment and the target object; and
generate, based on the first and second optical scan data, a three-dimensional representation of the target object.
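Claims 7, 12 and 14 recite registering the second scan by identifying a feature (for example, a fiducial marker) and aligning it with the corresponding feature in the environment model. The claims do not specify an algorithm; as an illustrative sketch only, an alignment over several corresponding fiducial points can be computed as a least-squares rigid transform using the standard Kabsch method. All function and variable names below are hypothetical.

```python
import numpy as np

def register_by_fiducials(scan_pts, model_pts):
    """Least-squares rigid transform (Kabsch method) that maps fiducial
    points detected in the second scan onto the matching fiducials in the
    environment model, i.e. model ~= R @ scan + t."""
    cs, cm = scan_pts.mean(axis=0), model_pts.mean(axis=0)
    H = (scan_pts - cs).T @ (model_pts - cm)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ cs
    return R, t
```

Applying the returned rotation R and translation t to the second scan would bring it into the coordinate frame of the environment model before any subtraction step.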
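Claims 9 and 10 recite subtracting the environment representation from an intermediate representation built from the second scan. Assuming, for illustration only, that both representations are point clouds, the subtraction can be sketched as discarding every point of the combined scan that lies close to the environment model; the distance threshold and the brute-force nearest-neighbor search are assumptions of this sketch, not part of the claims.

```python
import numpy as np

def isolate_target(env_pts, combined_pts, threshold=0.01):
    """Keep only those points of the combined (environment + object) scan
    that are farther than `threshold` from every point of the
    environment-only model; the remainder approximates the target object."""
    kept = [p for p in combined_pts
            if np.min(np.linalg.norm(env_pts - p, axis=1)) > threshold]
    return np.array(kept)
```

In practice a spatial index (e.g. a k-d tree) would replace the brute-force distance scan, but the principle of the subtraction is the same.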
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2021/064120 WO2023113823A1 (en) | 2021-12-17 | 2021-12-17 | Generating a three-dimensional representation of an object |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023113823A1 (en) | 2023-06-22 |
Family ID: 86773277
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2021/064120 WO2023113823A1 (en), Ceased | Generating a three-dimensional representation of an object | 2021-12-17 | 2021-12-17 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2023113823A1 (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160284104A1 (en) * | 2013-11-27 | 2016-09-29 | Hewlett-Packard Development Company, L.P. | Determine the Shape of a Representation of an Object |
| US20180197328A1 (en) * | 2015-09-30 | 2018-07-12 | Hewlett-Packard Development Company, L.P. | Three-dimensional model generation |
| CN110268449A (en) * | 2017-04-26 | 2019-09-20 | Hewlett-Packard Development Company, L.P. | Locate a region of interest on an object |
| WO2020222781A1 (en) * | 2019-04-30 | 2020-11-05 | Hewlett-Packard Development Company, L.P. | Geometrical compensations |
| US20210093414A1 (en) * | 2018-06-19 | 2021-04-01 | Tornier, Inc. | Mixed-reality surgical system with physical markers for registration of virtual models |
- 2021-12-17: WO PCT/US2021/064120 patent WO2023113823A1 (en), not active (Ceased)
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117893696A (en) * | 2024-03-15 | 2024-04-16 | Zhejiang Lab | A method, device, storage medium and electronic device for generating three-dimensional human body data |
| CN117893696B (en) * | 2024-03-15 | 2024-05-28 | Zhejiang Lab | A method, device, storage medium and electronic device for generating three-dimensional human body data |
Similar Documents
| Publication | Title |
|---|---|
| JP5430456B2 (en) | Geometric feature extraction device, geometric feature extraction method, program, three-dimensional measurement device, object recognition device |
| JP5206366B2 (en) | 3D data creation device |
| US20100328308A1 (en) | Three Dimensional Mesh Modeling |
| US20120177284A1 (en) | Forming 3d models using multiple images |
| JP2016119086A (en) | Texturing 3d modeled object |
| JP2016217941A (en) | 3D data evaluation apparatus, 3D data measurement system, and 3D measurement method |
| US20240312033A1 (en) | Method to register facial markers |
| KR101602472B1 (en) | Apparatus and method for generating 3D printing file using 2D image converting |
| KR102023042B1 (en) | Foot scanning apparatus and method for scanning foot thereof |
| JP2000268179A (en) | Three-dimensional shape information obtaining method and device, two-dimensional picture obtaining method and device and record medium |
| JP2004234350A (en) | Image processing apparatus, image processing method, and image processing program |
| Akca et al. | Fast correspondence search for 3D surface matching |
| WO2023113823A1 (en) | Generating a three-dimensional representation of an object |
| Mao et al. | Robust surface reconstruction of teeth from raw pointsets |
| JP2007322351A (en) | 3D object verification device |
| CN116433841A (en) | A real-time model reconstruction method based on global optimization |
| Kumara et al. | Real-time 3D human objects rendering based on multiple camera details |
| CN118070434B (en) | Method and system for constructing process information model of automobile part |
| EP4345748A1 (en) | Computer-implemented method for providing a lightweight 3-dimensional model, a non-transitory computer-readable medium, and a data structure |
| KR101533494B1 (en) | Method and apparatus for generating 3d video based on template mode |
| Sosa et al. | 3D surface reconstruction of entomological specimens from uniform multi-view image datasets |
| CN116523927B (en) | Method for fusion cutting of cultural relic CT image and three-dimensional shell based on VTK |
| Thomas et al. | A study on close-range photogrammetry in image based modelling and rendering (imbr) approaches and post-processing analysis |
| Budd et al. | Temporal alignment of 3d video sequences using shape and appearance |
| JP2013254300A (en) | Image processing method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21968362; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 21968362; Country of ref document: EP; Kind code of ref document: A1 |