GB2621846A - 3D visualisation system - Google Patents
- Publication number: GB2621846A (application GB2212272.5)
- Authority
- GB
- United Kingdom
- Prior art keywords
- subject
- mesh
- model
- photogrammetry
- bodily portion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F2/00—Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
- A61F2/02—Prostheses implantable into the body
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F2240/00—Manufacturing or designing of prostheses classified in groups A61F2/00 - A61F2/26 or A61F2/82 or A61F9/00 or A61F11/00 or subgroups thereof
- A61F2240/001—Designing or manufacturing processes
- A61F2240/002—Designing or making customized prostheses
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30052—Implant; Prosthesis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/44—Morphing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2021—Shape modification
Abstract
Methods and systems for visualising a desired bodily portion of a subject for use in cosmetic or reconstructive surgery are disclosed. Three-dimensional (3D) imaging data of a bodily portion of the subject is obtained 101. A 3D model of the bodily portion of the subject is generated 102 based on the 3D imaging data and data representative of a 3D structure representative of the bodily portion, such as soft-tissue and bone. The external appearance of the 3D model of the bodily portion of the subject is adjusted 103 until a desired outcome is reached whilst taking into account the structural properties of the bodily portion of the subject, and the 3D model of the desired bodily portion is output 104. The output data may be used for generating one or more surgical implants. The adjusting may involve using finite element analysis, mass-spring analysis and elasticity and/or compression properties. The imaging data comprises 3D photogrammetry data, which is superimposed on the structure data using meshes and landmarked features. Taking into account structural properties may involve an assessment of implantable regions. The invention seeks to improve the visualisation of a desired bodily portion of a subject prior to surgery.
Description
3D VISUALISATION SYSTEM
Technical Field
[0001] The present application relates to apparatus, systems and method(s) for three-dimensional (3D) visualisation of a desired bodily portion of a subject in preparation for surgery.
Background
[0002] In the field of cosmetic and reconstructive craniofacial surgery, facial implants are placed on the skull and change the morphology and appearance of the face of a subject (e.g. a patient). Facial implants include implants such as zygomatic, maxillary, forehead, orbital and mandibular implants. Conventional design and manufacture of facial implants is still highly dependent on the skill of the surgeon to shape the facial implant for a subject, rather than the implant being precisely engineered for that subject. Most facial implants are designed to reconstruct or enhance the facial form of the subject, but it is currently not possible to accurately predict, or even present to the subject, the change in facial shape a specific implant will induce. Nor do existing implant design techniques or processes provide the subject with a visualised or realistic prediction of their post-operative facial appearance prior to surgery.
[0003] Currently, implants are designed using an inside-out process; i.e., the shape of the implant is determined based on anatomical norms, and the change in facial shape of the subject this will induce is inferred. Implants can be designed and then shaped to restore normal anatomical shape to the bone of a subject. Whilst restoring normal bony anatomy, this technique does not take into account any overlying soft tissue anomalies, which makes the final change in facial appearance of a subject very unpredictable. This also usually leads to adjustments being made to the facial implant during surgery, which may in turn lead to unpredictable results and more repeat visits.
[0004] Similarly, other cosmetic and reconstructive surgery performed on other parts of the body of a subject (e.g. chest, calves, legs, arms) also involves designing implants to change or modify the appearance of the underlying bone or muscle of the body part of the subject. Again, there are no standard techniques that may be applied to enable either the subject or the surgeon to predict, visualise or even present to the subject the change in shape of the body part that is feasible for the subject and which a specific implant will induce. Although the subject may be directed to look at other cases and/or rely on the experience of the surgeon to describe the impact an implant may have, it is difficult for the subject to understand what to expect as a result of the surgery. Furthermore, given that most implants are generic, last-minute adjustments to the implant may be required by the surgeon during surgery. This can also lead to unpredictable results, and more repeat corrective visits.
[0005] There is a desire to improve the accuracy, efficiency and predictability of 3D visualisation of a desired bodily portion of a subject prior to surgery which is suitable for driving the implant design and manufacture process.
Summary
[0006] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to determine the scope of the claimed subject matter; variants and alternative features which facilitate the working of the invention and/or serve to achieve a substantially similar technical effect should be considered as falling into the scope of the invention disclosed herein.
[0007] According to a first aspect, there is described a computer-implemented method of visualising a desired bodily portion of a subject for use in cosmetic or reconstructive surgery, the method comprising: obtaining three-dimensional (3D) imaging data of a bodily portion of the subject; generating a 3D model of the bodily portion of the subject based on the 3D imaging data and data representative of a 3D structure representative of the bodily portion of the subject; adjusting the external appearance of the 3D model of the bodily portion of the subject until a desired outcome is reached whilst taking into account data representative of the structural properties of the 3D structure of the bodily portion of the subject; and outputting data representative of a 3D model of the desired bodily portion of the subject based on the desired outcome.
[0008] In some example embodiments, generating the 3D model of the bodily portion of the subject further comprising: receiving data representative of the 3D structure representative of the bodily portion of the subject; and superimposing the 3D imaging data onto the 3D structure representative of the bodily portion of the subject.
[0009] In some example embodiments, adjusting the external appearance of the 3D model further comprising: receiving data representative of desired adjustments to one or more regions of the 3D imaging data in relation to the 3D structure representative of the bodily portion of the subject; and controlling adjustments to the 3D imaging data relative to the 3D structure representative of the bodily portion of the subject to only those adjustments that are feasible taking into account the structural properties of at least the 3D structure of the bodily portion of the subject.
[0010] In some example embodiments, adjusting the external appearance of the 3D model taking into account the structural properties of at least the 3D structure of the bodily portion of the subject further comprising modelling the structural properties of the 3D structure until the outer surface of the 3D structure substantially coincides with the desired adjustment to the 3D imaging data.
[0011] In some example embodiments, modelling the structural properties of the 3D structure until the outer surface of the 3D structure substantially coincides with the desired adjustment further comprising one or more from the group of: adjusting the 3D structure using finite element analysis and structural properties of the 3D structure; adjusting the 3D structure using mass-spring analysis and structural properties of the 3D structure; and adjusting the 3D structure based on modelling the elasticity and/or compression of the structural properties of the 3D structure.
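As a loose illustration of the mass-spring option above (the patent does not specify an implementation; all function names and parameter values below are hypothetical), a spring network can be relaxed toward equilibrium by repeatedly applying Hooke's-law forces with damping:

```python
import numpy as np

def relax_mass_spring(positions, edges, rest_lengths, fixed, k=1.0, damping=0.9, iters=500):
    """Relax a mass-spring network toward equilibrium with damped explicit integration."""
    pos = np.asarray(positions, dtype=float).copy()
    vel = np.zeros_like(pos)
    for _ in range(iters):
        force = np.zeros_like(pos)
        for (i, j), rest in zip(edges, rest_lengths):
            d = pos[j] - pos[i]
            length = np.linalg.norm(d)
            if length < 1e-12:
                continue
            f = k * (length - rest) * d / length  # Hooke's law along the spring axis
            force[i] += f
            force[j] -= f
        vel = damping * (vel + force)  # unit mass, unit time step
        vel[fixed] = 0.0               # anchored nodes (e.g. at bone) do not move
        pos += vel
    return pos
```

In a tissue model, the node positions would be mesh vertices and the spring stiffnesses would encode the elasticity and/or compression properties referred to in the paragraph above.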
[0012] In some example embodiments, the 3D structure of the bodily portion of the subject comprises an outer surface structure and one or more underlying structures representative of the bodily portion of the subject.
[0013] In some example embodiments, the outer surface structure comprises skin of the bodily portion of the subject, the one or more underlying structures comprising one or more from the group of: soft-tissue of the bodily portion of the subject; and/or bone associated with the bodily portion of the subject.
[0014] In some example embodiments, the bodily portion of the subject comprises at least one from the group of: a head portion of the subject comprising at least a face or facial region of the subject; a neck portion of the subject comprising at least a neck region of the subject; a torso portion of the subject comprising at least a shoulder, chest, abdominal and/or pelvis region of the subject; a limb portion of the subject comprising at least an upper extremity or arm region including one or more of an upper arm, forearm or hand region of the subject, a lower extremity region including one or more of a hip, thigh, knee, leg, ankle or foot region of the subject.
[0015] In some example embodiments, obtaining the 3D imaging data further comprising receiving, from an image capturing apparatus, 3D photogrammetry data of the bodily portion of the subject, the 3D photogrammetry data defining an external 3D appearance of the bodily portion of the subject.
[0016] In some example embodiments, generating the 3D model of the bodily portion of the subject further comprising: receiving a 3D structure representative of the bodily portion of the subject; and superimposing the 3D photogrammetry data onto the 3D structure representative of the bodily portion of the subject.
[0017] In some example embodiments, adjusting the external appearance of the 3D model further comprising: receiving data representative of desired adjustments to one or more regions of the 3D photogrammetry data in relation to the 3D structure representative of the bodily portion of the subject; controlling adjustments to the 3D photogrammetry data relative to the 3D structure representative of the bodily portion of the subject to only those adjustments that are feasible taking into account the structural properties of at least the 3D structure of the bodily portion of the subject; displaying updates to the 3D model with those feasible adjustments applied based on adjusting the 3D photogrammetry data; outputting an updated 3D model comprising at least the adjusted 3D photogrammetry data defining the desired external appearance of the bodily portion of the subject and the 3D structure representative of the bodily portion of the subject.
[0018] In some example embodiments, controlling the adjustments of the 3D photogrammetry data in relation to the properties of the 3D structure of the bodily portion of the subject further comprising: identifying one or more implantable regions based on the 3D structure of the bodily portion of the subject and structural properties thereof; receiving data representative of desired adjustments in relation to one or more identified implantable regions; determining the feasibility of each desired adjustment in relation to the desired adjusted one or more implantable regions based on modelling the desired adjustments of the implantable regions in relation to the structural properties of the 3D structure of the bodily portion of the subject; in response to determining a desired adjustment of an implantable region is infeasible, indicating the infeasibility of the desired adjustment and/or limiting the adjustment to a feasible adjustment for the implantable region, otherwise allowing the adjustment until the desired outcome is reached.
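A minimal sketch of the "limit to feasible adjustments" step above, assuming per-region outward/inward limits derived elsewhere from the structural model (the function name and limit values are illustrative, not from the patent):

```python
import numpy as np

def clamp_to_feasible(requested_mm, max_outward_mm, max_inward_mm):
    """Clip requested surface offsets (positive = outward) to a feasible band.

    Returns the clipped offsets and a mask of which requests were infeasible.
    In practice the feasible band would come from modelling the structural
    properties (e.g. soft-tissue stretch) of each implantable region.
    """
    requested = np.asarray(requested_mm, dtype=float)
    feasible = np.clip(requested, -max_inward_mm, max_outward_mm)
    was_limited = feasible != requested
    return feasible, was_limited
```

The mask can drive the "indicating the infeasibility of the desired adjustment" behaviour, while the clipped offsets implement "limiting the adjustment to a feasible adjustment".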
[0019] In some example embodiments, obtaining the 3D imaging data further comprising: receiving, from an image capturing apparatus, 3D photogrammetry data of the bodily portion of the subject requiring the one or more implants, the 3D photogrammetry data defining an external 3D appearance of the bodily portion of the subject; and receiving, from a medical image capturing apparatus, the 3D structure representing the bodily portion of the subject comprising data representative of medical imagery of the bodily portion of the subject for generating a 3D medical model of the portion of the subject, the 3D medical model comprising a first 3D structure and a second 3D structure underlying the first 3D structure in relation to the bodily portion of the subject.
[0020] In some example embodiments, generating the 3D model of the bodily portion of the subject further comprising superimposing the 3D photogrammetry data onto the first 3D structure of the 3D medical model, and outputting data representative of a 3D model of a bodily portion of the subject, the 3D model comprising: 3D photogrammetry data representative of the bodily portion of the subject and defining a desired external appearance of the bodily portion of the subject; and the 3D medical model representative of the bodily portion of the subject, the 3D medical model comprising a first 3D structure and a second 3D structure underlying the first 3D structure in relation to the bodily portion of the subject.
[0021] In some example embodiments, the first 3D structure comprising a first 3D structural mesh of the outer surface of the first 3D structure, the 3D photogrammetry data of the bodily portion comprising a 3D photogrammetry mesh and 3D photogrammetry texture map data, wherein superimposing the 3D photogrammetry data onto the 3D structural mesh further comprising: receiving a reference 3D mesh comprising a plurality of landmarked features associated with the bodily portion of the subject; identifying the plurality of landmarked features on each of the first 3D structural mesh and the 3D photogrammetry mesh; generating a first reference 3D mesh based on superimposing the reference 3D mesh onto the first 3D structural mesh using the reference and identified landmarks of the reference 3D mesh and first 3D structural mesh; generating a second reference 3D mesh based on superimposing the reference 3D mesh onto the 3D photogrammetry mesh using the reference and identified landmarks of the reference 3D mesh and 3D photogrammetry mesh; and adjusting the second reference 3D mesh to match the first reference 3D mesh based on performing one or more mesh transformation operations; superimposing the 3D photogrammetry mesh onto the first 3D structural mesh based on the one or more performed mesh transformation operations; and superimposing the 3D photogrammetry texture map data onto the superimposed 3D photogrammetry mesh to form the 3D model of the bodily portion of the subject.
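Superimposing one landmarked mesh onto another is commonly performed with a rigid least-squares (Kabsch/Procrustes) fit over the corresponding landmarks; the patent does not prescribe a particular transformation, so the following is only an illustrative sketch of one such mesh transformation operation:

```python
import numpy as np

def rigid_align(src_landmarks, dst_landmarks):
    """Best-fit rotation R and translation t mapping src landmarks onto dst (Kabsch)."""
    src = np.asarray(src_landmarks, dtype=float)
    dst = np.asarray(dst_landmarks, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centred landmarks
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def apply_transform(vertices, R, t):
    """Apply the recovered rigid transform to every vertex of a mesh."""
    return np.asarray(vertices, dtype=float) @ R.T + t
```

The transform fitted to the landmarks alone is then applied to the full photogrammetry mesh, carrying the texture map coordinates with it.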
[0022] In some example embodiments, adjusting the 3D model further comprising: controlling adjustments to the superimposed 3D photogrammetry mesh and corresponding 3D photogrammetry texture map data taking into account the structural properties of at least the first 3D structure of the bodily portion of the subject until a desired outcome is reached; and generating the 3D model based on data representative of: the adjusted 3D photogrammetry mesh and 3D photogrammetry texture map data defining the desired external appearance of the bodily portion of the subject; and the 3D medical model comprising data representative of the first and second 3D structures and corresponding first and second 3D structural meshes.
[0023] In some example embodiments, the method further comprising using the output 3D model for generating one or more implants for the subject for use in cosmetic or reconstructive surgery based on the following steps of: receiving the output 3D model of a desired bodily portion of the subject, the 3D model comprising: 3D photogrammetry data representative of the bodily portion of the subject and defining a desired external appearance of the bodily portion of the subject; and a 3D medical model representative of the bodily portion of the subject, the 3D medical model comprising a first 3D structure and a second 3D structure underlying the first 3D structure in relation to the bodily portion of the subject; determining one or more volumes generated between the 3D photogrammetry data of the desired bodily portion of the subject and the second 3D structure taking into account the structural properties of at least the first 3D structure; and outputting data representative of the determined one or more volumes for controlling the manufacture of one or more corresponding implants for the subject.
[0024] In some example embodiments, determining the one or more volumes further comprising: adjusting the first and second 3D structures taking into account the structural properties of at least the first 3D structure until the outer surface of the first 3D structure substantially coincides with the desired 3D photogrammetry data; and calculating one or more volumes generated between the adjusted second 3D structure and the original second 3D structure based on determining one or more volume shapes resulting from differences between the adjusted second 3D structure and the original second 3D structure.
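The volume generated between the original and adjusted closed surface meshes can be obtained from signed tetrahedron volumes (divergence theorem); a minimal sketch, assuming closed, consistently wound triangle meshes (the function name is illustrative, not from the patent):

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Signed volume enclosed by a closed triangle mesh (divergence theorem).

    Sums the signed volumes of tetrahedra formed by each face and the origin;
    faces must be consistently wound (outward-facing normals give +ve volume).
    """
    v = np.asarray(vertices, dtype=float)
    total = 0.0
    for a, b, c in faces:
        total += np.dot(v[a], np.cross(v[b], v[c])) / 6.0
    return total
```

For an outwardly adjusted region, the implant volume would then be approximated as `mesh_volume(adjusted_verts, faces) - mesh_volume(original_verts, faces)`.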
[0025] In some example embodiments, adjusting the first and second 3D structures taking into account at least the properties of the first 3D structure further comprising iteratively increasing or decreasing regions of the outer surface of the second 3D structure whilst modelling the structural properties of the first 3D structure until the outer surface of the first 3D structure substantially coincides with the desired 3D photogrammetry data.
[0026] In some example embodiments, iteratively adjusting the second 3D structure further comprising determining the extent of the first 3D structure based on modelling the structural properties of the first 3D structure in relation to adjustments to the outer surface of the second 3D structure until the outer surface of the first 3D structure substantially coincides with the desired 3D photogrammetry data.
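The iterative loop described above can be caricatured with a one-dimensional height-field analogue: nudge the underlying (second) surface, re-run the soft-tissue model, and stop when the modelled outer surface matches the target. The constant-thickness tissue model and all names below are purely illustrative:

```python
import numpy as np

def solve_underlying_offset(target_skin, bone, tissue_model, iters=100, step=0.5):
    """Iteratively offset the underlying (bone) surface until the modelled
    outer (skin) surface matches the desired target surface."""
    offset = np.zeros_like(bone)
    for _ in range(iters):
        skin = tissue_model(bone + offset)
        offset += step * (target_skin - skin)  # grow/shrink toward the target
    return offset

# Toy soft-tissue model: the skin sits a constant 5 mm above the bone surface.
THICKNESS_MM = 5.0
tissue = lambda bone_heights: bone_heights + THICKNESS_MM

bone = np.array([10.0, 12.0, 11.0])    # hypothetical bone surface heights (mm)
target = np.array([18.0, 16.0, 15.0])  # desired skin surface heights (mm)
implant_offset = solve_underlying_offset(target, bone, tissue)
```

Positive entries of the converged offset correspond to outward augmentation (implant regions); negative entries correspond to inward adjustment (removal regions).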
[0027] In some example embodiments, taking into account the structural properties of the first and second 3D structures further comprising performing finite element analysis or mass-spring analysis on the first and second 3D structures to adjust the outer surfaces of the first and second 3D structures based on the structural properties of the first 3D structure and until the outer surface of the first 3D structure substantially coincides with the desired 3D photogrammetry data.
[0028] In some example embodiments, the first 3D structure comprises a first 3D structural mesh representing the outer surface of the first 3D structure, the second 3D structure comprises a second 3D structural mesh representing the outer surface of the second 3D structure, and the 3D photogrammetry data of the desired bodily portion comprises a desired 3D photogrammetry mesh, and determining the one or more volumes further comprising: adjusting the first and second 3D structural meshes taking into account at least the properties of the first 3D structure until the first 3D structural mesh substantially coincides with the desired 3D photogrammetry mesh; and calculating any volumes generated between the adjusted second 3D structural mesh and the original second 3D structural mesh.
[0029] In some example embodiments, adjusting the first and second 3D structural meshes taking into account at least the properties of the first 3D structure further comprising iteratively increasing outwardly or decreasing inwardly the second 3D structural mesh whilst modelling the structural properties of the first 3D structure until the first 3D structural mesh substantially coincides with the desired 3D photogrammetry mesh.
[0030] In some example embodiments, taking into account the structural properties of the first and second 3D structures further comprising performing finite element analysis or mass-spring analysis on the first and second 3D structures to adjust the boundaries of the first 3D structural mesh and second 3D structural mesh based on modelling the structural properties of the first 3D structure and until the first 3D structural mesh substantially coincides with the desired 3D photogrammetry mesh.
[0031] In some example embodiments, taking into account the structural properties of the first and second 3D structures further comprising one or more from the group of: performing finite element analysis on the first and second 3D structures to adjust the outer surfaces of the first and second 3D structures based on the structural properties of the first 3D structure and until the outer surface of the first 3D structure substantially coincides with the desired 3D photogrammetry data; performing mass-spring analysis on the first and second 3D structures to adjust the outer surfaces of the first and second 3D structures based on the structural properties of the first 3D structure and until the outer surface of the first 3D structure substantially coincides with the desired 3D photogrammetry data; and performing volumetric analysis on the first and second 3D structures to deform the outer surfaces of the first and second 3D structures based on adjustments required in relation to the desired 3D photogrammetry data.
[0032] In some example embodiments, calculating the one or more volumes generated between the adjusted second 3D structural mesh and the original second 3D structural mesh further comprising determining one or more volume shapes resulting from differences between the adjusted second 3D structural mesh and the original second 3D structural mesh.
[0033] In some example embodiments, the method further comprising: identifying one or more implant regions of the adjusted second 3D structural mesh forming a boundary with the original 3D structural mesh, wherein the adjusted second 3D structural mesh within the boundary extends outwardly from the original 3D structural mesh; calculating one or more implant volume shapes for each of the identified one or more implant regions based on the volume generated in each implant region between the original second 3D structural mesh and the adjusted second 3D structural mesh therein; outputting data representative of one or more implant volume shapes and identified implant regions for controlling manufacture of the corresponding one or more implants in relation to the bodily portion of the subject.
[0034] In some example embodiments, the method further comprising controlling the manufacture of one or more medical implants based on the output data representative of the one or more implants.
[0035] In some example embodiments, manufacturing one or more medical implants further comprises 3D printing one or more medical implants using the data representative of the one or more implant volumes.
[0036] In some example embodiments, manufacturing one or more medical implants further comprises controlling a manufacturing process at an implant manufacturer for producing one or more medical implants using the data representative of the one or more implant volumes.
[0037] In some example embodiments, the method further comprising: identifying one or more removal regions of the adjusted second 3D structural mesh forming a boundary with the original 3D structural mesh, wherein the adjusted second 3D structural mesh within the boundary extends to within the interior of the original 3D structural mesh; calculating one or more removal volume shapes for each of the identified one or more removal regions based on the volume generated in each removal region between the original second 3D structural mesh and the adjusted second 3D structural mesh; outputting data representative of one or more removal volume shapes and identified removal regions for controlling removal of said removal regions from the bodily part of the subject.
[0038] In some example embodiments, the first 3D structure comprises data representative of one or more soft tissue regions associated with the bodily portion of the subject.
[0039] In some example embodiments, the second 3D structure comprises data representative of one or more bone or skeletal regions associated with the bodily portion of the subject.
[0040] In some example embodiments, the second 3D structure comprises data representative of one or more second soft tissue and/or fascia regions associated with the bodily portion of the subject.
[0041] In some example embodiments, the first 3D structure comprises data representative of one or more soft tissue regions associated with the bodily portion of the subject; and the second 3D structure comprises data representative of either: a) one or more bone or skeletal regions associated with the bodily portion of the subject; b) one or more second soft tissue and/or fascia regions associated with the bodily portion of the subject; or c) a combination of both a) and b).
[0042] In some example embodiments, the 3D medical model of the bodily portion of the subject is based on data representative of a 3D medical image captured from a medical imaging apparatus capable of capturing the first and second 3D structures of the bodily portion of the subject.
[0043] In some example embodiments, the medical imaging apparatus comprises one or more from the group of: ultrasound or ultrasonic scanning apparatus; Computerized Tomography (CT) scanning apparatus; Photon Counting CT scanning apparatus; Magnetic Resonance Imaging (MRI) apparatus; any other medical imaging apparatus capable of capturing the first and second 3D structures of the bodily portion of the subject.
[0044] In some example embodiments, the 3D imaging data comprises 3D photogrammetry data of the bodily portion of the subject based on imaging data captured from one or more image capturing apparatus for generating 3D photogrammetry data representative of the external appearance of the bodily portion of the subject.
[0045] According to a second aspect, there is provided a non-transitory tangible computer-readable medium comprising data or instruction code, which when executed on one or more processor(s), causes at least one of the one or more processor(s) to perform the steps of the computer-implemented method of the first aspect or any preceding computer-implemented method definition.
[0046] According to a third aspect, there is described a system for visualising a desired bodily portion of a subject for use in cosmetic or reconstructive surgery, the system comprising one or more processors, a memory and a communication interface, the memory comprising instructions that, when executed by the one or more processors, cause the system to perform operations comprising: obtaining three-dimensional (3D) imaging data of a bodily portion of the subject; generating a 3D model of the bodily portion of the subject based on the 3D imaging data and data representative of a 3D structure representative of the bodily portion of the subject; adjusting the external appearance of the 3D model of the bodily portion of the subject until a desired outcome is reached whilst taking into account data representative of the structural properties of the 3D structure of the bodily portion of the subject; and outputting data representative of a 3D model of the desired bodily portion of the subject based on the desired outcome.
[0047] The program instructions of the third aspect may also perform operations, steps, functionality according to any of the first and/or second aspect or any preceding method definition of the first and/or second aspects.
[0048] According to a fourth aspect, there is provided an apparatus comprising a processor, a memory unit and a communication interface, wherein the processor is connected to the memory unit and the communication interface, and wherein the processor and memory are configured to implement the computer-implemented method according to any preceding method definition of the first and/or second aspects.
[0049] In a fifth aspect, this specification describes a computer-readable medium (such as a non-transitory computer-readable medium) comprising program instructions stored thereon for performing (at least) any preceding method or apparatus definition as described with reference to any of the first, second, third, and/or fourth aspects.
Brief Description of the Drawings
[0050] Embodiments of the invention will be described, by way of example, with reference to the following drawings, in which:
[0051] Figure 1a is a schematic diagram illustrating an example 3D visualisation system according to some embodiments of the invention;
[0052] Figure 1b is a flow diagram illustrating an example 3D visualisation process according to some embodiments of the invention;
[0053] Figure 1c is a flow diagram illustrating an example 3D model adjustment and analysis process for use in the 3D visualisation system of figure 1a according to some embodiments of the invention;
[0054] Figure 1d is a schematic diagram illustrating an example 3D model adjustment component for use in the 3D visualisation system of figure 1a according to some embodiments of the invention;
[0055] Figure 1e is a schematic diagram illustrating another example 3D visualisation system according to some embodiments of the invention;
[0056] Figure 1f is a flow diagram illustrating an example 3D model generation process for use with the 3D visualisation system of figure 1e according to some embodiments of the invention;
[0057] Figure 1g is a flow diagram illustrating an example 3D mesh superposition process for use in the 3D model generation process of figure 1f according to some embodiments of the invention;
[0058] Figure 1h is a flow diagram illustrating another example 3D model adjustment process for use in the 3D visualisation system and 3D model generation process of figures 1e and 1f according to some embodiments of the invention;
[0059] Figure 2 is a schematic diagram illustrating an example 3D visualisation and implant generation system according to some embodiments of the invention;
[0060] Figure 3a is a schematic diagram illustrating another example 3D model generation system for use with the 3D visualisation systems of figures 1a, 1e and 2 according to some embodiments of the invention;
[0061] Figure 3b is a schematic diagram illustrating another example 3D model adjustment apparatus for use with the 3D visualisation systems of figures 1a, 1e and 2 according to some embodiments of the invention;
[0062] Figure 4a is a schematic diagram illustrating an example implant design system using output from the 3D visualisation systems of figures 1a-3b according to some embodiments of the invention;
[0063] Figure 4b is a flow diagram illustrating an example implant design process for use with the implant design system of figure 4a according to some embodiments of the invention;
[0064] Figure 4c is a flow diagram illustrating an example implant design analysis process for use with the implant design system of figure 4a according to some embodiments of the invention;
[0065] Figure 4d is a schematic diagram illustrating an example implant analysis apparatus according to some embodiments of the invention;
[0066] Figures 5a to 5d are schematic diagrams illustrating an example 3D visualisation and implant analysis process according to some embodiments of the invention;
[0067] Figure 6 is a schematic diagram of a computing system according to some embodiments of the invention; and
[0068] Figure 7 is a schematic diagram of a computer-readable medium according to some embodiments of the invention.
[0069] Common reference numerals are used throughout the figures to indicate similar features.
Detailed Description
[0070] Example embodiments relate to systems, methods and/or computer programs for visualising a 3D model of one or more bodily portions of a subject based on combining a 3D capture of imaging data (e.g. 3D photographs/models) of the one or more bodily portions of the subject with a 3D structure of the bodily portion of the subject (e.g. a standard 3D structure of the bodily portion and/or CT/MRI scans of a bodily portion), the 3D structure describing a relationship between the aesthetic shape, the overlying structure (e.g. soft tissue shape) and underlying structure (e.g. bone form or other underlying soft tissues) of the bodily portion of the subject. During visualisation, desired changes to the aesthetic shape of one or more regions of the 3D model may include modelling the overlying and underlying structures of the 3D structure to determine which desired changes are feasible and therefore realistically visualise and model the desired bodily portion of the subject.
[0071] For example, the relationship between the aesthetic shape and structure of the bodily portion enables corresponding changes to the overlying structure and/or underlying structures of the 3D model of the bodily portion of the subject (e.g., face) to be modelled and verified by taking into account the properties of the overlying and/or underlying structures (e.g. mass-spring modelling, finite element modelling, machine learning (ML) models and/or other modelling/analytical techniques). This modelling captures the changes these structures may undergo when, constrained by their properties, they are adapted to the desired changes in aesthetic shape. This may involve adapting the underlying structural surface to increase/decrease in one or more regions that expand, stretch and/or compress the overlying structure to naturally form into the changed aesthetic shape. The modelling may further include modelling properties of any proposed implant materials in relation to adapting the underlying structure. The resulting changes to the 3D model form a desired 3D model that can be realistically visualised and verified by the subject and/or surgeon prior to reconstructive surgery. The desired 3D model may be displayed for verification of the proposed changes and/or further used to drive an implant design process.
[0072] The 3D capture of imaging data may be based on capturing, without limitation, for example 3D photogrammetry data (e.g. 3D photographs/models) of the one or more bodily portions of the subject that may form a 3D photogrammetric model (e.g. 3D photogrammetry mesh and texture map) of the one or more bodily portions of the subject describing an external appearance of the bodily portion of the subject. The 3D structure may be obtained based on capturing, using medical imaging apparatus, 3D medical imaging data associated with the bodily portion of the subject (e.g., CT/MRI scans of a patient's face) and used to model the overlying structure (e.g. soft tissue shape) and underlying structure (e.g. bone form or other underlying soft tissues) of the bodily portion of the subject. The 3D photogrammetry model and the 3D medical model may be combined to form the 3D model of the bodily portion of the subject and used to describe the relationship between the aesthetic shape (e.g. 3D photogrammetry data), the overlying structure (e.g. soft tissue shape) and underlying structure (e.g. bone form or other underlying soft tissues) of the bodily portion of the subject. Changes to the aesthetic shape of one or more regions of the 3D photogrammetric data, together with modelling of the underlying structures, may be used to obtain the desired 3D model for visualising feasible changes to the bodily portion of the subject. The desired 3D model may be displayed for verification and/or further used in an implant design process/workflow for generating implants for realising the changes to the bodily portion of the subject.
[0073] The bodily portion of the subject may include, without limitation, for example at least one or more bodily portions of a subject from the group of: a head portion of the subject including, for example, at least a face or facial region of the subject; a neck portion of the subject; a torso portion of the subject including, for example, at least a shoulder, chest, abdominal and/or pelvis region of the subject, and/or any other bodily portion of the torso of the subject; a limb portion of the subject; at least an upper extremity or arm region such as, for example, one or more upper arms of the subject, forearms of the subject, or hand regions of the subject; a lower extremity region including, for example, one or more of a hip of the subject, one or more thighs of the subject, one or more knees of the subject, a lower leg or calf region of the subject, one or more ankle or foot regions of the subject; and/or any other bodily portion of the subject suitable for cosmetic and/or reconstructive surgery requiring implants being applied to and/or removal of one or more underlying structures of the subject and the like that may change the aesthetic outcome of the bodily portion and the like and/or as the application demands.
[0074] Furthermore, the resulting 3D model may further drive an implant design process/workflow in which data representative of volumes and/or shapes required to achieve the aesthetic changes may be generated based on the differences between the adapted underlying structure and the original underlying structure in relation to the one or more regions. Data representative of one or more of these volumes and/or shapes may be used to define the volume/shape of one or more implants that are designed specifically for the subject. This may be output in a suitable implant volume/shape format for manufacture/creation of the implant for the subject. Once manufactured/created, the one or more implants may be applied during surgery to the corresponding region and underlying structure of the subject to achieve the aesthetic changes.
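By way of illustration, the required implant volume for a region may be computed as the difference between the signed volumes of the adapted and original underlying-structure surfaces. The following is a minimal sketch assuming closed triangle meshes with shared connectivity and consistent outward winding; the function and variable names are illustrative, not part of the described system:

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Signed volume of a closed triangle mesh (consistent outward winding),
    computed as a sum of signed tetrahedra formed with the origin."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    return np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum() / 6.0

def implant_volume(original_vertices, adapted_vertices, faces):
    """Implant volume for one region: difference between the adapted and the
    original underlying-structure surface (assumes both meshes share the
    same face connectivity)."""
    return mesh_volume(adapted_vertices, faces) - mesh_volume(original_vertices, faces)
```

In practice the two surfaces would come from the modelling step; here the subtraction simply quantifies how much material an implant must add to realise the adapted shape.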
[0075] For example, data representing the implant volumes/shapes may be output in a suitable format for creation and production of implants according to the implant volumes/shapes. The creation and production of the implants may include, without limitation, for example sending the data representative of the implant volumes to a manufacturer for producing implants according to the implant volumes and materials required, or transmitting the data representative of the implant volumes to a 3D printer for printing the 3D shapes of the implant volumes in accordance with the required implants, implant materials and medical standards.
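One common interchange format accepted by 3D printers is STL. A minimal sketch of exporting an implant mesh as ASCII STL is shown below; the function name and mesh layout are illustrative assumptions, and a production exporter would typically use binary STL with validated units:

```python
import numpy as np

def write_ascii_stl(path, vertices, faces, name='implant'):
    """Write a triangle mesh as ASCII STL for downstream 3D printing.
    Facet normals are derived from the vertex winding order."""
    with open(path, 'w') as f:
        f.write(f'solid {name}\n')
        for tri in faces:
            a, b, c = (vertices[i] for i in tri)
            n = np.cross(b - a, c - a)
            norm = np.linalg.norm(n)
            n = n / norm if norm > 0 else n
            f.write(f'  facet normal {n[0]:e} {n[1]:e} {n[2]:e}\n')
            f.write('    outer loop\n')
            for v in (a, b, c):
                f.write(f'      vertex {v[0]:e} {v[1]:e} {v[2]:e}\n')
            f.write('    endloop\n')
            f.write('  endfacet\n')
        f.write(f'endsolid {name}\n')
```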
[0076] Alternatively and/or additionally, data representative of one or more of the volumes and/or shapes may define how much of the underlying structure in the corresponding region may be removed from the subject (e.g. assisting in, without limitation, for example osteotomies, orthognathic surgery, mandibuloplasty or other similar types of cosmetic/reconstructive/surgical operations). This may be output in a suitable volume/shape reduction format for use in providing an indication of which regions of the underlying structure (e.g. bone structure) of the subject may be removed, and by how much, to achieve the aesthetic or desired changes. For example, the data representative of the volume to be removed from the second underlying structure may be used to control a robotic reconstructive surgical apparatus including image sensors and a surgical implement, which may be controlled to sense the location of the volume of the second structure of the subject and remove, using the surgical implement, the required volume from the second underlying structure of the bodily portion of the subject. Alternatively, the data representative of the volume may be used by an image analysis apparatus configured to analyse an image or video stream of the underlying structure of the subject (e.g. during reconstructive surgery) and provide an indication of whether or not the determined volume and shape thereof has been sufficiently removed to realise the desired bodily portion of the subject.
[0077] The advantages of the systems, methods and/or computer programs described herein include providing a subject and their surgeon with a 3D model/simulation of realistic desired outcomes for the bodily portion of the subject. Further advantages include processing the resulting 3D model to determine the volume and shape of any implants required; medically manufacturing/creating said implants from an output implant format; and/or indicating an amount of underlying structure for removal from the subject needed to induce anatomical normality or the desired anatomical changes, rather than relying on real-time surgical guess-work, estimation, experimentation, and/or repeated corrective follow-up visits. The systems, methods, and/or computer programs herein provide an efficient and cost-effective mechanism for improving desired outcomes for subjects in relation to reconstructive and/or cosmetic surgery and the like.
[0078] Figure 1a is a schematic diagram illustrating an example 3D model visualisation system 100. The 3D model visualisation system 100 includes a 3D capture component 101 for capturing and obtaining 3D imaging data of a bodily portion of the subject, a 3D model processing component 102, a 3D model adjustment component 103, and a 3D model output component 104, which may be coupled to a display for visualising the desired 3D model of the bodily portion of the subject and/or for outputting data representative of the desired 3D model to an implant generation system for automatically generating one or more implants for use in realising the desired 3D model.
[0079] The 3D capture component 101 may be configured to capture 3D imaging data of the bodily portion of the subject that defines the external appearance of the bodily portion of the subject. For example, the 3D capture component 101 may use one or more image capturing devices that capture one or more 2D photographs of the external appearance or texture of the bodily portion of the subject, which may be processed into a 3D photogrammetry model of the bodily portion of the subject. In any event, the 3D capture component 101 may capture and generate 3D photogrammetry data that represents the 3D external texture/appearance of the bodily portion of the subject. This may be referred to herein as a 3D photogrammetry model. The 3D photogrammetry data may include data representative of a 3D photogrammetry model of the bodily portion of the subject, where the 3D photogrammetry data is transformed and combined with a 3D structure of the bodily portion of the subject in the 3D model processing component 102.
[0080] The 3D model processing component 102 may receive the 3D imaging data captured by the 3D capture component 101 and also receive data representative of a 3D structure representative of the bodily portion of the subject. The 3D model processing component 102 may use the received 3D imaging data and 3D structure data to form a 3D model of the bodily portion of the subject, the 3D structure of the bodily portion of the subject including data representative of the structural properties of the 3D structure of the bodily portion of the subject. The 3D model processing component 102 may be configured to superimpose the 3D imaging data onto the 3D structure representative of the bodily portion of the subject. For example, the 3D imaging data may be superimposed (or wrapped) over the 3D structure to form a 3D model of the bodily portion of the subject. The 3D model of the bodily portion of the subject incorporates both the 3D structure and structural properties and the superimposed 3D imaging data of the bodily portion of the subject. The 3D model of the bodily portion of the subject may be displayed to a user, where the 3D model adjustment component 103 may be used for adjustment and/or manipulation of the 3D model by the user until a desired 3D model of the bodily portion of the subject is achieved.
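The superposition step above can be sketched as associating each photogrammetry vertex with its nearest point on the structural surface and recording the offset between them (a crude proxy for soft-tissue thickness). This is an illustrative simplification only: a real pipeline would first rigidly register the two meshes (e.g. with ICP) and use point-to-surface rather than point-to-point distances:

```python
import numpy as np

def superimpose(photo_vertices, structure_vertices):
    """Associate each photogrammetry vertex with the nearest structural
    vertex and record the offset between them (brute-force nearest
    neighbour; suitable only for small example meshes)."""
    diff = photo_vertices[:, None, :] - structure_vertices[None, :, :]
    d2 = (diff ** 2).sum(axis=-1)       # pairwise squared distances
    nearest = d2.argmin(axis=1)         # index of closest structural vertex
    offsets = photo_vertices - structure_vertices[nearest]
    return nearest, offsets
```

The recorded offsets let later steps reason about how far the external surface sits above the underlying structure at each point.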
[0081] The 3D structure of the bodily portion of the subject may include one or more structures describing the structural properties of the bodily portion of the subject. The one or more structures may include a first 3D structure representing a first structure (e.g. skin/soft facial tissue) and a second 3D structure representing an underlying structure of the bodily portion of the subject (e.g. bone/skull underlying the soft facial tissue), which may be modelled.
[0082] The 3D model adjustment component 103 may be configured to, once the 3D model of the bodily portion of the subject has been generated, receive adjustments from an operator or user, who may input adjustments to the generated 3D model to adjust the external appearance of the 3D model of the bodily portion of the subject until a desired outcome is reached, e.g. a desired appearance for the bodily portion of the subject is achieved. Each of the one or more adjustments that are received may take into account data representative of the structural properties of the 3D structure of the bodily portion of the subject to more realistically model the possible/feasible adjustments that may be made to the bodily portion of the subject given the constraints of the underlying structures of the bodily portion of the subject. Thus, the 3D model adjustment component 103 may be configured to control adjustments to the 3D imaging data relative to the 3D structure representative of the bodily portion of the subject to only those adjustments that are feasible taking into account the structural properties of at least the 3D structure of the bodily portion of the subject.
[0083] For example, this may include receiving the adjustments and adjusting the 3D imaging data that may be superimposed over the 3D structure by pulling, stretching, or pushing various regions or data points of the bodily portion of the subject represented by the 3D imaging data, which changes the appearance of the bodily portion of the subject (e.g. for a face, this may include changing the shape of the cheek bone area or jaw line/chin areas and the like). The underlying structures of the 3D structure of the 3D model may be modelled using the structural properties of the 3D structure of the bodily portion of the subject in relation to the desired adjustments that have been received. Those adjustments that are determined by the modelling process to be feasible adjustments, given the structural constraints of the bodily portion of the subject, may be applied to the 3D model to transform the original 3D model into a desired 3D model of the bodily portion of the subject. Those adjustments that are determined infeasible, based on violating the modelled structural constraints of the bodily portion of the subject (e.g. going beyond the elasticity of the soft-tissue/skin, or where no feasible implant may be supported and the like), are not applied, and/or a notification providing reasoning as to why the adjustment is infeasible is provided to the user.
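A pull/stretch manipulation of the kind described can be sketched as a brush that displaces mesh vertices near a selected point with a smooth falloff, so the change blends into the surrounding surface. The Gaussian falloff below is one common illustrative choice, not the method mandated by the system:

```python
import numpy as np

def pull_region(vertices, centre, direction, strength, radius):
    """Displace vertices near `centre` along `direction`, weighted by a
    Gaussian falloff of the distance to the centre (illustrative brush)."""
    centre = np.asarray(centre, dtype=float)
    direction = np.asarray(direction, dtype=float)
    d = np.linalg.norm(vertices - centre, axis=1)
    weight = np.exp(-(d / radius) ** 2)
    return vertices + strength * weight[:, None] * direction
```

The vertex at the brush centre moves by the full `strength`, while vertices far outside `radius` are effectively unchanged.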
[0084] Thus, the 3D model and imaging data may be transformed into a 3D model of a desired bodily portion of the subject using those received adjustments that are determined to be feasible being applied to the 3D model and imaging data. The desired 3D model includes the transformed 3D imaging data with the desired feasible adjustments and also the original 3D structure. This may be displayed by the 3D model output component 104 to the user for use in either further adjusting the 3D model to a desired 3D model in an iterative manner, or for outputting the desired 3D model for further processing or approval/feasibility analysis and the like. For example, further feedback 105 may be input and received by the 3D model adjustment component 103 from the user to further adjust the 3D model in an iterative manner until a desired outcome is reached. Once the desired outcome is reached, i.e. the desired 3D model that is displayed is approved by the user, the 3D model output component 104 may be configured to output data representative of the 3D model of the desired bodily portion of the subject based on the desired outcome.
[0085] The 3D model of the desired bodily portion of the subject may include at least data representative of: 3D imaging data representative of the bodily portion of the subject and defining a desired external appearance of the bodily portion of the subject, and a 3D structure of the bodily portion of the subject. The 3D model may be output to a display for viewing by the user, subject and/or surgeon. The 3D model may further be output to an implant analysis process component for further processing/feasibility analysis to automatically generate one or more implants or shapes thereof suitable for applying to the bodily portion of the subject to realise the desired appearance of the desired bodily portion of the subject.
[0086] Figure 1b is a flow diagram illustrating an example 3D visualisation process 110 for visualising a desired bodily portion of a subject for use in cosmetic or reconstructive surgery in relation to the subject. The 3D visualisation process 110 may include one or more of the following steps:
[0087] In step 111, obtaining three-dimensional (3D) imaging data of a bodily portion of the subject.
[0088] In step 112, generating a 3D model of the bodily portion of the subject based on the 3D imaging data and data representative of a 3D structure representative of the bodily portion of the subject.
[0089] In step 113, adjusting the external appearance of the 3D model of the bodily portion of the subject until a desired outcome is reached whilst taking into account data representative of the structural properties of the 3D structure of the bodily portion of the subject.
[0090] In step 114, outputting data representative of a 3D model of the desired bodily portion of the subject based on the desired outcome.
[0091] Figure 1c is a flow diagram illustrating an example 3D model adjustment and analysis process 115 for use in the 3D visualisation system 100 of figure 1a. The 3D model adjustment and analysis process 115 may further modify steps 113 and 114 of the 3D visualisation process 110. The 3D model adjustment and analysis process 115 may include the following steps:
[0092] In step 116, receiving a 3D model with 3D imaging data and a 3D structure representing a bodily portion of the subject, the 3D imaging data including a 3D photogrammetry mesh and/or texture map, and the 3D structure including data representative of a 3D structural mesh representing the outer surface of the 3D structure of the bodily portion of the subject. The 3D imaging data may be superimposed onto the 3D structural mesh. For example, the 3D photogrammetry mesh may be superimposed onto the 3D structural mesh of the 3D structure of the bodily portion of the subject. For example, the 3D model may be received from step 112 of process 110.
[0093] In step 117, receiving data representative of adjustments to one or more regions of the 3D photogrammetry mesh of the bodily portion of the subject. For example, the external appearance (e.g. 3D photogrammetry mesh and texture map) of the 3D model may be displayed to a user or operator on a device. The user or operator may manipulate one or more regions of the displayed 3D model to change the shape and external appearance of the 3D model as desired via one or more predefined manipulation operations (e.g. pull or push etc.), which are received as adjustments to be made to the corresponding regions of the 3D photogrammetry mesh. These adjustments may be used to adjust or transform the 3D photogrammetry mesh of the received 3D model to form a 3D photogrammetry mesh of the desired bodily portion of the subject.
[0094] In step 118, confirming whether the received adjustments to said one or more regions are feasible. For example, prior to displaying the adjustments, the received adjustments to the 3D photogrammetry mesh may be assessed as to whether they can feasibly be applied to one or more of the regions of the 3D model based on the corresponding regions of the 3D structure and the structural properties of the 3D structure of the bodily portion of the subject such as, without limitation, for example the elasticity of the skin or outer surface of the 3D structure, the compression of the underlying structures of the 3D structure such as soft tissue/muscle, and/or whether there are sufficient portions (e.g. bone) of the underlying structures that may support an implant to realise the desired bodily portion of the subject. This may be a rule-based assessment based on standard skin elasticity properties, closeness of adjustments to landmarked features (e.g. eye sockets, nose, and/or mouth etc.), the type of adjustment and the like. Alternatively, the assessment may include modelling and adjusting the underlying structures of the 3D structure to determine whether the underlying structures may be deformed sufficiently to meet the requested adjusted regions of the 3D photogrammetry mesh of the 3D model. Such modelling may include, without limitation, for example finite element analysis (FEA), mass-spring analysis, volumetric analysis, any other suitable analysis technique and/or ML algorithms trained to assess whether changes to the 3D structure may be made to meet the desired adjustments to the 3D photogrammetry mesh.
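The rule-based variant of step 118 might be sketched as below, rejecting adjustments that exceed an assumed soft-tissue elastic limit or fall too close to a protected landmark. The thresholds and function signature are purely illustrative placeholders, not clinical values:

```python
import numpy as np

def adjustment_feasible(displacement_mm, tissue_thickness_mm, point_mm,
                        landmarks_mm, max_strain=0.3,
                        min_landmark_dist_mm=10.0):
    """Rule-based feasibility check for a single adjustment: limits the
    strain on the overlying soft tissue and keeps adjustments away from
    landmarked features (e.g. eye sockets).  Returns (feasible, reason)."""
    strain = np.linalg.norm(displacement_mm) / tissue_thickness_mm
    if strain > max_strain:
        return False, 'exceeds soft-tissue elastic limit'
    dists = np.linalg.norm(np.asarray(landmarks_mm, dtype=float)
                           - np.asarray(point_mm, dtype=float), axis=1)
    if dists.min() < min_landmark_dist_mm:
        return False, 'too close to a protected landmark'
    return True, 'feasible'
```

Returning a reason string mirrors the notification described above, so the user can be told why a particular adjustment was rejected.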
[0095] In step 119, displaying an update of the 3D model representing the desired bodily portion of the subject after feasible adjustments have been applied to the 3D photogrammetry mesh. For example, only the feasible adjustments may be applied and displayed, with infeasible adjustments not being made or an indication being displayed as to why such adjustments are not feasible. This may, however, be overridden depending on the experience of the user and/or operator when making said adjustments, in which case all adjustments may be applied to the 3D photogrammetry mesh and the resulting 3D model of the desired bodily portion of the subject may be displayed.
[0096] In step 120, it is determined whether further modifications or adjustments to the 3D model are received. For example, the user may be asked whether further adjustments are to be made. If this is the case (e.g. 'Y'), then the process 115 proceeds to step 117 for receiving the further adjustments and controlling whether they are feasible or not and the like, which may be added to the set of feasible adjustments; otherwise (e.g. 'N'), where the user may indicate finalisation of the adjustments made to the 3D photogrammetry mesh for output and/or storage, the process 115 proceeds to step 121 for displaying, storing and/or outputting a finalised 3D model of the desired bodily portion of the subject based on the adjusted 3D photogrammetry mesh resulting from the adjustments iteratively performed in steps 117-120.
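The iterative cycle of steps 117 to 120 can be summarised as the following control-flow skeleton, where the callbacks stand in for the user interface, feasibility model and renderer (all names are illustrative, not the patent's API):

```python
def adjustment_loop(model, get_adjustment, is_feasible, apply_adjustment, display):
    """Iterate: receive an adjustment (step 117), keep it only if feasible
    (step 118), display the updated model (step 119), and repeat until the
    user finalises (step 120).  Returns the final model and the list of
    adjustments that were actually applied."""
    applied = []
    while True:
        adj = get_adjustment()      # None signals finalisation by the user
        if adj is None:
            return model, applied
        if is_feasible(model, adj):
            model = apply_adjustment(model, adj)
            applied.append(adj)
        display(model)              # infeasible adjustments leave the model unchanged
```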
[0097] In step 121, outputting data representative of a finalised 3D model of a desired bodily portion of the subject, where the finalised 3D model includes at least data representative of: the 3D photogrammetry data of the desired bodily portion of the subject as displayed, and the 3D structure of the original bodily portion of the subject. The 3D photogrammetry data of the desired bodily portion of the subject further includes the final adjusted 3D photogrammetry mesh corresponding to the desired bodily portion of the subject as displayed, and the corresponding texture mapping of the desired bodily portion of the subject. The finalised 3D model of the desired bodily portion of the subject may be displayed and/or stored for further adjustments and/or modifications, or for later processing by one or more workflow processes such as, without limitation, an implant generation process or implant analysis process for automatically generating one or more implants and the like to realise the desired 3D model of the bodily portion of the subject when applied to the bodily portion of the subject during surgery and the like.
[0098] Figure 1d is a schematic diagram illustrating an example 3D model adjustment component 125 for use in the 3D visualisation system 100 of figure 1a and/or for performing at least steps 117 to 120 of process 115 of figure 1c. The 3D model adjustment component 125 may receive the 3D model output from the 3D model processing component 102 of figure 1a, and enable a user to adjust the external appearance of the received 3D model to a desired 3D model, determine whether one or more of the adjustments resulting in the desired 3D model are cosmetically feasible and/or able to support implants/cosmetic applications, and if so, output the desired 3D model for display and/or further processing/analysis (e.g. automatically generating one or more implants based on the desired 3D model that is output) and the like. In this example, the 3D model is displayed to the user as 3D model 126a prior to any adjustments, and is displayed as a realistic representation of the head of the subject, where the structure may be anatomically accurate. The 3D model adjustment component 125 receives adjustment inputs from the user and controls how the face of the 3D model 126a may be adjusted in a manner that is cosmetically feasible. This is achieved by controlling how the 3D imaging data defining the external appearance of the bodily portion of the subject is deformed from user inputs by a suite of tools that allow the user or operator to adjust the 3D model 126a of the face to a desired shape of the face. The suite of tools may include facial manipulation operations (e.g. bodily portion manipulation operations) that may be used to pull, push, and deform the shape of the surface of the face to a desired shape.
[0099] The 3D model adjustment component 125 may be configured to limit the adjustments to the 3D model 126a to only those regions of the face that may support implants and/or removal of one or more underlying structures of the 3D structure of the 3D model 126a of the face. That is, the 3D model adjustment component 125 controls the manipulations to only those realistic modifications that may be cosmetically performed. For example, only certain selectable regions of the face may be allowed to be manipulated, such as the nose, cheek bone area, jaw line, chin or any other area in which the soft tissue is supported by bone. In this example, a cheek bone region 127 may be highlighted to the user as being able to be adjusted (e.g. a region that may include an underlying bodily structure of the 3D structure that is able to support an implant (e.g. cheek bone)), and in this case, the user performs a pull manipulation 128 to deform the shape of the cheek of the 3D model 126a outwardly. As an option, the cheek bone region 127 may be selectable and configured to allow manipulations within a bounded area around the cheek bone based on predetermined measurements from other facial features such as the nose or eye socket. The selectable regions of the face of the 3D model 126a may be based on one or more rules in relation to where an implant may be placed and/or where parts of the second 3D structure may be removed and the like. For example, the boundary may not extend to within a predetermined threshold (e.g. 1 cm) of the eye socket, and similarly for the nose and the like. Similar limitations may be applied to other regions such as the chin, jaw line and the like.
[00100] Thus, a user or operator may select an area to "operate on" for manipulating the facial features of the 3D model 126a. For example, a cheek region 127 may be selected for manipulation 128, where the manipulation is controlled by the 3D model adjustment component 125 to be confined to the selected region 127. In this example, the user, having selected a cheek region 127, may input a pull action 128 that pulls a region of the cheek region 127 outwards. The 3D model adjustment component 125 receives the pull input in relation to the selected region 127 and, in response, is configured to adjust the corresponding region 127 of the 3D image data and 3D structure of the 3D model 126a in a corresponding fashion.
[00101] As an option, as the 3D model adjustment component 125 receives data representative of the desired adjustments to one or more regions of the 3D model 126a, the desired adjustments are controlled based on modelling how the received adjustments may be achieved in relation to the 3D structure of the bodily portion of the subject while taking into account the structural properties of at least the 3D structure of the bodily portion of the subject. The feasible adjustments, i.e. those adjustments that were able to be applied to the 3D model 126a via modelling without violating the constraints of the structural properties of the 3D structure of the bodily portion of the subject, may be kept. Updates to the 3D model 126b may be displayed with those feasible adjustments applied based on adjusting the 3D imaging data accordingly. This prevents a user from generating manipulations such as pulling/pushing that would not be realistically possible given the structural properties such as, for example, the elasticity/compressibility and the like of the structure of the bodily portion of the subject.
[00102] For example, the 3D model adjustment component 125 may use any suitable modelling methodology such as, without limitation, for example finite element analysis (FEA), mass-spring model/analysis, volumetric analysis, ML modelling and the like that is capable of modelling the structure of the bodily portion of the subject and determining whether one or more of the adjustments 128 are feasible and/or allowable given the structural properties of the bodily portion of the subject. For example, in general, the structural properties of the skin and soft tissues are well known (e.g. the elastic/viscous properties of skin and soft tissue), where skin can be deformed more than soft tissue. Alternatively or additionally, the structural properties of the bodily portion of the subject may be estimated from the medical scan data of the subject (e.g. CT/MRI scans), which may also be used to generate the 3D structure of the bodily portion of the subject. If the deformation to the 3D structure based on the received adjustments can be modelled or solved by the modelling methodology, then these adjustments may be determined to be feasible and may be applied to update the desired 3D model 126b of the bodily portion of the subject.
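By way of a minimal, non-authoritative sketch of the mass-spring style of feasibility check described above: the deformation can be rejected when any "spring" (edge between neighbouring tissue points) would be stretched beyond an assumed strain limit. The strain limits and function names below are illustrative assumptions, not clinical values or the actual implementation.

```python
import numpy as np

# Illustrative fractional-elongation limits per tissue type (assumptions only,
# not clinical data): skin is assumed to deform more than deeper soft tissue.
MAX_STRAIN = {"skin": 0.30, "soft_tissue": 0.10}

def adjustment_feasible(rest_points, adjusted_points, tissue="soft_tissue"):
    """Return True if the proposed deformation keeps every edge of a simple
    mass-spring chain within the assumed strain limit for the tissue type."""
    rest = np.asarray(rest_points, dtype=float)
    new = np.asarray(adjusted_points, dtype=float)
    # Edge lengths between consecutive points before and after adjustment.
    rest_len = np.linalg.norm(np.diff(rest, axis=0), axis=1)
    new_len = np.linalg.norm(np.diff(new, axis=0), axis=1)
    strain = np.abs(new_len - rest_len) / rest_len
    return bool(np.all(strain <= MAX_STRAIN[tissue]))
```

For instance, a moderate outward pull on the middle point of a chain may be feasible for skin but not for the stiffer soft-tissue assumption, matching the behaviour the paragraph describes.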
[00103] After adjustment, the adjusted 3D model 126b is displayed to the user with the adjusted 3D imaging data and/or 3D structure via the output component 104.
The user may then input one or more further adjustments as feedback 105, in which case the 3D model adjustment component 125 performs the corresponding manipulations/adjustments/modelling to the 3D imaging data until a desired shape of the face is achieved, with feasible adjustments resulting in a desired 3D model 126b being displayed to the user. The adjustments may be applied iteratively until a desired 3D model 126b of the bodily portion of the subject is displayed and/or achieved.
[00104] Figure 1e is a schematic diagram illustrating another example 3D visualisation system 130. The 3D visualisation system 130 may further modify the 3D visualisation system 100 of figure 1a. For example, the 3D capture component 101 of figure 1a may further include a 3D image capture component 131a and a 3D medical model capture component 131b. The 3D visualisation system 130 includes the 3D image capture component 131a and the 3D medical model capture component 131b, which are coupled to a 3D image/model processing component 132. The 3D image/model processing component 132 may perform 3D image/medical model superposition of the captured 3D imaging and/or medical imaging data of the bodily portion of the subject from the 3D image capture component 131a and the 3D medical model capture component 131b, which is processed for generating a 3D model of the bodily portion of the subject. The 3D model is passed to the 3D model adjustment component 133. The 3D model adjustment component 133 is used to receive adjustments for adjusting and/or transforming the generated 3D model into a desired 3D model of a bodily portion of the subject. A 3D model output component 134 receives the desired 3D model for output to a display for viewing by a user, the subject and/or other personnel (e.g. a surgeon and the like). The 3D model output component 134 may also be coupled to an implant generation process system 100 for generating one or more implants suitable for applying to the subject to realise the appearance of the desired 3D model.
[00105] The 3D image capture component 131a may be configured to generate 3D photogrammetry data defining the external appearance of the bodily portion of the subject. For example, the 3D image capture component 131a may include using one or more image capturing devices that capture one or more 2D photographs of the external appearance or texture of the bodily portion of the subject, which may be processed into a 3D photogrammetry model of the bodily portion of the subject. In another example, a number of cameras spaced apart and positioned around the subject (e.g., 2-10 cameras) may be used to capture multiple 2D pictures of the subject, which are processed simultaneously and stitched together into a 3D picture or model of the subject to form the 3D photogrammetry data. Alternatively, a handheld camera may be moved around the bodily portion of the subject (e.g. the subject's head), which captures multiple 2D pictures or captures a 3D picture that may be stitched together to form the 3D photogrammetry data. Although 2-10 cameras are described, this is by way of example only and the invention is not so limited; it would be appreciated by the person skilled in the art that any suitable number of one or more cameras may be used and/or applied to image the subject from different angles/orientations and generate 3D photogrammetry data of the bodily portion of the subject, as long as the generated 3D photogrammetry data is accurate enough for use in generating a 3D model of the subject as the application demands.
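The geometric core of reconstructing 3D shape from multiple 2D photographs is triangulation: a scene point seen in two calibrated views can be recovered from its two pixel positions. The following is a minimal sketch of linear (DLT) triangulation under idealised pinhole cameras, not the specific photogrammetry pipeline of the system; the function name and camera setup are illustrative.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover one 3D point from its image
    coordinates x1, x2 in two views with 3x4 projection matrices P1, P2."""
    # Each observation contributes two linear constraints A @ [X, 1] = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: right singular vector of smallest
    # singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean
```

Repeating this over many matched image features (and many camera pairs) yields the point cloud from which a 3D photogrammetry mesh can be built.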
[00106] In any event, the 3D image capture component 131a may capture and generate 3D photogrammetry data that represents the 3D external texture/appearance of the bodily portion of the subject. This may be referred to herein as a 3D photogrammetry model. The 3D photogrammetry data includes data representative of a 3D photogrammetry model of the bodily portion of the subject, where the 3D photogrammetry data is transformed and combined with the 3D medical model of the bodily portion of the subject in the 3D image/model component 132. Thus, a 3D model of the bodily portion of the subject incorporating both a 3D medical model and the 3D photogrammetry data/model of the bodily portion of the subject can be generated.
[00107] As well as generating the 3D photogrammetry data of the bodily portion of the subject by the 3D image capture component 131a, the 3D medical model capture component 131b may be configured to generate the 3D medical model of the bodily portion of the subject defining at least two structures of the bodily portion of the subject such as, for example, the soft-tissue structure and underlying bone structure of the bodily portion of the subject. For example, the 3D medical model capture component 131b may be configured to generate the 3D medical model from a medical imaging apparatus (e.g. CT, MRI or ultrasonic scan or other suitable type of 3D medical scanning/imaging technology) that may image the bodily portion of the subject, where the first and second 3D structures are a 3D representation of soft tissues and/or bone associated with the bodily portion of the subject. For example, the bodily portion of the subject may be the head, and a CT scan of the subject's head may be performed to yield CT scanning data representative of the soft facial tissue and skull (e.g., cheek and jaw bones) of the subject. The CT scan data may be processed to generate a 3D medical model of the head of the subject that includes a first 3D structure representing the soft facial tissue and a second 3D structure representing the skull underlying the soft facial tissue.
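A common way to split CT data into the two structures described above is thresholding the voxel intensities, which CT expresses in Hounsfield units (HU): bone is much denser than soft tissue. The sketch below is illustrative only; the threshold values are rough textbook approximations, not the values any particular implementation would use.

```python
import numpy as np

# Approximate Hounsfield-unit ranges (illustrative assumptions):
BONE_HU = 300                  # voxels above this are treated as bone
SOFT_TISSUE_HU = (-100, 300)   # rough soft-tissue window (excludes air/lung)

def segment_ct(volume_hu):
    """Split a CT volume (array of Hounsfield units) into two boolean masks:
    the first 3D structure (soft tissue) and the second, underlying
    structure (bone)."""
    vol = np.asarray(volume_hu)
    bone = vol > BONE_HU
    soft = (vol >= SOFT_TISSUE_HU[0]) & (vol <= SOFT_TISSUE_HU[1])
    return soft, bone
```

The outer surfaces of the two masks can then be meshed (e.g. by an isosurface extraction step) to give the first and second 3D structural meshes referred to throughout.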
[00108] In the 3D image/model component 132, the 3D photogrammetry data may be transformed and superimposed over the outer surface of the 3D medical model (e.g. over the first 3D structure) to provide a realistic 3D representation of the bodily portion of the subject. The 3D image/model component 132 outputs a 3D model of the bodily portion of the subject that represents a medically accurate and realistic appearance of the bodily portion of the subject. The output 3D model includes at least data representative of the 3D photogrammetry data transformed and superimposed onto the 3D medical model.
[00109] The 3D model adjust component 133 may be configured to, once the 3D model of the bodily portion of the subject has been generated, receive adjustments that an operator or user may input to the 3D model to achieve a desired appearance for the subject. This may include receiving the adjustments and adjusting the 3D photogrammetry data that is superimposed over the 3D medical model by pulling, stretching and/or pushing various regions or data points of the bodily portion of the subject represented by the 3D photogrammetry data, which changes the appearance of the bodily portion of the subject (e.g. for a face, this may include changing the shape of the cheek bone area or jaw line/chin areas and the like). Thus the original 3D model may be transformed into a 3D model of a desired bodily portion of the subject, which includes the 3D photogrammetry data transformed with the desired adjustments, i.e. desired 3D photogrammetry data, and also the original 3D medical model. This may then be viewed by a user and/or the subject to confirm the desired appearance of the desired 3D model of the bodily portion of the subject. Further feedback 135 may be input by the user and received by the 3D model adjust component 133 to further adjust the 3D model in an iterative manner until a desired outcome is reached. The desired 3D model may be further processed to determine suitable implant volumes that will achieve, from a reconstructive or cosmetic perspective, the desired bodily portion for the subject.
[001101 The 3D model output component 134 may be configured to output data representative of the 3D model of the desired bodily portion of the subject. The 3D model of the desired bodily portion of the subject including at least: data representative of: 3D photogrammetry data representative of the bodily portion of the subject and defining a desired external appearance of the bodily portion of the subject, and a 3D medical model representative of the bodily portion of the subject. The 3D medical model includes data representative of a first 3D structure and a second 3D structure underlying the first 3D structure in relation to the bodily portion of the subject. As an example, the 3D photogrammetry data may further include a 3D photogrammetry mesh and texture map of the bodily portion of the subject, the first 3D structure may further include a first 3D structural mesh representing the outer surface of the first 3D structure, and the second 3D structure may further include a second 3D structural mesh representing the outer surface of the second 3D structure, which interfaces with the first 3D structure. The 3D model may be displayed for viewing by one or more users and/or data representative of the desired 3D model may be output to an implant analysis process for generating one or more implants accordingly.
[00111] For example, the first and second 3D structures may be represented by a first and a second 3D structural mesh, and the desired 3D photogrammetry data may be represented by a 3D photogrammetry mesh that represents the 3D model of the desired bodily portion of the subject. The first 3D structural mesh, second 3D structural mesh and 3D photogrammetry mesh may each be a different polygon mesh representing the outer surface of the first 3D structure, the second 3D structure and the 3D photogrammetry model, respectively. A polygon mesh may be described as a collection of vertices, edges and "faces" that define the outer surface or shape of a polyhedral object and may be used to represent an outer surface of a 3D object (e.g. the outer surface of a first 3D structure, the outer surface of a second 3D structure, and/or the outer surface of a 3D photogrammetric model and the like). The "faces" usually consist of triangles (e.g., a triangle mesh), quadrilaterals (e.g., quads) or other simple convex polygons (e.g., n-gons), since this simplifies rendering, or a mixture of different polygonal "faces" depending on the complexity of the 3D object/model, but may also more generally be composed of concave polygons, or even polygons with holes, and the like and/or as the application demands.
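The vertex/edge/face description above maps directly onto a simple data structure. The following minimal triangle-mesh sketch (class and method names are illustrative, not part of the described system) shows how edges are derived from the face list and how a surface quantity such as area follows from the vertices:

```python
import numpy as np

class TriangleMesh:
    """Minimal polygon (triangle) mesh: vertices, faces, derived edges."""

    def __init__(self, vertices, faces):
        self.vertices = np.asarray(vertices, dtype=float)  # (n, 3) positions
        self.faces = np.asarray(faces, dtype=int)          # (m, 3) vertex indices

    def edges(self):
        """Unique undirected edges implied by the face list."""
        e = np.vstack([self.faces[:, [0, 1]],
                       self.faces[:, [1, 2]],
                       self.faces[:, [2, 0]]])
        return {tuple(sorted(pair)) for pair in e.tolist()}

    def surface_area(self):
        """Total area: half the cross-product magnitude per triangle."""
        a = self.vertices[self.faces[:, 0]]
        b = self.vertices[self.faces[:, 1]]
        c = self.vertices[self.faces[:, 2]]
        return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()
```

A unit square built from two triangles, for example, has five unique edges (four sides plus the shared diagonal) and area 1.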
[00112] Figure 1f is a flow diagram illustrating an example 3D model generation process 140 for use in the example 3D model generation system 130 of figure 1e. The example 3D model generation process 140 includes the following steps of: [00113] In step 141, obtaining data representative of first 3D photogrammetry data defining an external appearance and/or texture of a bodily portion of the subject.
[00114] For example, in some embodiments, the first 3D photogrammetry data may include a 3D photogrammetry model of the bodily portion of the subject. The 3D photogrammetry model further includes a 3D photogrammetry mesh representing the outer surface of the bodily portion of the subject and a corresponding texture map of the bodily portion of the subject, which, when combined, form a 3D photogrammetry model of the bodily portion of the subject.
[00115] In step 142, obtaining a second 3D model of the bodily portion of the subject, the second 3D model defining a first 3D structure and a second 3D structure underlying the first 3D structure in relation to the bodily portion of the subject. The first 3D structure may further include a first 3D structural mesh representing the outer surface of the first 3D structure, and the second 3D structure may further include a second 3D structural mesh representing the outer surface of the second 3D structure, which interfaces with or borders the first 3D structure.
[00116] For example, the second 3D model may be derived from medical imaging data of the bodily portion of the subject captured by a medical imaging apparatus and may form a 3D medical model. For example, the medical imaging apparatus may include, without limitation, for example, a CT scanner, MRI scanner, ultrasonic scanner and/or any other medical imaging apparatus suitable for outputting data for generating a 3D medical model of the bodily portion of the subject. The medical imaging data may be further processed to generate a first 3D structure and a second 3D structure underlying the first 3D structure. For example, the first 3D structure may include soft-tissue structures of the bodily portion (e.g. muscles, fat, blood vessels, veins, skin of the bodily portion of the subject). The second 3D structure underlying the first 3D structure may include, without limitation, for example bone structures and/or other soft-tissue structures associated with the bodily portion of the subject. As an option or in some embodiments, the 3D medical model may be further processed to include, with the first 3D structure, data representative of a first 3D structural mesh corresponding to the outer surface of the first 3D structure, and also to include, with the second 3D structure, data representative of a second 3D structural mesh corresponding to the outer surface of the second 3D structure.
[00117] In step 143, superimposing the first 3D photogrammetry data onto the second 3D model.
[00118] In step 144, adjusting the appearance of the superimposed first 3D photogrammetry data in relation to the bodily portion of the subject until a desired outcome is reached.
[00119] In step 145, outputting a 3D model of a desired bodily portion of the subject, where the 3D model includes data representative of at least: 3D photogrammetry data of the desired bodily portion of the subject, and a 3D medical model representative of the original bodily portion of the subject including data representative of at least: the first 3D structure and the second 3D structure underlying the first 3D structure of the bodily portion of the subject. In some embodiments, the 3D photogrammetry data includes a 3D photogrammetry mesh and corresponding texture map, and the 3D medical model further includes a first 3D structural mesh corresponding to the outer surface of the first 3D structure and a second 3D structural mesh corresponding to the outer surface of the second 3D structure.
[00120] Figure 1g is a flow diagram illustrating an example 3D superposition process 150 for use in the 3D model generation process of figure 1f according to some embodiments of the invention. In this example, the 3D photogrammetry data includes a 3D photogrammetry mesh and a corresponding texture map. As well, the 3D medical model includes a first 3D structure (e.g., soft tissue) and a second 3D structure (e.g., bone) underlying or supporting the first 3D structure. The first 3D structure further includes a corresponding first 3D structural mesh representing the outer surface of the first 3D structure. The second 3D structure further includes a corresponding second 3D structural mesh representing the outer surface of the second 3D structure. The 3D superposition process 150 transforms the 3D photogrammetry data onto the 3D medical model such that the 3D photogrammetry data represents a realistic and medically accurate outward appearance of the bodily portion of the subject. The 3D superposition process 150 includes the following steps of: [00121] In step 151, retrieve a landmarked 3D reference mesh, the 3D photogrammetry mesh of the 3D model of the bodily portion of the subject, and the first 3D structural mesh of the 3D model of the bodily portion of the subject.
[00122] The landmarked 3D reference mesh may be a standard or uniform 3D mesh representing the outer surface of a standard shape of the bodily portion (e.g., a standard head shape, standard chest shape, standard leg shape etc.). The 3D reference mesh may be of a lower resolution compared with the first 3D structural mesh and/or the 3D photogrammetry mesh. This provides the advantage of reducing the computational load whilst maintaining suitable accuracy when superimposing the 3D photogrammetry mesh onto the first 3D structural mesh. Alternatively, the 3D reference mesh may be of a similar or higher resolution compared with the first 3D structural mesh or 3D photogrammetry mesh to enable hyper-accurate superpositioning. In any event, the 3D reference mesh, when landmarked, is used to assist in transforming and mapping the 3D photogrammetry mesh onto the first 3D structural mesh during the superpositioning to provide a 3D representation of the external appearance of the 3D model of the bodily portion of the subject that is as accurate as possible.
[00123] The 3D reference mesh includes a set of designated landmark mesh points representing important features or landmarks associated with the shape of the bodily portion. For example, landmark mesh points associated with a face may include, without limitation, for example points around the eyes, points around the forehead or brow/nose bridge, points around the tip and/or nostrils of the nose, points around the chin, jawline and/or cheek bones and the like. These may be used to lock the reference mesh to corresponding landmark mesh points on the 3D photogrammetry and first 3D structural meshes.
[00124] In step 152, landmarking the 3D photogrammetry mesh and first 3D structural mesh by identifying mesh points corresponding to the landmarked 3D reference mesh points on the 3D photogrammetry mesh and first 3D structural mesh. For example, each of the 3D photogrammetry mesh and first 3D structural mesh may be analysed and processed (e.g., using 3D graphics feature identification software and/or ML algorithms trained to identify landmark points, and the like) to identify the mesh points or regions that correspond to the landmark mesh points of the landmarked 3D reference mesh. The identified mesh points for each of the 3D photogrammetry and first 3D structural meshes may be annotated, labelled and/or mapped to the corresponding landmark points of the 3D reference mesh.
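As a minimal sketch of this landmarking step: once candidate landmark positions have been proposed (in practice by the feature-identification software or trained ML algorithms mentioned above), each can be mapped to its nearest mesh vertex so the landmark is expressed as a vertex index on that mesh. The function name and the assumption that the meshes are already roughly aligned are illustrative only.

```python
import numpy as np

def landmark_mesh(mesh_vertices, landmark_positions):
    """Map each named landmark position to the index of the nearest mesh
    vertex. Assumes the mesh and landmark positions share a coordinate
    frame (i.e. rough pre-alignment has already been done)."""
    V = np.asarray(mesh_vertices, dtype=float)  # (n, 3)
    labelled = {}
    for name, pos in landmark_positions.items():
        dists = np.linalg.norm(V - np.asarray(pos, dtype=float), axis=1)
        labelled[name] = int(np.argmin(dists))  # nearest-vertex index
    return labelled
```

The resulting name-to-index mapping is what lets corresponding landmark points on the reference, photogrammetry and structural meshes be paired up in the later transformation steps.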
[00125] In step 153, generating a first 3D reference mesh by performing a first set of mesh transformation operations on the original landmarked reference mesh to deform the original landmarked 3D reference mesh to substantially match the surface of the landmarked 3D photogrammetry mesh, e.g., corresponding landmark points match. For example, for each of the landmarked points of the landmarked 3D reference mesh, a first set of one or more transformation operations may be determined that transform these landmarked points to match the landmarked points of the landmarked 3D photogrammetry mesh. This first set of transformation operations may also be used to assist in transforming the other points of the landmarked reference mesh until the surface of the landmarked 3D reference mesh substantially coincides or matches the surface of the landmarked 3D photogrammetry mesh. This transformation process generates a first 3D reference mesh that substantially matches the 3D surface of the 3D photogrammetry mesh.
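The landmark-driven transformation operations in this step are, in practice, non-rigid deformations; as a non-authoritative sketch of the first, rigid part of such an alignment, the Kabsch algorithm computes the rotation and translation that best map the reference landmarks onto the target landmarks. The function names are illustrative assumptions.

```python
import numpy as np

def landmark_alignment(ref_landmarks, target_landmarks):
    """Rigid (rotation + translation) transform mapping reference landmark
    points onto target landmarks via the Kabsch algorithm. A full pipeline
    would follow this with a non-rigid deformation of the remaining points."""
    P = np.asarray(ref_landmarks, dtype=float)
    Q = np.asarray(target_landmarks, dtype=float)
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - pc).T @ (Q - qc)              # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = qc - R @ pc
    return R, t

def apply_transform(R, t, points):
    """Apply the recovered transform to all mesh points, not just landmarks."""
    return (np.asarray(points, dtype=float) @ R.T) + t
```

Applying the recovered transform to every vertex of the landmarked reference mesh gives a first, coarse match to the photogrammetry mesh surface, which the finer deformation then refines.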
[00126] In step 154, generating a second 3D reference mesh by performing a second set of mesh transformation operations on the original landmarked 3D reference mesh to deform the original landmarked 3D reference mesh to substantially match the surface of the landmarked first 3D structural mesh so that corresponding landmark points match. For example, for each of the landmarked points of the landmarked 3D reference mesh, a second set of one or more transformation operations may be determined that transform these landmarked points to match the landmarked points of the landmarked first 3D structural mesh. This second set of transformation operations may also be used to assist in transforming the other points of the landmarked 3D reference mesh until the surface of the landmarked 3D reference mesh substantially coincides or matches the surface of the landmarked first 3D structural mesh. This transformation process generates a second 3D reference mesh that substantially matches the 3D surface of the first 3D structural mesh.
[00127] In step 155, adjusting the first reference 3D mesh to match the second reference 3D mesh based on determining and performing a third set of one or more mesh transformation operations. For example, a third set of mesh transformation operations may be determined that operate on the first reference 3D mesh to transform each of the mesh points of the first reference 3D mesh to match the corresponding mesh points of the second reference 3D mesh. This is possible because the first and second reference 3D meshes have the same number of mesh points as the original landmarked reference mesh. As well, both the first and second 3D reference meshes have the same landmark points as the original landmarked reference mesh. The transformation operations that are used to transform the mesh points of the first reference 3D mesh to coincide with the mesh points of the second reference 3D mesh may be stored as a third set of mesh transformation operations. The third set of mesh transformation operations may be used to merge or superimpose the 3D photogrammetry mesh onto the first 3D structural mesh.
[00128] In step 156, superimposing the first 3D photogrammetry mesh and data (e.g. corresponding texture mapping) onto the first 3D structural mesh based on the third set of one or more mesh transformation operations. Given that the first 3D reference mesh was aligned with the 3D photogrammetry mesh, the third set of one or more mesh transformation operations may be applied to the mesh points of the 3D photogrammetry mesh for transforming these mesh points towards the mesh points of the first 3D structural mesh. The texture mapping may also be transformed along with the superimposed 3D photogrammetry mesh using the third set of mesh transformation operations and the like.
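Because the first and second reference meshes share the topology (point count and ordering) of the original landmarked reference mesh, the third set of transformation operations can be stored very simply as one displacement per mesh point. A minimal sketch of steps 155-156 under that assumption (and the further simplifying assumption of a one-to-one correspondence with the photogrammetry mesh points; in practice an interpolation step would map between differing resolutions):

```python
import numpy as np

def third_transform(first_ref_mesh, second_ref_mesh):
    """Per-vertex displacements taking the first reference mesh (matched to
    the photogrammetry surface) onto the second reference mesh (matched to
    the first 3D structural surface). Valid because both deformed reference
    meshes share the topology of the original landmarked reference mesh."""
    return np.asarray(second_ref_mesh, dtype=float) - np.asarray(first_ref_mesh, dtype=float)

def superimpose(photogrammetry_points, displacements):
    """Apply the stored third-set displacements to move photogrammetry mesh
    points towards the first 3D structural mesh."""
    return np.asarray(photogrammetry_points, dtype=float) + displacements
```

Storing the operation as displacements also lets the texture coordinates ride along unchanged, since only vertex positions move.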
[00129] Figure 1h is a flow diagram illustrating an example 3D model adjustment process 160 for use in the 3D model generation process 140 of figure 1f, the 3D model adjust component 133 of figure 1e, or the 3D model adjustment component 103 of figure 1a. The 3D model adjustment process 160 may include the following steps of: [00130] In step 161, receive the 3D model with the 3D photogrammetry mesh superimposed onto the first 3D structural mesh of the 3D medical model of the bodily portion of the subject. For example, the 3D model may be received from step 156 or 143 and the like of processes 150 and 140, respectively.
[00131] In step 162, receiving data representative of adjustments to one or more regions of the 3D photogrammetry mesh of the bodily portion of the subject. For example, the external appearance (e.g. 3D photogrammetry mesh and texture map) of the 3D model may be displayed to a user or operator on a device. The user or operator may manipulate one or more regions of the displayed 3D model to change the shape and external appearance of the 3D model as desired via one or more predefined manipulation operations (e.g. pull or push etc.), which are received as adjustments to be made to the corresponding regions of the 3D photogrammetry mesh. These adjustments may be used to adjust or transform the 3D photogrammetry mesh of the received 3D model to form a 3D photogrammetry mesh of the desired bodily portion of the subject.
[00132] In step 163, confirming whether the received adjustments to said one or more regions are feasible. For example, prior to displaying the adjustments, the received adjustments to the 3D photogrammetry mesh may be assessed as to whether they are feasible to be applied to one or more of the regions of the 3D model based on the corresponding regions of the first and/or second 3D structures and their properties such as, without limitation, for example the elasticity of the skin or outer surface of the first 3D structure, the compressibility of the first 3D structure and/or whether there are sufficient portions of the second 3D structure underlying the first 3D structure that may support an implant to realise the desired bodily portion of the subject. This may be a rule-based assessment based on standard skin elasticity properties, closeness of adjustments to landmarked features (e.g. eye sockets, nose, and/or mouth etc.), the type of adjustment and the like. Alternatively, the assessment may include modelling and adjusting the first and second 3D structures to determine whether sufficient volumes may be generated to form implant volumes between the first and second 3D structures of the 3D model (e.g. finite element analysis may be performed, or spring-mass analysis, and/or ML algorithms trained to assess whether changes to the first 3D structural mesh may be made to meet the desired 3D photogrammetry mesh).
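The rule-based variant of this assessment can be sketched very simply: reject an adjustment if it moves the surface further than an assumed elasticity limit, or if it sits too close to a protected landmarked feature. The numeric limits and function names below are illustrative assumptions, not values from the described system.

```python
import numpy as np

# Illustrative rule parameters (assumptions, not clinical values):
MIN_LANDMARK_DISTANCE = 10.0  # mm: keep adjustments clear of eye sockets etc.
MAX_DISPLACEMENT = 8.0        # mm: assumed soft-tissue stretch limit

def adjustment_allowed(point, displacement, protected_landmarks):
    """Rule-based feasibility check applied before an adjustment is made
    to a point on the 3D photogrammetry mesh."""
    point = np.asarray(point, dtype=float)
    if np.linalg.norm(displacement) > MAX_DISPLACEMENT:
        return False  # exceeds the assumed elasticity limit
    for lm in np.asarray(protected_landmarks, dtype=float):
        if np.linalg.norm(point - lm) < MIN_LANDMARK_DISTANCE:
            return False  # too close to a protected feature (e.g. eye socket)
    return True
```

This mirrors the 1cm eye-socket threshold example given earlier for the selectable-region rules; the modelling-based alternative (FEA, spring-mass, ML) would replace these fixed rules with a simulation.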
[00133] In step 164, displaying an update of the 3D model representing the desired bodily portion of the subject after feasible adjustments have been applied to the 3D photogrammetry mesh. For example, only those feasible adjustments may be applied and displayed, with infeasible adjustments not being made or an indication being displayed as to why such adjustments are not feasible. This may be overridden, though, depending on the experience of the user and/or operator when making said adjustments, in which case all adjustments may be applied to the 3D photogrammetry mesh and the resulting 3D model of the desired bodily portion of the subject may be displayed.
[00134] In step 165, it is determined whether further modifications or adjustments to the 3D model are received. For example, the user may be asked whether further adjustments are to be made. If this is the case (e.g. 'Y'), then the process 160 proceeds to step 162 for receiving the further adjustments and checking whether they are feasible or not and the like; otherwise (e.g. 'N'), where the user may indicate finalisation of the adjustments made to the 3D photogrammetry mesh for output and/or storage, the process 160 proceeds to step 166 for displaying, storing and/or outputting a finalised 3D model of the desired bodily portion of the subject based on the adjusted 3D photogrammetry mesh resulting from the adjustments iteratively performed in steps 162-165.
[00135] In step 166, outputting data representative of a finalised 3D model of a desired bodily portion of the subject, where the finalised 3D model includes at least data representative of: the 3D photogrammetry data of the desired bodily portion of the subject as displayed, and the 3D medical model representative of the original bodily portion of the subject. The 3D photogrammetry data of the desired bodily portion of the subject further includes the final adjusted 3D photogrammetry mesh corresponding to the desired bodily portion of the subject as displayed, and the corresponding texture mapping of the desired bodily portion of the subject. The 3D medical model of the original bodily portion of the subject including: the first 3D structure and corresponding first 3D structural mesh, the second 3D structure underlying the first 3D structure and the corresponding second 3D structural mesh of the bodily portion of the subject. The finalised 3D model of the desired bodily portion of the subject may be stored for further adjustments and/or modifications, or for later processing by implant generation process 100 or implant analysis process 102 and the like. The finalised 3D model of the desired bodily portion of the subject may be displayed to the subject for approval prior to further processing and/or prior to further adjustments using process 160. For example, the finalised 3D model of the desired bodily portion of the subject (e.g. once approved by the subject and/or user), may be input to the implant generation process 100 or implant analysis process 102 for determining data representative of one or more volumes for use in controlling the creation, production and/or manufacture of corresponding implants and the like.
[00136] Figure 2 is a schematic diagram illustrating another example 3D visualisation and implant generation system 200. The 3D visualisation and implant generation system 200 is based on the 3D visualisation systems and/or components 100, 125 and 130 and/or processes 110, 115, 140, 150, 160 as described with reference to figures 1a to 1h, the components and steps of which may be further modified/combined as described with reference to figure 2. The 3D visualisation and implant generation system 200 includes one or more processors or processing devices 220a-220g, one or more memory units or storage devices 222, one or more communication interfaces, and user input/output systems (e.g. displays) 224a-224b, in which the one or more processors or processing devices 220a-220g are coupled or connected to the one or more memory units or storage devices 222, communication interfaces and user input/output systems 224a-224b. Although the implant generation system 200 is described herein with reference to the bodily portion of the subject (e.g., a patient) being a head with a focus on the face of the subject, this is by way of example only and the invention is not so limited; it is to be appreciated by the skilled person that the 3D visualisation and implant generation system 200 may be applied to any other bodily portion of the subject as the application demands. In this example, data from CT scans (e.g. 3D medical model) and 3D photographs (e.g. photogrammetry data) of the face of the subject are combined into a realistic 3D model of the subject that describes the relationship between overlying soft tissue shape (e.g. first 3D structure) and underlying skull form (second 3D structure).
Implant design and volume is determined by modelling the changes to the soft tissue shape of the face using, without limitation, for example finite element modelling, mass-spring models, numerical analysis techniques, or even an ML algorithm trained to determine volumes based on the properties and/or limitations of the overlying and underlying structures.
[00137] An advantage of the 3D visualisation and image generation system 200 is provided in the manner of merging the CT scan and 3D photogrammetry data of the face of the subject whilst retaining the necessary data to represent the subject in a realistic and lifelike manner, which allows accurate modification of the external appearance of the face and, where necessary, generation of the geometry for the necessary facial implant to achieve the features being represented based on the modifications. A further advantage of the 3D visualisation and image generation system 200 is that the desired changes to the face can be modelled or simulated for feasibility and/or verification that the changes are desired by the subject. A desired 3D model of the bodily portion of the subject may be generated based on the desired changes and further processed for automatically determining one or more volumes and/or shapes for implants to be created, where the implant generation part of system 200 may be configured to output data representative of the determined one or more volumes/shapes that may be used to produce/manufacture one or more corresponding implants that achieve the changes needed to induce anatomical normality.
[00138] The 3D visualisation and implant generation system 200 includes a patient/capture processing system 224 including a photogrammetry processing apparatus 201 including processing device 220a. In this case, the photogrammetry processing apparatus 201 may be configured to capture a plurality of 2D images of the face 202 of the subject from different angles. The plurality of 2D images of the face 202 of the subject may be stored in a data storage system 220 (e.g. patient data storage). The data storage system 220 may be a database management system or other type of data storage repository and the like. The plurality of 2D images of the face 202 may be processed by a 3D photogrammetry pipeline process 203 on processing device 220b for converting the plurality of 2D images of the head/face 202 to create a 3D head exterior, which forms a 3D exterior model with 3D photogrammetry mesh and texture mapping 204, also referred to as 3D photogrammetry model 204 of the face of the subject. The 3D photogrammetry model 204 includes data representative of the 3D photogrammetry mesh describing the 3D shape of the head/face and texture mapping photogrammetry data that may be superimposed, or shrink wrapped onto the 3D photogrammetry mesh to provide a realistic 3D model of the face of the subject.
[00139] In some embodiments, the 3D photogrammetry model 204 may be communicated to a face manipulation process 205 on one or more other processing devices 220f (e.g. a tablet or other computing device) for display and use by a user or operator to manipulate and change the shape of the face to a desired facial representation of the subject. The 3D model is useful for previewing or experimenting with the possible changes that might be made to the face of the subject. For example, the 3D model may be manipulated by rotating the view of the head from different angles, in which the face manipulation process 205 may include face deformation tools (e.g. interactive controls/tools) enabling changes to be made to the face (e.g. pulling/pushing regions of the 3D photogrammetry mesh) and realistically adjusting the texture mapping to display the desired 3D model of the face of the subject. This may also enable side-by-side comparison between the original 3D model and an adjusted 3D model that may represent a desired face of the subject.
[00140] The patient/capture processing system 224 further includes a medical imaging processing apparatus 206 including processing device and/or hardware 220c.
In this case, the medical imaging processing apparatus 206 may be configured to capture and/or process CT/MRI image scan data 207 of the face of the subject for creating a 3D full head anatomical model of the face of the subject, also referred to herein as a 3D medical model. The CT/MRI scan data 207 may be stored in data storage system 220 for further processing. The CT/MRI scan data 207 of the face of the subject may be processed by a 3D medical model pipeline process 208 on processing device 220d for converting the CT/MRI scan data 207 to create a 3D head anatomical model, which forms a 3D medical model 209 of the face/head of the subject. The 3D medical model pipeline process 208 may be configured to process the CT/MRI scan image data to determine at least a first 3D structure (e.g. soft-tissues/skin of the face) and a second 3D structure (e.g. bone or skull of the head/face of the subject) underlying the first 3D structure. The first 3D structure may include data representative of a first 3D structural mesh representing the outer surface of the first 3D structure. The first 3D structure may also include data representative of the properties of the first 3D structure, e.g. elasticity and compressibility of the skin, fat and/or muscles of the soft-tissue. The second 3D structure may include data representative of a second 3D structural mesh representing the outer surface of the second 3D structure. The second 3D structure may also include data representative of the properties of the second 3D structure, e.g. hardness of the bone and/or cartilage and the like.
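As an illustrative sketch only, the separation of CT scan data into soft-tissue and bone structures might be approximated with Hounsfield unit (HU) thresholds as below; the threshold values and the omission of the subsequent surface meshing step (e.g. marching cubes) are assumptions, since the source does not specify how pipeline process 208 performs the segmentation:

```python
import numpy as np

# Assumed HU ranges: the source does not give values, these are typical ones.
SOFT_TISSUE_RANGE = (-100, 300)   # skin/fat/muscle voxels
BONE_THRESHOLD = 300              # voxels at or above this are treated as bone

def segment_ct(volume_hu: np.ndarray):
    """Return boolean masks for soft tissue (first 3D structure)
    and bone (second 3D structure) voxels."""
    lo, hi = SOFT_TISSUE_RANGE
    soft = (volume_hu >= lo) & (volume_hu < hi)
    bone = volume_hu >= BONE_THRESHOLD
    return soft, bone

# Toy 2x2x2 CT volume: air (-1000), fat (-80), muscle (50), bone (700, 900)
vol = np.array([[[-1000, -80], [50, 700]],
                [[-1000, 40], [120, 900]]], dtype=float)
soft, bone = segment_ct(vol)
print(int(soft.sum()), int(bone.sum()))  # 4 soft-tissue voxels, 2 bone voxels
```

A real pipeline would then extract the first and second 3D structural meshes from the outer surfaces of these masks.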
[00141] A 3D model and implant processing system 226 including processing device 220g and a display is configured, in this example, to perform or control facial modelling, facial manipulation and facial implant processes as described herein. For example, the bodily portion is the face, and so a facial modelling process may be configured to retrieve the 3D photogrammetry model 204 and the 3D medical model 209 (e.g. CT scan/MRI of head (3D)) from storage 222 and merge the 3D photogrammetry model 204 with the 3D medical model 209 into an accurate medical representation of the face of the subject, including accurate texture mapping of the exterior appearance of the subject over the first structure of the 3D medical model 209, as well as accurate anatomical structures of the head/face of the subject (e.g. first and second structures), which is displayed as a 3D model 210 of the face of the subject. The displayed 3D model 210 may be stored in storage 222 as 3D model data 211.
[00142] The 3D visualisation and implant processing system 226 and/or tablet 205 may also be further configured for manipulating the exterior appearance of the facial features of the 3D model 210 using a face manipulation process that enables a user or operator to input one or more manipulations to change the shape of the face of the 3D model 210 to a desired facial representation of the subject. The face manipulation process may be coupled to a modelling engine 215 (e.g. rule-based engine, spring-mass analysis engine, or finite element analysis engine, or ML algorithm) that models, in real-time or non-real-time, and controls the input manipulations to display the realistic changes to the 3D model 210 depending on the properties of the first and second structures for display. The modelling engine 215 may also be configured to determine whether any changes input by the user are feasible in relation to the manipulated regions of the surface of the 3D model 210. Displaying the manipulations using the 3D model 210 is useful for previewing or experimenting with the possible realistic changes that might be made to the face of the subject. The face manipulation process may include 3D manipulation tools that are used to input manipulations (or adjustments) that are received from a user by the system 200 to control the alteration of the surface of the face of the 3D model 210. For example, the 3D photogrammetry mesh of the 3D model 210 may be adjusted by the input manipulations. The 3D model may be further manipulated by rotating the view of the head from different angles, in which the face manipulation process may include face deformation tools (e.g. interactive controls/tools) enabling changes (or adjustments) to be made to the face (e.g. pulling/pushing regions of the 3D photogrammetry mesh) and realistically adjusting the texture mapping to display the desired changes to the 3D model 210 of the face of the subject.
This may also enable side-by-side comparison between the original 3D model 210 and the desired adjustments made to the 3D model 210 of the face of the subject. The face manipulation process allows, for example, the patient (or subject), user and/or surgeon to adjust the face using the realistic 3D model 210. The face manipulation process software is configured to accurately display the adjusted face using the 3D model 210 for 3D visualisation by the user and/or subject.
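A minimal sketch of one possible pull/push deformation tool of the kind described is given below, assuming a Gaussian falloff around the grabbed point; the falloff kernel, radius and cut-off are illustrative choices, not the disclosed implementation:

```python
import numpy as np

# Vertices near a grabbed point are displaced along a pull vector with a
# smooth radial falloff, so the surrounding surface deforms realistically.
def pull_region(vertices, grab_point, pull_vec, radius=1.0):
    verts = np.asarray(vertices, dtype=float)
    d = np.linalg.norm(verts - np.asarray(grab_point, float), axis=1)
    weight = np.exp(-(d / radius) ** 2)   # Gaussian falloff with distance
    weight[d > 3 * radius] = 0.0          # no effect far from the grab point
    return verts + weight[:, None] * np.asarray(pull_vec, dtype=float)

mesh = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 0.0, 0.0)]
out = pull_region(mesh, (0.0, 0.0, 0.0), (0.0, 0.0, 0.5), radius=0.5)
# the grabbed vertex moves the full 0.5; the distant vertex does not move
```

The texture mapping would then be re-applied over the adjusted mesh for display.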
[00143] Although the face manipulation process may use modelling engine 215 to perform modelling in real-time, this is by way of example only and the invention is not so limited; it is to be appreciated by the skilled person that the modelling engine 215 may perform the modelling in non-real-time or off-line whilst the facial manipulations are performed in real-time and/or approximated in real-time to enable the subject, user and/or surgeon to visualize the possible manipulations and/or changes to the 3D model 210 to achieve the desired adjustments and desired face for the subject. The non-real-time modelling to determine whether the desired manipulations/adjustments are feasible, and to determine implant shape, may be performed off-line or in non-real-time, after a set of adjustments for manipulating the angles of the face/head in real time have been performed using facial manipulation tools and the like. Alternatively or additionally, modelling engine 215 may perform a lower complexity modelling (e.g. low level mass-spring analysis or ML technique, or facial manipulation rule-based algorithm) capable of being performed in real-time of the desired adjustments/manipulations to the 3D model 210 to determine whether the desired adjustments/manipulations are feasible for the particular subject given the properties of the first 3D structure and underlying second 3D structure of the 3D model of the subject. Once the desired 3D model has been approved, a higher-complexity modelling process (e.g. finite element analysis, or higher level mass-spring analysis or other ML technique) may be performed in non-real-time when using the desired 3D model to more accurately determine the one or more volume shapes for creating one or more implants for the subject.
[00144] Once the 3D model 210 of the face of the subject has been manipulated and desired adjustments to the 3D model 210 have been made, the desired adjustment to the 3D model 210 may be stored as a desired 3D model 212 for further processing.
The desired 3D model 212 may include data representative of a manipulated/adjusted/desired 3D photogrammetry mesh or a 3D photogrammetry mesh of the desired face of the subject. This has been changed from the original 3D photogrammetry mesh of the original 3D model 210 based on the manipulations made by the user or operator. The desired 3D model 212 also includes data representative of the original 3D medical model and the first 3D structure and second 3D structure underlying the first 3D structure, along with first and second 3D structural meshes, respectively and the like.
[00145] Once the desired 3D model 212 of the face of the subject has been generated and/or displayed, the desired 3D model 212 may be used to generate implants for the subject. An implant generation system 228 includes an implant creation pipeline 213 that is configured to generate the implant shape, which may use modelling engine 215 (e.g. finite element analysis, mass-spring analysis, other suitable numerical analysis techniques, and/or ML algorithms) and/or a more accurate version thereof to calculate the necessary changes in the features of the skull (e.g. second 3D structure) so the outer surface of skin of the soft-tissue (e.g. the first 3D structural mesh, which is the outer surface of the first 3D structure) matches the chosen external features of the 3D photogrammetry mesh of the desired 3D model 212. As an example, the implant creation pipeline 213 may be configured to make use of modelling engine 215 (e.g. finite element analysis, mass-spring analysis or other suitable analysis or ML technique) to determine one or more volumes or geometries within the desired 3D model 212 that may be used for generating or creating implants, which once created may be applied to the face of the subject during reconstructive surgery so the subject's face may be anatomically altered to achieve the appearance of the desired 3D model 212.
[00146] For example, finite element analysis (or a mass-spring analysis process) may be applied to model and calculate the changes in the underlying bone (e.g. second 3D structure) required to change the original soft-tissue surface (e.g. first 3D structural mesh) to produce the manipulated soft tissue surface associated with the desired 3D model 212 (e.g. the manipulated/adjusted/desired 3D photogrammetry mesh). Both skull anatomy and soft tissue elasticity place constraints on the size, position and shape of possible implants. The modelling engine 215 may use these factors as constraints in the finite element analysis process (or mass-spring analysis process) to ensure that the implant design is clinically feasible and also to determine the geometries of the implants. As an example, the implant creation pipeline 213 may use the modelling engine 215 such as, for example, a finite element analysis engine (or mass-spring analysis engine), to calculate the geometry of the implant to match the external features of the desired 3D model 212. The modelling engine 215 may iteratively adjust the surface of the second 3D structure of the 3D medical model (e.g. the second 3D structural mesh) and model any changes to the first 3D structure of the 3D medical model that would result due to the adjustments to the second 3D structure, using the structural properties of the first 3D structure. The second 3D structure may be iteratively adjusted by increasing outwardly (or inwardly) one or more surface regions of the surface of the second 3D structure. These adjustments to the second 3D structure model the required changes to the first 3D structure resulting from the adjustments to the second 3D structure, which further change the outer surface of the first 3D structure (e.g. the first 3D structural mesh). The adjustments are iterated until the first 3D structural mesh has been changed to match or substantially match the desired 3D photogrammetry mesh (e.g. within an error threshold).
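The iterative loop described above can be sketched in one dimension as follows, assuming a simple linear soft-tissue response in place of the finite element or mass-spring modelling named in the source; all constants are illustrative:

```python
import numpy as np

# Assumed fraction of bone displacement that reaches the skin surface;
# a real engine would model this with FEA or mass-spring dynamics.
TISSUE_TRANSFER = 0.8

def fit_bone_to_target(bone, skin, target, step=0.5, tol=1e-3, max_iter=200):
    """Nudge the bone surface until the modelled skin surface matches the
    desired photogrammetry surface within an error threshold."""
    bone = np.array(bone, float)
    bone0 = bone.copy()                  # original second 3D structural mesh
    skin0 = np.array(skin, float)        # original first 3D structural mesh
    for _ in range(max_iter):
        # model the skin response to the current bone adjustment
        skin = skin0 + TISSUE_TRANSFER * (bone - bone0)
        err = target - skin
        if np.max(np.abs(err)) < tol:    # stopping criterion
            break
        bone += step * err / TISSUE_TRANSFER   # adjust bone towards target
    return bone, skin

bone, skin = fit_bone_to_target(bone=[0.0, 0.0], skin=[2.0, 2.0],
                                target=[2.4, 2.8])
# implant thickness at each point = final bone surface - original bone surface
```

With these toy numbers the bone surface converges to displacements of 0.5 and 1.0, i.e. the skin displacements divided by the assumed transfer fraction.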
[00147] For example, the second 3D structural mesh of the second 3D structure may be adjusted and further deformed and/or increased outwardly in one or more regions during the iterative adjustments, where each adjustment to the second 3D structural mesh is translated to the first 3D structural mesh based on modelling how the first 3D structure and first 3D structural mesh deforms in relation to adjustments made to the second 3D structural mesh and the structural properties of the first 3D structure. The second 3D structure and second 3D structural mesh may be iteratively adjusted until the outer surface of the first 3D structure, i.e. the deformed/changed first 3D structural mesh, substantially aligns or matches with the exterior surface of the desired 3D model 212, i.e. the desired 3D photogrammetry mesh. That is, the second 3D structure may be iteratively adjusted until the first 3D structural mesh of the first 3D structure substantially aligns or matches with the desired 3D photogrammetry mesh representing the desired shape of the face of the subject from the desired 3D model 212.
[00148] Once alignment is reached or no further adjustments are possible (e.g. a stopping criterion such as a minimum error threshold is reached, or a maximum number of iterations is reached, and the like), the implant creation pipeline 213 is configured to determine the differences between the final adjusted second 3D structure and the original second 3D structure, for example, the differences between the final adjusted second 3D structural mesh and the original second 3D structural mesh. These differences may result in one or more volume shapes or geometries being determined between the outer surface of the final adjusted second 3D structure (i.e. the final adjusted second 3D structural mesh) and the outer surface of the original second 3D structure (i.e. the original second 3D structural mesh). The data representative of the one or more volumes or geometries may be output and subsequently used to control generation of one or more corresponding implants. For example, data representative of the resulting one or more implant geometries can be output in a suitable file format for input into computer aided design/computer aided manufacture software of an implant manufacturer for manufacturing implants of the required geometry. For example, data representative of each of the volumes may be sent in a suitable format to a manufacturer for use in controlling the manufacturing process for producing implants having substantially the same shape and volume. Alternatively, the data representative of each determined volume may be transmitted to a 3D printer for printing one or more implants having substantially the same shape and/or volume. The 3D printer may be a medical grade 3D printer operating under medical grade conditions with medical grade implant printing materials and the like.
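As a toy illustration of the difference step, both bone surfaces may be represented as height fields over a regular grid (an assumption for brevity; a real pipeline would operate on the 3D structural meshes, e.g. via boolean or signed-distance differences):

```python
import numpy as np

# The implant volume is the region between the final adjusted second 3D
# structural mesh and the original one; here both are toy height fields.
def implant_volume(original, adjusted, cell_area=1.0):
    gap = np.clip(np.asarray(adjusted, float) - np.asarray(original, float),
                  0.0, None)        # only outward growth forms an implant
    return float(gap.sum() * cell_area)

orig = np.zeros((4, 4))
adj = orig.copy()
adj[1:3, 1:3] = 0.5                 # bone raised by 0.5 over a 2x2 patch
vol = implant_volume(orig, adj)
print(vol)                          # 4 cells * 0.5 height = 2.0
```

The resulting volume figure (or the gap geometry itself) is what would be exported to CAD/CAM software or a 3D printer.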
[00149] In addition, should any adjustments be made to the volume or geometry of an implant by the manufacturer and/or required by surgical processes (e.g. screw fittings or mounting plates/fixings) prior to the manufacture of the implant, the adjusted volume/geometry of the implant may be re-input to the face manipulation process along with the 3D model 210 or 211 for a final visualisation and/or adjustment prior to approving each implant for manufacture or production (e.g. 3D printing, machining, implant molding processes, implant manufacturing processes, and the like).
Further adjustments can then be made to the outer surface of the soft tissue or first 3D structural mesh and the above processes repeated to achieve the best possible implant shape. Thus, the best possible implant shape, which allows a subject to achieve the desired 3D model 212 of the face via reconstructive surgery, may be arrived at through repeated cycles of the above procedures. This provides the advantage of allowing the manufacture of a custom made, patient specific, implant that is designed to provide a more predictable surgical outcome via a more efficient and predictable procedure.
[00150] Figure 3a is a schematic diagram illustrating another example 3D model generation system 300 that may be used in the 3D visualisation systems 100 and 130 of figures 1a and 1e and/or processes thereof and/or 3D visualisation and implant generation system 200 and/or processes thereof. In this example, the bodily portion of the subject (or patient) is the head/face of the subject. Although the 3D model generation system 300 is described with respect to the head/face of the subject, this is by way of example only and the invention is not so limited; it is to be appreciated by the skilled person that the 3D model generation system 300 is applicable to any bodily portion of the subject including, without limitation, for example, the neck, torso, chest, abdominals, thigh/buttocks, lower leg, calves, ankles and/or feet of the subject, combinations thereof, or any other suitable bodily portion of the subject that may be operated on during cosmetic and/or reconstructive surgery and/or as herein described. In this example, a 3D model of the head/face of a subject is generated that has a medically accurate or realistic external appearance of the face of the subject, but which also includes the underlying 3D anatomical features such as soft-tissue structures and/or bone/cartilage structures, which are also medically accurate.
[00151] In order to get the 3D model of the head of the subject with a realistic external appearance that is aligned with anatomical features of the head of the subject, the 3D model generation system 300 is configured to perform various processing operations on a CT scan 301a and 3D Photogrammetry data 302a. In operation 301, a CT scan 301a of the head/face of the subject may be received. The CT scan 301a may be generated by a CT scanner and saved in a Digital Imaging and Communications in Medicine (DICOM) format, which is the standard for the communication and management of medical imaging information and related data. The DICOM CT scan imagery 301a may be processed using one or more image processing algorithms to generate skin and bone structures 301b and 301c and corresponding skin and bone non-homologous meshes 301d and 301e, respectively, that, when combined, form a 3D medical model. For example, a first 3D model representing the soft-tissue structure of the face may be generated, where the first 3D model 301 includes a first 3D structure 301b (e.g. soft-tissues) and a first 3D structural mesh 301d representing the outer surface (e.g. skin) of the first 3D structure 301b. As well, a second 3D model representing the underlying bone structure 301c (e.g. skull) of the face may be generated from the CT scan data 301a, where the second 3D model includes a second 3D structure 301c (e.g. skull/bone structures) and a second 3D structural mesh 301e representing the outer surface (e.g. outer surface of the skull/bone) of the second 3D structure 301c. The first and second 3D structural meshes 301d and 301e may be non-homologous meshes or arbitrary meshes that describe the surface of the first and second 3D structures 301b and 301c as efficiently as possible.
[00152] In addition, in operation 302, a 3D photogrammetric model may be generated from 3D photogrammetry data 302a and/or received that captures the external appearance of the head/face of the subject. The 3D photogrammetry data may be processed to generate a 3D photogrammetry mesh 302b representing the outer surface of the face of the subject and a texture mapping 302c representing the outer appearance/texture of the head of the subject; when combined, they may form a 3D photo model of the head of the subject. For example, the texture mapping 302c may be shrink-wrapped over the 3D photogrammetry mesh 302b when generating the 3D photo model of the head of the subject. The 3D photo data 302a may be captured by an image capturing apparatus, which may include a number of cameras spaced apart and positioned around the subject (e.g. typically 5 cameras) that capture 2D or 3D pictures simultaneously, and in which image processing is performed to determine 3D photo data by transformations and stitching/knitting the captured images of the subject together. Alternatively, a handheld camera may be used that is moved around the subject's head and captures 2D or 3D pictures. Any other image capturing apparatus may be used that is capable of generating 3D photogrammetry data of the head of the subject (or any other bodily portion) that includes a 3D photogrammetry mesh 302b describing the outer surface of the subject's head and a texture mapping 302c of the subject's head representing the skin/texture of the subject. However, the 3D photogrammetry mesh 302b that may be generated is also a non-homologous mesh that may be different from the first 3D structural mesh.
[00153] Each 3D surface of the 3D model may be represented by meshes, which are represented in 3D space by Cartesian (x, y, z) coordinate points, with different shaped polygons or lines between the points joined together to describe the shape of each 3D surface in 3D space. However, the meshes generated from the 3D photo or CT scans are non-homologous, as they are usually generated in the manner that most efficiently creates the 3D shape. Each point in a mesh does not correspond to a particular anatomical point, nor does each point in the mesh correspond to any other point in another mesh. For example, the mesh points of the 3D photogrammetry mesh 302b are generally or mostly different to the mesh points of the first 3D structural mesh 301d. This can result in unacceptable inaccuracies when attempting to apply or superimpose the texture mapping 302c onto the 3D medical model. In order to generate a medically accurate 3D model 309a that is also realistic in appearance to the head of the subject, the 3D photogrammetry mesh 302b is required to be accurately superimposed onto the first 3D structural mesh 301d of the 3D medical model. Thus, in operation 303, a landmarked reference mesh 303a is created or retrieved in which a set of mesh points on the landmarked reference mesh each correspond to a different anatomical landmark or point on a standard 3D model of the head (or other bodily part). For example, the set of landmark mesh points may correspond to mesh points associated with anatomical landmarks such as salient points, the lateral and medial canthus of the eyes, mouth, tip of nose, eye sockets, eyebrows, bridge of the nose, forehead, cheek bones, nostrils, or any other part or feature of the face that is common between faces and may be used as a landmark.
In addition, image processing may be performed to identify the anatomical landmarks represented by the set of landmark mesh points on the 3D photogrammetry mesh 302b and the first 3D structural mesh 301d (or CT scan mesh).
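One simple way such landmark identification might be sketched is a nearest-vertex lookup, as below; the landmark names and coordinates are purely illustrative, and a real pipeline would detect the landmarks by image processing rather than take their positions as given:

```python
import numpy as np

# For each landmark position (e.g. nose tip), take the nearest vertex on
# the non-homologous mesh as that landmark's mesh point.
def landmark_indices(mesh_vertices, landmark_positions):
    verts = np.asarray(mesh_vertices, float)
    idx = {}
    for name, pos in landmark_positions.items():
        d = np.linalg.norm(verts - np.asarray(pos, float), axis=1)
        idx[name] = int(np.argmin(d))   # nearest vertex to this landmark
    return idx

verts = [(0, 0, 0), (1, 0, 0), (0.5, 1.0, 0.2)]
marks = {"nose_tip": (0.5, 0.9, 0.1), "chin": (0.0, 0.1, 0.0)}
idx = landmark_indices(verts, marks)
print(idx)  # maps each landmark name to a vertex index on the mesh
```

Running this on both the 3D photogrammetry mesh and the first 3D structural mesh yields the two sets of landmark points used by the deformation mappings that follow.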
[00154] In operation 304a, a first deformation mapping is defined for deforming or transforming the reference mesh 303a onto the first 3D structural mesh 301d using the set of landmark mesh points of the reference mesh 303a and the identified set of landmark points of the first 3D structural mesh 301d. This results in a first deformed reference mesh 306a, which is a homologous landmarked reference mesh that is coincident or substantially coincident with the first 3D structural mesh 301d. This may then be used to define a 3D structural-reference mesh mapping between the mesh points of the first 3D structural mesh 301d and the mesh points of the first deformed reference mesh 306a. In operation 304b, a second deformation mapping is defined for deforming or transforming the reference mesh 303a onto the 3D photogrammetry mesh 302b using the set of landmark mesh points of the reference mesh 303a and the identified set of landmark points of the 3D photogrammetry mesh 302b. This results in a second deformed reference mesh 306b, which is a homologous landmarked reference mesh that is coincident or substantially coincident with the 3D photogrammetry mesh 302b. This may then be used to define a 3D photogrammetry-reference mesh mapping between the mesh points of the 3D photogrammetry mesh 302b and the mesh points of the second deformed reference mesh 306b.
[00155] In operation 306, the first 3D structural mesh 301d (e.g. CT scan skin mesh) is used as the ground truth, as this is the most accurate mesh, and so the 3D photogrammetry mesh 302b is deformed or transformed to fit onto the first 3D structural mesh 301d using the first and second deformed reference meshes 306a and 306b and the corresponding deformation mappings. This is possible because there is a known correspondence between the mesh points of the first deformed reference mesh 306a and the second deformed reference mesh 306b. Thus, a spatial mapping from the 3D photogrammetry mesh 302b to the first 3D structural mesh 301d may be determined by: a) generating a set of deformation mappings or operations based on the first and second sets of deformations to transform the mesh points of the second reference mesh 306b onto the corresponding mesh points of the first reference mesh 306a; b) using this set of deformation mappings along with the 3D photogrammetry-reference mesh mapping to generate a 3D photogrammetry spatial mapping that transforms the mesh points of the 3D photogrammetry mesh 302b to fit the first 3D structural mesh 301d of the 3D medical model; and c) transforming the 3D photogrammetry mesh 302b based on the 3D photogrammetry spatial mapping, to form a second 3D photogrammetry mesh 302d. Once the second 3D photogrammetry mesh 302d has been transformed and is aligned with the first 3D structural mesh 301d, in operation 307, the texture mapping 302c is transformed and applied or overlaid onto the second 3D photogrammetry mesh 302d using the applicable mappings to form transformed texture mapping 302e. In operation 308, a textured homologous landmarked skin mesh coincident with the first 3D structural mesh (e.g. skin mesh of the CT scan) is generated from the transformed 3D photogrammetry mesh 302d and texture mapping 302e and combined with the 3D medical model to form 3D model 309a.
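The transfer in operation 306 can be sketched as follows, assuming nearest-vertex transfer between the homologous deformed reference meshes in place of the full deformation-mapping composition described; a real pipeline would interpolate across triangles rather than snap to single vertices:

```python
import numpy as np

# Because the two deformed reference meshes are homologous, vertex i of the
# photo-side reference mesh corresponds to vertex i of the skin-side one, so
# each photogrammetry vertex can be carried onto the CT skin surface by the
# displacement of its nearest reference vertex.
def transfer(photo_verts, ref_on_photo, ref_on_skin):
    photo = np.asarray(photo_verts, float)
    ref_b = np.asarray(ref_on_photo, float)   # deformed reference mesh 306b
    ref_a = np.asarray(ref_on_skin, float)    # deformed reference mesh 306a
    out = np.empty_like(photo)
    for i, p in enumerate(photo):
        j = int(np.argmin(np.linalg.norm(ref_b - p, axis=1)))
        out[i] = p + (ref_a[j] - ref_b[j])    # apply that vertex's displacement
    return out

ref_b = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
ref_a = [(0.0, 0.0, -0.1), (1.0, 0.0, -0.2)]  # skin sits slightly behind photo
moved = transfer([(0.1, 0.0, 0.0)], ref_b, ref_a)
# the vertex near ref_b[0] is shifted by (0, 0, -0.1) onto the skin surface
```

The same correspondence would then be used in operation 307 to carry the texture mapping onto the transformed mesh.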
[00156] The 3D model 309a of the head of the subject may include at least data representative of: the 3D medical model including the first 3D structure 301b and corresponding first 3D structural mesh 301d, the second 3D structure 301c and the corresponding second 3D structural mesh 301e; and the transformed 3D photogrammetry data including the second 3D photogrammetry mesh 302d and the corresponding transformed texture mapping 302e.
[00157] Figure 3b is a schematic diagram illustrating another example 3D model adjustment system 310 that may be used in the 3D visualisation system 100 and/or 130 of figures 1a to 1h or by the face manipulation process of the 3D visualisation and implant generation system 200 of figure 2 and the like. The 3D model adjustment system 310 includes a 3D model manipulation apparatus 311 that may receive the 3D model 309a of the head of the subject, and display it to a user. The 3D model 309a is a realistic representation of the head of the subject and anatomically accurate. The 3D model manipulation apparatus 311 receives adjustment inputs from the user and controls how the face of the 3D model 309a may be adjusted in a manner that is cosmetically feasible. This is achieved by controlling how the 3D photogrammetry mesh 302d is deformed from user inputs by a suite of tools that allow the user or operator to adjust the 3D model 309a of the face to a desired shape of the face. The suite of tools may include facial manipulation operations (e.g. bodily portion manipulation operations) that may be used to pull, push, and deform the shape of the surface of the face to a desired shape.
[00158] As an option, the 3D model adjustment apparatus 311 may be configured to limit the adjustments to the 3D model 309a to only those regions of the face that may support implants and/or removal of one or more first or second 3D structures of the face. That is, the 3D model adjustment apparatus 311 controls the manipulations to only those realistic modifications that may be cosmetically performed. For example, only certain selectable regions of the face may be allowed to be manipulated, such as the nose, cheek bone area, jaw line, chin or any other area in which the soft-tissue is supported by bone. For example, a cheek bone region may be selectable and configured to allow manipulations within a bounded area around the cheek bone based on predetermined measurements from other facial features such as the nose or eye socket. The selectable regions of the face of the 3D model 309a may be based on one or more rules in relation to where an implant may be placed and/or where parts of the second 3D structure may be removed and the like. For example, the boundary may not extend to within a predetermined threshold (e.g. 1cm) of the eye socket, and similarly for the nose and the like. Similar limitations may be applied to other regions such as the chin, jaw line and the like. Thus, a user or operator may select an area to "operate on" for manipulating the facial features of the 3D model 309a. For example, a cheek region 312a may be selected for manipulation, where the manipulation is controlled by the 3D model adjustment apparatus 311 to be confined to the selected region 312a. In this example, the user, having selected a cheek region 312a, may input a pull action that pulls a region of the cheek region 312a outwards (e.g. see after adjustment).
The 3D model adjustment apparatus receives the pull input in relation to the selected region 312a and, in response, is configured to adjust the corresponding region 312a of the 3D photogrammetry mesh 302d and the texture mapping 302e of the 3D model 309a in a corresponding fashion. After adjustment, the adjusted 3D model 309a is displayed to the user with the adjusted 3D photogrammetry mesh/texture. The user may then input one or more further adjustments, in which the 3D model adjustment apparatus 311 performs the corresponding manipulations/adjustments to the 3D photogrammetry mesh/texture until a desired shape of the face is achieved, with the desired 3D model 313 being displayed to the user. The resulting adjusted 3D photogrammetry mesh and texture mapping form the desired 3D photogrammetry mesh 302f and texture mapping 302g. It is noted that the manipulated regions of the face of the desired 3D photogrammetry mesh 302f are no longer aligned with the first 3D structural mesh 301d; this is because of the adjustments made to the facial features of the 3D model 309a.
[00159] As the 3D model adjustment component 311 receives the data representative of the desired adjustments to one or more regions of the 3D photogrammetry mesh/texture mapping 302d/302e in relation to the first and second 3D structural meshes representative of the bodily portion of the subject, the desired adjustments are controlled based on modelling how the received adjustments may be achieved in relation to the 3D structure of the bodily portion of the subject while taking into account the structural properties of at least the 3D structure of the bodily portion of the subject. The feasible adjustments, i.e. those adjustments that were able to be applied to the 3D model 309a via modelling that did not violate the constraints of the structural properties of the 3D structure of the bodily portion of the subject, may be kept. Updates to the 3D model 313 may be displayed with those feasible adjustments applied based on adjusting the 3D photogrammetry mesh/texture mapping data 302d/302e.
[00160] For example, the 3D model adjustment component 311 may use finite element analysis (FEA) or any other suitable methodology (e.g., mass-spring model/analysis, volumetric analysis, ML modelling and the like) to model the structure of the bodily portion of the subject and determine whether one or more of the adjustments are feasible and/or allowable given the structural properties of the bodily portion of the subject. For example, in general, the structural properties of the skin and soft-tissues are well known (e.g. the elastic and viscous properties of skin and soft tissue), where skin can be deformed more than soft-tissue. Alternatively or additionally, the structural properties of the first and/or second 3D structures of the desired 3D model 313 may be estimated from the medical scan data of the subject that were taken (e.g. CT/MRI scans) and used to generate the first and second 3D structures and corresponding first and second 3D meshes. Thus, deformations of the first 3D structure may be modelled based on deformations made to the second 3D structure. For example, the second 3D structure and mesh 301c and 301e (e.g., bone) may be deformed, the effects that the deformation has on the first 3D structure and first 3D structural mesh 301a and 301d may be modelled, and it may be determined whether the resulting deformed first 3D structural mesh (e.g. the deformed skin) aligns with the desired 3D photogrammetry mesh 302f. This deformation/modelling may be repeated iteratively by continuing to deform the second 3D structure and mesh and modelling the resulting deformation on the first 3D structure and mesh until the deformed first 3D structural mesh is aligned with the desired 3D photogrammetry mesh 302f (e.g. until a stopping criterion is met such as, without limitation, the error between the deformed first 3D structural mesh and the desired 3D photogrammetry mesh reaching an error threshold, or a maximum number of iterations being reached, etc.).
If this is achieved, then the adjustments may be determined to be feasible and may be applied to update the desired 3D model 313 of the bodily portion of the subject.
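By way of illustration only, the iterative deform-and-check loop described above may be sketched as follows. This is a minimal sketch, assuming meshes are represented as NumPy vertex arrays with one-to-one vertex correspondence and that the caller supplies the actual deformation step (e.g. one FEA or mass-spring update); the function names, the maximum-vertex-distance error metric and the default thresholds are assumptions for this sketch, not part of the described system.

```python
import numpy as np

def align_to_desired_mesh(first_mesh, desired_mesh, deform_step,
                          error_threshold=0.05, max_iterations=100):
    """Iteratively deform the first 3D structural mesh towards the desired
    3D photogrammetry mesh, stopping when the error threshold is reached
    (adjustments feasible) or the iteration budget is exhausted
    (adjustments deemed infeasible)."""
    mesh = first_mesh.copy()
    for iteration in range(1, max_iterations + 1):
        mesh = deform_step(mesh, desired_mesh)            # e.g. one FEA update
        # Stopping criterion: maximum per-vertex distance to the target mesh.
        error = np.linalg.norm(mesh - desired_mesh, axis=1).max()
        if error <= error_threshold:
            return True, mesh, iteration                  # feasible
    return False, mesh, max_iterations                    # infeasible
```

For example, with a toy deformation step that moves each vertex a fraction of the way towards its target, the loop converges in a handful of iterations; a real FEA step would instead be constrained by the structural properties of the tissues.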
[00161] For example, the adjustments to the 3D photogrammetry mesh may result in a desired 3D photogrammetry mesh 302f, which represents the desired deformation that the soft-tissues of the first 3D structure need to undergo in order for the first 3D structural mesh 301d to align with the desired 3D photogrammetry mesh 302f. A FEA process may iteratively adjust the shape of the second 3D structural mesh 301e (e.g. the bone) while at the same time, in each iteration, modelling how the adjustments deform the first 3D structure (e.g. the overlying soft tissues) and the corresponding first 3D structural mesh 301d, and whether the corresponding deformed first 3D structural mesh is in alignment or fits with the desired 3D photogrammetry mesh 302f. This iterative process is continued until alignment of the first 3D structural mesh with the desired 3D photogrammetry mesh 302f is determined to have been reached. This may be determined based on a stopping criterion such as a minimum error threshold calculated between the 3D photogrammetry mesh 302f and the deformed first 3D structural mesh and the like. If alignment of the first 3D structural mesh with the desired 3D photogrammetry mesh 302f cannot be reached within a minimum error threshold and/or other stopping criterion (e.g. maximum number of iterations), it may be determined that one or more of the adjustments are infeasible and cannot result in a feasible desired 3D model of the bodily portion of the subject. The user or operator may then modify or apply a different set of adjustments that result in only those feasible adjustments that may be made to the 3D model of the bodily portion of the subject.
[00162] Once alignment is reached, the one or more adjustments are deemed to be feasible and may be applied to update the 3D model 309a to form the desired 3D model 313, which may be displayed to the user. From this, the user or operator may provide further feedback, where the 3D model adjustment component 311 may receive a further set of adjustments for updating the current desired 3D model 313 and the like. The adjustments may be performed in an iterative manner until a set of feasible adjustments resulting in a desired 3D model 313 of the bodily portion of the subject is reached.
[00163] FEA provides the advantage of accurately modelling different materials and structures, which improves the accuracy of determining whether received adjustments to the 3D model 309a may be realistically applied and/or supported by implants. The determination of whether the adjustments are feasible and/or whether the desired 3D model 313 of the bodily portion of the subject is feasible and/or supported by implants may be made by FEA in non-real time, i.e. offline, after an initial desired 3D photogrammetry mesh has been approved and/or finalised. However, this may result in a slow process to determine the feasibility of adjustments.
[00164] The FEA method may be applied to the first and second 3D structural meshes (e.g. skin and bone meshes) to generate a prediction of the required displacement of the second 3D structural mesh (e.g., skull/bone mesh) that causes the first 3D structural mesh to align with the desired 3D photogrammetry mesh, taking into account the structural properties (e.g., soft tissue and/or bone properties) of at least the first 3D structure in relation to the displacement of the second 3D structural mesh. The FEA method may iteratively be performed to displace the second 3D structural mesh whilst deforming the first 3D structural mesh taking into account the properties of at least the first 3D structure with respect to the displacement of the second 3D structural mesh.
[00165] Input parameters and material properties for the FEA method may include, without limitation, for example Young's modulus of the first and/or second 3D structures, Poisson ratio of the first and/or second 3D structures, viscoelasticity of the first and/or second 3D structures and/or any one or more other suitable input properties/values/boundary conditions associated with the first and/or second 3D structures. For example, when the second 3D structure is the skull/bone, the following ranges may be applied to the FEA method in relation to the second 3D structure: Young's Modulus for the bone: a Young's Modulus value approximately in the range of 5000 MPa to 15000 MPa; Poisson ratio for bone: a Poisson ratio value approximately in the range of 0.2 to 0.4. When the first 3D structure is the soft-tissue, the following ranges may be applied to the FEA method in relation to the first 3D structure: Young's Modulus for the soft-tissue: a Young's Modulus value approximately in the range of 0.1 MPa to 1 MPa; Poisson ratio for soft-tissue: a Poisson ratio value approximately in the range of 0.45 to 0.5; Viscoelasticity for soft-tissue: a viscoelasticity value approximately in the range of 30% to 95%.
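By way of illustration only, the parameter ranges above may be captured as a small lookup table for configuring an FEA run. Only the numeric ranges come from the text; the dictionary layout, key names and the `in_expected_range` helper are assumptions for this sketch.

```python
# Illustrative FEA material-parameter ranges taken from the text above.
# The table structure and key names are assumptions for this sketch.
FEA_MATERIAL_RANGES = {
    "bone": {                                    # second 3D structure (skull/bone)
        "youngs_modulus_mpa": (5000.0, 15000.0),
        "poisson_ratio": (0.2, 0.4),
    },
    "soft_tissue": {                             # first 3D structure
        "youngs_modulus_mpa": (0.1, 1.0),
        "poisson_ratio": (0.45, 0.5),
        "viscoelasticity": (0.30, 0.95),         # expressed as a fraction
    },
}

def in_expected_range(material, prop, value):
    """Check whether a measured subject-specific value falls inside the
    illustrative range; as the text notes, values outside the range may
    still be valid for a particular subject."""
    low, high = FEA_MATERIAL_RANGES[material][prop]
    return low <= value <= high
```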
[00166] Although various ranges for Young's modulus, Poisson ratio, and viscoelasticity of the first and/or second 3D structures have been described herein, this is by way of example only and the invention is not so limited; it is to be appreciated by the skilled person that other values for the Young's modulus, Poisson ratio, and viscoelasticity of the first and/or second 3D structures of the bodily portion of the subject may be measured and/or determined and used in the FEA method when modelling the first and second 3D structures/meshes, where such values for Young's modulus, Poisson ratio, and viscoelasticity may be within the above ranges and/or even outside the above ranges as measured/determined for that particular subject or the bodily portion of that particular subject and/or as the application demands. Although the input parameters and material properties for the FEA method include Young's modulus, Poisson ratio and viscoelasticity, this is by way of example only and the invention is not so limited; it is to be appreciated by the skilled person that one or more other properties of the materials associated with the first and/or second 3D structures and/or implant materials may be input and/or applied in the FEA method in addition to the Young's modulus, Poisson ratio and viscoelasticity associated with the first and/or second 3D structures and the like as the application demands.
[00167] The FEA method receives as input data representative of the proposed desired 3D photogrammetry mesh, the first 3D structural mesh and second 3D structural mesh and structural properties of at least the first 3D structure. Taking the proposed desired 3D photogrammetry mesh, and applying the FEA method (e.g. in an iterative manner) to deform the second 3D structural mesh (e.g. bone mesh) and cause deformation of the first 3D structural mesh based on the structural properties of the first and second 3D structures, the resulting deformed second 3D structural mesh's displaced position may be calculated along with a deformed first 3D structural mesh based on the structural properties of the first and second 3D structures. This may be iterated until the deformed first 3D structural mesh coincides or meets the desired 3D photogrammetry mesh within a certain error threshold or stopping criterion. During the FEA method, boundary conditions may be set in two directions, outwardly from the second 3D structure to the first 3D structure and longitudinally (e.g. for facial 3D model, these two directions may be from the back of the head facing forwards and from the top of the head towards the floor). Once the FEA method has reached a solution, it may be determined whether the proposed desired 3D model has been achieved within a certain minimum error threshold or other stopping criterion.
[00168] For more accuracy in the FEA method, as an option, the structural properties of the implant material may also be taken into account during the FEA method in relation to the displacement of the second 3D structure. This is because the volume associated with the displacement of the second 3D structure forms the implant volume. Additionally or alternatively as an option, should the second 3D structure be softer than bone or be a combination of bone and cartilage/muscle (e.g., the underlying second 3D structure may be ribs/muscle/cartilage of the chest, whilst first 3D structure comprises the pectoral muscle and/or subcutaneous fat and soft-tissues associated with a breast) or have similar properties to that of the first 3D structure (e.g., the first 3D structure is soft-tissue and the second 3D structure is also soft-tissue underlying the first 3D structure (i.e. a calf implant between calf muscles)), then the structural properties of the second 3D structure may be taken into account by the FEA method as both first and second 3D structures may be deformed or compressed when an implant is inserted therebetween.
[00169] Although FEA is described herein, this is by way of example only and the invention is not so limited, it is to be appreciated by the skilled person that FEA is one of many suitable analysis/modelling/processing techniques that may be applied such as, without limitation, for example mass-spring analysis, one or more ML models trained to estimate feasibility of adjustments to generate a desired 3D model from the original 3D model, and/or any other type of process or technique for estimating feasibility of adjustments and the like, modifications thereto, combinations thereof, and/or as the application demands.
[00170] Although in this example the 3D model adjustment component 311 was described as using an FEA methodology, this is by way of example only and the invention is not so limited; it is to be appreciated by the skilled person in the art that any other suitable methodology may be applied such as, without limitation, for example FEA, mass-spring model/analysis, volumetric analysis, ML modelling and/or any other type of numerical or analysis methodology suitable for determining whether one or more adjustments to the 3D model 309a are feasible or not given the structural properties of the bodily portion of the subject and whether the resulting adjustments can be supported by implants and/or other cosmetic procedures and the like, combinations thereof, modifications thereto and/or as the applications demand. The type of methodology may depend on the application requirements and/or computation speed, e.g. real-time computation may require lower-complexity modelling/analysis for determining the feasibility of deformations/manipulations to the 3D photogrammetry mesh, whereas non-real-time computation may allow higher-complexity modelling/analysis such as FEA and/or other more accurate modelling of whether the bodily portion of the subject and the like may be adjusted as requested. That said, lower-complexity modelling/analysis may also be accurate enough to enable determination of the feasibility of the adjustments and the like.
[00171] Alternatively or additionally, the 3D model adjustment component 311 may be further modified to use a mass-spring model/analysis methodology instead of using the FEA methodology. For example, the mass-spring model (MSM) approach is based on constructing a framework of points connecting from the second 3D structural mesh (e.g. skull/bone mesh) to corresponding points on the first 3D structural mesh (e.g. the skin surface, or original 3D photogrammetry mesh) using a system of damped springs. Each point is attributed a mass, so that as deformations are made to the first 3D structural mesh (e.g. the skin surface) moving the connected points of the first 3D structural mesh, the corresponding connected points of the second 3D structural mesh also move accordingly, thereby deforming the second 3D structural mesh. Once the first 3D structural mesh has been deformed to match or substantially match the adjustments of the desired 3D photogrammetry mesh, the resulting set of adjustments may be determined to be feasible and the desired 3D model 313 may be displayed to the user for further adjustment or finalisation.
[00172] The MSM method uses the second 3D structural mesh (e.g. bone surface mesh) and the first 3D structural mesh (e.g. skin surface mesh) to derive an evenly distributed system of points therebetween that will be used to attach springs between the second 3D structural mesh and first 3D structural mesh (e.g. bone and skin surfaces). In effect, the MSM methodology approximates/simulates the anisotropic behaviour of the first 3D structure (e.g. soft tissue) and/or deformation of the first 3D structural mesh using suitable damping values for the springs whilst ensuring the volume between the first 3D structural mesh and second 3D structural mesh is maintained. As the first 3D structural mesh is deformed (e.g. towards the desired 3D photogrammetry mesh) the spring constraints on each of the points connected to the second 3D structural mesh will ensure that these points connected to the bone surface deform and move with the deformed first 3D structural mesh. Once the first 3D structural mesh coincides or meets the desired 3D photogrammetry mesh, the desired adjustments may be determined to be feasible and applied to the 3D model 309a to form the desired 3D model 313. However, if the MSM methodology cannot reach the minimum error threshold, then one or more of the adjustments may be deemed to be infeasible and not applied to the desired 3D model 313.
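A single damped-spring update of the kind described above may be sketched as follows. This is an illustrative sketch only, assuming pre-paired point sets on the two meshes and explicit time integration; the function signature, constants and integration scheme are assumptions, not the system's actual solver.

```python
import numpy as np

def msm_step(skin_pts, bone_pts, velocities, rest_lengths,
             stiffness=50.0, damping=5.0, mass=1.0, dt=0.01):
    """One damped mass-spring update: each bone point is pulled along the
    spring connecting it to its paired skin point, so that deformations
    of the skin surface propagate to the underlying structure."""
    delta = skin_pts - bone_pts
    dist = np.linalg.norm(delta, axis=1, keepdims=True)
    direction = np.divide(delta, dist, out=np.zeros_like(delta), where=dist > 0)
    stretch = dist - rest_lengths[:, None]           # extension beyond rest length
    force = stiffness * stretch * direction - damping * velocities
    velocities = velocities + (force / mass) * dt    # semi-implicit Euler
    bone_pts = bone_pts + velocities * dt
    return bone_pts, velocities
```

Iterating this step after displacing the skin points lets the bone points settle so that each spring returns towards its rest length, approximating how the underlying mesh follows the deformed surface.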
[00173] For example, the MSM methodology may be applied in the 3D model adjustment system 310 of figure 3b, which may also be used in the 3D visualisation systems 100 and 130 of figures 1a to 1h, and/or the 3D visualisation and implant generation system 200 of figure 2 and the like. In the 3D model adjustment system 310, the 3D model adjustment/manipulation apparatus or component 311 may receive the 3D model 309a of the head of the subject, and display it to a user. The 3D model 309a is a realistic and anatomically accurate representation of the head of the subject. The 3D model adjustment apparatus 311 receives adjustment inputs from the user and controls how the face of the 3D model 309a may be adjusted in a manner that is cosmetically feasible. This is achieved by controlling how the 3D photogrammetry mesh 302d (or first 3D structural mesh 301d) is deformed from user inputs by a suite of tools that allow the user or operator to adjust the 3D model 309a of the face to a desired shape of the face. The suite of tools may include facial manipulation operations (e.g. bodily portion manipulation operations) that may be used to pull, push, and deform the shape of the surface of the face to a desired shape. The MSM methodology may be applied to connect points on the second 3D structural mesh 301e with corresponding points on the first 3D structural mesh 301d (or 3D photogrammetry mesh 302d), which are fitted with the spring constraints and the like. Thus, whilst the user is adjusting the 3D model 309a and hence deforming the 3D photogrammetry mesh 302d and/or first 3D structural mesh 301d, the second 3D structural mesh 301e is likewise being deformed as described above with respect to the MSM methodology.
Thus, once the user has finished with the adjustments and the 3D photogrammetry mesh 302d has been adjusted into a desired 3D photogrammetry mesh, likewise with the corresponding first 3D structural mesh, then the deformed second 3D structural mesh may be used to determine the volumes between the deformed second 3D structural mesh and the original second 3D structural mesh for use in creating/making the required implant volumes.
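The implant volume described here, i.e. the space between the original and deformed second 3D structural mesh, may be estimated from closed triangulated meshes via the divergence theorem, as sketched below. The function names are assumptions for this sketch, and it treats the implant volume simply as the change in enclosed volume between the two meshes (a real pipeline would also need to localise the volume per implant site).

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Signed enclosed volume of a closed triangulated mesh: the sum of
    signed tetrahedra formed by each outward-oriented face and the origin."""
    tri = vertices[faces]                                   # (n_faces, 3, 3)
    signed = np.einsum('ij,ij->i', tri[:, 0], np.cross(tri[:, 1], tri[:, 2]))
    return float(signed.sum()) / 6.0

def implant_volume(original_vertices, deformed_vertices, faces):
    """Volume between the deformed and original second 3D structural mesh,
    i.e. the volume the implant must fill (assumes shared face topology)."""
    return (mesh_volume(deformed_vertices, faces)
            - mesh_volume(original_vertices, faces))
```

For example, uniformly scaling a closed mesh by a factor of two increases its enclosed volume eightfold, and the difference would be reported as the implant volume.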
[00174] In another example, rather than using iterative or simulation/modelling techniques to calculate how to deform the second 3D structural mesh (e.g. bone/skull mesh) given a desired 3D photogrammetry mesh (e.g. deformed skin surface mesh), i.e. calculating the deformation of the second 3D structural mesh after the desired 3D photogrammetry mesh has been formed, changes to the second 3D structural mesh may be made at the same time as changes to the 3D photogrammetry mesh until the desired 3D photogrammetry mesh is achieved.
[00175] In another example, a volumetric approach may be performed during operation of the 3D model adjustment system 310 of figure 3b in which, when a section on the 3D photogrammetry mesh 302d of the 3D model 309a of the bodily portion of the subject (e.g. a head model) is selected for editing, a ray may be cast from the selected section that locates the nearest portion of the second 3D structural mesh (e.g. bone mesh), and then the volume of that affected portion of the second 3D structural mesh (e.g. the bone structure) is deformed.
[00176] For example, in the 3D model adjustment system 310, the 3D model manipulation apparatus 311 may receive the 3D model 309a of the head of the subject, and display it to a user. The 3D model 309a is a realistic and anatomically accurate representation of the head of the subject. The 3D model manipulation apparatus 311 receives adjustment inputs from the user and controls how the face of the 3D model 309a may be adjusted in a manner that is cosmetically feasible. This is achieved by controlling how the 3D photogrammetry mesh 302d (or first 3D structural mesh 301d) is deformed from user inputs by a suite of tools that allow the user or operator to adjust the 3D model 309a of the face to a desired shape of the face. The suite of tools may include facial manipulation operations (e.g. bodily portion manipulation operations) that may be used to pull, push, and deform the shape of the surface of the face to a desired shape. The volumetric methodology as described above may be applied whilst the user is adjusting the 3D model 309a: as the user deforms the 3D photogrammetry mesh 302d and/or first 3D structural mesh 301d, the second 3D structural mesh 301e is likewise deformed. For example, when a user attempts to make a deformation to the 3D photogrammetry mesh 302d using the suite of tools including bodily portion manipulation operations, a ray is cast from the position of the scene's camera to the point on the 3D photogrammetry mesh 302d being deformed, and projected from this point on the 3D photogrammetry mesh along a ray vector towards the second 3D structural mesh (e.g. bone mesh). If this ray vector intersects with the second 3D structural mesh, the mesh points within range of the intersection whose normals point towards the camera are displaced by the same amount as the deformation to the 3D photogrammetry mesh 302d.
This may be iterated as the user selects and deforms further portions of the 3D photogrammetry mesh 302d, where the deformed second 3D structural mesh is further deformed accordingly. The region of points around where the ray vector intersects the deformed second 3D structural mesh may be adjusted to include a larger or smaller region of points that are adjusted based on the deformation. Alternatively or additionally, the region of points around where the ray vector intersects may have a gradient fall-off on the amount of distance or deformation that is applied to those points of the deformed second 3D structural mesh. The gradient fall-off may be applied by, instead of moving all the points in the region by the same deformation distance as on the 3D photogrammetry mesh 302d, scaling each point's deformation distance in proportion to the gradient fall-off value. Thus, once the user has finished with the adjustments and the 3D photogrammetry mesh 302d has been adjusted into a desired 3D photogrammetry mesh that is cosmetically feasible, the resulting desired 3D model 313 may be displayed to the user for verification and/or output as data representative of a desired 3D model for further processing. For example, the further processing may include using the original and deformed second 3D structural meshes to determine the volumes between the deformed second 3D structural mesh and the original second 3D structural mesh for use in creating/making the required implant volumes.
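The ray-cast displacement with gradient fall-off described above may be sketched as follows, assuming mesh points and normals are given as NumPy arrays and the ray intersection point has already been found. The linear fall-off profile, the radius and the normal-facing test used here are illustrative assumptions.

```python
import numpy as np

def deform_bone_region(bone_points, bone_normals, hit_point, camera_direction,
                       displacement, radius=2.0):
    """Displace bone-mesh points near a ray intersection: only points within
    `radius` of the hit point whose normals face the camera are moved, with
    the displacement scaled by a linear gradient fall-off with distance."""
    points = bone_points.copy()
    distances = np.linalg.norm(points - hit_point, axis=1)
    facing_camera = bone_normals @ -camera_direction > 0.0   # normal towards camera
    falloff = np.clip(1.0 - distances / radius, 0.0, 1.0)    # 1 at hit, 0 at edge
    mask = facing_camera & (distances < radius)
    points[mask] += falloff[mask, None] * displacement
    return points
```

For a point at the intersection the full displacement is applied, a point half-way to the radius receives half the displacement, and points outside the radius or facing away from the camera are left unmoved.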
[00177] Alternatively, rather than using iterative or simulation/modelling techniques to calculate how to deform the second 3D structural mesh (e.g. bone/skull mesh) given a desired 3D photogrammetry mesh (e.g. deformed skin surface mesh) as described with reference to the 3D model adjustment component 311 in figure 3b, which calculates deformation of the second 3D structural mesh after receiving proposed adjustments to form a desired 3D photogrammetry mesh, changes to the second 3D structural mesh may be made at the same time as changes to the 3D photogrammetry mesh until the desired 3D photogrammetry mesh is achieved. Although in figure 3b the first 3D structural mesh may be deformed to meet the desired 3D photogrammetry mesh, this is by way of example only and the invention is not so limited, it is to be appreciated by the skilled person that these and/or other techniques may be applied in which, when the 3D photogrammetry mesh (or first 3D structural mesh) is deformed in figure 3b by the user until a desired 3D photogrammetry mesh is achieved, corresponding deformations may be made to the second 3D structural mesh to generate a deformed second 3D structural mesh, which can be used to determine whether the requested or desired adjustments are feasible and the resulting desired 3D model 313 is feasible in relation to supporting one or more implants or other cosmetic alterations/applications.
[00178] Once a set of feasible adjustments has been determined by the 3D model adjustment component 311 in relation to user input/requests, the desired 3D model 313 of the head of the subject may include at least data representative of: the 3D medical model including the first 3D structure 301a and corresponding first 3D structural mesh 301d, the second 3D structure 301c and the corresponding second 3D structural mesh 301e; and the desired 3D photogrammetry data including the desired 3D photogrammetry mesh 302f and the corresponding desired texture mapping 302g. The desired 3D model 313 may be used by the 3D visualisation and/or implant generation systems 100, 130 and/or 200 of figures 1a to 1h and/or 2 for visualising and/or generating one or more implants that may be applied to the subject to achieve the desired face shape.
[00179] Figure 4a is a schematic diagram illustrating an example implant generation system 400 for use with the 3D visualisation system 100 or 130 of figures 1a to 1h and/or the 3D visualisation and implant system 200 of figure 2. The implant generation system 400 includes a 3D model component 402, an implant analysis component 404 and an implant generation component 406 connected together. The 3D model component 402 is configured to obtain a 3D model of a bodily portion of the subject (e.g., a face, chest or lower leg, etc.), and provide a 3D model of a desired bodily portion of the subject to the implant analysis component 404. The 3D model component 402 may be based on any of the 3D visualisation systems 100 or 130 of figures 1a to 1h, the 3D visualisation and implant system 200 of figure 2, and/or the 3D generation, visualisation and adjustment systems 300 and 310 of figures 3a and 3b. The 3D model of the desired bodily portion also includes a 3D model of the underlying structures (e.g. soft tissue and bone structures) of the bodily portion of the subject. The implant analysis component 404 processes the 3D model of the desired bodily portion of the subject to analyse the original underlying structures of the subject and adapt these to conform the original 3D model of the bodily portion of the subject to the 3D model of the desired bodily portion of the subject. Differences between the original underlying structures and the adapted structures may be used to form one or more volumes. From this adaptation one or more volumes are determined and/or generated between the adapted and original underlying structures, and data representative of the one or more volumes is output for use in generating implants associated with the determined one or more volumes.
[00180] In operation, the 3D model component 402 may receive a 3D medical model of the bodily portion of the subject with underlying 3D structures of the bodily portion of the subject, and 3D photogrammetry data of the bodily portion of the subject defining an external appearance of the bodily portion of the subject. As described with reference to figures 1a to 3b, the 3D model component 402 may be configured to transform and superimpose the 3D photogrammetry onto the 3D medical model to form a 3D model of the bodily portion of the subject that is medically accurate whilst also realistically representing the external appearance of the bodily portion of the subject. The resulting 3D model may include data representative of the 3D medical model and the transformed 3D photogrammetry that has been superimposed thereon. The 3D model component 402 may be further configured to control adjustment of one or more regions of the 3D photogrammetry of the 3D model of the bodily portion of the subject to a desired shape or size (e.g., adjust the cheek shape of the face, each breast of the chest or calf of a lower leg etc.). Once a desired shape or size of the bodily portion has been achieved, a 3D model representing the desired bodily portion may be output to the implant analysis component 404. The 3D model of the desired bodily portion of the subject may include data representative of at least the adjusted 3D photogrammetry representing the desired bodily portion of the subject and the 3D medical model of the bodily portion of the subject.
[00181] The implant analysis component 404, on receiving the 3D model of the desired bodily portion of the subject, may perform an analysis (e.g. modelling or simulation such as, without limitation, for example finite element analysis/modelling, machine learning (ML) models, and/or spring-mass analysis/modelling or other modelling, simulation and/or numerical analysis techniques) that processes and adapts the underlying 3D structures of the 3D medical model of the desired bodily portion of the subject to conform with the 3D photogrammetry of the 3D model of the desired bodily portion of the subject. From the results of this adaptation, the implant analysis component 404 is further configured to determine data representative of one or more volumes between the adapted underlying 3D structures of the 3D model that are required to realistically conform the 3D medical model of the bodily portion of the subject to the 3D photogrammetry data of the 3D model of the desired bodily portion of the subject. Data representative of the one or more volumes may be output for controlling an implant generation process for generating one or more implants conforming to the determined one or more volumes.
[00182] The implant generation component 406 receives data representative of one or more volumes for use in generating or creating one or more implants for the bodily portion of the subject. The data representative of the one or more volumes may be used to control the manufacture of one or more corresponding implants by an implant manufacture process. For example, the data representative of the one or more volumes may be used to direct a 3D printer to print, using medical printing materials, and output one or more medical implants for use in reconstructive and/or cosmetic surgery on the bodily portion of the subject. The reconstructive or cosmetic surgery performed on the subject using said implants may result in the subject achieving the external appearance of the desired bodily portion, resulting in fewer or no corrective surgeries and the like.
[00183] Figure 4b is a flow diagram illustrating an example implant generation process 410 for generating one or more implants for use in reconstructive or cosmetic surgery in relation to the subject. The implant generation process 410 may include one or more of the following steps of: [00184] In step 412, obtaining a 3D model of a bodily portion of the subject. The 3D model may include data representative of: 3D photogrammetry data representative of the bodily portion of the subject and defining a desired external appearance of the bodily portion of the subject, and a 3D medical model representative of the bodily portion of the subject. The 3D medical model includes data representative of a first 3D structure and a second 3D structure underlying the first 3D structure in relation to the bodily portion of the subject. For example, the first 3D structure may be soft tissue structures such as, without limitation, for example skin and muscle, whilst the second 3D structure underlying the first 3D structure may be bone, cartilage, or skull underlying or supporting the first 3D structure of skin and muscle above. Alternatively, the first 3D structure may be a first soft tissue structure, e.g. skin, subcutaneous fat, muscle, and/or cartilage structures, and the second 3D structure may be a second soft tissue structure underlying the first soft tissue structure, e.g. further muscles, cartilage, or bone structures.
[00185] In step 414, determining one or more volumes generated between the 3D photogrammetry data of the desired bodily portion of the subject and the second 3D structure taking into account the structural properties of at least the first 3D structure.
[00186] For example, the determination may include processing the 3D model of the desired bodily portion to determine the implant volumes required to achieve the desired bodily portion of the subject. This processing may include adjusting the original first and second 3D structures, taking into account the structural properties of at least the first 3D structure and/or second 3D structure, until the outer surface of the first 3D structure substantially coincides with the outer surface of the desired bodily portion, e.g. the desired 3D photogrammetry data. As an example, the first 3D structure may represent the soft facial tissue and the second 3D structure may represent the skull or jaw bone underlying the soft facial tissue. In another example, this processing may include an iterative algorithm configured to iteratively adjust the original first and second 3D structures taking into account the structural properties of at least the first 3D structure until the outer surface of the first 3D structure substantially coincides with the outer surface of the desired bodily portion. For example, the iterative algorithm may be configured such that adjustments to the second 3D structure (e.g. bone of the bodily portion such as, without limitation, for example skull, cheek or jaw bone) may be made whilst changes to the first 3D structure (e.g. soft tissue of the bodily portion such as, without limitation, for example facial soft tissue) due to these adjustments are modelled based on the structural properties of the first 3D structure and the adjusted second 3D structure. This process of adjusting the second 3D structure and modelling the effect on the first 3D structure (e.g. compression, stretching, and elasticity of the soft tissues) may be iterated or simulated until the outer surface of the first 3D structure substantially coincides with the outer surface of the 3D model of the desired bodily portion of the subject.
Based on these adjustments, the final adjusted second 3D structure may be compared with the original second 3D structure, where one or more volume shapes may have been created to achieve the required transformation of the first 3D structure. For example, one or more volumes between the adjusted second 3D structure and the original second 3D structure may be calculated based on determining one or more volume shapes resulting from differences between the adjusted second 3D structure and the original second 3D structure.
[00187] Thus, the first and second 3D structural meshes may be iteratively adjusted whilst modelling or simulating the effects of each of the adjustments on the first 3D structure taking into account the properties of the first 3D structure (e.g. soft tissue elasticity, compressibility, stretching and the like) until the first 3D structural mesh is sufficiently aligned or coincides with the desired 3D photogrammetry mesh of the desired bodily portion of the subject. Based on these adjustments, one or more volumes may be created that achieve the required transformation of the first 3D structural mesh to substantially coincide with the 3D photogrammetry mesh of the desired bodily portion. One or more volumes between the adjusted second 3D structural mesh and the original second 3D structural mesh may be calculated based on determining one or more volume shapes resulting from differences between the final adjusted second 3D structural mesh and the original second 3D structural mesh.
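As a loose, purely illustrative sketch of this iterative adjustment (not the claimed method; the function and parameter names are hypothetical and the meshes are reduced to 1-D height fields for brevity), the second 3D structure may be nudged until the modelled outer surface of the first 3D structure coincides with the desired photogrammetry surface:

```python
import numpy as np

def derive_bone_adjustment(target, bone, tissue_thickness,
                           gain=0.5, tol=1e-4, max_iter=500):
    """Hypothetical 1-D stand-in for the iterative mesh adjustment.

    target           -- desired outer-surface heights (3D photogrammetry mesh)
    bone             -- original second-3D-structure heights (e.g. bone)
    tissue_thickness -- resting thickness of the first 3D structure (soft tissue)
    gain             -- fraction of the residual applied per iteration, a crude
                        proxy for modelling the soft-tissue response
    """
    adjusted = bone.astype(float).copy()
    for _ in range(max_iter):
        skin = adjusted + tissue_thickness   # tissue modelled as riding on bone
        error = target - skin                # mismatch with desired surface
        if np.max(np.abs(error)) < tol:      # stopping criterion
            break
        adjusted += gain * error             # deform the second structure
    # positive differences from the original bone are candidate implant offsets
    return adjusted, np.clip(adjusted - bone, 0.0, None)
```

The per-vertex offsets returned (adjusted minus original second structure) are the 1-D analogue of the volume shapes described above.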
[00188] In step 416, outputting data representative of the determined one or more volumes for controlling the manufacture of one or more corresponding implants for the subject. The determined one or more volumes may be used by a manufacturing process to create one or more medical implants corresponding to said one or more volumes. These may then be applied during reconstructive and/or cosmetic surgery to the corresponding locations on the bodily portion of the subject to achieve the desired bodily portion of the subject. For example, the data representative of the one or more volumes may be output and made into one or more implants for the subject by, without limitation, for example one or more manufacturing techniques, 3D printing techniques, and/or any other implant generation or creation process configured for creating the one or more implants for the subject. For example, the data representative of the one or more volumes may be output to a 3D printer for printing, using medical grade materials and under medical conditions, the corresponding one or more implants for the subject. In another example, the data representative of the one or more volumes may be transmitted to a suitable manufacturer of implants, and used to manufacture the one or more implants for the subject. Although several techniques are described for manufacturing the one or more implants, this is by way of example only and the invention is not so limited, it is to be appreciated by the skilled person that the data representative of the one or more volumes may be used for creating or sculpting using any suitable technique for creating said one or more implants for the subject and/or as the application demands. These implants may then be applied during reconstructive or cosmetic procedures performed on the subject to achieve the desired bodily portion.
[00189] Figure 4c is a flow diagram illustrating an example implant analysis process 420 for use in the implant generation system 400 of figure 4a. The implant analysis process 420 may be used to analyse the differences between the desired bodily portion and the original bodily portion of the subject for determining one or more volumes for use in generating one or more corresponding implants according to the one or more volumes. In this example, the 3D model of the desired bodily portion of the subject may include at least data representative of: a 3D photogrammetry mesh representative of the desired bodily portion of the subject and defining the 3D shape of the desired external appearance of the bodily portion of the subject, and the 3D medical model representative of the bodily portion of the subject. In this case, the 3D medical model further includes at least data representative of a first 3D structure and a second 3D structure underlying the first 3D structure in relation to the bodily portion of the subject, and a first 3D structural mesh representing the outer surface of the first 3D structure and a second 3D structural mesh representing the outer surface of the second 3D structure. The implant analysis process 420 may include the following steps of: [00190] In step 421, receiving the 3D model including at least a 3D photogrammetry mesh of the desired bodily portion of the subject, a first 3D structural mesh of the bodily portion of the subject, and a second 3D structural mesh of the bodily portion of the subject underlying the first 3D structural mesh.
[00191] In step 422, performing one or more adjustments to the second structural mesh to deform the first 3D structure and the first 3D structural mesh towards the 3D photogrammetry mesh of the desired bodily portion of the subject based on the structural properties of the first 3D structure. For example, this may include modelling the structural properties of the first 3D structure as the first and second 3D structural meshes are adjusted or modified.
[00192] For example, this modelling may be performed using FEA, MSM and/or volumetric analysis, machine learning (ML) algorithms, and any other analytical tool/methodology and the like as described with reference to figures 1a to 4a for generating implant volumes and/or implants given the received 3D model.
[00193] In step 423, it is determined whether the deformed/adjusted first 3D structural mesh matches or is sufficiently aligned/coincident with the 3D photogrammetry mesh of the desired bodily portion of the subject. If this is the case (e.g. 'Y'), then the process 420 proceeds to step 424, otherwise (e.g. 'N') the process 420 proceeds back to step 422 for further iterative adjustments and the like. This iterative loop may continue until a stopping criterion is achieved such as, without limitation, for example a maximum number of iterations and/or an error threshold on how closely the first 3D structural mesh matches the 3D photogrammetry mesh of the desired bodily portion of the subject. However, if the first 3D structural mesh cannot converge towards the 3D photogrammetry mesh of the desired bodily portion of the subject within the tolerances associated with a match (or being sufficiently coincident), then this may indicate that the desired bodily portion may not be achievable for the subject, but the result may be used to display the best matching or closest bodily portion to the desired bodily portion of the subject. Once confirmed, the process 420 may proceed to step 424. [00194] In step 424, determining one or more volume shapes based on the differences between the adjusted second 3D structural mesh and the original second 3D structural mesh.
[00195] In step 425, outputting data representative of the one or more volume shapes for controlling manufacture of one or more corresponding implants.
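The step 423 match test can be expressed, under the simplifying assumption that the deformed first 3D structural mesh and the desired 3D photogrammetry mesh share vertex correspondence, as an RMS-distance threshold (the function name and tolerance below are illustrative only, not part of the described method):

```python
import numpy as np

def is_sufficiently_aligned(deformed_first_mesh, desired_photo_mesh, tol=0.5):
    """Return True when the RMS vertex distance between the two meshes
    (assumed to share vertex ordering) falls below `tol`, i.e. the
    step-423 'sufficiently aligned/coincident' condition is met."""
    diffs = deformed_first_mesh - desired_photo_mesh
    rms = np.sqrt(np.mean(np.sum(diffs ** 2, axis=1)))
    return rms < tol
```

In practice the tolerance would be chosen to reflect the surgical tolerances associated with a match, as described in step 423.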
[00196] Figure 4d is a schematic diagram illustrating another example implant analysis system 430 that may be applied to the desired 3D model that may be output from the 3D visualisation systems or components 100, 125, 130, 200 and/or processes 110, 115, 140, 150, 160 as described with reference to figures 1a to 2 and/or output from the 3D visualisation and adjustment components 300 and 310 of figures 3a and 3b. In this example, the desired 3D model is based on the desired 3D model 313 that is output from the 3D model adjustment system 311 of figure 3b. The implant analysis system 430 includes an implant analysis component 431 that receives the desired 3D model 313 and outputs data representative of one or more volumes or geometries that may be used to design/manufacture corresponding implants 433a-433b. The implant analysis component 431 receives the desired 3D model 313 of the head of the subject, which may include at least data representative of: the 3D medical model including the first 3D structure 301a and corresponding first 3D structural mesh 301d, the second 3D structure 301c and the corresponding second 3D structural mesh 301e; and the desired 3D photogrammetry data including the desired 3D photogrammetry mesh 302f and the corresponding desired texture mapping 302g.
[00197] In this example, the implant analysis component 431 uses FEA or any other suitable methodology (e.g., mass-spring model/analysis, volumetric analysis, ML modelling/algorithms and the like) to determine the one or more volumes. In this example, an FEA methodology similar to that described with reference to figure 3b may be used. For example, the structural properties of the skin and soft-tissues are well known (e.g. the elastic and viscous properties of skin and soft tissue), where skin can be deformed more than soft-tissue. Alternatively or additionally, the structural properties of the first and/or second 3D structures of the desired 3D model 313 may be estimated from the medical scan data of the subject (e.g. CT/MRI scans) that was used to generate the first and second 3D structures and corresponding first and second 3D meshes. Thus, deformations of the first 3D structure may be modelled based on deformations made to the second 3D structure. For example, the second 3D structure and mesh 301c and 301e (e.g., bone) may be deformed and the effects that the deformation has on the first 3D structure and first 3D structural mesh 301a and 301d modelled, to determine whether the resulting deformed first 3D structural mesh (e.g. the deformed skin) aligns with the desired 3D photogrammetry mesh 302f. This deformation/modelling may be repeated iteratively by continuing to deform the second 3D structure and mesh and modelling the resulting deformation of the first 3D structure and mesh until the deformed first 3D structural mesh is aligned with the desired 3D photogrammetry mesh 302f (e.g. until a stopping criterion is reached such as, without limitation, for example the error between the deformed first 3D structural mesh and the desired 3D photogrammetry mesh reaching an error threshold, or a number of iterations being reached). The implant volumes or geometries 433a and 433b may then be determined, where the volumetric differences between the original second structural mesh 301e (e.g. undeformed bone) and the deformed second structural mesh (e.g. deformed bone) form the one or more implant volumes, shapes or geometries 433a and 433b that are required.
[00198] For example, the desired 3D photogrammetry mesh 302f represents the desired deformation that the soft-tissues of the first 3D structure need to undergo in order for the first 3D structural mesh 301d to align with the desired 3D photogrammetry mesh 302f. The finite element analysis may iteratively adjust the shape of the second structural mesh 301e (e.g. the bone) while at the same time, in each iteration, modelling how the adjustments deform the first 3D structure (e.g. the overlying soft tissues) and corresponding first 3D structural mesh 301d and whether the corresponding deformed first 3D structural mesh is in alignment or fits with the desired 3D photogrammetry mesh 302f. This iterative process continues until alignment of the first 3D structural mesh with the desired 3D photogrammetry mesh 302f is determined to have been reached. This may be determined based on a stopping criterion such as a minimum error threshold calculated between the 3D photogrammetry mesh 302f and the deformed first 3D structural mesh and the like.
[00199] Once alignment is reached, the implant volumes or geometries required are determined by calculating the differences in volume between the deformed second 3D structural mesh and the original second 3D structural mesh 301e. The resulting volumes may be used to define the implant volumes/geometries required. Data representative of the resulting volumes may be output in a format for use in controlling an implant manufacturing process to design and make implants with the required implant volumes and/or geometries.
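The volume difference itself can be computed from closed triangle meshes using the divergence theorem (summing signed tetrahedra against the origin). The sketch below assumes watertight meshes with consistent outward winding and shared face topology; it is illustrative rather than the described method:

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Signed volume of a closed triangle mesh: each face (v0, v1, v2)
    contributes the signed volume of the tetrahedron it forms with the
    origin, i.e. dot(cross(v0, v1), v2) / 6."""
    tri = vertices[faces]                      # shape (n_faces, 3, 3)
    cross = np.cross(tri[:, 0], tri[:, 1])
    return np.einsum('ij,ij->i', cross, tri[:, 2]).sum() / 6.0

def implant_volume(deformed_verts, original_verts, faces):
    """Volume difference between the deformed and original second 3D
    structural meshes (assumed to share the same face array)."""
    return abs(mesh_volume(deformed_verts, faces)
               - mesh_volume(original_verts, faces))
```

A real implementation would typically localise this difference into per-region volume shapes rather than a single scalar, but the scalar form illustrates the calculation.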
[00200] In another example, after the desired 3D photogrammetry mesh 301f has been generated by a user deforming (manipulating/adjusting) the 3D photogrammetry mesh 301b using face manipulation or sculpting tools, the implant volume(s) may be determined based on the desired 3D photogrammetry mesh 301f, the first 3D structural mesh 301d and the second 3D structural mesh 301e. This may be achieved using, without limitation, for example an FEA method to simulate and derive one or more implant volumes given the desired 3D photogrammetry mesh 301f and the originally generated first and second 3D meshes 301d and 301e, as well as a range of soft tissue and/or bone properties associated with the first and second 3D structures. FEA provides the advantage of accurately modelling different materials and structures, which improves the accuracy of the implant volumes from which implants may be created/made. The implant volumes may be determined by FEA in non-real time, i.e. offline, after the desired 3D photogrammetry mesh has been approved and/or finalised.
[00201] The FEA method may be applied to the first and second 3D structural meshes (e.g. skin and bone meshes) to generate a prediction of the required displacement of the second 3D structural mesh (e.g., skull/bone mesh) that causes the first 3D structural mesh to align with the desired 3D photogrammetry mesh, taking into account the structural properties (e.g., soft tissue and/or bone properties) of at least the first 3D structure in relation to the displacement of the second 3D structural mesh. The FEA method may iteratively be performed to displace the second 3D structural mesh whilst deforming the first 3D structural mesh taking into account the properties of at least the first 3D structure with respect to the displacement of the second 3D structural mesh. The FEA method can then be used with the other meshes to derive an implant volume/mesh.
[00202] Input parameters and material properties for the FEA method may include, without limitation, for example Young's modulus of the first and/or second 3D structures, Poisson ratio of the first and/or second 3D structures, viscoelasticity of the first and/or second 3D structures and/or any one or more other suitable input properties/values/boundary conditions associated with the first and/or second 3D structures. For example, when the second 3D structure is the skull/bone, the following ranges may be applied to the FEA method in relation to the second 3D structure: Young's modulus for the bone: a Young's modulus value approximately in the range of 5000 MPa to 15000 MPa; Poisson ratio for bone: a Poisson ratio value approximately in the range of 0.2 to 0.4. When the first 3D structure is the soft-tissue, the following ranges may be applied to the FEA method in relation to the first 3D structure: Young's modulus for the soft-tissue: a Young's modulus value approximately in the range of 0.1 MPa to 1 MPa; Poisson ratio for soft-tissue: a Poisson ratio value approximately in the range of 0.45 to 0.5; Viscoelasticity for soft-tissue: a viscoelasticity value approximately in the range of 30% to 95%.
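The parameter ranges above might be organised as a simple configuration table, with mid-range defaults used when subject-specific measurements are unavailable (the structure and names below are illustrative, not part of the described method):

```python
# Indicative (min, max) ranges taken from the description above.
FEA_MATERIAL_RANGES = {
    "bone": {                       # second 3D structure (e.g. skull/jaw)
        "youngs_modulus_mpa": (5000.0, 15000.0),
        "poisson_ratio": (0.2, 0.4),
    },
    "soft_tissue": {                # first 3D structure (e.g. skin/muscle)
        "youngs_modulus_mpa": (0.1, 1.0),
        "poisson_ratio": (0.45, 0.5),
        "viscoelasticity": (0.30, 0.95),
    },
}

def default_fea_params(material):
    """Mid-range defaults for a material, pending measured subject values."""
    ranges = FEA_MATERIAL_RANGES[material]
    return {name: (lo + hi) / 2.0 for name, (lo, hi) in ranges.items()}
```

As the following paragraph notes, measured subject-specific values may fall inside or outside these ranges and would override such defaults.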
[00203] Although various ranges for Young's modulus, Poisson ratio, and viscoelasticity of the first and/or second 3D structures have been described herein, this is by way of example only and the invention is not so limited, it is to be appreciated by the skilled person that other values for the Young's modulus, Poisson ratio, and viscoelasticity of the first and/or second 3D structures of the bodily portion of the subject may be measured and/or determined and used in the FEA method when modelling the first and second 3D structures/meshes, where such values for Young's modulus, Poisson ratio, and viscoelasticity may be within the above ranges and/or even outside the above ranges as measured/determined for that particular subject or the bodily portion of that particular subject and/or as the application demands. Although the input parameters and material properties for the FEA method include Young's modulus, Poisson ratio and viscoelasticity, this is by way of example only and the invention is not so limited, it is to be appreciated by the skilled person that one or more other properties of the materials associated with the first and/or second 3D structures and/or implant materials may be input and/or applied in the FEA method in addition to the Young's modulus, Poisson ratio and viscoelasticity associated with the first and/or second 3D structures and the like as the application demands.
[00204] The FEA method receives as input data representative of the desired 3D photogrammetry mesh, the first 3D structural mesh and second 3D structural mesh, and the structural properties of at least the first 3D structure. Taking the desired 3D photogrammetry mesh, and applying the FEA method (e.g. in an iterative manner) to deform the second 3D structural mesh (e.g. bone mesh) and cause deformation of the first 3D structural mesh based on the structural properties of the first and second 3D structures, the resulting deformed second 3D structural mesh's displaced position may be calculated along with a deformed first 3D structural mesh. This may be iterated until the deformed first 3D structural mesh coincides or meets the desired 3D photogrammetry mesh within a certain error threshold or stopping criterion. During the FEA method, boundary conditions should be set in two directions, outwardly from the second 3D structure to the first 3D structure and longitudinally (e.g. for a facial 3D model, these two directions may be from the back of the head facing forwards and from the top of the head towards the floor). Once the FEA method has reached a solution, the difference between the deformed second 3D structural mesh (e.g. deformed bone mesh) and the original second 3D structural mesh (e.g. bone mesh) is then used to determine the difference in volume between the deformed second 3D structural mesh and the original second 3D structural mesh. The resulting volume or volumes may be used as the volumetric shapes that form the one or more implants.
[00205] For more accuracy in the FEA method, as an option, the structural properties of the implant material may also be taken into account during the FEA method in relation to the displacement of the second 3D structure. This is because the volume associated with the displacement of the second 3D structure forms the implant volume. Additionally or alternatively as an option, should the second 3D structure be softer than bone or be a combination of bone and cartilage/muscle (e.g., the underlying second 3D structure may be ribs/muscle/cartilage of the chest, whilst first 3D structure comprises the pectoral muscle and/or subcutaneous fat and soft-tissues associated with a breast) or have similar properties to that of the first 3D structure (e.g., the first 3D structure is soft-tissue and the second 3D structure is also soft-tissue underlying the first 3D structure (i.e. a calf implant between calf muscles)), then the structural properties of the second 3D structure may be taken into account by the FEA method as both first and second 3D structures may be deformed or compressed when an implant is inserted therebetween.
[00206] Although FEA is described herein, this is by way of example only and the invention is not so limited, it is to be appreciated by the skilled person that FEA is one of many suitable analysis/modelling/processing techniques that may be applied such as, without limitation, for example mass-spring analysis, one or more ML models trained to estimate implant volumes given a desired 3D model and the original 3D model, and/or any other type of process or technique for estimating implant volumes, modifications thereto, combinations thereof, and/or as the application demands.
[00207] Although in this example the implant analysis component 431 was described as using an FEA methodology, this is by way of example only and the invention is not so limited, it is to be appreciated by the skilled person in the art that any other suitable methodology may be applied such as, without limitation, for example FEA, mass-spring model/analysis, volumetric analysis, ML modelling and/or any other type of numerical methodology or analysis methodology suitable to determine the one or more volumes, combinations thereof, modifications thereto and/or as the applications demand. The type of methodology may depend on the application requirements and/or computation speed, e.g. real-time computation may require lower complexity modelling/analysis for determining feasibility of deformations/manipulations to the 3D photogrammetry mesh, whereas non-real time computation may allow higher complexity modelling/analysis such as FEA and/or other more accurate modelling for determining implant volumes and the like. That said, lower complexity modelling/analysis may also be accurate enough to enable determination of implant volumes and the like, as implants may be reshaped prior to reconstructive operations and the like.
[00208] Alternatively or additionally, the implant analysis component 431 may be further modified to use a mass-spring model/analysis methodology instead of the FEA methodology. For example, the mass-spring model (MSM) approach is based on constructing a framework of points connecting from the second 3D structural mesh (e.g. skull/bone mesh) to corresponding points on the first 3D structural mesh (e.g. the skin surface, or original 3D photogrammetry mesh) using a system of damped springs. Each point is attributed a mass, so that as deformations are made to the first 3D structural mesh (e.g. the skin surface) moving the connected points of the first 3D structural mesh, the corresponding connected points of the second 3D structural mesh also move accordingly, which deforms the second 3D structural mesh. Once the first 3D structural mesh has been deformed to match or substantially match the desired 3D photogrammetry mesh, the resulting deformed second 3D structural mesh is used with the original second 3D structural mesh to determine the one or more volumes required for creating/making the implants.
[00209] The MSM method uses the second 3D structural mesh (e.g. bone surface mesh) and the first 3D structural mesh (e.g. skin surface mesh) to derive an evenly distributed system of points therebetween that is used to attach springs between the second 3D structural mesh and first 3D structural mesh (e.g. bone and skin surfaces). In effect, the MSM methodology approximates/simulates the anisotropic behaviour of the first 3D structure (e.g. soft tissue) and/or deformation of the first 3D structural mesh using suitable damping values for the springs whilst ensuring the volume between the first 3D structural mesh and second 3D structural mesh is maintained. As the first 3D structural mesh is deformed (e.g. towards the desired 3D photogrammetry mesh), the spring constraints on each of the points connected to the second 3D structural mesh ensure that the points connected to the bone surface deform and move with the deformed first 3D structural mesh. Once the first 3D structural mesh coincides or meets the desired 3D photogrammetry mesh, the difference between the deformed second 3D structural mesh (e.g. deformed bone mesh) and the original second 3D structural mesh (e.g. bone mesh) is then used to determine the difference in volume between the two. The resulting volume or volumes may be used as the volumetric shapes that form the one or more implants.
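A minimal damped-spring sketch of the MSM idea (1-D, unit masses, hypothetical constants; a full MSM would use the evenly distributed point system and volume-preservation constraints described above) shows bone points being dragged along as their tethered skin points are moved to the desired positions:

```python
import numpy as np

def msm_settle(skin, bone, rest_length, k=8.0, c=4.0, dt=0.01, steps=3000):
    """Each bone point is tethered to its skin point by a damped spring of
    length `rest_length`. With the skin points held at the desired mesh
    positions, semi-implicit Euler integration settles the bone points at
    skin - rest_length (the deformed second 3D structural mesh)."""
    pos = bone.astype(float).copy()
    vel = np.zeros_like(pos)
    for _ in range(steps):
        stretch = (skin - pos) - rest_length   # spring extension
        acc = k * stretch - c * vel            # spring force minus damping
        vel += acc * dt
        pos += vel * dt
    return pos
```

The difference between the settled positions and the original bone positions is the 1-D analogue of the volume difference used to define the implants.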
[00210] Alternatively, rather than using iterative or simulation/modelling techniques to calculate how to deform the second 3D structural mesh (e.g. bone/skull mesh) given a desired 3D photogrammetry mesh (e.g. deformed skin surface mesh) as described with reference to the implant analysis system 430 in figure 4d, which calculates the deformation of the second 3D structural mesh after the desired 3D photogrammetry mesh has been achieved, changes to the second 3D structural mesh may be made at the same time as changes to the 3D photogrammetry mesh until the desired 3D photogrammetry mesh is achieved. Although in figure 4d the first 3D structural mesh may be deformed to meet the desired 3D photogrammetry mesh, this is by way of example only and the invention is not so limited, it is to be appreciated by the skilled person that these and/or other techniques may be applied in which, when the 3D photogrammetry mesh (or first 3D structural mesh) is deformed in figure 3b by the user until a desired 3D photogrammetry mesh is achieved, corresponding deformations may be made to the second 3D structural mesh to generate a deformed second 3D structural mesh, which can be used to determine volumes for creating/making implants.
[00211] For example, the MSM methodology may be applied in the 3D model adjustment system 310 of figure 3b, which may be used in the 3D visualisation systems 100, 130 of figures 1a to 1h, and/or the 3D visualisation and implant generation system 200 of figure 2 and the like. In the 3D model adjustment system 310, the 3D model manipulation apparatus 311 may receive the 3D model 309a of the head of the subject, and display it to a user. The 3D model 309a is a realistic representation of the head of the subject and anatomically accurate. The 3D model manipulation apparatus 311 receives adjustment inputs from the user and controls how the face of the 3D model 309a may be adjusted in a manner that is cosmetically feasible. This is achieved by controlling how the 3D photogrammetry mesh 302d (or first 3D structural mesh 301d) is deformed from user inputs by a suite of tools that allow the user or operator to adjust the 3D model 309a of the face to a desired shape of the face. The suite of tools may include facial manipulation operations (e.g. bodily portion manipulation operations) that may be used to pull, push, and deform the shape of the surface of the face to a desired shape. The MSM methodology may be applied to connect points on the second 3D structural mesh 301e with corresponding points on the first 3D structural mesh 301d (or 3D photogrammetry mesh 302d), which are fitted with the spring constraints and the like. Thus, whilst the user is adjusting the 3D model 309a and hence deforming the 3D photogrammetry mesh 302d and/or first 3D structural mesh 301d, the second 3D structural mesh 301e is likewise deformed as described above with respect to the MSM methodology.
Thus, once the user has finished with the adjustments and the 3D photogrammetry mesh 302d has been adjusted into a desired 3D photogrammetry mesh, likewise with the corresponding first 3D structural mesh, then the deformed second 3D structural mesh may be used to determine the volumes between the deformed second 3D structural mesh and the original second 3D structural mesh for use in creating/making the required implant volumes.
[00212] In another example, rather than using iterative or simulation/modelling techniques to calculate how to deform the second 3D structural mesh (e.g. bone/skull mesh) given a desired 3D photogrammetry mesh (e.g. deformed skin surface mesh), with the deformation of the second 3D structural mesh calculated after the desired 3D photogrammetry mesh has been achieved, changes to the second 3D structural mesh may be made at the same time as changes to the 3D photogrammetry mesh until the desired 3D photogrammetry mesh is achieved.
[00213] In another example, a volumetric approach may be performed during operation of the 3D model adjustment system 310 of figure 3b in which, when a section on the 3D photogrammetry mesh 302d of the 3D model 309a of the bodily portion of the subject (e.g. a head model) is selected for editing, a ray may be cast from the selected section that locates the nearest portion of the second 3D structural mesh (e.g. bone mesh), and the volume of that affected portion of the second 3D structural mesh (e.g. bone structure) is then deformed.
[00214] For example, in the 3D model adjustment system 310, the 3D model manipulation apparatus 311 may receive the 3D model 309a of the head of the subject, and display it to a user. The 3D model 309a is a realistic representation of the head of the subject and anatomically accurate. The 3D model manipulation apparatus 311 receives adjustment inputs from the user and controls how the face of the 3D model 309a may be adjusted in a manner that is cosmetically feasible. This is achieved by controlling how the 3D photogrammetry mesh 302d (or first 3D structural mesh 301d) is deformed from user inputs by a suite of tools that allow the user or operator to adjust the 3D model 309a of the face to a desired shape of the face. The suite of tools may include facial manipulation operations (e.g. bodily portion manipulation operations) that may be used to pull, push, and deform the shape of the surface of the face to a desired shape. The volumetric methodology as described above may be applied whilst the user is adjusting the 3D model 309a: as the user deforms the 3D photogrammetry mesh 302d and/or first 3D structural mesh 301d, the second 3D structural mesh 301e is likewise deformed. For example, when a user attempts to make a deformation to the 3D photogrammetry mesh 302d using the suite of tools including bodily portion manipulation operations, a ray is cast from the position of the scene's camera to the point on the 3D photogrammetry mesh 302d being deformed, and projected from this point along a ray vector towards the second 3D structural mesh (e.g. bone mesh). If this ray vector intersects with the second 3D structural mesh, mesh points within range of the intersection whose normals point towards the camera are displaced by the same amount as the deformation to the 3D photogrammetry mesh 302d.
This may be iterated as the user selects and deforms further portions of the 3D photogrammetry mesh 3o2d for deformation, where the deformed second 3D structural mesh is further deformed accordingly. The region of points around where the ray vector intersects on the deformed second 3D structural mesh may be adjusted to include a larger region of points or a smaller region of points that are adjusted based on the deformation. Alternatively or additionally, the region of points around where the ray vector intersects may have a gradient fall-off on the amount of distance or deformation that is applied to those points of the deformed second 3D structural mesh. The gradient falloff may be applied by instead of moving all the points in the region of points by the same distance deformation as on the 3D photogrammetry mesh 302d, the deformation distance is proportional to gradient fall off value. Thus, once the user has finished with the adjustments and the 3D photogrammetry mesh 302d has been adjusted into a desired 3D photogrammetry mesh, likewise with the deformed second 3D structural mesh, then the deformed second 3D structural mesh may be used to determine the volumes between the deformed second 3D structural mesh and the original second 3D structural mesh for use in creating/making the required implant volumes.
[oo215] In other examples, ML algorithms and the like trained for adjusting first and second 3D structural meshes taking into account properties of at least the first 3D structural mesh may be used to determine whether adjustments associated with the desired 3D model 313 are feasible and/or to generate one or more implant volumes from the resulting desired 3D model 313 and the like. For example, an ML model may be based on one or more ML models from the group of: a neural network (NN) comprising a plurality of neural network layers, each neural network layer associated with a set of parameters/weights; a convolutional NN (CNN) comprising a plurality of convolutional layers; a transformer-based ML model comprising a first and second transformer encoder/decoder NNs; an encoder/decoder NN; and/or any other suitable ML model capable of being trained to determine whether desired adjustments made are feasible given a first 3D structural mesh, a second 3D structural mesh and a 3D photogrammetry mesh of the desired bodily portion. Training of the ML model may be based on obtaining a training dataset comprising a plurality of training data instances.
Each training data instance comprising data representative of a first 3D structural mesh, second 3D structural mesh, desired 3D photogrammetry mesh, and appropriately feasible deformed first and second 3D structural meshes. For each training iteration of a plurality of training iterations the following may be performed: one or more training data instances (or a batch of training instances) are applied to the ML model, in which the first, second and desired 3D photogrammetry meshes are input, and which outputs whether the deformations are feasible and/or outputs the deformed first and/or second 3D structural meshes; an estimation of a loss is performed based on a difference between the output deformed first and/or second 3D structural meshes and the corresponding first and second deformed 3D structural meshes of each of the one or more training data instances. The set of weights of the ML model may be updated based on the estimated loss. In each subsequent iteration of the plurality of training iterations further one or more training instances (e.g. further batches of training instances) are retrieved for applying to the ML model, estimating the loss and updating the weights of the ML model and the like. Training the ML model may stop once a stopping criterion is reached, e.g. an error threshold is met, or a maximum number of training iterations is reached, or other performance metric associated with the particular type of ML model is met. The trained ML model may then be used to determine one or more volumes for the implants based on the deformed second 3D structural mesh and the original second 3D structural mesh. 
Alternatively, this modelling may be performed using other iterative or numerical analytical tools such as, without limitation, for example finite element analysis, mass-spring models, or other types of numerical analytical tools configured for modelling and/or iteratively adjusting the first and second 3D structural meshes taking into account properties of at least the first 3D structural mesh, the first 3D structure associated with the first 3D structural mesh, and/or the second 3D structure/structural mesh until the first 3D structural mesh is substantially aligned/coincident with the 3D photogrammetric mesh of the desired bodily portion of the subject.
[00216] Figures 5a to 5d are schematic diagrams illustrating an example implant adjustment and analysis processes 500, 510 520 and 530 for use in the 3D visualisation systems and/or implant generation systems 100, 130 and 200 of figures la to 2 or 3D model generation/adjustment systems 300 and 310 of figures 3a or 3b and/or implant generation system 400 and/or processes 410, 420 and/or implant analysis processes 430 of figures 4a-4d and the like. Referring to figure 5a, a cross section of a 3D model 500 of a bodily portion of a subject is illustrated that has a 3D photogrammetry mesh 501 coincident with a first 3D structural mesh 502, which is the outer surface of a first 3D structure 502a-502b, and a second 3D structural mesh 504, which is the outer surface of a second 3D structure 505 underlying the first 3D structure 503a-503b. The first 3D structure 503a-503b may be a soft-tissue structure including, without limitation, for example a skin structure 503a and a soft-tissue or muscle structure 5o3b. The second 3D structure 505 may be, without limitation, for example a bone structure or even another soft-tissue muscle structure overlying a bone/cartilage structure.
[00217] Referring to figure 5b, the 3D model 500 of the bodily portion of the subject may be adjusted to a desired 3D model 510 of the bodily portion of the subject as described with reference to figures ra to 4d. In this example, adjustments A may be made to the 3D photogrammetry mesh 501, which in this case deforms a portion of the 3D photogrammetry mesh outwardly away from the first 3D structural mesh 502 to form a desired 3D photogrammetry mesh 5n. In figure 5b, the 3D model adjustment process(es) as described with reference to figures ra to 4d may be performed (e.g. FEA, mass-spring-analysis, volumetric analysis, or any other analysis/ML technique) to model whether the adjustments A are feasible and allowable such that the requested adjustments results in an achievable deformation of the first and second 3D structural meshes 502 and 504 given the structural properties of the first and second 3D structures 503a-503b, 505 and the like. One feasible adjustments A are input, Figure 5c illustrates the desired 3D model 520 (e.g. the 3D model 500 with the desired feasible adjustments A applied) with finalised desired 3D photogrammetry mesh 511. The desired 3D model 520 of the bodily portion of the subject includes at least data representative of: the 3D medical model including the first 3D structure 503a-503b and corresponding first 3D structural mesh 502, the second 3D structure 505 and the corresponding second 3D structural mesh 504; and the desired 3D photogrammetry data including the desired 3D photogrammetry mesh 5n and the corresponding desired texture mapping (not shown). In figure 5d, the implant analysis process as described with reference to figures la to 4d may be performed (e.g. 
finite element analysis or spring-mass analysis, or any other analysis/ML technique) that determines a deformation of the second 3D structural mesh 504 that causes the first 3D structures 503a-503b and the first 3D structural mesh 502 to be deformed, taking the properties of the first 3D structures 503a-503b into account, to result in a deformed first 3D structural mesh 531 and deformed first 3D structures 533a and 533b, and also a deformed second 3D structural mesh 534. This deformation may be performed iteratively until a deformation of the second 3D structural mesh 534 is found that causes the deformed first 3D structural mesh 531 to align or substantially match the desired 3D photogrammetry mesh 511. Once this is achieved, the deformed second 3D structural mesh 534 is compared with the original second 3D structural mesh 504 to determine whether there are any differences in volumes therebetween. The differences between the second 3D structural meshes 534 and 504 result in volumes or geometries 535 that represent the geometry of the required implants that should be fitted to the bodily portion of the subject during reconstructive surgery to achieve the desired outcome in relation to the desired 3D model 520.
[00218] Although the first and second 3D structures have been described herein with reference to soft-tissues and bone structures, respectively, this is by way of example only and the invention is not so limited, it is to be appreciated by the skilled person that the first and second 3D structures may both be soft-tissue structures such as when implants are required for calf or bicep body portions of the subject, or where implants are required to be placed between two soft tissue structures, as in chest implants and/or pectoral implants. Thus, the soft-tissue of the second 3D structure may be deformed when designing the implant. As well, the properties of the soft-tissue of the second 3D structure and/or proposed implant material may need to be taken into account or modelled when performing the implant analysis process. Alternatively, the first 3D structure may be a soft-tissue structure such as skin, fat, glands and/or muscle-tissue and the second 3D structure may be a combination of soft-tissue and/or bone/cartilage structure, for example, in relation to breast implants. Should the second 3D structure include multiple structures such as soft-tissue and bone structures, then the properties of the soft-tissue structures and/also properties of the implant may also need to be taken into account or modelled during the implant analysis process (e.g. breast implants may be silicon based fluid implants). in this case, the upper layers or soft-tissue layers of the second 3D structure may be adjusted and the properties modelled (and/or the properties of the implants modelled) during the implant analysis process to determine suitable geometries for the implants.
[00219] As described herein, the implant material may be any type of implant material such as, without limitation, for example fluid/semi solid implant materials, soft tissue implant materials, silicon based implant material, rigid/non-rigid implant materials, rigid implant materials and the like. Given this, the above implant analysis processes may be modified to take into account the implant material properties when adjusting the second 3D structure or second 3D structural mesh, where both the first and second 3D structures (and/or implant materials within the adjusted portions of the second 3D structures) are modelled to determine the overall deformation of the first 3D structure and mesh, and determine whether the first 3D mesh aligns with the desired 3D photogrammetry mesh and the like.
[00220] Further modifications to the implant analysis process may include providing indications of the locations of each of the one or more volumes and hence the required locations of the resulting implants. When the implant material is a liquid, fluid or medium such as, without limitation, for example collagen filler, Hyaluronic Acid (HA) fillers, and/or any other type of filler or filler medium and the like, the implant analysis process may output estimated volumes that may assist in determining the volume or amount of implant material (e.g., collagen filler, HA filler or liquid implant fluid and the like) that is to be injected into the subject and also provide an indication of the injection locations and/or depths for placement of the associated implant material (e.g., collagen filler, HA filler or liquid implant fluid and the like).
[00221] The 3D visualisation systems, generation and adjustment processes and/or implant adjustment and analysis processes as described with reference to figures la to 5d may also be applied or used during reconstructive or cosmetic surgery, where modifications may be made to implants and prior to inserting the modifications may be scanned in 3D and placed into the 3D model of the subject to determine whether the first 3D structural mesh still aligns or coincides with the desired 3D model of the subject and/or whether the overall shape of the desired bodily portion of the subject is kept or met. This may be used during surgery to preview what various modifications to the implant will do to the desired 3D model of the subject.
[00222] Figure 6 illustrates a schematic example of a computing system/apparatus 600 for performing any of the methods described herein. The computing system/apparatus shown is an example of a computing device. It will be appreciated by the skilled person that other types of computing devices/systems may alternatively be used to implement the methods described herein, such as a distributed computing system.
[002231 The apparatus (or system) 600 comprises one or more processors 602 (e.g. CPUs). The one or more processors 602 control operation of other components of the system/apparatus 600. The one or more processors 602 may, for example, comprise a general-purpose processor. The one or more processors 602 may be a single core device or a multiple core device. The one or more processors 602 may comprise a Central Processing Unit (CPU) or a graphical processing unit (CPU). Alternatively, the one or more processors 602 may comprise specialized processing hardware, for instance a RISC processor or programmable hardware with embedded firmware. Multiple processors may be included.
[00224] The system/apparatus comprises a working or volatile memory 604a.
The one or more processors 602 may access the volatile memory 604a in order to process data and may control the storage of data in memory. The volatile memory 604a may comprise RAM of any type, for example, Static RAM (SRAM), Dynamic RAM (DRAM), or it may comprise Flash memory, such as an SD-Card.
[00225] The system/apparatus comprises a non-volatile memory 604b. The non-volatile memory 6o4b stores a set of operation instructions such as operating system instructions 6o5a and/or implant generation instructions and the like 6o5b for controlling the operation of the processors 602 in the form of computer readable instructions. The non-volatile memory 6o4b may be a memory of any kind such as a Read Only Memory (ROM), a Flash memory or a magnetic drive memory.
[00226] The one or more processors 602 are configured to execute operating instructions 6o5a-6o5b to cause the system/apparatus to perform any of the methods or processes described herein. The operating instructions 6o5a-6o5b may comprise code (i.e. drivers) relating to the hardware components of the system/apparatus 600, as well as code relating to the basic operation of the system/apparatus 600. Generally speaking, the one or more processors 602 execute one or more instructions of the operating instructions 6o5a-6o5b, which are stored permanently or semi-permanently in the non-volatile memory 604, using the volatile memory 6o4a to store temporarily data generated during execution of said operating instructions 605a-605b.
1 0 [00227] The computer system 600 may comprise one or more network/apparatus interfaces 6o6 for connection to a network/apparatus, e.g. a transceiver unit which maybe wired or wireless. The network/apparatus interface 606 may also operate as a connection to other apparatus such as device/apparatus which is not network side apparatus. Thus, direct connection between devices/apparatus without network participation is possible. As an option, the computer system 600 may include a user input 6w and a display 6o8 connected or coupled to one or more of the processors 602.
[00228] in some example embodiments, the computer system 600 may also be associated with external software applications implementing one or more of the methods and/or processes described herein. These maybe applications stored on a remote server device/apparatus or one or more remote servers/apparatus such as a cloud-based platform / cloud computing platform and may run partly or exclusively on the remote server device/apparatus and the like. These applications may be termed cloud-hosted applications. The computer system 600 may be in communication with the remote server devices/apparatus in order to utilize the software applications stored there when implementing the methods and/or processes as described herein.
[00229] Figure 7 shows tangible media 700, specifically a removable memory unit 702, storing computer-readable code which when run by a computer may perform methods according to example embodiments described above. The removable memory unit 702 may be a memory stick, e.g. a USB memory stick, having internal memory 704 storing the computer-readable code. The internal memory 704 may be accessed by a computer system via a connector 706. Other forms of tangible storage media maybe used. Tangible media 700 can be any device/apparatus capable of storing data/information which data/information can be exchanged between devices/apparatus/network.
[00230] implementations of the methods or processes described herein may be realized as in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These may include computer program products (such as software stored on e.g. magnetic discs, optical disks, memory, Programmable Logic Devices) comprising computer readable instructions that, when executed by a computer, such as that described in relation to Figure 6, cause the computer to perform one or more of the methods described herein.
[00231] Any system feature as described herein may also be provided as a method or process feature, and vice versa. As used herein, means plus function features may be expressed alternatively in terms of their corresponding structure. In particular, method aspects may be applied to system aspects, and vice versa.
[00232] Furthermore, any, some and/or all features in one aspect can be applied to any, some and/or all features in any other aspect, in any appropriate combination. It should also be appreciated that particular combinations of the various features described and defined in any aspects of the invention can be implemented and/or supplied and/or used independently.
[00233] Although several embodiments have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles of this disclosure, the scope of which is defined in the claims and their equivalents.
Claims (25)
- Claims 1. A computer-implemented method of visualising a desired bodily portion of a subject for use in cosmetic or reconstructive surgery, the method comprising: obtaining three-dimensional, 3D, imaging data of a bodily portion of the 5 subject; generating a 3D model of the bodily portion of the subject based on the 3D imaging data and data representative of a 3D structure representative of the bodily portion of the subject; adjusting the external appearance of the 3D model of the bodily portion of the subject until a desired outcome is reached whilst taking into account data representative of the structural properties of the 3D structure of the bodily portion of the subject; and outputting data representative of a 3D model of the desired bodily portion of the subject based on the desired outcome.
- 2. The computer-implemented method of claim 1, wherein generating the 3D model of the bodily portion of the subject further comprising: receiving data representative of the 3D structure representative of the bodily portion of the subject; and superimposing the 3D imaging data onto the 3D structure representative of the bodily portion of the subject.
- 3. The computer-implemented method of any of claims 1 or 2, wherein adjusting the external appearance of the 3D model further comprising: receiving data representative of desired adjustments to one or more regions of the 3D imaging data in relation to the 3D structure representative of the bodily portion of the subject; and controlling adjustments to the 3D imaging data relative to the 3D structure representative of the bodily portion of the subject to only those adjustments that arc feasible taking into account the structural properties of at least the 3D structure of the bodily portion of the subject.
- 4. The computer-implemented method of any preceding claim, wherein adjusting the external appearance of the 3D model taking into account the structural properties of at least the 3D structure of the bodily portion of the subject further comprising modelling the structural properties of the 3D structure until the outer surface of the 3D structure substantially coincides with the desired adjustment with the 3D imaging data.
- 5. The computer-implemented method of claim 4, wherein modelling the structural properties of the 3D structure until the outer surface of the 3D structure substantially coincides with the desired adjustment further comprising one or more from the group of: adjusting the 3D structure using finite element analysis and structural properties of the 3D structure; adjusting the 3D structure using mass-spring analysis and structural properties of the 3D structure; and adjusting the 3D structure based on modelling the elasticity and/or compression of the structural properties of the 3D structure.
- 6. The computer-implemented method of any preceding claim, wherein the 3D structure of the bodily portion of the subject comprises an outer surface structure and one or more underlying structures representative of the bodily portion of the subject.
- 7. The computer-implemented method of claim 6, wherein the outer surface structure comprises skin of the bodily portion of the subject, the one or more underlying structures comprising one or more from the group of: soft-tissue of the bodily portion of the subject; and/or bone associated with the bodily portion of the subject.
- 8. The computer-implemented method of any preceding claim, wherein the bodily portion of the subject comprises at least one from the group of: a head portion of the subject comprising at least a face or facial region of the subject; a neck portion of the subject comprising at least a neck region of the subject; a torso portion of the subject comprising at least a shoulder, chest, abdominal and/or pelvis region of the subject; a limb portion of the subject comprising at least an upper extremity or arm region including one or more of an upper arm, forearm or hand region of the subject, a lower extremity region including one or more of a hip, thigh, knee, leg, ankle or foot region of the subject.
- 9- The computer-implemented method of any preceding claim, wherein obtaining the 3D imaging data further comprising receiving, from an image capturing apparatus, 3D photogrammetry data of the bodily portion of the subject, the 3D photogrammetry data defining an external 3D appearance of the bodily portion of the subject; and generating the 3D model of the bodily portion of the subject further comprising: receiving a 3D structure representative of the bodily portion of the subject; and superimposing the 3D photogrammetry data onto the 3D structure representative of the bodily portion of the subject.to.
- The computer-implemented method of claim 9, wherein adjusting the external appearance of the 3D model further comprising: receiving data representative of desired adjustments to one or more regions of the 3D photogrammetry data in relation to the 3D structure representative of the bodily portion of the subject; controlling adjustments to the 3D photogrammetry data relative to the 3D structure representative of the bodily portion of the subject to only those adjustments that are feasible taking into account the structural properties of at least the 3D structure of the bodily portion of the subject; displaying updates to the 3D model with those feasible adjustments applied based on adjusting the 3D photogrammetry data; outputting an updated 3D model comprising at least the adjusted 3D photogrammetry data defining the desired external appearance of the bodily portion of the subject and the 3D structure representative of the bodily portion of the subject.it.
- The computer-implemented method of claim to, wherein controlling the adjustments of the 3D photogrammetry data in relation to the properties of the 3D structure of the bodily portion of the subject further comprising: identifying one or more implantable regions based on the 3D structure of the bodily portion of the subject and structural properties thereof; receiving data representative of desired adjustments in relation to one or more identified implantable regions; determining the feasibility of each desired adjustment in relation to the desired adjusted one or more implantable regions based on modelling the desired adjustments of the implantable regions in relation to the structural properties of the 3D structure of the bodily portion of the subject; in response to determining a desired adjustment of an implantable region is infeasible, indicating the infeasibility of the desired adjustment and/or limiting the adjustment to a feasible adjustment for the implantable region, otherwise allowing the adjustment until the desired outcome is reached.
- 12. The computer-implemented method of any preceding claim, wherein obtaining the 3D imaging data further comprising: receiving, from an image capturing apparatus, 3D photogrammetry data of the bodily portion of the subject requiring the one or more implants, the 3D photogrammetry data defining an external 3D appearance of the bodily portion of the subject; and receiving, from a medical image capturing apparatus, the 3D structure representing the bodily portion of the subject comprising data representative of medical imagery of the bodily portion of the subject for generating a 3D medical model of the portion of the subject, the 3D medical model comprising a first 3D structure and the second 3D structure underlying the first 3D structure in relation to the bodily portion of the subject.
- 13. The computer-implemented method of claim 12, wherein generating the 3D model of the bodily portion of the subject further comprising superimposing the 3D photogrammetry data onto the first 3D structure of the 3D medical model, and outputting data representative of a 3D model of a bodily portion of the subject, the 3D model comprising: 3D photogrammetry data representative of the bodily portion of the subject and defining a desired external appearance of the bodily portion of the subject; and the 3D medical model representative of the bodily portion of the subject, the 3D medical model comprising a first 3D structure and a second 3D structure underlying the first 3D structure in relation to the bodily portion of the subject.
- 14. The computer-implemented method of claims 12 or 13, the first 3D structure comprising a first 3D structural mesh of the outer surface of the first 3D structure, the 3D photogrammetry data of the bodily portion comprising a 3D photogrammetry mesh 5 and 3D photogrammetry texture map data, wherein superimposing the 3D photogrammetry data onto the 3D structural mesh further comprising: receiving a reference 3D mesh comprising a plurality of landmarked features associated with the bodily portion of the subject; identifying the plurality of landmarked features on each of the first 3D structural mesh and the 3D photogrammetry mesh; generating a first reference 3D mesh based on superimposing the reference 3D mesh onto the first 3D structural mesh using the reference and identified landmarks of the reference 3D mesh and first 3D structural mesh; generating a second reference 3D mesh based on superimposing the reference 3D mesh onto the 3D photogrammetry mesh using the reference and identified landmarks of the reference 3D mesh and 3D photogrammetry mesh; and adjusting the second reference 3D mesh to match the first reference 3D mesh based on performing one or more mesh transformation operations; superimposing the 3D photogrammetry mesh onto the first 3D structural mesh based on the one or more performed mesh transformation operations; and superimposing the 3D photogrammetry texture map data onto the superimposed 3D photogrammetry mesh to form the 3D model of the bodily portion of the subject.
- 15. The computer-implemented method of any of claim 14, adjusting the 3D model further comprising: controlling adjustments to the superimposed 3D photogrammetry mesh and corresponding 3D photogrammetry texture map data taking into account the structural properties of at least the first 3D structure of the bodily portion of the subject until a desired outcome is reached; and generating the 3D model based on data representative of: the adjusted 3D photogrammetry mesh and 3D photogrammetry texture map data defining the desired external appearance of the bodily portion of the subject; and the 3D medical model comprising data representative of the first and second 3D structures and corresponding first and second 3D structural meshes.
- 16. The computer-implemented method of any preceding claim, further comprising using the output 3D model for generating one or more implants for the subject for use in cosmetic or reconstructive surgery based on the following steps of: receiving the output 3D model of a desired bodily portion of the subject, the 3D model comprising: 3D photogrammetry data representative of the bodily portion of the subject and defining a desired external appearance of the bodily portion of the subject; and a 3D medical model representative of the bodily portion of the subject, the 3D medical model comprising a first 3D structure and a second 3D structure underlying the first 3D structure in relation to the bodily portion of the subject; determining one or more volumes generated between the 3D photogrammetry data of the desired bodily portion of the subject and the second 3D structure taking into account the structural properties of at least the first 3D structure; and outputting data representative of the determined one or more volumes for controlling the manufacture of one or more corresponding implants for the subject.
- 17. The computer-implemented method of claim 16, wherein determining the one or more volumes further comprising: adjusting the first and second 3D structures taking into account the structural properties of at least the first 3D structure until the outer surface of the first 3D structure substantially coincides with the desired 3D photogrammetry data; and calculating one or more volumes generated between the adjusted second 3D structure and the original second 3D structure based on determining one or more volume shapes resulting from differences between the adjusted second 3D structure and the original second 3D structure.
- 18. The computer-implemented method of claim 17, wherein adjusting the first and second 3D structures taking into account at least the properties of the first 3D structure further comprising iteratively increasing or decreasing regions of the outer surface of the second 3D structure whilst modelling the structural properties of the first 3D structure until the outer surface of the first 3D structure substantially coincides with the desired 3D photogrammetry data.
- 19. The computer-implemented method of any of claims 16 to 18, wherein the first 3D structure comprises a first 3D structural mesh representing the outer surface of the first 3D structure, the second 3D structure comprises a second 3D structural mesh representing the outer surface of the second 3D structure, and the 3D photogrammetry data of the desired bodily portion comprises a desired 3D photogrammetry mesh, and determining the one or more volumes further comprising: adjusting the first and second 3D structural meshes taking into account at least the properties of the first 3D structure until the first 3D structural mesh substantially coincides with the desired 3D photogrammetry mesh; and calculating any volumes generated between the adjusted second 3D structural mesh and the original second 3D structural mesh.
- 20. The computer-implemented method of any of claims 16 to 19, wherein: adjusting the first and second 3D structural meshes taking into account at least the properties of the first 3D structure further comprises iteratively increasing outwardly or decreasing inwardly the second 3D structural mesh whilst modelling the structural properties of the first 3D structure until the first 3D structural mesh substantially coincides with the desired 3D photogrammetry mesh; and calculating the one or more volumes generated between the adjusted second 3D structural mesh and the original second 3D structural mesh further comprises determining one or more volume shapes resulting from differences between the adjusted second 3D structural mesh and the original second 3D structural mesh.
- 21. The computer-implemented method of any of claims 16 to 20, wherein taking into account the structure of the first and second 3D structures further comprises performing finite element analysis or mass-spring analysis on the first and second 3D structures to adjust the boundaries of the first 3D structural mesh and second 3D structural mesh based on modelling the structural properties of the first 3D structure until the first 3D structural mesh substantially coincides with the desired 3D photogrammetry mesh.
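Claim 21 names mass-spring analysis as one candidate structural model. A toy 1-D version can be sketched as follows: each node is attracted towards its target height while springs to neighbouring nodes resist sharp kinks, mimicking how modelled soft tissue smooths an underlying adjustment. The node layout, pinned boundaries, and coefficient values are illustrative assumptions chosen for stable convergence, not values from the patent.

```python
def mass_spring_relax(positions, target, k_spring=0.25, k_attract=0.2, iters=500):
    """Toy 1-D mass-spring relaxation. Interior nodes feel a spring force from
    each neighbour plus an attraction towards their target height; boundary
    nodes are pinned to the target. k_spring <= 0.25 keeps the explicit update
    stable for this stencil."""
    pos = list(positions)
    n = len(pos)
    for _ in range(iters):
        new = [target[0]]  # pinned boundary
        for i in range(1, n - 1):
            spring = k_spring * (pos[i - 1] + pos[i + 1] - 2.0 * pos[i])
            attract = k_attract * (target[i] - pos[i])
            new.append(pos[i] + spring + attract)
        new.append(target[-1])  # pinned boundary
        pos = new
    return pos
```

For a linear target the spring term vanishes at equilibrium, so the relaxed surface coincides with the target; for curved targets the springs trade fidelity against smoothness, which is exactly the behaviour a finite element or mass-spring tissue model contributes to the claimed adjustment loop.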
- 22. The computer-implemented method of any of claims 16 to 21, further comprising one or more from the group of: controlling the manufacture of one or more medical implants based on the output data representative of the one or more implants; 3D printing one or more medical implants using the data representative of the one or more implant volumes; and controlling a manufacturing process at an implant manufacturer for producing one or more medical implants using the data representative of the one or more implant volumes.
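Handing a determined volume to a 3D-printing pipeline typically means exporting it in a mesh interchange format such as STL. As a minimal sketch, the function below emits ASCII STL for an axis-aligned box standing in for an implant volume; the function name and box stand-in are illustrative assumptions (a real implant volume would be an arbitrary triangulated shape, and normals here are left as zeros for the slicer to recompute).

```python
def box_to_ascii_stl(name, w, d, h):
    """Emit minimal ASCII STL for an axis-aligned box of size w x d x h --
    a stand-in for exporting a determined implant volume to manufacturing."""
    # the 8 box corners, z varying slowest, then y, then x
    v = [(x, y, z) for z in (0, h) for y in (0, d) for x in (0, w)]
    # 12 triangles (two per face), as index triples into the corner list
    faces = [(0, 2, 1), (1, 2, 3), (4, 5, 6), (5, 7, 6),
             (0, 1, 4), (1, 5, 4), (2, 6, 3), (3, 6, 7),
             (0, 4, 2), (2, 4, 6), (1, 3, 5), (3, 7, 5)]
    lines = [f"solid {name}"]
    for a, b, c in faces:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for i in (a, b, c):
            x, y, z = v[i]
            lines.append(f"      vertex {x} {y} {z}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)
```

The resulting text can be written to a `.stl` file and passed to a slicer or a manufacturer's process-control software, matching the claim's alternatives of direct 3D printing or controlling a remote manufacturing process.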
- 23. The computer-implemented method of any of claims 12 to 22, wherein the medical imaging apparatus comprises one or more from the group of: ultrasound or ultrasonic scanning apparatus; Computerized Tomography, CT, scanning apparatus; Photon Counting CT scanning apparatus; Magnetic Resonance Imaging, MRI, apparatus; any other medical imaging apparatus capable of capturing the first and second 3D structures of the bodily portion of the subject.
- 24. A non-transitory tangible computer-readable medium comprising data or instruction code, which when executed on one or more processor(s), causes at least one of the one or more processor(s) to perform the steps of the computer-implemented method of any of claims 1 to 23.
- 25. A system for visualising a desired bodily portion of a subject for use in cosmetic or reconstructive surgery, the system comprising one or more processors, a memory and a communication interface, the memory comprising instructions that, when executed by the one or more processors, cause the system to perform operations comprising: obtaining three-dimensional, 3D, imaging data of a bodily portion of the subject; generating a 3D model of the bodily portion of the subject based on the 3D imaging data and data representative of a 3D structure representative of the bodily portion of the subject; adjusting the external appearance of the 3D model of the bodily portion of the subject until a desired outcome is reached whilst taking into account data representative of the structural properties of the 3D structure of the bodily portion of the subject; and outputting data representative of a 3D model of the desired bodily portion of the subject based on the desired outcome.
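The operations recited in claim 25 form a short pipeline: obtain imaging data, fuse it with a structural model, adjust towards a desired outcome subject to structural constraints, and output the result. A minimal sketch of that flow is below; the function name, the dictionary model, and the callback-based adjustment/constraint are illustrative assumptions about how the stages could be wired together, not the claimed system architecture.

```python
def visualise_desired_outcome(photogrammetry, structure, adjust_fn, constraint_fn):
    """Pipeline sketch: fuse imaging data with a structural model, apply a
    user-supplied adjustment to the external surface, then clamp the result by
    a structural-feasibility constraint before outputting the 3D model."""
    model = {"surface": list(photogrammetry), "structure": list(structure)}
    desired = [adjust_fn(h) for h in model["surface"]]          # user's desired outcome
    feasible = [constraint_fn(d, s)                             # structural limits
                for d, s in zip(desired, model["structure"])]
    model["surface"] = feasible
    return model

# Hypothetical usage: augment by 1 mm, but never more than 2 mm above the
# underlying structure (a crude stand-in for structural-property constraints).
out = visualise_desired_outcome(
    [4.0, 4.0], [3.0, 1.0],
    adjust_fn=lambda h: h + 1.0,
    constraint_fn=lambda d, s: min(d, s + 2.0),
)
# out["surface"] == [5.0, 3.0]: the second region is clamped by the constraint
```

The interactive "adjust until a desired outcome is reached" step would in practice loop this pipeline under user control, re-running the structural constraint after each edit.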
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2212272.5A GB2621846A (en) | 2022-08-23 | 2022-08-23 | 3D visualisation system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB202212272D0 GB202212272D0 (en) | 2022-10-05 |
| GB2621846A true GB2621846A (en) | 2024-02-28 |
Family
ID=83902290
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB2212272.5A Pending GB2621846A (en) | 2022-08-23 | 2022-08-23 | 3D visualisation system |
Country Status (1)
| Country | Link |
|---|---|
| GB (1) | GB2621846A (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070276501A1 (en) * | 2006-05-25 | 2007-11-29 | Spinemedica Corp. | Patient-specific spinal implants and related systems and methods |
| US20150198943A1 (en) * | 2012-03-08 | 2015-07-16 | Brett Kotlus | 3d design and fabrication system for implants |
| CN108904099A (en) * | 2018-06-29 | 2018-11-30 | 中国医学科学院整形外科医院 | A kind of personalized implant design method simulated based on bone and skin deformation |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11963889B2 (en) | System for designing and fabricating a customised device | |
| JP4979682B2 (en) | Method and system for pre-operative prediction | |
| Schendel et al. | Three-dimensional imaging and computer simulation for office-based surgery | |
| US20170360578A1 (en) | System and method for producing clinical models and prostheses | |
| KR101744079B1 (en) | The face model generation method for the Dental procedure simulation | |
| JP2003530177A (en) | Computer aided bone lengthening | |
| JP7686078B2 (en) | DEVICE AND METHOD FOR PRE-OR-INTRODUCTION REGISTRATION OF IMAGE MODELS TO AUGMENTED REALITY SYSTEMS - Patent application | |
| Georgii et al. | A computational tool for preoperative breast augmentation planning in aesthetic plastic surgery | |
| Schendel et al. | 3D orthognathic surgery simulation using image fusion | |
| JP2021508510A (en) | Computer implementation methods and systems for manufacturing orthopedic devices | |
| CN111096835A (en) | An orthosis design method and system | |
| CN116264995A (en) | Personalized 3D modeling method and computer equipment for scoliosis orthopedic brace | |
| CN112826641A (en) | Design method and related equipment of guide plate for total hip replacement | |
| US20240041529A1 (en) | Device for modeling cervical artificial disc based on artificial intelligence and method thereof | |
| WO2002003304A2 (en) | Predicting changes in characteristics of an object | |
| CN119700293B (en) | Preoperative planning method and planning equipment for hip joint spacer prosthesis | |
| GB2621846A (en) | 3D visualisation system | |
| GB2621847A (en) | Implant generation system | |
| WO2025251725A1 (en) | Bending method and system for body surface model | |
| Scharver et al. | Pre-surgical cranial implant design using the PARIS/spl trade/prototype | |
| TWI552729B (en) | A system and method for image correction design of jaw jaw surgery | |
| CN114595522A (en) | Three-dimensional image dynamic correction evaluation and correction tool aided design method and system thereof | |
| CN111227933A (en) | A prediction and real-time rendering system for mandibular angle osteotomy | |
| KR20190140720A (en) | Method and device for modelling and producing implant for orbital wall | |
| Moreira et al. | Pectus excavatum postsurgical outcome based on preoperative soft body dynamics simulation |