
WO2025018776A1 - Deep learning-based surgery plan system for reverse shoulder arthroplasty, and driving method thereof - Google Patents

Deep learning-based surgery plan system for reverse shoulder arthroplasty, and driving method thereof

Info

Publication number
WO2025018776A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
image information
ray image
surgical plan
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/KR2024/010244
Other languages
French (fr)
Korean (ko)
Inventor
유주연
김무섭
김현주
차하영
강성빈
윤도군
김양수
김종호
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industry Academic Cooperation Foundation of Catholic University of Korea
Original Assignee
Industry Academic Cooperation Foundation of Catholic University of Korea
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industry Academic Cooperation Foundation of Catholic University of Korea filed Critical Industry Academic Cooperation Foundation of Catholic University of Korea
Publication of WO2025018776A1

Classifications

    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G16H 20/40: ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G16H 30/20: ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • A61B 2034/101: Computer-aided simulation of surgical operations
    • A61B 2034/102: Modelling of surgical devices, implants or prosthesis
    • A61B 2034/105: Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/108: Computer-aided selection or customisation of medical implants or cutting guides

Definitions

  • the present invention relates to a surgical planning system for reverse total shoulder arthroplasty based on deep learning and a method for operating the same. Specifically, the present invention proposes a surgical planning system that automatically establishes a surgical plan before surgery for reverse total shoulder arthroplasty, and a comprehensive surgical planning system for reverse total shoulder arthroplasty that can automatically design and produce a patient-specific surgical guide (Patient Specific Instrument) for performing a fast and accurate surgery according to the established surgical plan.
  • Performing a reverse total shoulder arthroplasty can present a number of challenges and potential complications, and without preoperative planning, it can be difficult for the surgeon to accurately determine the appropriate size and position of the prosthetic components.
  • inadequate implant sizing can result in instability, limited range of motion, or joint inconsistency, which can lead to suboptimal results, potential complications, and challenges with intraoperative landmark identification, determining appropriate bone resection, and optimizing soft tissue balance.
  • the present invention proposes a surgical planning system that automatically establishes a surgical plan prior to surgery for reverse total shoulder arthroplasty and can solve the aforementioned problems.
  • the present invention proposes a comprehensive reverse total shoulder arthroplasty surgical planning system that can automatically design and produce a patient-specific surgical guide (Patient Specific Instrument) for performing fast and accurate surgery according to an established surgical plan.
  • the deep learning applied within the system of the present invention determines the cutting level of the acromion and recommends specific manufacturers' products to proceed with templating.
  • the present invention proposes a system and method for designing a patient-specific surgical guide using a Boolean algorithm, so that a guide usable in surgery can be produced by 3D printing without separate design work.
  • a deep learning-based surgical plan generation method related to an example of the present invention for realizing the above-described task may include: a first step of acquiring X-ray image information; a second step of performing preprocessing for analysis of a pre-designated area of the X-ray image information; a third step of analyzing the pre-processed X-ray image using a convolutional neural network (CNN) and extracting and learning feature information from the analysis result; a fourth step of repeating the first to third steps and generating template data for the pre-designated area based on the repeatedly learned feature information; a fifth step of acquiring first X-ray image information of a user; a sixth step of extracting first feature information related to the user from the feature information using the first X-ray image information and the template data; and a seventh step of generating a surgical plan related to the user based on the first feature information.
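The seven steps above can be sketched as a train-then-infer pipeline. The sketch below is a toy illustration only: the stand-in `preprocess` and `extract_features` functions are hypothetical placeholders for the real preprocessing and CNN stages described in the patent, not its actual implementation.

```python
# Hypothetical sketch of the seven-step plan-generation flow.
# Real acquisition and CNN stages are replaced with simple stand-ins.

def preprocess(image):          # step 2: isolate/normalise the region
    return [px / 255.0 for px in image]

def extract_features(image):    # step 3: stand-in for the CNN extractor
    return {"mean_intensity": sum(image) / len(image)}

def build_template(training_images):   # steps 1-4: repeat and aggregate
    feats = [extract_features(preprocess(img)) for img in training_images]
    return {"mean_intensity":
            sum(f["mean_intensity"] for f in feats) / len(feats)}

def generate_plan(user_image, template):   # steps 5-7: compare user to template
    user_feat = extract_features(preprocess(user_image))
    deviation = user_feat["mean_intensity"] - template["mean_intensity"]
    return {"user_features": user_feat, "deviation": deviation}

template = build_template([[10, 20, 30], [20, 30, 40]])
plan = generate_plan([30, 40, 50], template)
```

The point of the sketch is the data flow: repeated learning builds template data once, and each new user's X-ray is then compared against it to derive a plan.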
  • the surgery involving the user may include a reverse total shoulder arthroplasty, and the pre-designated area may include a shoulder joint area.
  • the preprocessing in the second step can be applied to enable analysis of anatomical information about the shoulder joint area.
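For instance, such preprocessing might crop the pre-designated shoulder-joint region of interest and normalise its intensities. The NumPy sketch below is one plausible form of that step; the ROI coordinates and function name are assumptions, not taken from the patent:

```python
import numpy as np

def preprocess_xray(img, roi):
    """Crop a pre-designated region (e.g. the shoulder joint) from an
    X-ray and min-max normalise its intensities to [0, 1]."""
    r0, r1, c0, c1 = roi
    patch = img[r0:r1, c0:c1].astype(np.float64)
    lo, hi = patch.min(), patch.max()
    if hi == lo:                      # flat patch: avoid divide-by-zero
        return np.zeros_like(patch)
    return (patch - lo) / (hi - lo)

xray = np.arange(16, dtype=np.uint8).reshape(4, 4)  # toy 4x4 "X-ray"
patch = preprocess_xray(xray, roi=(1, 3, 1, 3))     # toy 2x2 shoulder ROI
```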
  • the feature information may include at least one of anatomical structure information related to the user, bone structure information, angle information of a pre-specified joint, muscle and/or tissue analysis information, and information related to the size and position of an implant structure to be applied to the shoulder joint.
  • the generated surgical plan may include at least one of information determining the type, size, and location of the implant to be applied to the shoulder joint, information on the cutting range and cutting angle for applying the reverse total shoulder arthroplasty, information predicting potential complications after the surgery, information predicting the user's postoperative range of motion in the shoulder joint area, and an evaluation score for the suitability of the established surgical plan determined based on pre-stored surgical data.
  • a CT image on which a surgery has been completed in advance is acquired
  • a cutting level applied to the completed surgery is measured based on the CT image
  • a portion of the area related to the completed surgery is extracted from the X-ray image
  • the learning can be performed after labeling the preprocessed X-ray image based on the cutting level and the extracted portion of the area.
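The labelling scheme described above pairs each preprocessed X-ray with a CT-measured cutting level and an extracted region. A minimal sketch of such a record, assuming a hypothetical dictionary layout (the field names are illustrative, not the patent's):

```python
def make_training_record(xray_patch, cutting_level_mm, region_mask):
    """Pair a preprocessed X-ray patch with the cutting level measured on
    the matching post-operative CT and the region extracted from the
    X-ray, forming one labelled sample for supervised learning."""
    return {"image": xray_patch,
            "label": {"cutting_level_mm": cutting_level_mm,
                      "region": region_mask}}

def build_dataset(xray_patches, cutting_levels_mm, region_masks):
    """Zip the paired sources into a list of labelled samples."""
    return [make_training_record(x, c, r)
            for x, c, r in zip(xray_patches, cutting_levels_mm, region_masks)]

dataset = build_dataset([[[0.1, 0.9]]], [12.5], [[[0, 1]]])
```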
  • a first shape is automatically determined according to the size of the selected area among a plurality of pre-stored component shapes, and the template data can be generated based on the first shape.
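Selecting the first shape by size can be read as a nearest-size lookup over the pre-stored component catalogue. The catalogue entries and sizes below are hypothetical stand-ins for whatever shapes the system actually stores:

```python
# Hypothetical catalogue of pre-stored component shapes,
# keyed by nominal size in millimetres (names are illustrative).
COMPONENT_SHAPES = {36: "glenosphere-36", 39: "glenosphere-39", 42: "glenosphere-42"}

def select_first_shape(selected_area_size_mm):
    """Automatically pick the pre-stored shape whose nominal size is
    closest to the measured size of the selected area."""
    nominal = min(COMPONENT_SHAPES, key=lambda s: abs(s - selected_area_size_mm))
    return COMPONENT_SHAPES[nominal]
```

For a measured size of 40 mm this would return the 39 mm entry, the closest nominal size in the catalogue.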
  • the method may further include: an eighth step of acquiring CT image information of the user; a ninth step of automatically generating 3D image information by 3D-reconstructing the CT image information; a tenth step of receiving the first X-ray image information and the template data, and generating STL information by using the automatically generated 3D image information, the first X-ray image information, and the template data together; an eleventh step of inputting the generated STL data into a 3D printer; and a twelfth step of outputting, from the 3D printer, a surgical guide in which the surgical plan has been simulated in advance based on the STL data.
  • the PSI size related to the CT image information can be changed using the cutting level on the surgical plan, and the three-dimensional image information can be automatically generated by performing a two-dimensional Boolean operation on the image information with the changed PSI size to fit the user's skeleton.
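The slice-wise Boolean fitting can be pictured as subtracting the patient's bone cross-section from the resized PSI cross-section in each 2D slice, then stacking the slices back into a volume. The mask-based sketch below is an assumption about the operation's form, not the patent's algorithm:

```python
import numpy as np

def fit_psi_slice(psi_mask, bone_mask):
    """One 2-D Boolean step: subtract the patient's bone cross-section
    from the size-adjusted PSI cross-section so the guide seats on bone."""
    return np.logical_and(psi_mask, np.logical_not(bone_mask))

def fit_psi_volume(psi_slices, bone_slices):
    """Apply the 2-D operation slice by slice and stack the results
    back into a 3-D volume, mirroring the reconstruction idea."""
    return np.stack([fit_psi_slice(p, b)
                     for p, b in zip(psi_slices, bone_slices)])

psi = np.ones((2, 3, 3), dtype=bool)    # toy PSI block, 2 slices of 3x3
bone = np.zeros((2, 3, 3), dtype=bool)
bone[:, 1, 1] = True                    # toy bone column through the block
guide = fit_psi_volume(psi, bone)
```

After the operation the guide volume has a bone-shaped cavity, which is the sense in which the PSI is made "to fit the user's skeleton".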
  • a deep learning-based surgical plan generation device for realizing the above-described task includes: an acquisition unit for acquiring X-ray image information; a preprocessing unit for performing preprocessing for analyzing a pre-designated area of the X-ray image information; and a control unit for analyzing the preprocessed X-ray image using a convolutional neural network (CNN) and extracting and learning feature information from the analysis result; wherein, after repeating the operations of the acquisition unit, the preprocessing unit, and the control unit, the control unit generates template data for the pre-designated area based on the repeatedly learned feature information, and when the acquisition unit additionally acquires first X-ray image information of the user, the control unit can extract first feature information related to the user from the feature information by using the first X-ray image information and the template data, and generate a surgical plan related to the user based on the first feature information.
  • deep learning is used to decide, with high accuracy, areas that are difficult for surgeons to determine, such as cutting levels, and to recommend the most suitable artificial joint product for a patient from images alone, including templating, thereby covering all aspects of surgical planning.
  • the time required to establish a surgical plan and perform the surgical procedure can be drastically shortened, and the gap in the surgeon's experience and skills can be reduced, allowing even less experienced surgeons to perform the surgery accurately, while reducing the patient's pain due to the surgery.
  • FIG. 1 illustrates an example of a block diagram of a deep learning-based surgical plan generation system in relation to the present invention.
  • FIG. 2 illustrates an example of a flowchart explaining a deep learning-based surgical plan generation method in relation to the present invention.
  • FIG. 3 illustrates an example of the UI configuration and function of a surgical planning system for reverse total shoulder arthroplasty in relation to the present invention.
  • FIG. 4 illustrates an example of a cutting level determination deep learning model overview diagram related to the present invention.
  • FIG. 5 illustrates an example of a templating deep learning model overview diagram related to the present invention.
  • FIG. 6 illustrates an example of a Boolean-based PSI automatic design algorithm in relation to the present invention.
  • Reverse shoulder arthroplasty is an orthopedic surgical procedure used to treat certain shoulder conditions associated with severe arthritis, irreparable rotator cuff tears, or complex fractures.
  • reverse total shoulder arthroplasty can reverse the anatomical structure of the shoulder joint.
  • Reverse shoulder arthroplasty aims to relieve pain, restore shoulder function, and improve range of motion in individuals with limited mobility or strength due to rotator cuff dysfunction or severe arthritis.
  • This procedure bypasses the need for rotator cuff function by utilizing the deltoid muscle as the primary mover.
  • the need for a preoperative surgical planning system for reverse total shoulder arthroplasty lies in its ability to improve surgical outcomes, enhance patient safety, and optimize the overall surgical process.
  • Deep learning algorithms can analyze medical images and extract details with a high level of accuracy.
  • the system can quickly process large amounts of medical image data, enabling faster preoperative planning.
  • Surgical planning systems that use deep learning models can analyze complex patterns in medical images to provide surgeons with useful insights and recommendations.
  • Performing a reverse total shoulder arthroplasty can present many challenges and potential complications, and without preoperative planning, it can be difficult for the surgeon to accurately determine the appropriate size and position of the prosthetic components.
  • inadequate implant sizing can result in instability, limited range of motion, or joint inconsistency, which can lead to suboptimal results, potential complications, and challenges with intraoperative landmark identification, determining appropriate bone resection, and optimizing soft tissue balance.
  • the present invention proposes a surgical planning system that automatically establishes a surgical plan prior to surgery for reverse total shoulder arthroplasty and can solve these problems.
  • the present invention proposes a comprehensive reverse total shoulder arthroplasty surgical planning system that can automatically design and produce a patient-specific surgical guide (Patient Specific Instrument) for performing fast and accurate surgery according to an established surgical plan.
  • the deep learning applied within the system of the present invention determines the cutting level of the acromion and recommends specific manufacturers' products to proceed with templating.
  • the present invention proposes a system and method for designing a patient-specific surgical guide using a Boolean algorithm, so that a guide usable in surgery can be produced by 3D printing without separate design work.
  • FIG. 1 illustrates an example of a block diagram of a deep learning-based surgical plan generation system in relation to the present invention.
  • a deep learning-based surgical plan generation system (1) may include an acquisition unit (10) that acquires X-ray image information, a preprocessing unit (20) that performs preprocessing for analysis of a pre-designated area among the X-ray image information, and a control unit (30) that analyzes the preprocessed X-ray image using a convolutional neural network (CNN) and extracts and learns feature information from the analysis results.
  • After repeating the operations of the acquisition unit (10), the preprocessing unit (20), and the control unit (30), the control unit (30) generates template data for the pre-designated area based on the repeatedly learned feature information.
  • the control unit (30) uses the first X-ray image information and template data to extract the first feature information related to the user from among the feature information, and generates a surgical plan related to the user based on the first feature information.
  • the control unit (30) can obtain the user's CT image information and automatically generate 3D image information by 3D-reconstructing the CT image information.
  • the control unit (30) or the 3D printing unit (2) may receive the first X-ray image information and the template data, and use the automatically generated 3D image information, the first X-ray image information, and the template data together to generate STL information.
  • the 3D printing unit (2) can output a surgical guide that simulates the surgical plan in advance based on the STL data.
  • FIG. 2 illustrates an example of a flowchart explaining a deep learning-based surgical plan generation method in relation to the present invention.
  • the surgery involved herein may include reverse total shoulder arthroplasty, and the pre-specified area may include the shoulder joint area.
  • a step (S1) of acquiring X-ray image information is performed.
  • a step (S2) of performing preprocessing for analysis of a pre-designated area of X-ray image information is performed.
  • preprocessing can be applied to enable analysis of anatomical information about the shoulder joint area.
  • a step (S3) is performed in which the preprocessed X-ray image is analyzed using a convolutional neural network (CNN), and feature information is extracted and learned from the analysis results.
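The core operation such a CNN applies is 2D convolution. The NumPy sketch below shows a single valid-mode convolution of the kind stacked inside a feature extractor; it is a generic illustration of the technique, not the patent's model:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as CNN layers
    actually compute it): the basic operation repeated when extracting
    feature maps from the preprocessed X-ray."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

feat = conv2d(np.arange(9, dtype=float).reshape(3, 3), np.ones((2, 2)))
```

A real model stacks many such layers with learned kernels, nonlinearities, and pooling; this single layer only shows the sliding-window mechanics.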
  • the feature information may include at least one of anatomical structure information related to the user, bone structure information, angle information of a pre-specified joint, muscle and/or tissue analysis information, and information related to the size and position of an implant structure to be applied to the shoulder joint.
  • steps 1 to 3 are repeated, and a step (S4) of generating template data for a pre-designated area based on the repeatedly learned feature information is performed.
  • a cutting level applied to the completed surgery is measured based on the CT image, a portion of the area related to the completed surgery is extracted from the X-ray image, and the learning can be performed after labeling the preprocessed X-ray image based on the cutting level and the extracted portion of the area.
  • a first shape may be automatically determined based on the size of the selected area from among a plurality of pre-stored component shapes, and the template data may be generated based on the first shape.
  • a step (S5) of acquiring the user's first X-ray image information is performed.
  • a step (S6) of extracting first feature information related to the user from among feature information is performed using the first X-ray image information and template data.
  • a step (S7) of generating a surgical plan related to the user based on the first feature information is performed.
  • the surgical plan generated herein may include at least one of information determining the type, size, and location of an implant to be applied to the shoulder joint, information on the cutting range and cutting angle for applying the reverse total shoulder arthroplasty, information predicting potential complications after the surgery, information predicting the user's postoperative range of motion in the shoulder joint area, and an evaluation score for the suitability of the established surgical plan determined based on pre-stored surgical data.
  • a step (S8) of acquiring the user's CT image information and automatically generating 3D image information by reconstructing the CT image information in 3D is performed.
  • the PSI size related to the CT image information can be changed using the cutting level on the surgical plan, and the image information with the PSI size changed can be subjected to a two-dimensional Boolean operation to automatically generate the three-dimensional image information to fit the user's skeleton.
  • the first X-ray image information and template data are received, and STL information is generated using the automatically generated 3D image information, the first X-ray image information, and the template data together (S10), and the generated STL data is input to a 3D printer (S11).
  • a step (S12) of printing a surgical guide that simulates a surgical plan in advance based on STL data on a 3D printer can be performed.
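STL, the format steps S10 through S12 pass to the printer, is a simple triangle-soup mesh. As a minimal illustration of what "generating STL information" produces, here is a writer for the ASCII variant of the format (the function name and zero normals are choices of this sketch; slicers typically recompute normals):

```python
def write_ascii_stl(triangles, name="surgical_guide"):
    """Serialise triangles (each a tuple of three (x, y, z) vertices)
    into ASCII STL text that a 3-D printing toolchain can ingest."""
    lines = [f"solid {name}"]
    for tri in triangles:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for x, y, z in tri:
            lines.append(f"      vertex {x} {y} {z}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

stl_text = write_ascii_stl([((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```

In practice the surgical-guide mesh from step S9's 3D reconstruction would supply thousands of such triangles; production pipelines usually emit the more compact binary STL variant instead.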
  • the deep learning-based reverse total shoulder arthroplasty preoperative surgical planning system proposed by the present invention described above is a cutting-edge technology solution that helps surgeons plan and optimize surgical procedures.
  • Reverse shoulder arthroplasty is a complex orthopedic surgery performed to treat shoulder conditions such as severe arthritis or rotator cuff tears. It involves reversing the normal anatomical structure of the shoulder joint to improve its function.
  • the present invention utilizes X-ray images to capture detailed anatomical information about a patient's shoulder joint, and these images are used as input data for a deep learning algorithm.
  • Deep learning algorithms, especially convolutional neural networks (CNNs), excel at analyzing preprocessed medical images and extracting meaningful features from them; they have proven effective in a variety of medical imaging tasks, extracting relevant features that are important for surgical planning.
  • These characteristics may include bone structure, joint angles, implant position, and muscle/tissue analysis.
  • Based on the extracted features, the system generates a detailed preoperative surgical plan, which may include determining the optimal implant type, size, and position, the cutting range and angle, predicting potential complications, predicting postoperative range of motion, and assessing the overall feasibility of the surgery.
  • the present invention proposes a comprehensive surgical planning system for reverse total shoulder arthroplasty that automatically establishes a surgical plan prior to surgery and automatically designs and produces a patient-specific surgical guide (Patient Specific Instrument) according to the established surgical plan for fast and accurate surgery.
  • Deep learning within the system can determine the cutting level of the shoulder and even recommend products from specific companies to proceed with templating.
  • a guide that can be used in surgery can be produced using 3D printing without separate design work.
  • FIG. 3 illustrates an example of the UI configuration and function of a surgical planning system for reverse total shoulder arthroplasty in relation to the present invention.
  • Figure 3 (a) illustrates examples of UI configurations showing image information (110) and the surgical guide designed according to the final surgical plan (120).
  • Fig. 3 (b) illustrates examples of steps S1 to S9 described above.
  • steps S10a and S10b are shown separately.
  • steps S11 and S12 are displayed over time.
  • FIG. 4 illustrates an example of a cutting level determination deep learning model overview diagram related to the present invention.
  • In relation to step S3, the operation of the control unit (30) is illustrated: a CT image of a previously completed surgery is reconstructed in three dimensions to measure the cutting level, and a portion of an X-ray image of the same patient is extracted, labeled, and then learned.
  • Figure 4 (a) illustrates an example of the results of a cutting level determination deep learning model.
  • FIG. 5 illustrates an example of a templating deep learning model overview diagram in relation to the present invention.
  • In relation to step S3, an example of the operation of the control unit (30) is illustrated, which automatically determines the size and model for each shoulder joint image and performs templating with the prepared component shapes.
  • Figure 5 (a) illustrates an example of the application result of a templating deep learning model.
  • FIG. 6 illustrates an example of a Boolean-based PSI automatic design algorithm in relation to the present invention.
  • FIG. 6 illustrates the application of the Boolean-based PSI automatic design algorithm related to steps S8, S9, and S10.
  • Each of steps (a) to (j) represents an example of a process of changing a previously stored basic PSI size using the cutting level determined by deep learning, performing a 2D Boolean operation to fit the patient's skeleton, and reconstructing the result in 3D.
  • the present invention can accurately determine aspects that are difficult for a surgeon to judge, such as the cutting level, by using deep learning, and can cover the entire surgical plan by recommending the most suitable artificial joint product for the patient and even performing templating using only images.
  • the time required to establish a surgical plan and perform the surgical procedure can be drastically shortened, and the gap in the surgeon's experience and skills can be reduced, allowing even less experienced surgeons to perform the surgery accurately, while reducing the patient's pain due to the surgery.
  • The system and its control method described above are not limited to the configuration and method of the embodiments described above; the embodiments may be configured by selectively combining all or part of each embodiment so that various modifications can be made.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • Primary Health Care (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Epidemiology (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Urology & Nephrology (AREA)
  • Robotics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The method for generating a deep learning-based surgery plan related to an embodiment of the present invention may include: a first step of acquiring X-ray image information; a second step of performing preprocessing for analysis of a predetermined region of the X-ray image information; a third step of analyzing the preprocessed X-ray image by using a convolutional neural network, extracting feature information from the result of the analysis, and learning the same; a fourth step of repeating the first step to the third step and generating template data for the predetermined region on the basis of the repeatedly learned feature information; a fifth step of obtaining first X-ray image information of a user; a sixth step of extracting first feature information related to the user from the feature information by using the first X-ray image information and the template data; and a seventh step of generating a surgery plan related to the user on the basis of the first feature information.

Description

딥러닝 기반 역행성 견관절 전치환술을 위한 수술계획시스템 및 그 구동 방법Deep learning-based surgical planning system for retrograde total shoulder arthroplasty and its operating method

본 발명은 딥러닝 기반 역행성 견관절 전치환술을 위한 수술계획시스템 및 그 구동 방법에 관한 것으로, 구체적으로 본 발명은 역행성 견관절 전치환술을 위해 수술 전에 자동으로 수술계획을 수립해주는 수술계획시스템과 빠르고 정확한 수술을 진행하기 위한 환자맞춤형 서지컬 가이드 (Patient Specific Instrument)를 수립된 수술계획에 맞춰 자동으로 설계 및 생산을 진행해 줄 수 있는 종합적인 역행성 견관절 전치환술 수술계획시스템을 제안하고자 한다.The present invention relates to a surgical planning system for reverse total shoulder arthroplasty based on deep learning and a method for operating the same. Specifically, the present invention proposes a surgical planning system that automatically establishes a surgical plan before surgery for reverse total shoulder arthroplasty, and a comprehensive surgical planning system for reverse total shoulder arthroplasty that can automatically design and produce a patient-specific surgical guide (Patient Specific Instrument) for performing a fast and accurate surgery according to the established surgical plan.

역행성 견관절 전치환술을 시행하면 여러 가지 어려움과 잠재적인 문제가 발생할 수 있고, 수술 전 계획이 없으면 외과의가 보철 부품의 적절한 크기와 위치를 정확하게 결정하기 어려울 수 있다. Performing a reverse total shoulder arthroplasty can present a number of challenges and potential complications, and without preoperative planning, it can be difficult for the surgeon to accurately determine the appropriate size and position of the prosthetic components.

또한, 임플란트 사이징이 부적절하면 불안정성, 운동 범위 제한 또는 관절 불일치가 발생하여 최적의 결과를 얻지 못하고 잠재적인 합병증을 유발할 수 있고, 수술 중 랜드마크 식별, 적절한 뼈 절제 결정, 연조직 균형 최적화 등의 어려움에 직면할 수 있다. Additionally, inadequate implant sizing can result in instability, limited range of motion, or joint inconsistency, which can lead to suboptimal results, potential complications, and challenges with intraoperative landmark identification, determining appropriate bone resection, and optimizing soft tissue balance.

나아가 이로 인해 수술 시간이 길어지고 합병증, 환자의 불편함, 수술 피로의 위험이 증가할 수 있다. Furthermore, this may result in longer surgical times and increased risk of complications, patient discomfort, and surgical fatigue.

따라서 이를 해결할 수 있는 방법 및 시스템에 대한 니즈가 높아지고 있는 실정이다.Accordingly, the need for methods and systems that can solve this problem is increasing.

The present invention proposes a surgical planning system that automatically establishes a surgical plan before surgery for reverse total shoulder arthroplasty, solving the problems described above.

The present invention proposes a comprehensive reverse total shoulder arthroplasty planning system that can automatically design and produce a patient-specific surgical guide (Patient Specific Instrument) according to the established surgical plan, enabling fast and accurate surgery.

The deep learning applied within the system of the present invention determines the cutting level of the acromion, recommends a specific manufacturer's product, and proceeds to templating.

Furthermore, the present invention proposes a system and method that designs a patient-specific surgical guide using a Boolean algorithm, so that a guide usable in surgery can be produced via 3D printing without separate design work.

The techniques proposed by the present invention can be summarized as follows.

1) Deep learning-based implant recommendation and resection range determination

2) 2D and 3D medical image processing technology

Meanwhile, the technical objects to be achieved by the present invention are not limited to those mentioned above, and other technical objects not mentioned will be clearly understood by a person having ordinary skill in the art to which the present invention belongs from the description below.

A deep learning-based surgical plan generation method related to an example of the present invention for realizing the above objects may include: a first step of acquiring X-ray image information; a second step of performing pre-processing for analysis of a pre-designated area of the X-ray image information; a third step of analyzing the pre-processed X-ray image using a convolutional neural network (CNN) and extracting and learning feature information from the analysis result; a fourth step of repeating the first to third steps and generating template data for the pre-designated area based on the repeatedly learned feature information; a fifth step of acquiring first X-ray image information of a user; a sixth step of extracting, using the first X-ray image information and the template data, first feature information related to the user from the feature information; and a seventh step of generating a surgical plan related to the user based on the first feature information.

In addition, the surgery related to the user may include reverse total shoulder arthroplasty, and the pre-designated area may include a shoulder joint area.

In addition, the pre-processing in the second step may be applied so that anatomical information about the shoulder joint area can be analyzed.

In addition, the feature information may include at least one of anatomical structure information related to the user, bone structure information, angle information of a pre-designated joint, muscle and/or tissue analysis information, and information related to the size and position of an implant structure to be applied to the shoulder joint.

In addition, the generated surgical plan may include at least one of: information determining the type, size, and position of the implant to be applied to the shoulder joint; cutting range and cutting angle information for applying the reverse total shoulder arthroplasty; prediction information on potential complications after the surgery; prediction information on the user's range of motion related to the shoulder joint area after the surgery; and suitability evaluation score information for the established surgical plan, determined based on pre-stored surgical data.

In addition, in the first step, a CT image of a previously completed surgery may be acquired together, and in the third step, the cutting level applied in the completed surgery may be measured based on the CT image, a partial region related to the completed surgery may be extracted from the X-ray image, and the learning may proceed after labeling the pre-processed X-ray image based on the cutting level and the extracted partial region.

In addition, in the fourth step, based on a region of the X-ray image information selected by an external user, a first shape may be automatically determined from a plurality of pre-stored component shapes according to the size of the selected region, and the template data may be generated based on the first shape.

In addition, after the seventh step, the method may further include: an eighth step of acquiring CT image information of the user; a ninth step of automatically generating 3D image information by three-dimensionally reconstructing the CT image information; a tenth step of receiving the first X-ray image information and the template data, and generating STL information by using the automatically generated 3D image information, the first X-ray image information, and the template data together; an eleventh step of inputting the generated STL data into a 3D printer; and a twelfth step of outputting, from the 3D printer, a surgical guide to which the surgical plan has been applied in advance in simulation based on the STL data.

In addition, in the ninth step, the PSI size related to the CT image information may be changed using the cutting level in the surgical plan, and the 3D image information may be automatically generated by performing a 2D Boolean operation on the image information with the changed PSI size so that it fits the user's skeleton.

Meanwhile, a deep learning-based surgical plan generation device related to another example of the present invention for realizing the above objects includes: an acquisition unit that acquires X-ray image information; a pre-processing unit that performs pre-processing for analysis of a pre-designated area of the X-ray image information; and a control unit that analyzes the pre-processed X-ray image using a convolutional neural network (CNN) and extracts and learns feature information from the analysis result, wherein, after the operations of the acquisition unit, the pre-processing unit, and the control unit are repeated, the control unit generates template data for the pre-designated area based on the repeatedly learned feature information, and when the acquisition unit additionally acquires first X-ray image information of a user, the control unit can extract, using the first X-ray image information and the template data, first feature information related to the user from the feature information, and generate a surgical plan related to the user based on the first feature information.

In the present invention, deep learning can be used to make highly accurate decisions on aspects that are difficult for the surgeon to determine, such as the cutting level, and to recommend the artificial joint product most suitable for the patient based on images alone, as well as to carry out templating, thereby resolving the overall aspects of surgical planning.

In addition, by providing the design for the patient-specific surgical guide, the guide can be produced via 3D printing without separate design work and applied directly to the patient's surgery, helping the surgery proceed according to the surgical plan.

In this case, the time required to establish the surgical plan and to perform the surgery can be drastically shortened, the variation due to a surgeon's experience and skill can be reduced so that even less experienced surgeons can operate accurately, and the patient's pain from the surgery can also be reduced.

Meanwhile, the effects obtainable from the present invention are not limited to those mentioned above, and other effects not mentioned will be clearly understood by a person having ordinary skill in the art to which the present invention belongs from the description below.

FIG. 1 illustrates an example of a block diagram of a deep learning-based surgical plan generation system in relation to the present invention.

FIG. 2 illustrates an example of a flowchart explaining a deep learning-based surgical plan generation method in relation to the present invention.

FIG. 3 illustrates an example of the UI configuration and functions of a surgical planning system for reverse total shoulder arthroplasty in relation to the present invention.

FIG. 4 illustrates an example of an overview diagram of a cutting level determination deep learning model in relation to the present invention.

FIG. 5 illustrates an example of an overview diagram of a templating deep learning model in relation to the present invention.

FIG. 6 illustrates an example of a Boolean-based automatic PSI design algorithm in relation to the present invention.

Reverse total shoulder arthroplasty is an orthopedic surgical procedure used to treat certain shoulder conditions associated with severe arthritis, irreparable rotator cuff tears, or complex fractures.

Unlike conventional total shoulder arthroplasty, which replaces the acromion with an artificial ball and the joint space with an artificial socket, reverse total shoulder arthroplasty reverses the anatomical arrangement of the shoulder joint.

Reverse total shoulder arthroplasty aims to relieve pain, restore shoulder function, and improve range of motion in individuals with limited mobility or strength due to rotator cuff dysfunction or severe arthritis.

This procedure bypasses the need for rotator cuff function by utilizing the deltoid muscle as the primary mover.

The need for a preoperative surgical planning system for reverse total shoulder arthroplasty lies in its ability to improve surgical outcomes, enhance patient safety, and optimize the overall surgical process.

Applying deep learning to such a system offers several strengths.

Deep learning algorithms can analyze medical images and extract detailed information with a high level of accuracy.

This allows anatomical structures and the size and position of the implant to be measured precisely, improving the accuracy of the surgical plan.

In addition, by learning from large data sets and previous surgical cases, the system can provide a personalized treatment plan based on the patient's specific anatomy, pathology, and desired outcome.

This individualized approach leads to better surgical outcomes and higher patient satisfaction.

In this case, the system can quickly process large amounts of medical image data, enabling faster preoperative planning.

This saves the surgeon valuable time and streamlines the surgical workflow, helping to reduce patient waiting times and improve hospital efficiency.

A surgical planning system with an applied deep learning model can analyze complex patterns in medical images and provide the surgeon with useful insights and recommendations.

This enables informed decisions about implant selection, surgical technique, and potential complications, reducing the risk of error and improving patient safety.

Performing a reverse total shoulder arthroplasty can present a number of challenges and potential complications; without preoperative planning, it can be difficult for the surgeon to accurately determine the appropriate size and position of the prosthetic components.

In addition, inadequate implant sizing can cause instability, a limited range of motion, or joint mismatch, leading to suboptimal results and potential complications, and the surgeon may face difficulties with intraoperative landmark identification, determining the appropriate bone resection, and optimizing soft tissue balance.

Furthermore, these issues can prolong surgical time and increase the risk of complications, patient discomfort, and surgical fatigue.

The present invention proposes a surgical planning system that automatically establishes a surgical plan before surgery for reverse total shoulder arthroplasty, solving these problems.

The present invention proposes a comprehensive reverse total shoulder arthroplasty planning system that can automatically design and produce a patient-specific surgical guide (Patient Specific Instrument) according to the established surgical plan, enabling fast and accurate surgery.

The deep learning applied within the system of the present invention determines the cutting level of the acromion, recommends a specific manufacturer's product, and proceeds to templating.

Furthermore, the present invention proposes a system and method that designs a patient-specific surgical guide using a Boolean algorithm, so that a guide usable in surgery can be produced via 3D printing without separate design work.

The techniques proposed by the present invention can be summarized as follows.

1) Deep learning-based implant recommendation and resection range determination

2) 2D and 3D medical image processing technology

FIG. 1 illustrates an example of a block diagram of a deep learning-based surgical plan generation system in relation to the present invention.

Referring to FIG. 1, the deep learning-based surgical plan generation system 1 may include an acquisition unit 10 that acquires X-ray image information, a pre-processing unit 20 that performs pre-processing for analysis of a pre-designated area of the X-ray image information, and a control unit 30 that analyzes the pre-processed X-ray image using a convolutional neural network (CNN) and extracts and learns feature information from the analysis result.

Here, after the operations of the acquisition unit 10, the pre-processing unit 20, and the control unit 30 are repeated, the control unit 30 generates template data for the pre-designated area based on the repeatedly learned feature information.

In addition, when the acquisition unit 10 additionally acquires first X-ray image information of a user, the control unit 30 extracts, using the first X-ray image information and the template data, first feature information related to the user from the feature information, and generates a surgical plan related to the user based on the first feature information.

Meanwhile, the control unit 30 can acquire the user's CT image information and automatically generate 3D image information by three-dimensionally reconstructing the CT image information.

In addition, the control unit 30 or the 3D printing unit 2 can receive the first X-ray image information and the template data, and generate STL information by using the automatically generated 3D image information, the first X-ray image information, and the template data together.

When the generated STL data is input into the 3D printing unit 2, the 3D printing unit 2 can output a surgical guide to which the surgical plan has been applied in advance in simulation, based on the STL data.

Meanwhile, FIG. 2 illustrates an example of a flowchart explaining a deep learning-based surgical plan generation method in relation to the present invention.

Here, the surgery related to the user may include reverse total shoulder arthroplasty, and the pre-designated area may include a shoulder joint area.

Referring to FIG. 2, first, a step (S1) of acquiring X-ray image information is performed.

Thereafter, a step (S2) of performing pre-processing for analysis of a pre-designated area of the X-ray image information is performed.

Here, the pre-processing can be applied so that anatomical information about the shoulder joint area can be analyzed.
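The patent does not specify the pre-processing pipeline further; a minimal sketch of one plausible version — cropping the pre-designated shoulder region, histogram equalization, and normalization for CNN input — is shown below, assuming an 8-bit grayscale X-ray held in a NumPy array. The `roi` convention (top, bottom, left, right) is a choice made for this sketch, not taken from the invention.

```python
import numpy as np

def preprocess_xray(image: np.ndarray, roi: tuple) -> np.ndarray:
    """Crop the pre-designated shoulder region and normalize it for CNN input.

    `roi` is (top, bottom, left, right) in pixel coordinates — a hypothetical
    convention chosen for this sketch.
    """
    top, bottom, left, right = roi
    crop = image[top:bottom, left:right].astype(np.float64)

    # Histogram equalization: spread intensities so bony landmarks stand out.
    hist, _ = np.histogram(crop.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)  # scale CDF to [0, 1]
    equalized = cdf[crop.astype(np.uint8)]

    # Zero-mean, unit-variance normalization, as commonly fed to a CNN.
    return (equalized - equalized.mean()) / (equalized.std() + 1e-8)

# Example: a synthetic 256x256 "X-ray" with a brighter central region.
xray = np.zeros((256, 256), dtype=np.uint8)
xray[64:192, 64:192] = 180
out = preprocess_xray(xray, roi=(32, 224, 32, 224))
print(out.shape)  # (192, 192)
```

A real system would likely add resizing to the network's fixed input resolution and artifact removal, but the crop-equalize-normalize core shown here is a common baseline.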

In addition, a step (S3) is performed in which the pre-processed X-ray image is analyzed using a convolutional neural network (CNN), and feature information is extracted and learned from the analysis result.

Here, the feature information may include at least one of anatomical structure information related to the user, bone structure information, angle information of a pre-designated joint, muscle and/or tissue analysis information, and information related to the size and position of an implant structure to be applied to the shoulder joint.
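The patent does not fix a network architecture. Purely as an illustration of the convolution operation that a CNN layer stacks and learns, the NumPy sketch below applies a single fixed edge-detecting kernel to a toy image and locates the strongest response — the kind of low-level bone-contour feature a trained network would learn on real radiographs; a production system would use a trained multi-layer model in a deep learning framework.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D convolution — the basic operation of a CNN layer;
    here the kernel is fixed rather than trained."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge (Sobel) kernel: responds strongly at intensity
# transitions, such as a bone contour in a toy radiograph.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

toy = np.zeros((8, 8))
toy[:, 4:] = 1.0                 # step edge at column 4
feat = conv2d(toy, sobel_x)
print(feat.shape)                # (6, 6)
```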

Thereafter, the first to third steps are repeated, and a step (S4) of generating template data for the pre-designated area based on the repeatedly learned feature information is performed.

Meanwhile, when a CT image of a previously completed surgery is acquired together in the first step, in the third step the cutting level applied in the completed surgery may be measured based on the CT image, a partial region related to the completed surgery may be extracted from the X-ray image, and the learning may proceed after labeling the pre-processed X-ray image based on the cutting level and the extracted partial region.

In addition, based on a region of the X-ray image information selected by an external user, a first shape may be automatically determined from a plurality of pre-stored component shapes according to the size of the selected region, and the template data may be generated based on the first shape.
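The size-based matching of a selected region to one of the pre-stored component shapes can be sketched as a nearest-size lookup. The component names and dimensions below are hypothetical placeholders for illustration, not actual product data:

```python
# Hypothetical pre-stored component shapes: (name, characteristic width in mm).
COMPONENTS = [
    ("glenosphere_S", 32.0),
    ("glenosphere_M", 36.0),
    ("glenosphere_L", 40.0),
]

def select_component(region_width_mm: float) -> str:
    """Automatically determine the 'first shape' of the fourth step:
    the stored component whose size is closest to the selected region."""
    return min(COMPONENTS, key=lambda c: abs(c[1] - region_width_mm))[0]

print(select_component(37.1))  # glenosphere_M
```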

In addition, a step (S5) of acquiring first X-ray image information of the user is performed.

Thereafter, a step (S6) of extracting, using the first X-ray image information and the template data, first feature information related to the user from the feature information is performed.

In addition, a step (S7) of generating a surgical plan related to the user based on the first feature information is performed.

The surgical plan generated here may include at least one of: information determining the type, size, and position of the implant to be applied to the shoulder joint; cutting range and cutting angle information for applying the reverse total shoulder arthroplasty; prediction information on potential complications after the surgery; prediction information on the user's range of motion related to the shoulder joint area after the surgery; and suitability evaluation score information for the established surgical plan, determined based on pre-stored surgical data.
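The elements of the generated plan listed above can be grouped into a single record. The field names and example values below are illustrative only and do not appear in the invention:

```python
from dataclasses import dataclass, asdict

@dataclass
class SurgicalPlan:
    """Container for the S7 outputs; per the description, a plan
    'may include at least one' of these elements."""
    implant_type: str
    implant_size_mm: float
    implant_position: tuple       # (x, y) in image coordinates
    cutting_range_mm: float
    cutting_angle_deg: float
    complication_risk: float      # predicted probability, 0..1
    predicted_rom_deg: float      # predicted post-op range of motion
    suitability_score: float      # vs. pre-stored surgical data, 0..100

plan = SurgicalPlan("glenosphere_M", 36.0, (120, 88), 12.5, 30.0,
                    0.07, 145.0, 91.5)
print(asdict(plan)["suitability_score"])  # 91.5
```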

In addition, a step (S8) of acquiring the user's CT image information and a step (S9) of automatically generating 3D image information by three-dimensionally reconstructing the CT image information are performed.

Here, the PSI size related to the CT image information may be changed using the cutting level in the surgical plan, and the 3D image information may be automatically generated by performing a 2D Boolean operation on the image information with the changed PSI size so that it fits the user's skeleton.
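One slice of the 2D Boolean operation described above — subtracting the patient's bone contour from a resized default PSI mask so the guide seats flush on the bone — can be sketched with NumPy boolean arrays. The shapes here are synthetic stand-ins for a real CT slice:

```python
import numpy as np

def fit_psi_slice(psi_mask: np.ndarray, bone_mask: np.ndarray) -> np.ndarray:
    """Boolean difference PSI \\ bone: the guide keeps its own footprint
    but yields wherever the patient's skeleton occupies the same pixels."""
    return psi_mask & ~bone_mask

# Synthetic 2D slice: a 10x10 PSI block overlapping a circular bone region.
psi = np.zeros((10, 10), dtype=bool)
psi[2:8, 2:8] = True
yy, xx = np.mgrid[0:10, 0:10]
bone = (yy - 5) ** 2 + (xx - 5) ** 2 <= 4   # radius-2 bone cross-section

fitted = fit_psi_slice(psi, bone)
print(int(psi.sum()), int(fitted.sum()))  # 36 23
```

Repeating this subtraction slice by slice and stacking the results yields the patient-fitted 3D guide volume that the reconstruction step then converts to a surface model.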

Thereafter, the first X-ray image information and the template data are received, STL information is generated by using the automatically generated 3D image information, the first X-ray image information, and the template data together (S10), and the generated STL data is input into a 3D printer (S11).
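STL is a plain triangle-list format, so the serialization part of S10 is straightforward. A minimal ASCII STL writer, assuming the reconstruction has already produced triangles as 3-tuples of (x, y, z) vertices:

```python
def write_ascii_stl(triangles, name="psi_guide"):
    """Serialize triangles to ASCII STL text (normals left at 0 0 0,
    which most slicers recompute from the vertex winding)."""
    lines = [f"solid {name}"]
    for tri in triangles:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for x, y, z in tri:
            lines.append(f"      vertex {x:.6f} {y:.6f} {z:.6f}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# One right triangle in the z=0 plane, as a smoke test.
stl_text = write_ascii_stl([((0, 0, 0), (1, 0, 0), (0, 1, 0))])
print(stl_text.splitlines()[0])  # solid psi_guide
```

Production pipelines typically emit the binary STL variant for file size, but the triangle-soup content is the same.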

Furthermore, a step (S12) of outputting, from the 3D printer, a surgical guide to which the surgical plan has been applied in advance in simulation based on the STL data can be performed.

The deep learning-based preoperative surgical planning system for reverse total shoulder arthroplasty proposed by the present invention, described above, is an advanced technology solution that helps the surgeon plan and optimize the surgical procedure.

Reverse total shoulder arthroplasty is a complex orthopedic surgery performed to treat shoulder conditions such as severe arthritis or rotator cuff tears, improving function by reversing the normal anatomical arrangement of the shoulder joint.

The present invention utilizes X-ray images to capture detailed anatomical information about the patient's shoulder joint, and these images are used as input data for the deep learning algorithm.

Deep learning algorithms, especially convolutional neural networks (CNNs), excel at analyzing pre-processed medical images and extracting meaningful features from them; they have proven effective in a variety of medical imaging tasks and here extract the relevant features that are important for surgical planning.

These features may include bone structure, joint angles, implant position, and muscle/tissue analysis.

Based on the extracted features, the system generates a detailed preoperative surgical plan, which may include determining the optimal implant type, size, and position, the cutting range and angle, predicting potential complications, predicting the postoperative range of motion, and assessing the overall feasibility of the surgery.

The present invention proposes a surgical planning system that automatically establishes a surgical plan before surgery for reverse total shoulder arthroplasty, and a comprehensive reverse total shoulder arthroplasty planning system that can automatically design and produce a patient-specific surgical guide (Patient Specific Instrument) according to the established surgical plan, enabling fast and accurate surgery.

Deep learning within the system determines the cutting level of the acromion, and can recommend a specific manufacturer's product and proceed to templating.

Furthermore, by designing a patient-specific surgical guide using a Boolean algorithm, a guide usable in surgery can be produced via 3D printing without separate design work.

Meanwhile, FIG. 3 illustrates an example of the UI configuration and functions of a surgical planning system for reverse total shoulder arthroplasty in relation to the present invention.

FIG. 3(a) illustrates examples of the UI configuration displaying image information 110 according to the final surgical plan and the designed surgical guide 120.

In addition, FIG. 3(b) illustrates examples of steps S1 to S9 described above.

In relation to step S10, in which the first X-ray image information and the template data are received and STL information is generated by using the automatically generated 3D image information, the first X-ray image information, and the template data together, steps S10a and S10b are shown separately.

In addition, steps S11 and S12 are shown over time.

In addition, FIG. 4 illustrates an example of an overview diagram of a cutting level determination deep learning model in relation to the present invention.

Referring to FIG. 4(a) and (b), in relation to step S3, an example of the operation of the control unit 30 is illustrated in which a CT image of a previously completed surgery is reconstructed in three dimensions to measure the cutting level, and a portion of an X-ray image of the same patient is extracted, labeled, and then learned.

FIG. 4(a) illustrates an example of the results of the cutting level determination deep learning model.

도 5는 본 발명과 관련하여, 템플레이팅 딥러닝 모델 개요도의 일례를 도시한 것이다.FIG. 5 illustrates an example of a templated deep learning model overview diagram in relation to the present invention.

도 5의 (a) 및 (b)를 참조하면, S3 단계와 관련하여, 사용자가 선택한 영역에서 발췌한 영상을 입력으로 하여 견관절 영상별로 사이즈와 모델이 자동으로 결정되고, 준비된 컴포넌트 도형을 템플레이팅하는 제어부 (30) 동작의 일례를 도시하였다. Referring to FIGS. 5 (a) and (b), in relation to step S3, an example of the operation of the control unit (30) is illustrated, in which, with an image extracted from the user-selected region as input, the size and model are automatically determined for each shoulder joint image and the prepared component shape is templated.
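The templating step, mapping a user-selected region to an automatically chosen component size and model, can be caricatured with a simple width measurement and nearest-size lookup; the catalogue names, diameters, and pixel scale below are hypothetical, and the real system replaces the lookup with the learned model.

```python
import numpy as np

# Hypothetical component catalogue: (model name, baseplate diameter in mm).
CATALOGUE = [("G-small", 25.0), ("G-medium", 29.0), ("G-large", 33.0)]

def estimate_width_mm(mask, px_to_mm):
    # Width of the user-selected region from its binary mask, in mm.
    cols = np.where(mask.any(axis=0))[0]
    return (cols[-1] - cols[0] + 1) * px_to_mm

def select_component(width_mm):
    # Stand-in for the model's size head: nearest catalogue diameter.
    return min(CATALOGUE, key=lambda m: abs(m[1] - width_mm))

mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 30:60] = True                    # selected region, 30 px wide
width = estimate_width_mm(mask, px_to_mm=1.0)
model, diameter = select_component(width)
```

The chosen component shape would then be overlaid (templated) on the X-ray at the selected region, as FIG. 5 (a) depicts.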

도 5의 (a)는 템플레이팅 딥러닝 모델의 적용 결과 일례를 도시한 것이다. FIG. 5 (a) illustrates an example of the application result of the templating deep learning model.

도 6은 본 발명과 관련하여, Boolean 기반의 PSI 자동 설계 알고리즘의 일례를 도시한 것이다. FIG. 6 illustrates an example of a Boolean-based automatic PSI design algorithm in relation to the present invention.

도 6은 S8, S9, S10 단계와 관련된 Boolean 기반의 PSI 자동 설계 알고리즘의 적용을 도시한 것이다. FIG. 6 illustrates the application of the Boolean-based automatic PSI design algorithm related to steps S8, S9, and S10.

(a) 내지 (j)의 각 단계는 딥러닝으로부터 결정된 커팅 레벨을 이용하여 기존에 저장된 기본 PSI 크기를 변경하고, 환자 골격에 맞게 2D Boolean 연산을 수행하며, 3D로 재구성하는 과정의 일례를 나타낸 것이다. Steps (a) to (j) show an example of the process of changing the size of a previously stored base PSI using the cutting level determined by deep learning, performing 2D Boolean operations to fit the patient's skeleton, and reconstructing the result in 3D.
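The 2D Boolean workflow of steps (a) to (j) can be sketched with binary masks: scale the stored base PSI by a factor derived from the cutting level, subtract the bone cross-section, and stack the resulting slices back into a volume. The masks, scale factor, and slice count below are toy values, not the patented geometry.

```python
import numpy as np

def scale_mask(mask, factor):
    # Resize the stored base-PSI outline by a factor derived from the
    # cutting level (nearest-neighbour scaling keeps this dependency-free).
    h, w = mask.shape
    rows = (np.arange(int(h * factor)) / factor).astype(int)
    cols = (np.arange(int(w * factor)) / factor).astype(int)
    return mask[np.ix_(rows, cols)]

def boolean_fit(psi_mask, bone_mask):
    # 2D Boolean difference: PSI footprint minus the patient's bone
    # cross-section, so the contact surface conforms to the skeleton.
    return psi_mask & ~bone_mask

base_psi = np.ones((8, 8), dtype=bool)       # stored base PSI outline
bone = np.zeros((16, 16), dtype=bool)
bone[4:12, 4:12] = True                      # patient bone cross-section

psi = scale_mask(base_psi, 2.0)              # resized to 16x16
slice_fit = boolean_fit(psi, bone)           # one fitted 2D slice
volume = np.stack([slice_fit] * 4)           # rebuild 3D from slices
```

Repeating the Boolean difference slice by slice along the cutting axis, rather than duplicating one slice as here, would yield the patient-specific 3D guide body that step (j) reconstructs.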

전술한 본 발명의 시스템 및 방법을 적용하는 경우, 본 발명에서는 딥러닝을 이용하여 커팅 레벨과 같이 집도의가 결정하기 어려운 부분을 높은 정확도로 결정할 수 있고, 영상만으로 환자에게 가장 적합한 인공관절 제품을 추천하는 것은 물론 템플레이팅까지 진행하여 수술계획의 전반적인 부분을 해결해 줄 수 있다. When the system and method of the present invention described above are applied, deep learning can be used to determine, with high accuracy, aspects that are difficult for the surgeon to decide, such as the cutting level, and, using images alone, the system can recommend the artificial joint product best suited to the patient and even perform templating, thereby addressing the surgical plan as a whole.

뿐만 아니라, 환자 맞춤형 서지컬 가이드에 대한 디자인 제공을 통하여 별도의 디자인 작업 없이 3D 프린팅을 이용하여 제작할 수 있고, 곧바로 환자 수술에 적용하여 수술계획대로 수술을 진행할 수 있게 도움을 줄 수 있다. In addition, by providing a design for a patient-specific surgical guide, the guide can be fabricated by 3D printing without separate design work and applied directly to the patient's surgery, helping the operation proceed according to the surgical plan.

이 경우, 수술계획 수립 시간 및 수술 진행 시간이 획기적으로 단축될 수 있고, 집도의의 경력과 실력의 편차를 줄여 경험이 많지 않은 집도의도 정확하게 수술할 수 있으며, 환자 또한 수술로 인한 고통이 줄어들 수 있다. In this case, the time required to establish a surgical plan and to perform the surgery can be drastically shortened, and the variation in surgeons' experience and skill can be reduced so that even less experienced surgeons can operate accurately, while the patient's pain from surgery is also reduced.

한편, 본 발명에서 얻을 수 있는 효과는 이상에서 언급한 효과들로 제한되지 않으며, 언급하지 않은 또 다른 효과들은 아래의 기재로부터 본 발명이 속하는 기술분야에서 통상의 지식을 가진 자에게 명확하게 이해될 수 있을 것이다.Meanwhile, the effects obtainable from the present invention are not limited to the effects mentioned above, and other effects not mentioned can be clearly understood by a person having ordinary skill in the art to which the present invention belongs from the description below.

또한, 상기와 같이 설명된 시스템 및 그 제어방법은 상기 설명된 실시예들의 구성과 방법이 한정되게 적용될 수 있는 것이 아니라, 상기 실시예들은 다양한 변형이 이루어질 수 있도록 각 실시예들의 전부 또는 일부가 선택적으로 조합되어 구성될 수도 있다.In addition, the system and its control method described above are not limited to the configuration and method of the embodiments described above, and the embodiments may be configured by selectively combining all or part of each embodiment so that various modifications can be made.

Claims (10)

1. A deep learning-based surgical plan generation method, comprising:
a first step of acquiring X-ray image information;
a second step of performing pre-processing for analysis of a pre-designated region of the X-ray image information;
a third step of analyzing the pre-processed X-ray image using a convolutional neural network (CNN) and extracting and learning feature information from the analysis result;
a fourth step of repeating the first to third steps and generating template data for the pre-designated region based on the repeatedly learned feature information;
a fifth step of acquiring first X-ray image information of a user;
a sixth step of extracting, using the first X-ray image information and the template data, first feature information related to the user from among the feature information; and
a seventh step of generating a surgical plan related to the user based on the first feature information.

2. The method of claim 1, wherein the surgery related to the user includes reverse total shoulder arthroplasty, and the pre-designated region includes a shoulder joint region.

3. The method of claim 2, wherein the pre-processing in the second step is applied so that anatomical information about the shoulder joint region can be analyzed.

4. The method of claim 3, wherein the feature information includes at least one of anatomical structure information related to the user, bone structure information, angle information of a pre-designated joint, muscle and/or tissue analysis information, and information related to the size and position of an implant structure to be applied to the shoulder joint.

5. The method of claim 4, wherein the generated surgical plan includes at least one of: information determining the type, size, and position of the implant to be applied to the shoulder joint; cutting range and cutting angle information for applying the reverse total shoulder arthroplasty; prediction information on potential complications after the surgery; prediction information on the user's range of motion related to the shoulder joint region after the surgery; and suitability evaluation score information for the established surgical plan, determined based on pre-stored surgical data.

6. The method of claim 5, wherein in the first step, a CT image of a previously completed surgery is additionally acquired, and in the third step, a cutting level applied to the completed surgery is measured based on the CT image, a partial region related to the completed surgery is extracted from the X-ray image, and the learning is performed after labeling the pre-processed X-ray image based on the cutting level and the extracted partial region.

7. The method of claim 6, wherein in the fourth step, based on a region of the X-ray image information selected by an external user, a first shape is automatically determined from among a plurality of pre-stored component shapes according to the size of the selected region, and the template data is generated based on the first shape.

8. The method of claim 7, further comprising, after the seventh step:
an eighth step of acquiring CT image information of the user;
a ninth step of automatically generating three-dimensional image information by three-dimensionally reconstructing the CT image information;
a tenth step of receiving the first X-ray image information and the template data, and generating STL information by using the automatically generated three-dimensional image information, the first X-ray image information, and the template data together;
an eleventh step of inputting the generated STL data into a 3D printer; and
a twelfth step of outputting, from the 3D printer, a surgical guide to which the surgical plan has been applied in advance in simulation based on the STL data.

9. The method of claim 8, wherein in the ninth step, a PSI size related to the CT image information is changed using the cutting level in the surgical plan, and the three-dimensional image information is automatically generated by performing a two-dimensional Boolean operation on the image information with the changed PSI size so as to fit the user's skeleton.

10. A deep learning-based surgical plan generation apparatus, comprising:
an acquisition unit that acquires X-ray image information;
a pre-processing unit that performs pre-processing for analysis of a pre-designated region of the X-ray image information; and
a control unit that analyzes the pre-processed X-ray image using a convolutional neural network (CNN) and extracts and learns feature information from the analysis result,
wherein, after the operations of the acquisition unit, the pre-processing unit, and the control unit are repeated, the control unit generates template data for the pre-designated region based on the repeatedly learned feature information, and
when the acquisition unit additionally acquires first X-ray image information of a user, the control unit extracts, using the first X-ray image information and the template data, first feature information related to the user from among the feature information, and generates a surgical plan related to the user based on the first feature information.
PCT/KR2024/010244 2023-07-18 2024-07-17 Deep learning-based surgery plan system for reverse shoulder arthroplasty, and driving method thereof Pending WO2025018776A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020230093287A KR102799343B1 (en) 2023-07-18 2023-07-18 Deep Learning based Pre-operative surgical planning system for reverse total shoulder replacement and its operation method
KR10-2023-0093287 2023-07-18

Publications (1)

Publication Number Publication Date
WO2025018776A1 true WO2025018776A1 (en) 2025-01-23

Family

ID=94282334

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2024/010244 Pending WO2025018776A1 (en) 2023-07-18 2024-07-17 Deep learning-based surgery plan system for reverse shoulder arthroplasty, and driving method thereof

Country Status (2)

Country Link
KR (1) KR102799343B1 (en)
WO (1) WO2025018776A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210104325A1 (en) * 2018-06-19 2021-04-08 Tornier, Inc. Neural network for diagnosis of shoulder condition
US20210315642A1 (en) * 2019-02-05 2021-10-14 Smith & Nephew, Inc. Computer-assisted arthroplasty system
US20220249168A1 (en) * 2019-06-28 2022-08-11 Formus Labs Limited Orthopaedic pre-operative planning system
US20230027978A1 (en) * 2019-12-03 2023-01-26 Howmedica Osteonics Corp. Machine-learned models in support of surgical procedures

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102467242B1 (en) 2020-12-11 2022-11-16 가톨릭관동대학교산학협력단 Total hip arthroplasty simulation method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KULYK, PAUL; VLACHOPOULOS, LAZAROS; FÜRNSTAHL, PHILIPP; ZHENG, GUOYAN: "Fully Automatic Planning of Total Shoulder Arthroplasty Without Segmentation: A Deep Learning Based Approach", Lecture Notes in Computer Science, vol. 11404, chap. 3, Springer International Publishing, Cham, 9 January 2019, pages 22-34, XP047499437, DOI: 10.1007/978-3-030-11166-3_3 *

Also Published As

Publication number Publication date
KR20250012989A (en) 2025-01-31
KR102799343B1 (en) 2025-04-23

Similar Documents

Publication Publication Date Title
WO2019132168A1 (en) System for learning surgical image data
CN109069097B (en) Dental three-dimensional data processing device and method thereof
WO2020242019A1 (en) Medical image processing method and device using machine learning
WO2023229415A1 (en) Ar image provision method and ar image provision device
WO2021006472A1 (en) Multiple bone density displaying method for establishing implant procedure plan, and image processing device therefor
WO2020159276A1 (en) Surgical analysis apparatus, and system, method, and program for analyzing and recognizing surgical image
Mohamadipanah et al. Can deep learning algorithms help identify surgical workflow and techniques?
CN109875683B (en) Method for establishing osteotomy face prediction model in mandibular angle osteotomy
WO2024225789A1 (en) System and method for providing comprehensive medical service based on medical images
US11766234B2 (en) System and method for identifying and navigating anatomical objects using deep learning networks
CN110706825A (en) Orthopedic medical platform system and method based on three-dimensional modeling and 3D printing
CN109620406B (en) Display and registration method for total knee arthroplasty
WO2025018776A1 (en) Deep learning-based surgery plan system for reverse shoulder arthroplasty, and driving method thereof
DE102005056997A1 (en) Simulation system for surgical interventions in human and veterinary medicine
CN112259196A (en) Image quality evaluation system and method based on structured template
WO2022139029A1 (en) Artificial-intelligence-based artificial intervertebral disc modeling apparatus, and method therefor
CN115188232A (en) Medical teaching comprehensive training system and method based on MR-3D printing technology
Rohith et al. Exploring deep learning techniques for MRI brain tumor image segmentation: a survey
WO2021251777A1 (en) Method and system for full-body ct scan 3d modeling
TWM668118U (en) Measurement system for personalized bone and articular cartilage morphology
WO2024080612A1 (en) Device and method for obtaining reconstructed ct image and diagnosing fracture by using x-ray image
CN115270566B (en) Auxiliary assessment method and device for damaged mandible
CN117877707A (en) Remote control method and system for medical imaging equipment
WO2023234476A1 (en) System and method for predicting success or not of complete repair when repairing torn rotator cuff
WO2024172332A1 (en) Database-based real-time patient-specific mitral valve plasty simulation method and simulation device therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24843491

Country of ref document: EP

Kind code of ref document: A1