
CN114708171B - A three-dimensional image fusion method and device based on computer tomography - Google Patents


Info

Publication number
CN114708171B
CN114708171B (application CN202111590868.8A)
Authority
CN
China
Prior art keywords
frame
projection
image
dimensional
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111590868.8A
Other languages
Chinese (zh)
Other versions
CN114708171A (en)
Inventor
罗亮
张海平
范美仁
徐来明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cgn Begood Technology Co ltd
Original Assignee
Cgn Begood Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cgn Begood Technology Co ltd filed Critical Cgn Begood Technology Co ltd
Priority to CN202111590868.8A priority Critical patent/CN114708171B/en
Publication of CN114708171A publication Critical patent/CN114708171A/en
Application granted granted Critical
Publication of CN114708171B publication Critical patent/CN114708171B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract


The present invention discloses a three-dimensional image fusion method and device based on computer tomography, the method comprising: obtaining at least one two-dimensional CT slice image, and reconstructing a three-dimensional geometric structure according to at least one CT slice image; rotating the three-dimensional geometric structure at different angles, and projecting the three-dimensional geometric structure under the current viewing angle to form a projection image; identifying a certain target object in the projection images at different angles, and marking the certain target object based on a marking frame; judging whether the difference index between the marking frame in the projection image at the orthographic projection angle and the marking frame in the projection image at a certain angle is less than a preset threshold; if it is less than the preset threshold, the marking frame in the projection image at a certain angle is fused with the marking frame in the projection image at the orthographic projection angle. Intelligent recognition and automatic marking of three-dimensional contraband are realized, which greatly reduces the workload of security personnel and greatly improves security inspection efficiency.

Description

Three-dimensional image fusion method and device based on computed tomography
Technical Field
The invention belongs to the technical field of computed tomography, and particularly relates to a three-dimensional image fusion method and device based on computed tomography.
Background
X-ray imaging technology is widely used at customs, airports, subways and other places to scan large freight containers and pedestrians' luggage and to check whether contraband is present inside.
Because an X-ray image is a two-dimensional image formed by the overlapping projections of all layers, objects concealed in a case cannot be displayed clearly, and occlusion in particular brings great difficulty to the inspection work of security personnel.
Disclosure of Invention
The invention provides a three-dimensional image fusion method and device based on computed tomography, which are used for at least solving one of the technical problems.
In a first aspect, the invention provides a three-dimensional image fusion method based on computed tomography, comprising: acquiring at least one two-dimensional CT slice image, and reconstructing a three-dimensional geometric structure from the at least one CT slice image; rotating the three-dimensional geometric structure by different angles about the X axis, Y axis or Z axis, and projecting the three-dimensional geometric structure at the current viewing angle to form a projection image; identifying a certain target object in the projection images at different angles according to an image recognition method, and labeling the certain target object with a labeling frame; judging, based on a matching algorithm, whether the difference index between the labeling frame in the projection image at the orthographic projection angle and the labeling frame in the projection image at a certain angle is smaller than a preset threshold; and, if the difference index between the source labeling frame in the projection image at the orthographic projection angle and the target labeling frame in the projection image at the certain angle is smaller than the preset threshold, fusing the labeling frame in the projection image at the certain angle with the labeling frame in the projection image at the orthographic projection angle, thereby obtaining a three-dimensional labeling frame of the certain target object.
In a second aspect, the invention provides a three-dimensional image fusion device based on computed tomography, comprising a reconstruction module, a projection module, a recognition module, a judgment module and a fusion module. The reconstruction module is configured to acquire at least one two-dimensional CT slice image and reconstruct a three-dimensional geometric structure from the at least one CT slice image; the projection module is configured to rotate the three-dimensional geometric structure by different angles about the X axis, Y axis or Z axis and project it at the current viewing angle to form a projection image; the recognition module is configured to identify a certain target object in the projection images at different angles according to an image recognition method and label it with a labeling frame; the judgment module is configured to judge, based on a matching algorithm, whether the difference index between the labeling frame in the projection image at the orthographic projection angle and the labeling frame in the projection image at a certain angle is smaller than a preset threshold; and the fusion module is configured to fuse, if that difference index is smaller than the preset threshold, the labeling frame in the projection image at the certain angle with the labeling frame in the projection image at the orthographic projection angle, thereby obtaining a three-dimensional labeling frame of the certain target object.
In a third aspect, an electronic device is provided comprising at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the computed tomography-based three-dimensional image fusion method of any of the embodiments of the present invention.
In a fourth aspect, the present invention also provides a computer readable storage medium having stored thereon a computer program comprising program instructions which, when executed by a computer, cause the computer to perform the steps of the three-dimensional image fusion method based on computed tomography according to any of the embodiments of the present invention.
According to the three-dimensional image fusion method and device based on the computed tomography, the two-dimensional slice images acquired by the CT equipment are utilized, three-dimensional data are reconstructed through the fusion algorithm, intelligent identification and automatic labeling of three-dimensional contraband are realized, the workload of security inspection personnel is greatly reduced, and the security inspection efficiency is greatly improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a three-dimensional image fusion method based on computed tomography according to an embodiment of the present invention;
FIG. 2 is a perspective view of a three-dimensional geometry rotated about a Z-axis according to one embodiment of the present invention;
FIG. 3 is a perspective view of a three-dimensional geometry at all angles provided by an embodiment of the present invention;
FIG. 4 is a perspective view of a three-dimensional geometry rotated 0 degrees about the X-axis according to one embodiment of the present invention;
FIG. 5 is a perspective view of a three-dimensional geometry for performing a first cut in three dimensions according to an embodiment of the present invention;
FIG. 6 is a perspective view of a three-dimensional geometry rotated 30 degrees about the X-axis according to one embodiment of the present invention;
FIG. 7 is a perspective view of a three-dimensional geometry performing a second cut in three dimensions according to an embodiment of the present invention;
FIG. 8 is a block diagram of a three-dimensional image fusion apparatus based on computed tomography according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to FIG. 1, a flow chart of a three-dimensional image fusion method based on computed tomography according to the present application is shown.
As shown in fig. 1, in step S101, at least one CT slice image in two dimensions is acquired, and a three-dimensional geometry reconstruction is performed from the at least one CT slice image;
in step S102, the three-dimensional geometric structure is rotated by different angles based on the X-axis, the Y-axis or the Z-axis, and the three-dimensional geometric structure under the current viewing angle is projected, so that a projection image is formed;
In step S103, identifying a certain target object in the projection images with different angles according to an image identification method, and labeling the certain target object based on a labeling frame;
In step S104, determining whether a difference index between a labeling frame in the projection image of the front projection angle and a labeling frame in the projection image of a certain angle is smaller than a preset threshold based on a matching algorithm;
In step S105, if the difference index between the source labeling frame in the projection image at the orthographic projection angle and the target labeling frame in the projection image at a certain angle is smaller than the preset threshold, the labeling frame in the projection image at a certain angle and the labeling frame in the projection image at the orthographic projection angle are fused, so as to obtain the three-dimensional labeling frame of the certain target object.
According to the method, two-dimensional CT slice images are read and three-dimensional point data are generated; the three-dimensional point data are rotated about the X axis, Y axis or Z axis by different angles, a screenshot at the current viewing angle is captured, and a projection image file is generated; an AI model then identifies contraband in each projection image and automatically outputs the coordinates, category and confidence of its labeling frame; finally, the labeling frames in the projection images at different angles are matched to obtain the multi-angle labeling frames of each contraband item, which are fused into a three-dimensional labeling frame. This realizes intelligent recognition and automatic labeling of three-dimensional contraband, greatly reducing the workload of security personnel and substantially improving security inspection efficiency.
In a specific embodiment, the matching algorithm is a single rotation axis projection matching algorithm, specifically:
The dashed box in fig. 1 is the source frame and the black boxes are the target frames; the task is to find which of the three black frames best matches the dashed frame.
The source labeling frame has coordinates (313, 221, 502, 283), so its minimum abscissa is min = 313 and its maximum abscissa is max = 502.
The three black target labeling frames have coordinates:
Box1 = (338, 290, 577, 356), min1 = 338, max1 = 577
Box2 = (315, 190, 502, 253), min2 = 315, max2 = 502
Box3 = (354, 241, 604, 304), min3 = 354, max3 = 604
Because the rotation is performed up and down (about a horizontal axis), the abscissa of an object remains approximately unchanged across projections. Define a difference index V:
V = (minSrc - minDst)^2 + (maxSrc - maxDst)^2,
where V is the difference index between the labeling frame in the projection image at the orthographic projection angle and the labeling frame in the projection image at a certain angle; minSrc is the minimum abscissa of the source labeling frame; minDst is the minimum abscissa of the target labeling frame; maxSrc is the maximum abscissa of the source labeling frame; and maxDst is the maximum abscissa of the target labeling frame.
Substituting the coordinates gives:
V1 = (313 - 338)^2 + (502 - 577)^2 = 6250
V2 = (313 - 315)^2 + (502 - 502)^2 = 4
V3 = (313 - 354)^2 + (502 - 604)^2 = 12085
V2 is the smallest, so the frame (315, 190, 502, 253) in fig. 1 is the best match for the dashed frame (313, 221, 502, 283) in fig. 1.
If even the minimum V is greater than a threshold (500 in this example), no frame is considered to match the dashed frame in fig. 1.
Searching in turn through the projection views at the other angles for labeling frames that match the dashed frame in fig. 1 yields a labeling-frame list over all angles for a certain contraband item:
3 0 313 221 502 283
3 20 313 188 502 252
3 40 313 177 503 242
3 60 315 190 502 253
3 80 315 225 500 288
3 100 317 276 497 339
3 120 315 339 503 402
3 140 313 404 503 469
3 160 316 466 502 529
The first field is the rotation axis (1 represents the X axis; by the same convention, the value 3 in this list evidently denotes the Z axis), the second field is the rotation angle, and the last four fields are the four coordinates of the two-dimensional labeling frame.
It can be seen that the abscissas of all the matched frames are approximately equal.
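The single-rotation-axis matching step above can be sketched in Python. This is a minimal illustration, not the patent's implementation: frames are assumed to be (x_min, y_min, x_max, y_max) tuples, the function names are invented, and the coordinates are those quoted in the example.

```python
# Minimal sketch of the single-rotation-axis matching step. Frames are assumed
# to be (x_min, y_min, x_max, y_max) tuples; only the abscissas are compared,
# since rotation about a horizontal axis leaves x approximately unchanged.

def difference_index(src, dst):
    """V = (minSrc - minDst)^2 + (maxSrc - maxDst)^2 over the abscissas."""
    return (src[0] - dst[0]) ** 2 + (src[2] - dst[2]) ** 2

def best_match(src, candidates, threshold=500):
    """Return the candidate with the smallest V, or None if every V >= threshold."""
    scored = [(difference_index(src, c), c) for c in candidates]
    v, box = min(scored, key=lambda t: t[0])
    return box if v < threshold else None

source = (313, 221, 502, 283)
targets = [(338, 290, 577, 356), (315, 190, 502, 253), (354, 241, 604, 304)]
print(best_match(source, targets))  # -> (315, 190, 502, 253), since V2 = 4
```

Running this reproduces the numbers in the text: V1 = 6250, V2 = 4, V3 = 12085, so Box2 is selected.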
In another embodiment, the matching algorithm is a multi-axis projection matching algorithm, specifically:
Matching projection views across different rotation axes requires multiple matching steps, using the projection views at angles such as x90, y0 and z90 as intermediaries.
The labeling frames of the x90 view can be obtained from those of the z0 view by exchanging the abscissa and ordinate.
For example, if the source labeling frame is z60 (338, 290, 577, 356) and the target view is x30, first find the match box1 (338, 423, 576, 482) of the source frame in z0; then, after exchanging abscissa and ordinate, find the match box2 (423, 338, 481, 575) of box1 in x90; and finally find the match box3 (422, 242, 482, 401) of box2 in x30. That is, matching proceeds in the order z60 -> z0 -> x90 -> x30.
The frame (422, 242, 482, 401) in the final x30 projection is thus the frame that best matches the frame (338, 290, 577, 356) in the z60 projection.
A matching list of all angles of this contraband is obtained in turn as follows:
1 0 422 252 483 313
1 30 422 242 482 401
1 60 422 273 481 501
1 90 423 338 481 575
3 0 338 423 576 482
3 30 339 356 579 417
3 60 338 290 577 356
3 90 343 252 584 313
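The multi-axis matching chain can be sketched as follows. This is a hedged illustration under the assumption that each intermediate view holds a list of detected frames; here each list contains only the single frame quoted in the text, and all names are invented.

```python
# Sketch of the multi-rotation-axis matching chain z60 -> z0 -> x90 -> x30.
# Frames are (x_min, y_min, x_max, y_max) tuples quoted from the text.

def diff(a, b):
    # Same difference index as the single-axis case, over the abscissas.
    return (a[0] - b[0]) ** 2 + (a[2] - b[2]) ** 2

def swap_xy(box):
    # x90 frames are derived from z0 frames by exchanging abscissa and ordinate.
    x1, y1, x2, y2 = box
    return (y1, x1, y2, x2)

def best(box, candidates, threshold=500):
    # Pick the candidate with the smallest difference index, if under threshold.
    v, m = min(((diff(box, c), c) for c in candidates), key=lambda t: t[0])
    return m if v < threshold else None

# Frames quoted in the text for each intermediate view; a real run would hold
# every detected frame in that view, not a single-element list.
z0 = [(338, 423, 576, 482)]
x90 = [(423, 338, 481, 575)]
x30 = [(422, 242, 482, 401)]

src = (338, 290, 577, 356)           # source frame in the z60 projection
box1 = best(src, z0)                 # z60 -> z0
box2 = best(swap_xy(box1), x90)      # z0 -> x90 (coordinates exchanged)
box3 = best(box2, x30)               # x90 -> x30
print(box3)  # -> (422, 242, 482, 401)
```

Each hop reuses the single-axis difference index; the coordinate exchange is applied once, when crossing from the z-axis views to the x-axis views.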
fusion begins after matching. A two-dimensional label box matching list of certain contraband is taken as an example, and the detailed process of the fusion algorithm is introduced.
1 0 422 252 483 313
1 30 422 242 482 401
1 60 422 273 481 501
1 90 423 338 481 575
3 0 338 423 576 482
3 30 339 356 579 417
3 60 338 290 577 356
3 90 343 252 584 313
Labeling frame in x 0 degree projection diagram
First process the first entry, 1 0 422 252 483 313.
The frame labeled (422, 252, 483, 313) in the x 0-degree projection is the black frame in fig. 4.
Cutting is performed in three dimensions as shown in fig. 5.
The box (422 242 482 401) in the x 30 degree projection is the black box in fig. 6.
The second cut is made in three dimensions as shown in fig. 7.
After cuts have been made at all the matched angles, the coordinate ranges of all the cut points are calculated, yielding the three-dimensional coordinate frame (160, 356, 16, 228, 428, 76).
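The final range computation can be sketched as below. This is a hedged illustration: the patent does not give the cut-point data, so the points here are hypothetical, and the coordinate ordering of the returned frame is an assumption.

```python
# Hedged sketch of the final fusion step: once cuts have been made at every
# matched angle, the three-dimensional labeling frame is taken here as the
# axis-aligned coordinate range of the cut points. The point data below are
# illustrative, not taken from the patent.

def bounding_frame(points):
    """Return (x_min, x_max, y_min, y_max, z_min, z_max) over (x, y, z) points."""
    xs, ys, zs = zip(*points)
    return (min(xs), max(xs), min(ys), max(ys), min(zs), max(zs))

cut_points = [(160, 16, 428), (356, 228, 76), (200, 100, 300)]  # hypothetical
print(bounding_frame(cut_points))  # -> (160, 356, 16, 228, 76, 428)
```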
Referring to fig. 8, a block diagram of a three-dimensional image fusion apparatus based on computed tomography according to the present application is shown.
As shown in fig. 8, the three-dimensional image fusion apparatus 200 includes a reconstruction module 210, a projection module 220, an identification module 230, a judgment module 240, and a fusion module 250.
The reconstruction module 210 is configured to acquire at least one two-dimensional CT slice image and reconstruct a three-dimensional geometric structure from the at least one CT slice image; the projection module 220 is configured to rotate the three-dimensional geometric structure by different angles about the X axis, Y axis or Z axis and project it at the current viewing angle to form a projection image; the identification module 230 is configured to identify a certain target object in the projection images at different angles according to an image recognition method and label it with a labeling frame; the judgment module 240 is configured to judge, based on a matching algorithm, whether the difference index between the labeling frame in the projection image at the orthographic projection angle and the labeling frame in the projection image at a certain angle is smaller than a preset threshold; and the fusion module 250 is configured to fuse, if that difference index is smaller than the preset threshold, the labeling frame in the projection image at the certain angle with the labeling frame in the projection image at the orthographic projection angle, thereby obtaining a three-dimensional labeling frame of the certain target object.
It should be understood that the modules depicted in fig. 8 correspond to the various steps in the method described with reference to fig. 1. Thus, the operations and features described above for the method and the corresponding technical effects are equally applicable to the modules in fig. 8, and are not described here again.
In other embodiments, embodiments of the present invention further provide a computer-readable storage medium storing computer-executable instructions for performing the computer tomography-based three-dimensional image fusion method in any of the above-described method embodiments;
As one embodiment, the computer-readable storage medium of the present invention stores computer-executable instructions configured to:
Acquiring at least one two-dimensional CT slice image, and reconstructing a three-dimensional geometric structure according to the at least one CT slice image;
rotating the three-dimensional geometric structure by different angles based on an X axis, a Y axis or a Z axis, and projecting the three-dimensional geometric structure under the current view angle to form a projection image;
Identifying a certain target object in the projection images of different angles according to an image identification method, and marking the certain target object based on a marking frame;
Judging whether a difference index of a marking frame in the projection image of the front projection angle and a marking frame in the projection image of a certain angle is smaller than a preset threshold value or not based on a matching algorithm;
if the difference index between the source annotation frame in the projection image at the orthographic projection angle and the target annotation frame in the projection image at a certain angle is smaller than a preset threshold, fusing the annotation frame in the projection image at a certain angle with the annotation frame in the projection image at the orthographic projection angle, so that the three-dimensional annotation frame of the certain target object is obtained.
The computer-readable storage medium may include a storage program area that may store an operating system, an application program required for at least one function, and a storage data area that may store data created according to the use of the three-dimensional image fusion apparatus based on computer tomography, and the like. In addition, the computer-readable storage medium may include high-speed random access memory, and may also include memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the computer readable storage medium optionally includes a memory remotely located with respect to the processor, the remote memory being connectable to the computer tomography-based three-dimensional image fusion apparatus via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 9, the device includes a processor 310 and a memory 320. The electronic device may further comprise input means 330 and output means 340. The processor 310, memory 320, input device 330, and output device 340 may be connected by a bus or other means, for example in fig. 9. Memory 320 is the computer-readable storage medium described above. The processor 310 executes various functional applications of the server and data processing, i.e., implements the three-dimensional image fusion method based on computer tomography of the above-described method embodiments, by running nonvolatile software programs, instructions, and modules stored in the memory 320. The input device 330 may receive input numeric or character information and generate key signal inputs related to user settings and function controls of the computer tomography-based three-dimensional image fusion device. The output device 340 may include a display device such as a display screen.
The electronic equipment can execute the method provided by the embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method. Technical details not described in detail in this embodiment may be found in the methods provided in the embodiments of the present invention.
As an embodiment, the electronic device is applied to a three-dimensional image fusion device based on computer tomography and used for a client, and comprises at least one processor and a memory in communication connection with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can:
Acquiring at least one two-dimensional CT slice image, and reconstructing a three-dimensional geometric structure according to the at least one CT slice image;
rotating the three-dimensional geometric structure by different angles based on an X axis, a Y axis or a Z axis, and projecting the three-dimensional geometric structure under the current view angle to form a projection image;
Identifying a certain target object in the projection images of different angles according to an image identification method, and marking the certain target object based on a marking frame;
Judging whether a difference index of a marking frame in the projection image of the front projection angle and a marking frame in the projection image of a certain angle is smaller than a preset threshold value or not based on a matching algorithm;
if the difference index between the source annotation frame in the projection image at the orthographic projection angle and the target annotation frame in the projection image at a certain angle is smaller than a preset threshold, fusing the annotation frame in the projection image at a certain angle with the annotation frame in the projection image at the orthographic projection angle, so that the three-dimensional annotation frame of the certain target object is obtained.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on such understanding, the foregoing technical solutions may be embodied essentially or in part in the form of a software product, which may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the various embodiments or methods of some parts of the embodiments.
It should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention, and not for limiting the same, and although the present invention has been described in detail with reference to the above-mentioned embodiments, it should be understood by those skilled in the art that the technical solution described in the above-mentioned embodiments may be modified or some technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the spirit and scope of the technical solution of the embodiments of the present invention.

Claims (6)

1. A three-dimensional image fusion method based on computed tomography, comprising:
Acquiring at least one two-dimensional CT slice image, and reconstructing a three-dimensional geometric structure according to the at least one CT slice image;
rotating the three-dimensional geometric structure by different angles based on an X axis, a Y axis or a Z axis, and projecting the three-dimensional geometric structure under the current view angle to form a projection image;
Identifying a certain target object in the projection images of different angles according to an image identification method, and marking the certain target object based on a marking frame;
Judging whether a difference index of a marking frame in the projection image of the front projection angle and a marking frame in the projection image of a certain angle is smaller than a preset threshold value or not based on a matching algorithm;
If the difference index between the source annotation frame in the projection image at the orthographic projection angle and the target annotation frame in the projection image at a certain angle is smaller than a preset threshold, fusing the annotation frame in the projection image at a certain angle with the annotation frame in the projection image at the orthographic projection angle to obtain a three-dimensional annotation frame of the certain target, wherein the expression for calculating the difference index is as follows:
V = (minSrc - minDst)^2 + (maxSrc - maxDst)^2,
where V is the difference index between the labeling frame in the projection image at the orthographic projection angle and the labeling frame in the projection image at a certain angle; minSrc is the minimum abscissa of the source labeling frame; minDst is the minimum abscissa of the target labeling frame; maxSrc is the maximum abscissa of the source labeling frame; and maxDst is the maximum abscissa of the target labeling frame.
2. The three-dimensional image fusion method based on computed tomography according to claim 1, wherein the matching algorithm is a single rotation axis projection matching algorithm.
3. The three-dimensional image fusion method based on computed tomography according to claim 1, wherein the matching algorithm is a multi-axis projection matching algorithm.
4. A three-dimensional image fusion apparatus based on computed tomography, comprising:
a reconstruction module configured to acquire at least one two-dimensional CT slice image and to reconstruct a three-dimensional geometric structure from the at least one CT slice image;
a projection module configured to rotate the three-dimensional geometric structure by different angles about an X-axis, a Y-axis, or a Z-axis, and to project the three-dimensional geometric structure at the current viewing angle so as to form a projection image;
an identification module configured to identify a certain target object in the projection images at different angles using an image recognition method, and to label the certain target object with an annotation frame;
a judging module configured to judge, based on a matching algorithm, whether a difference index between the annotation frame in the projection image at the orthographic projection angle and the annotation frame in the projection image at a certain angle is smaller than a preset threshold;
a fusion module configured to fuse the annotation frame in the projection image at the certain angle with the annotation frame in the projection image at the orthographic projection angle if the difference index between the source annotation frame in the projection image at the orthographic projection angle and the target annotation frame in the projection image at the certain angle is smaller than the preset threshold, so as to obtain a three-dimensional annotation frame of the certain target object, wherein the difference index is calculated as:

D = |x_smin - x_tmin| + |x_smax - x_tmax|,

where D is the difference index between the annotation frame in the projection image at the orthographic projection angle and the annotation frame in the projection image at the certain angle, x_smin is the minimum abscissa of the source annotation frame, x_tmin is the minimum abscissa of the target annotation frame, x_smax is the maximum abscissa of the source annotation frame, and x_tmax is the maximum abscissa of the target annotation frame.
5. An electronic device comprising at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-3.
6. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method of any one of claims 1 to 3.
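The fusion step recited in the method and device claims (combining an annotation frame from the orthographic view with one from a rotated view into a three-dimensional annotation frame) can be illustrated with a minimal sketch. The assumption that the second view is rotated 90° about the vertical axis, so that its abscissa corresponds to depth, is illustrative and not stated in the claims, as are all names below.

```python
# Illustrative sketch: fuse two matched 2D annotation frames into a 3D frame.
# Assumes the side view is rotated 90 degrees about the vertical axis, so its
# abscissa measures depth (z) while both views share the vertical (y) extent.

def fuse_to_3d(front_box, side_box):
    """front_box: (x_min, y_min, x_max, y_max) from the orthographic view.
    side_box:  (z_min, y_min, z_max, y_max) from the 90-degree rotated view.
    Returns (x_min, y_min, z_min, x_max, y_max, z_max)."""
    fx0, fy0, fx1, fy1 = front_box
    sz0, sy0, sz1, sy1 = side_box
    # Keep the union of the vertical extents seen in the two views.
    y0, y1 = min(fy0, sy0), max(fy1, sy1)
    return (fx0, y0, sz0, fx1, y1, sz1)

print(fuse_to_3d((10, 20, 50, 60), (5, 22, 30, 58)))
# (10, 20, 5, 50, 60, 30)
```

The orthographic view supplies the width and height of the three-dimensional frame, while the rotated view contributes its depth extent.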
CN202111590868.8A 2021-12-23 2021-12-23 A three-dimensional image fusion method and device based on computer tomography Active CN114708171B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111590868.8A CN114708171B (en) 2021-12-23 2021-12-23 A three-dimensional image fusion method and device based on computer tomography


Publications (2)

Publication Number Publication Date
CN114708171A CN114708171A (en) 2022-07-05
CN114708171B true CN114708171B (en) 2025-04-01

Family

ID=82167799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111590868.8A Active CN114708171B (en) 2021-12-23 2021-12-23 A three-dimensional image fusion method and device based on computer tomography

Country Status (1)

Country Link
CN (1) CN114708171B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116453063B (en) * 2023-06-12 2023-09-05 中广核贝谷科技有限公司 Target detection and recognition method and system based on fusion of DR image and projection image
CN117495949B (en) * 2023-12-19 2025-03-07 北京百度网讯科技有限公司 Image labeling method, device, equipment and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
CN103901489A (en) * 2012-12-27 2014-07-02 清华大学 Method and device for inspecting object, and a display method

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
CN1336811A (en) * 1997-10-10 2002-02-20 Analogic Corp Computed tomography scanning target detection
US6088423A (en) * 1998-06-05 2000-07-11 Vivid Technologies, Inc. Multiview x-ray based system for detecting contraband such as in baggage
CN101071111B (en) * 2006-05-08 2011-05-11 清华大学 Multi-vision aviation container safety inspection system and method
CN101470082B (en) * 2007-12-27 2011-03-30 同方威视技术股份有限公司 Object detection device and detection method thereof
CN103903297B (en) * 2012-12-27 2016-12-28 同方威视技术股份有限公司 3D data processing and recognition method
US10119923B2 (en) * 2015-10-19 2018-11-06 L3 Security & Detection Systems, Inc. Systems and methods for image reconstruction at high computed tomography pitch
CN107192726B (en) * 2017-05-05 2019-11-12 北京航空航天大学 Fast and high-resolution three-dimensional cone beam computed tomography method and device for plate and shell objects
CN110148084B (en) * 2019-05-21 2023-09-19 智慧芽信息科技(苏州)有限公司 Method, device, equipment and storage medium for reconstructing 3D model from 2D image
CN110796620B (en) * 2019-10-29 2022-05-17 广州华端科技有限公司 Interlayer artifact suppression method and device for breast tomographic reconstruction image
CN111008676B (en) * 2019-12-27 2022-07-12 哈尔滨工业大学 A security inspection method and security inspection system
CN113643360B (en) * 2020-05-11 2024-12-27 同方威视技术股份有限公司 Target object positioning method, device, equipment, medium and program product
CN112884855A (en) * 2021-01-13 2021-06-01 中广核贝谷科技有限公司 Processing method and device for security check CT reconstructed image
CN113538372B (en) * 2021-07-14 2022-11-15 重庆大学 Three-dimensional target detection method and device, computer equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Inspection of Complex Objects Using Multiple X-ray Views"; Domingo Mery et al.; IEEE/ASME Transactions on Mechatronics; Feb. 2015; pp. 1-11 *


Similar Documents

Publication Publication Date Title
CN114708171B (en) A three-dimensional image fusion method and device based on computer tomography
CN111932673A (en) Object space data augmentation method and system based on three-dimensional reconstruction
CN112651881B (en) Image synthesizing method, apparatus, device, storage medium, and program product
CN111476776A (en) Chest lesion position determination method, system, readable storage medium and device
US20250200880A1 (en) Image processing method and apparatus, and device
CN117876609A (en) A method, system, device and storage medium for multi-feature three-dimensional face reconstruction
CN113379932A (en) Method and device for generating human body three-dimensional model
Liu et al. Relative pose estimation of uncooperative spacecraft using 2D–3D line correspondences
Niemirepo et al. Open3DGen: open-source software for reconstructing textured 3D models from RGB-D images
CN117173072A (en) Weak laser image enhancement method and device based on deep learning
Hosseini et al. Single-view 3D reconstruction of surface of revolution
Boyne et al. Found: Foot optimization with uncertain normals for surface deformation using synthetic data
CN113643360B (en) Target object positioning method, device, equipment, medium and program product
van Ruitenbeek et al. Multi-view damage inspection using single-view damage projection
CN112561914B (en) Image processing method, system, computing device and storage medium
Sayour et al. HAC-SLAM: Human Assisted Collaborative 3D-SLAM Through Augmented Reality
CN110427847B (en) Method and equipment for acquiring three-dimensional model
Preissler et al. Feature detection in unorganized pointclouds
US20220114773A1 (en) Generation system and generation method for perspective image
El-Dawy et al. MonoLite3D: Lightweight 3D Object Properties Estimation
CN116704156B (en) Model generation method, electronic equipment and model generation system
CN113989303A (en) A method and device for rapidly segmenting objects in three-dimensional CT images
Xu et al. Accelerating Outlier-robust Rotation Estimation by Stereographic Projection
Wei et al. OL-Aug: online LiDAR data augmentation for 3D detection
Heng et al. 3D reconstruction of a strongly reflective surface based on binocular line-structured light

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant