Detailed Description
Features and exemplary embodiments of various aspects of the present invention are described in detail below. To make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by way of examples.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
For a better understanding of the present invention, methods, apparatus and systems for material decomposition according to embodiments of the present invention will be described in detail below with reference to the accompanying drawings, and it should be noted that these embodiments are not intended to limit the scope of the present disclosure.
Fig. 1 is a flowchart illustrating a material decomposition method according to an embodiment of the present invention. As shown in Fig. 1, the material decomposition method 100 in the present embodiment includes the following steps:
Step S110: collecting projection data of a calibration phantom in two or more energy windows, and determining, according to the projection data and a material decomposition implementation of a specified domain, the multi-energy data of the calibration phantom corresponding to that material decomposition implementation.
Step S120: setting one or more ideal monoenergetic energies, and obtaining ideal monoenergetic data of the calibration phantom at the ideal monoenergetic energies by a preset method.
Step S130: training a corresponding neural network according to the multi-energy data, wherein the trained neural network reflects the mapping relationship between the multi-energy data and the ideal monoenergetic data.
Step S140: decomposing a substance to be decomposed based on projection data of the substance to be decomposed in the specified energy windows and the trained neural network.
According to the material decomposition method provided by the embodiment of the invention, a neural network is trained to perform material decomposition, thereby enabling material identification and accurate image reconstruction.
In the embodiment of the invention, an artificial neural network is a large-scale, multi-parameter optimization tool that, by relying on a large amount of training data, can learn hidden features of the data that are difficult to summarize explicitly, and can thereby complete the material decomposition task.
In step S110, the material decomposition implementation of the specified domain includes projection domain material decomposition and image domain material decomposition.
In multi-energy decomposition, the common decomposition modes are dual-effect decomposition (photoelectric effect and Compton scattering) and basis material decomposition. Spectral CT image reconstruction is based on these two decomposition modes: the spatial distribution of the decomposition coefficients is obtained by combining them with the spatial structure information reconstructed by conventional CT, and virtual monoenergetic CT images as well as electron density and effective atomic number distribution images are further obtained. Spectral CT image reconstruction is nonlinear, admits multiple solutions, and is high-dimensional, and is therefore difficult to solve directly; the existing solution methods include projection domain decomposition and image domain decomposition.
The projection domain decomposition method is a pre-processing method in which decomposition is completed in the projection domain: the decomposition coefficients are first solved from the projection data, and the decomposition coefficient images are then obtained using a conventional CT reconstruction method. The image domain decomposition method is a post-processing method in which decomposition is performed in the image domain: linear attenuation coefficient images under different energy spectra are first reconstructed using a conventional CT reconstruction method, and the decomposition coefficients are then obtained by directly solving a system of linear equations.
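As an illustrative aside (the notation below is assumed here, not taken from the claims), both routes can be described with the basis material model, where f_1 and f_2 are the energy-dependent attenuation functions of the two basis materials, a_1 and a_2 are their decomposition coefficients, and S_k is the normalized detected spectrum of energy window E_k:

```latex
% Basis material model: linear attenuation coefficient as a combination of two basis materials
\mu(\mathbf{r}, E) = a_1(\mathbf{r})\, f_1(E) + a_2(\mathbf{r})\, f_2(E)

% Polychromatic projection measured in energy window E_k along ray L:
p_k = -\ln \int S_k(E)\, \exp\!\left(-\int_L \mu(\mathbf{r}, E)\, \mathrm{d}l\right) \mathrm{d}E,
\qquad k = 1, \dots, K

% Projection domain route: solve A_i = \int_L a_i(\mathbf{r})\, \mathrm{d}l \ (i = 1, 2)
% from \{p_k\}, then reconstruct a_i(\mathbf{r}) with a conventional CT algorithm.
% Image domain route: reconstruct \mu_k(\mathbf{r}) per window first, then solve
% \mu_k(\mathbf{r}) = a_1(\mathbf{r})\, f_1(\bar{E}_k) + a_2(\mathbf{r})\, f_2(\bar{E}_k)
% pixel by pixel as a linear system.
```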
In the embodiment of the invention, the number of energy windows used to acquire projection data is not limited, and the material composition and concentration of the calibration phantom are not limited. Specifically, the calibration phantom comprises at least two base materials.
The following describes in detail the processes of material decomposition in the projection domain and material decomposition in the image domain according to an embodiment of the present invention by specific embodiments, respectively.
In some embodiments, the material decomposition implementation of the specified domain in step S110 is projection domain material decomposition. When material decomposition is performed on the calibration phantom in the projection domain, the multi-energy data of the calibration phantom are the projection data of the calibration phantom in the specified energy windows, and the ideal monoenergetic data of the calibration phantom are ideal monoenergetic projection data.
In this embodiment, the neural network trained using the projection data may be referred to as a projection domain neural network.
In some embodiments, when the ideal monoenergetic energies are set in step S120, their number is not limited by, and need not equal, the number of energy windows used to acquire the projection data, and the ideal monoenergetic values may or may not lie within the energy range of those energy windows.
As one example, the intermediate energy of a specified energy window may be obtained as the ideal monoenergetic.
Fig. 2 shows a first exemplary flowchart of fig. 1 for obtaining ideal monoenergetic data. As shown in fig. 2, in some embodiments, step S120 may further include:
Step S201: using the projection data, obtaining a reconstructed image of the calibration phantom in the specified energy window by a single-energy-spectrum CT reconstruction method.
In this step, the single-energy-spectrum CT reconstruction method is a conventional CT reconstruction method. In some embodiments, the conventional CT reconstruction method is based on a combination of the X-ray projection imaging model and the Radon transform.
As an example, let E_k denote the energy windows used to collect data, let p_k denote the projection data collected in energy window E_k, and let q_k denote the ideal monoenergetic projection data obtained by selecting a specified energy as the ideal monoenergetic energy, where k = 1, 2, ..., K, K being the number of energy windows and K being an integer greater than or equal to 2. If R denotes the Radon transform of X-ray imaging, the reconstructed image of the calibration phantom in the specified energy window can be expressed as P_k = R^(-1)(p_k), and the ideal monoenergetic reconstructed image as Q_k = R^(-1)(q_k).
In this example, a narrower energy window may be selected to obtain good energy resolution.
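A minimal sketch of the relationship P_k = R^(-1)(p_k), assuming a parallel-beam geometry and using the scikit-image Radon transform as a stand-in for R (the actual scanner geometry and reconstruction algorithm are not specified by this embodiment):

```python
import numpy as np
from skimage.transform import radon, iradon

# Assumed parallel-beam geometry with projection angles in degrees.
angles = np.linspace(0.0, 180.0, 360, endpoint=False)

def forward_project(image):
    """R: image -> sinogram (projection data such as p_k or q_k)."""
    return radon(image, theta=angles)

def reconstruct(sinogram):
    """R^(-1): sinogram -> image (such as P_k or Q_k), via filtered back-projection."""
    return iradon(sinogram, theta=angles, filter_name="ramp")

# Example usage (phantom_image_k is a hypothetical attenuation image for window E_k):
# p_k = forward_project(phantom_image_k)
# P_k = reconstruct(p_k)
```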
Step S202: acquiring the set ideal monoenergetic energy, and obtaining an ideal monoenergetic reconstructed image at the ideal monoenergetic energy through image registration.
In the embodiment of the invention, reconstructed images obtained under different conditions, such as different detector units and different projection angles, can be aligned or superimposed through image registration.
In some embodiments, the process of image registration may include: taking the collected projection data of the calibration phantom in two or more energy windows as the original projection data; obtaining a reconstructed image for each of the two or more energy windows from its projection data by a single-energy-spectrum CT reconstruction method; re-projecting the reconstructed image of each energy window under the acquisition conditions of the actual projections to obtain re-projection data for each energy window; analyzing the original projections and the corresponding re-projection data, calculating the axial displacement of the actual projections, and correcting the original projection data of each energy window according to this axial displacement; and obtaining a reconstructed image of each energy window from the corrected original projection data of that energy window by the single-energy-spectrum CT reconstruction method.
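A rough sketch of this reprojection-based correction, assuming the axial displacement can be estimated by one-dimensional cross-correlation between each original projection view and its re-projection (the actual registration criterion and geometry handling are not specified by the embodiment, and consistent sinogram shapes are assumed):

```python
import numpy as np
from skimage.transform import radon, iradon

def estimate_axial_shift(original_view, reprojected_view):
    """Estimate the detector-axis displacement (in detector units) between one
    original projection view and the corresponding re-projected view."""
    corr = np.correlate(original_view - original_view.mean(),
                        reprojected_view - reprojected_view.mean(), mode="full")
    return int(np.argmax(corr)) - (len(reprojected_view) - 1)

def register_window(sinogram, angles):
    """One correction pass for a single energy window: reconstruct, re-project,
    shift-correct the original projections, then reconstruct again."""
    recon = iradon(sinogram, theta=angles)        # reconstructed image from original projections
    reproj = radon(recon, theta=angles)           # re-projection under the same (assumed) geometry
    corrected = np.empty_like(sinogram)
    for j in range(sinogram.shape[1]):            # loop over projection angles
        shift = estimate_axial_shift(sinogram[:, j], reproj[:, j])
        corrected[:, j] = np.roll(sinogram[:, j], -shift)   # correct the original projection
    return iradon(corrected, theta=angles)        # reconstruction from corrected projections
```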
Step S203: projecting the ideal monoenergetic reconstructed image to obtain ideal monoenergetic projection data, and taking the ideal monoenergetic projection data as the ideal monoenergetic data.
As an example, the projection data p_k of the calibration phantom acquired in energy window E_k are used to obtain a multi-energy reconstructed image P_k by a conventional CT reconstruction method. The intermediate energy of energy window E_k is taken as the ideal monoenergetic energy, an ideal monoenergetic reconstructed image Q_k is obtained through image registration, and the obtained ideal monoenergetic reconstructed image Q_k is projected to obtain the ideal monoenergetic projection data q_k.
In this embodiment, the neural network trained using the projection data p_k and the ideal monoenergetic projection data q_k may be referred to as a projection domain neural network.
Fig. 3 is a first example flowchart of training a corresponding neural network from the multi-energy data in Fig. 1. As shown in Fig. 3, in some embodiments, the step of training the corresponding neural network according to the multi-energy data in step S130 may further include:
Step S301: acquiring an h × g neighborhood of a projection point of the calibration phantom in the projection data of the specified energy window, and using the projection values of the projection points in the h × g neighborhood as input data of a projection domain neural network.
In this step, h is the number of detector units to which the specified projection point is adjacent, and g is the number of projection angles to which the specified projection point is adjacent.
Step S302: taking the ideal projection value corresponding to the central projection point of the h × g neighborhood in the ideal monoenergetic projection data as the target value of the projection domain neural network.
Step S303: setting the number of hidden layers, the hidden layer activation function, the output layer activation function, and the objective function of the projection domain neural network, and training the projection domain neural network using a preset neural network algorithm and learning rate.
In this step, the trained projection domain neural network may include an input layer, an output layer, and hidden layers, wherein the number of hidden layers may be greater than or equal to 1.
In some embodiments, the number of hidden layers of the projection domain neural network can be set empirically, and the hidden layer activation function, the output layer activation function, and the objective function can be selected according to different requirements. The number of hidden layers of the projection domain neural network is related to the type and number of calibration phantoms, and the activation functions of the hidden layers and the output layer are related to the input data.
In some embodiments, a fully-connected connection mode can be adopted among the input layer, the output layer and the hidden layer of the projection domain neural network.
When a full-connection mode is adopted, h × g neighborhoods of projection points of a calibration phantom in projection data of a specified energy window can be obtained, and projection values of the projection points in the h × g neighborhoods are used as input data of a projection domain neural network; the product of the number of projection points in the h multiplied by g neighborhood and the number of energy windows can be used as the number of nodes of the input layer, and the set number of ideal monoenergetics can be used as the number of nodes of the output layer.
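A minimal sketch of such a fully connected projection domain network and its neighborhood input, assuming PyTorch; the values of K, h, g, the number of ideal monoenergetic energies, and the hidden layer widths below are illustrative placeholders, not values prescribed by the embodiment (the convolutional variant is described next):

```python
import torch
import torch.nn as nn

K, h, g = 2, 3, 3      # assumed: 2 energy windows, 3 x 3 projection neighborhood
n_mono = 2             # assumed number of ideal monoenergetic energies

# Input nodes = (projection points in the h x g neighborhood) x (number of energy windows);
# output nodes = number of set ideal monoenergetic energies, as described above.
projection_net = nn.Sequential(
    nn.Linear(K * h * g, 64),   # hidden layer widths are illustrative only
    nn.Sigmoid(),
    nn.Linear(64, 64),
    nn.Sigmoid(),
    nn.Linear(64, n_mono),      # linear output layer
)

def extract_neighborhood(sinograms, m, n):
    """Stack the h x g neighborhood of projection point (detector m, angle n)
    from all K sinograms into a single input vector.
    sinograms: tensor of shape (K, n_detectors, n_angles)."""
    dm, dn = h // 2, g // 2
    patch = sinograms[:, m - dm:m + dm + 1, n - dn:n + dn + 1]
    return patch.reshape(-1)    # length K * h * g
```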
In other embodiments, a connection mode combining a convolutional neural network (CNN) with full connection may be used between the input layer, the output layer, and the hidden layers of the projection domain neural network.
In an embodiment of the invention, a convolutional neural network is a deep neural network with a convolutional structure. The hidden layers of a convolutional neural network may include convolutional layers and pooling layers. The convolutional layers enhance the features of the input image signal through convolution operations and reduce image noise, and the pooling layers reduce the dimensionality of the convolution results of the convolutional layers and help prevent overfitting.
When the projection domain neural network is trained in a connection mode combining convolutional and fully connected layers, a projection image of the calibration phantom in a specified energy window can be acquired as input data of the projection domain neural network, and parameters such as the convolution kernel size and the number of convolution kernels of the convolutional layers in the projection domain neural network are determined.
As an example, consider a projection point B(m, n) of the calibration phantom on the projection image of the specified energy window, where m denotes the m-th detector unit and n denotes the n-th projection angle. The ideal value of the projection value p_k(m, n) at projection point B(m, n) is the projection value q_k(m, n) of the corresponding point on the ideal monoenergetic projection image.
Considering that the projection value at point B(m, n) is affected by the projection values of surrounding projection points, in some embodiments, in order to improve the robustness of the projection domain neural network, the projection values of the h detector units and g projection angles adjacent to B(m, n), i.e., the projection values of the h × g neighborhood centered on B(m, n), are taken as input data of the projection domain neural network, and the ideal projection value of the corresponding point B(m, n) in the ideal monoenergetic projection data is taken as the target value of the projection domain neural network, so as to train the projection domain neural network.
In some embodiments, as an example, the K × h × g projection values in the h × g neighborhoods of the calibration phantom acquired in the K energy windows may be used together as input data of the projection domain neural network, so that the K energy windows are used simultaneously in training the projection domain neural network, where K is an integer greater than or equal to 2.
In some embodiments, the trained projection domain neural network may also be tested and/or validated. Specifically, a proportion of input data may be selected from the input data of the projection domain neural network as training data of the projection domain neural network.
In some embodiments, a portion of the input data of the projection domain neural network may be used as training data of the projection domain neural network, and another portion of the input data may be used as test and/or validation data of the projection domain neural network.
The trained projection domain neural network can be obtained through the above embodiments. In practical use, projection data of the substance to be decomposed in the specified energy windows can be collected on the same spectral CT system, input data of the projection domain neural network are generated from the projection data in the same data format as that used when training the projection domain neural network, the trained projection domain neural network processes the input data, and ideal monoenergetic projection data are output.
Specifically, the step of decomposing the substance to be decomposed based on the projection data of the substance to be decomposed in the specified energy window and the trained neural network in step S140 may further include:
Step S1401: acquiring an h × g neighborhood of a projection point of the substance to be decomposed in the projection data of the specified energy window, using the projection values of the projection points of the substance to be decomposed in the h × g neighborhood as input data of the projection domain neural network, and obtaining ideal monoenergetic projection values of the substance to be decomposed using the trained projection domain neural network.
In this step, h is the number of detector units to which the specified projection point is adjacent, and g is the number of projection angles to which the specified projection point is adjacent.
Step S1402: performing single-energy CT reconstruction on the ideal monoenergetic projection values of the substance to be decomposed to obtain an ideal monoenergetic reconstructed image of the substance to be decomposed.
In the embodiment, ideal monoenergetic projection data output by a projection domain neural network is used, and an ideal monoenergetic reconstructed image can be obtained through traditional monoenergetic CT reconstruction.
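Putting steps S1401 and S1402 together, the inference path might be sketched as follows; the network is assumed to be the fully connected sketch shown earlier, and filtered back-projection via scikit-image's iradon stands in for the single-energy CT reconstruction (all of these are assumptions for illustration):

```python
import numpy as np
import torch
from skimage.transform import iradon

def decompose_projection_domain(sinograms, net, angles, h=3, g=3):
    """sinograms: array of shape (K, n_det, n_ang) with the measured projections of the
    substance to be decomposed. Returns one ideal monoenergetic reconstructed image per
    output node of `net` (edge points without a full h x g neighborhood are skipped)."""
    K, n_det, n_ang = sinograms.shape
    n_mono = net[-1].out_features
    ideal_sino = np.zeros((n_mono, n_det, n_ang))
    x = torch.as_tensor(sinograms, dtype=torch.float32)
    dm, dn = h // 2, g // 2
    with torch.no_grad():
        for m in range(dm, n_det - dm):
            for n in range(dn, n_ang - dn):
                patch = x[:, m - dm:m + dm + 1, n - dn:n + dn + 1].reshape(1, -1)
                ideal_sino[:, m, n] = net(patch)[0].numpy()   # step S1401: ideal projection values
    # Step S1402: single-energy CT reconstruction of each ideal monoenergetic sinogram
    return [iradon(ideal_sino[i], theta=angles) for i in range(n_mono)]
```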
The following describes the process of training the image domain neural network in the image domain to perform material decomposition in detail by using a specific embodiment.
In some embodiments, the material decomposition implementation of the specified domain in step S110 is image domain material decomposition. When material decomposition is performed on the calibration phantom in the image domain, the multi-energy data of the calibration phantom are the reconstructed images of the calibration phantom in the specified energy windows, the reconstructed images are obtained from the projection data of the calibration phantom in the selected energy windows by a single-energy CT reconstruction method, and the ideal monoenergetic data of the calibration phantom are an ideal monoenergetic reconstructed image.
In this embodiment, the neural network trained using the reconstructed images may be referred to as an image domain neural network.
FIG. 4 shows a second exemplary flowchart of the steps in FIG. 1 for obtaining ideal monoenergetic data. As shown in fig. 4, in some embodiments, step S120 may further include:
Step S401: acquiring the set ideal monoenergetic energy.
As one example, the intermediate energy of a specified energy window may be obtained as the ideal monoenergetic.
Step S402: obtaining an ideal monoenergetic reconstructed image at the ideal monoenergetic energy by a table lookup method according to the energy value of the ideal monoenergetic energy, and taking the ideal monoenergetic reconstructed image as the ideal monoenergetic data.
As an example, the projection data p_k of the calibration phantom in energy window E_k are used to obtain a multi-energy reconstructed image P_k by a conventional CT reconstruction method. The intermediate energy of energy window E_k is taken as the energy value of the ideal monoenergetic energy, and the relevant attenuation coefficients provided by the National Institute of Standards and Technology (NIST) are queried by table lookup to obtain the ideal monoenergetic reconstructed image Q_k of the calibration phantom corresponding to energy window E_k.
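A sketch of such a table lookup, assuming a small in-memory excerpt of NIST-style attenuation data for one phantom material; the energies and coefficient values below are placeholders for illustration, not actual NIST entries:

```python
import numpy as np

# Hypothetical excerpt of an attenuation table for one calibration material:
# energies in keV, linear attenuation coefficients in 1/cm (placeholder values).
table_energy_kev = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 80.0])
table_mu = np.array([0.810, 0.376, 0.268, 0.227, 0.206, 0.184])

def ideal_mono_mu(energy_kev):
    """Look up (with linear interpolation) the attenuation coefficient at the
    ideal monoenergetic energy."""
    return float(np.interp(energy_kev, table_energy_kev, table_mu))

def ideal_mono_image(material_mask, energy_kev):
    """Build the ideal monoenergetic reconstructed image Q_k for a phantom whose
    geometry is given by a binary mask (1 inside the material, 0 in air)."""
    return material_mask * ideal_mono_mu(energy_kev)
```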
In this embodiment, the neural network trained using the reconstructed image P_k and the ideal monoenergetic reconstructed image Q_k may be referred to as an image domain neural network.
FIG. 5 is a second example flowchart of training a neural network from the multi-energy data in Fig. 1. As shown in Fig. 5, in some embodiments, the step of training the neural network according to the multi-energy data in step S130 may further include:
Step S501: acquiring an n × n neighborhood of a pixel point of the calibration phantom in the reconstructed image of the specified energy window, and taking the reconstructed values of the pixel points in the n × n neighborhood as input data of an image domain neural network.
In this step, n is the neighborhood size of the specified pixel point.
Step S502: taking the ideal reconstructed value corresponding to the central pixel point of the n × n neighborhood in the ideal monoenergetic reconstructed image as the target value of the image domain neural network.
Step S503: setting the number of hidden layers, the hidden layer activation function, the output layer activation function, and the objective function of the image domain neural network, and training the image domain neural network using a preset neural network algorithm and learning rate.
In this step, the trained image domain neural network includes an input layer, an output layer, and hidden layers, wherein the number of hidden layers may be greater than or equal to 1.
In some embodiments, the number of hidden layers of the image domain neural network can be set empirically, and the hidden layer activation function, the output layer activation function, and the objective function can be selected according to different requirements. The number of hidden layers of the image domain neural network is related to the type and number of calibration phantoms, and the activation functions of the hidden layers and the output layer are related to the input data.
In some embodiments, the image domain neural network may employ fully-connected connections between the input layer, the output layer, and the hidden layer.
When a fully connected mode is adopted, an n × n neighborhood of a pixel point of the calibration phantom in the reconstructed image of the specified energy window can be acquired, the reconstructed values of the pixel points in the n × n neighborhood are used as input data of the image domain neural network, the product of the number of pixel points in the n × n neighborhood and the number of energy windows can be used as the number of nodes of the input layer, and the set number of ideal monoenergetic energies is used as the number of nodes of the output layer.
In other embodiments, a connection mode combining a convolutional neural network with full connection may be used between the input layer, the output layer, and the hidden layers of the image domain neural network.
When the image domain neural network is trained in a connection mode combining convolutional and fully connected layers, a reconstructed image of the calibration phantom in a specified energy window can be acquired as input data of the image domain neural network, and parameters such as the convolution kernel size and the number of convolution kernels of the convolutional layers in the image domain neural network are determined.
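One way such a hybrid of convolutional and fully connected layers could be laid out is sketched below, assuming PyTorch; the kernel sizes, channel counts, pooling choice, and the 32 × 32 input patch size are illustrative assumptions, not values fixed by the embodiment:

```python
import torch.nn as nn

K = 2          # assumed number of energy windows (input channels)
n_mono = 2     # assumed number of ideal monoenergetic energies (output values)

# Input: a K-channel reconstructed image patch of the calibration phantom (assumed 32 x 32).
image_domain_cnn = nn.Sequential(
    nn.Conv2d(K, 16, kernel_size=3, padding=1),   # convolutional layers enhance features / reduce noise
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                              # pooling reduces dimensionality and limits overfitting
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 64),                  # fully connected part (16 channels x 16 x 16 after pooling)
    nn.ReLU(),
    nn.Linear(64, n_mono),                        # linear output layer
)
```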
As an example, consider a pixel point A(i, j) of the calibration phantom on the multi-energy reconstructed image of the specified energy window, where i and j denote the position of the pixel point. The reconstructed value of pixel point A(i, j) is the attenuation coefficient P_k(i, j), and its ideal value is the reconstructed value Q_k(i, j) of the corresponding point on the ideal monoenergetic reconstructed image.
Considering that the reconstructed value of pixel point A(i, j) is affected by surrounding pixels, in some embodiments, in order to improve the robustness of the image domain neural network, an image region of neighborhood size n × n centered on pixel point A(i, j) is taken as input data of the image domain neural network. Correspondingly, the ideal reconstructed value of pixel point A(i, j) in the ideal monoenergetic reconstructed image, i.e., the true attenuation coefficient, is taken as the target value of the image domain neural network, and the image domain neural network is trained.
In some embodiments, as an example, the K × n × n reconstructed values in the n × n neighborhoods of the calibration phantom obtained under the K energy windows may be used together as input data of the image domain neural network, so that the K energy windows are used simultaneously in training the image domain neural network.
In other embodiments, the trained image domain neural network may be tested and/or validated. Specifically, a proportion of input data from the input data of the image domain neural network may be selected as training data of the image domain neural network.
In some embodiments, a portion of the input data of the image domain neural network may be used as training data of the image domain neural network, and another portion of the input data may be used as test and/or validation data of the image domain neural network.
The trained image domain neural network can be obtained through the above embodiments. In practical use, projection data of the substance to be decomposed in the specified energy windows can be collected on the same spectral CT system, input data of the image domain neural network are generated from the projection data in the same data format as that used when training the image domain neural network, the trained image domain neural network processes the input data, and an ideal monoenergetic reconstructed image is output.
In some embodiments, the step of decomposing the substance to be decomposed based on the projection data of the substance to be decomposed in the specified energy window and the trained neural network in step S140 may further include:
step S1411, obtaining a reconstructed image of the substance to be decomposed in the designated energy window by a single energy CT reconstruction method according to the projection data of the substance to be decomposed in the designated energy window.
Step S1412, acquiring an nxn neighborhood of a pixel point of the substance to be decomposed in the reconstructed image of the designated energy window, taking a reconstructed value of the pixel point of the substance to be decomposed in the nxn neighborhood as input data of the image domain neural network, and obtaining an ideal mono-energy reconstructed value of the substance to be decomposed by using the trained image domain neural network.
In this step, n is the neighborhood size of the specified pixel point.
Step S1413: obtaining an ideal monoenergetic reconstructed image of the substance to be decomposed according to the ideal monoenergetic reconstructed values of the substance to be decomposed.
In this embodiment, the ideal monoenergetic reconstructed values, i.e., the ideal monoenergetic attenuation coefficients, output by the image domain neural network are mapped back to their corresponding pixel positions, so that an ideal monoenergetic reconstructed image can be obtained.
In the above embodiments, when training the projection domain neural network or the image domain neural network, the preset neural network algorithm may be, for example, a gradient descent method or the Levenberg-Marquardt algorithm; the hidden layer activation function may be a Sigmoid function or a rectified linear unit (ReLU), the output layer activation function may be a linear function, and the objective function may be, for example, the minimum mean square error.
In some embodiments, the preset neural network algorithm may also be an improved algorithm of a gradient descent method, and the objective function, i.e., the cost function, may be set according to different training requirements.
Moreover, in some embodiments, setting an appropriate learning rate can prevent the neural network from falling into local minima during training.
In other embodiments, the neural network algorithm may also be a Back Propagation (BP) algorithm, in which additional momentum may be combined with an adaptive learning rate to reduce oscillation during network training and accelerate convergence of the trained neural network.
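A compact training-loop sketch consistent with the options above, assuming PyTorch; SGD with momentum stands in for back propagation with additional momentum, and a plateau scheduler for the adaptive learning rate (both are implementation assumptions, not requirements of the method):

```python
import torch
import torch.nn as nn

def train(net, inputs, targets, epochs=200, lr=0.01):
    """inputs: (N, K*h*g) or (N, K*n*n) float tensor of neighborhoods;
    targets: (N, n_mono) tensor of ideal monoenergetic values."""
    criterion = nn.MSELoss()                                               # minimum mean square error
    optimizer = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9)     # BP with additional momentum
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=10)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(net(inputs), targets)
        loss.backward()                                                    # error back propagation
        optimizer.step()
        scheduler.step(loss.item())                                        # adaptive learning rate
    return net
```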
In some embodiments, different activation functions may be selected depending on the situation. In the embodiments of the invention, the Sigmoid function, also called an S-shaped growth curve, provides the nonlinear transformation of an activation function in an artificial neural network and thereby increases the nonlinear expressive capability of the network; the rectified linear unit (ReLU) makes gradient descent more efficient when used as an activation function and simplifies the computation, thereby reducing the overall computational cost of the neural network.
In some embodiments, the hidden layers may also include a batch normalization layer, referred to as a BN layer in some embodiments. The BN layer normalizes the neuron activations through the BN algorithm and helps optimize the training of the artificial neural network.
In some embodiments, the BN algorithm can make neural network training more stable, accelerate convergence, suppress over-fitting of the trained neural network, and also has a regularization effect.
In some embodiments, the hidden layer may further include a dropout layer, and network optimization through the dropout layer may reduce overfitting of the neural network, and increase success rate and accuracy of neural network training.
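For instance, a hidden block combining a BN layer and a dropout layer could be sketched as follows (the layer widths and dropout rate are illustrative assumptions):

```python
import torch.nn as nn

def hidden_block(in_features, out_features, p_drop=0.2):
    """A fully connected hidden block with a BN layer and a dropout layer."""
    return nn.Sequential(
        nn.Linear(in_features, out_features),
        nn.BatchNorm1d(out_features),   # BN layer: normalizes activations, stabilizes and speeds up training
        nn.ReLU(),
        nn.Dropout(p_drop),             # dropout layer: reduces overfitting
    )
```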
When training the projection domain neural network or the image domain neural network, whether air values are also used in the training can be selected according to requirements.
Specifically, the projection data may further comprise acquired projection data of air in the two or more energy windows; the multi-energy data then also include air multi-energy data determined from the air projection data in the specified energy windows and the material decomposition implementation of the specified domain.
In an embodiment of the present invention, the method for obtaining the input data of the air values may be the same as the method for obtaining the input data of the calibration phantom. Also, in some embodiments, the target value of the air values is set to 0. Adding air values to the training can improve the precision of the neural network training and avoid unnecessary errors and noise in the training process.
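A sketch of how air samples with a target value of 0 could be appended to the training set (the array names and shapes are assumptions for illustration):

```python
import numpy as np

def add_air_samples(phantom_inputs, phantom_targets, air_inputs):
    """phantom_inputs: (N, d) neighborhoods of the calibration phantom;
    phantom_targets: (N, n_mono) ideal monoenergetic target values;
    air_inputs: (M, d) neighborhoods extracted from the air projections or images.
    Air samples are assigned a target value of 0 for every ideal monoenergetic energy."""
    air_targets = np.zeros((air_inputs.shape[0], phantom_targets.shape[1]))
    inputs = np.concatenate([phantom_inputs, air_inputs], axis=0)
    targets = np.concatenate([phantom_targets, air_targets], axis=0)
    return inputs, targets
```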
In other embodiments, the image data of a plurality of calibration phantoms may be used for training, and corresponding input data and target data may be added during network training.
According to the material decomposition method in the embodiment of the invention, training the neural network is not limited by the number of energy windows or by the calibration phantom, and when there are enough training samples, the trained neural network can be used to perform accurate material decomposition and material identification, thereby enabling accurate image reconstruction.
A material decomposition device according to an embodiment of the present invention will be described in detail below with reference to the accompanying drawings.
Fig. 6 is a schematic structural diagram of a material decomposition device according to an embodiment of the present invention. As shown in Fig. 6, the material decomposition device 600 includes:
the multi-energy data acquisition module 610 is configured to acquire projection data of a calibration phantom in two or more energy windows, and determine multi-energy data of the calibration phantom corresponding to a material decomposition implementation manner of a specified domain according to the projection data and the material decomposition implementation manner of the specified domain.
The ideal monoenergetic data acquisition module 620 is configured to set ideal monoenergetic energies and obtain ideal monoenergetic data of the calibration phantom at the ideal monoenergetic energies by a preset method.
The neural network training module 630 is configured to train a corresponding neural network according to the multi-energy data, where the trained neural network reflects a mapping relationship between the multi-energy data and the ideal mono-energy data.
The material decomposition module 640 is configured to decompose the substance to be decomposed based on the projection data of the substance to be decomposed in the specified energy window and the trained neural network.
According to the material decomposition device provided by the embodiment of the invention, a neural network is trained to perform material decomposition and thereby material identification, so that the accuracy of image reconstruction can be improved.
In some embodiments, the material decomposition implementation of the specified domain is projection domain material decomposition; the multi-energy data acquisition module 610 is specifically configured to use the projection data of the calibration phantom in the specified energy window as the multi-energy data.
In other embodiments, the material decomposition implementation of the specified domain is image domain material decomposition; the multi-energy data acquisition module 610 is specifically configured to obtain a reconstructed image of the calibration phantom in the specified energy window from the projection data by a single-energy CT reconstruction method, and use the obtained reconstructed image of the calibration phantom as the multi-energy data.
Fig. 7 is a schematic diagram of a specific structure of the ideal monoenergetic data acquisition module in fig. 6. As shown in fig. 7, in some embodiments, the ideal monoenergetic data acquisition module 620 includes:
and the reconstructed image obtaining unit 621 is configured to obtain a reconstructed image of the calibrated phantom in the specified energy window by using the projection data through a single energy tomography CT reconstruction method.
And an image registration unit 622, configured to acquire the set ideal monoenergetic, and obtain an ideal monoenergetic reconstructed image under the ideal monoenergetic through image registration.
And the ideal mono-energy projection data acquisition unit 623 is configured to project the ideal mono-energy reconstructed image to obtain ideal mono-energy projection data, and use the ideal mono-energy projection data as ideal mono-energy data.
In other embodiments, the ideal monoenergetic data acquisition module 620 may further include:
and the ideal mono-energy reconstructed image obtaining unit 624 is configured to obtain the set ideal mono-energy, obtain an ideal mono-energy reconstructed image under the ideal mono-energy by a table lookup method according to an energy value of the ideal mono-energy, and use the ideal mono-energy reconstructed image as ideal mono-energy data.
In the embodiment of the present invention, the ideal monoenergetic projection data used to train the projection domain neural network are obtained by the ideal monoenergetic data acquisition module 620 and used as the target values for training the projection domain neural network; likewise, the ideal monoenergetic reconstructed image used to train the image domain neural network is obtained by the ideal monoenergetic data acquisition module 620 and used as the target value for training the image domain neural network.
Fig. 8 shows a specific structural diagram of the neural network training module 630 in fig. 6. As shown in fig. 8, in some embodiments, the neural network training module 630 may include:
the projection domain neural network input data acquiring unit 631 is configured to acquire an h × g neighborhood of a projection point of the calibration phantom in the projection data of the specified energy window, and use a projection value of the projection point in the h × g neighborhood as input data of the projection domain neural network, where h is the number of detector units adjacent to the specified projection point, and g is the number of projection angles adjacent to the specified projection point.
A projection domain neural network target value obtaining unit 632, configured to use an ideal projection value corresponding to the central projection point in the h × g neighborhood in the ideal mono-energy projection data as a target value of the projection domain neural network.
The projection domain neural network construction unit 633 is configured to take the product of the number of projection points in the h × g neighborhood and the number of energy windows as the number of nodes of the input layer, take the set number of ideal monoenergetic energies as the number of nodes of the output layer, set the number of hidden layers, the number of hidden layer nodes, the hidden layer activation function, the output layer activation function, and the objective function of the projection domain neural network, and train the projection domain neural network using a preset neural network algorithm and a preset learning rate.
With continued reference to fig. 8, in other embodiments, the neural network training module 630 may also include:
the image domain neural network input data obtaining unit 634 is configured to obtain an nxn neighborhood of a pixel point of the calibration phantom in the reconstructed image of the specified energy window, and use a reconstructed value of the pixel point in the nxn neighborhood as input data of the image domain neural network, where n is a neighborhood size of the specified pixel point.
The image domain neural network target value obtaining unit 635 is configured to use an ideal reconstruction value corresponding to a central pixel point in an n × n neighborhood in the ideal mono-energy reconstructed image as the target value of the image domain neural network.
The image domain neural network construction unit 636 is configured to take the product of the number of pixel points n × n in the n × n neighborhood and the number of energy windows as the number of nodes of the input layer, and take the set number of ideal monoenergetic energies as the number of nodes of the output layer; and to set the number of hidden layers, the number of hidden layer nodes, the hidden layer activation function, the output layer activation function, and the objective function of the image domain neural network, and train the image domain neural network using a preset neural network algorithm and learning rate.
Moreover, in some embodiments, setting an appropriate learning rate can prevent the neural network from falling into local minima during training.
In other embodiments, the neural network algorithm may also be a back propagation BP algorithm, and in the BP algorithm, the additional momentum may be combined with the adaptive learning rate to reduce oscillation during the network training process and accelerate convergence of the trained neural network.
In the above embodiment, the neural network in the projection domain may be constructed by the neural network training module 630, and the neural network in the projection domain is obtained by training using the projection data of the calibration phantom and the ideal monoenergetic projection data; an image domain neural network can also be constructed through the neural network training module 630, and the image domain neural network is trained by using the reconstructed image of the calibration phantom and the ideal monoenergetic reconstructed image.
In some embodiments, material decomposition module 640 may be specifically configured to: acquiring an h multiplied by g neighborhood of a projection point of a substance to be decomposed in projection data of a specified energy window, taking a projection value of the projection point of the substance to be decomposed in the h multiplied by g neighborhood as input data, and acquiring an ideal mono-energy projection value of the substance to be decomposed by using a trained projection domain neural network; and performing single-energy CT reconstruction on the ideal single-energy projection value of the substance to be decomposed to obtain an ideal single-energy reconstructed image of the substance to be decomposed.
In the material decomposition module 640, the trained projection domain neural network or the trained image domain neural network is used to decompose the material to be decomposed, so as to perform accurate image reconstruction.
In the embodiment of the invention, the preset neural network algorithm can be a gradient descent method or Levenberg-Marquardt algorithm; the hidden layer activation function may be a Sigmoid function or a linear rectification function ReLU, the output layer activation function may be a linear function, and the objective function may be a minimum mean square error.
In some embodiments, the algorithms such as BN and dropout in the above embodiments may also be used to optimize the training neural network according to actual situations.
In some embodiments, to improve the accuracy of neural network training, the neural network training may be performed simultaneously on the air values.
Specifically, the projection data may also include acquired projection data of air in the two or more energy windows; the multi-energy data then also include air multi-energy data determined from the air projection data in the specified energy windows and the material decomposition implementation of the specified domain.
The method of obtaining input data for the air values may be the same as the method of obtaining input data for the calibration phantom. Adding air values to the training can avoid unnecessary errors and noise in the training process.
According to the material decomposition device provided by the embodiment of the invention, enough training samples are used for training the neural network, and the trained neural network can be used for carrying out accurate material decomposition and material identification, so that accurate image reconstruction is carried out.
Other details of the material decomposition device according to the embodiment of the present invention are similar to those of the material decomposition method according to the embodiment of the present invention described above with reference to fig. 1 to 5, and are not repeated herein.
For ease of understanding, the material decomposition method according to an embodiment of the present invention will be described below by taking material decomposition in the image domain as an example.
First, sodium chloride, glucose and alcohol solutions at concentrations of 5%, 10%, 15% and 20% are respectively selected as the calibration phantoms.
In the experiment, the reconstructed image of each energy window can be obtained by a single energy CT reconstruction method. Specifically, dual energy projection data with energy windows of [29, 43] keV and [43, 57] keV may be acquired, and the intermediate energy of each energy window may be set as the ideal monoenergetic energy to be reconstructed, i.e., the ideal monoenergetic energy with the energy window of [29, 43] keV is 36keV and the ideal monoenergetic energy with the energy window of [43, 57] keV is 50 keV.
Second, for each point A(i, j) on the dual-energy reconstructed images of the calibration phantom, the neighborhood size n is set to 5, i.e., a 5 × 5 image block around A(i, j) is taken, and the reconstructed value of each pixel point in the image block is used as input data for neural network training; the NIST table is queried at the ideal monoenergetic energies to obtain the ideal reconstructed values, i.e., the ideal monoenergetic attenuation coefficients, which serve as the target values for training the neural network.
Then, a 5-layer neural network is adopted, i.e., the neural network comprises an input layer, 4 hidden layers, and an output layer, where the input layer has 5 × 5 × 2 = 50 nodes and the output layer has 2 nodes; the number of nodes in each hidden layer, i.e., the number of neurons in each hidden layer, is set according to empirical values; all layers of the neural network are connected in a fully connected mode; the activation function of each hidden layer can be set to the Sigmoid function, and the activation function of the output layer is set to a linear function.
Next, when training the neural network, a portion of the input data, e.g., 90%, may be used for training. The conjugate gradient descent method may be used for training, the objective function may be the minimum mean square error, and the learning rate may be set to 0.01.
Finally, when testing the network, the trained neural network is tested and validated using another portion of the input data, e.g., 10%, to assess its robustness and practicality, resulting in the trained image domain neural network.
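The worked example above could be prototyped roughly as follows, assuming PyTorch; the hidden layer widths are placeholders, the conjugate gradient training of the example is replaced by plain gradient descent for brevity, and data loading is omitted, so this is only an illustrative skeleton:

```python
import torch
import torch.nn as nn

# 5 x 5 patches from K = 2 energy windows -> 50 input nodes; 2 ideal monoenergetic outputs.
net = nn.Sequential(
    nn.Linear(5 * 5 * 2, 64), nn.Sigmoid(),   # 4 hidden layers; widths chosen here for illustration
    nn.Linear(64, 64), nn.Sigmoid(),
    nn.Linear(64, 64), nn.Sigmoid(),
    nn.Linear(64, 32), nn.Sigmoid(),
    nn.Linear(32, 2),                         # linear output layer: values at 36 keV and 50 keV
)

def split_train_test(inputs, targets, train_fraction=0.9):
    """Use 90% of the patches for training and the remaining 10% for testing/validation."""
    n_train = int(train_fraction * inputs.shape[0])
    perm = torch.randperm(inputs.shape[0])
    tr, te = perm[:n_train], perm[n_train:]
    return (inputs[tr], targets[tr]), (inputs[te], targets[te])

criterion = nn.MSELoss()                                  # minimum mean square error objective
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)    # learning rate 0.01
```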
In the embodiment of the invention, the collected multi-energy data are processed by the trained neural network to obtain ideal monoenergetic data. The neural network is used to perform material decomposition and material identification, so that accurate image reconstruction can be carried out.
In embodiments of the invention, at least a portion of the material decomposition methods and apparatus according to embodiments of the invention described in conjunction with fig. 1-8 may be implemented by computing device 900. Fig. 9 is a diagram showing a hardware configuration of a computing apparatus according to an embodiment of the present invention.
As shown in fig. 9, the computing device 900 may include: the device comprises a processor 901, a memory 902, a communication interface 903 and a bus 910, wherein the processor 901, the memory 902 and the communication interface 903 are connected through the bus 910 and complete mutual communication.
In particular, the processor 901 may include a Central Processing Unit (CPU) or an Application-Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing an embodiment of the present invention.
Memory 902 may include mass storage for data or instructions. By way of example, and not limitation, memory 902 may include an HDD, floppy disk drive, flash memory, optical disk, magneto-optical disk, magnetic tape, or Universal Serial Bus (USB) drive or a combination of two or more of these. Memory 902 may include removable or non-removable (or fixed) media, where appropriate. The memory 902 may be internal or external to the computing device 900, where appropriate. In a particular embodiment, the memory 902 is a non-volatile solid-state memory. In a particular embodiment, the memory 902 includes Read Only Memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory or a combination of two or more of these.
The communication interface 903 is mainly used for implementing communication between modules, apparatuses, units and/or devices in the embodiments of the present invention.
The bus 910 includes hardware, software, or both that couple the components of the computing device 900 to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local (VLB) bus, another suitable bus, or a combination of two or more of these. Bus 910 can include one or more buses, where appropriate. Although specific buses have been described and shown in the embodiments of the invention, any suitable buses or interconnects are contemplated by the invention.
In some embodiments, the computing device 900 shown in Fig. 9 may be implemented as a material decomposition system including a processor 901 and a memory 902. The memory 902 is used to store program code; the processor 901 reads the executable program code stored in the memory 902 and runs a program corresponding to the executable program code, so as to execute the above-described material decomposition method.
Therefore, according to the material decomposition system provided by the embodiment of the present invention, the multi-energy data of the calibration phantom corresponding to the material decomposition implementation manner of the designated domain may be determined according to the projection data and the material decomposition implementation manner of the designated domain by collecting the projection data of the calibration phantom in two or more energy windows; setting ideal monoenergetics, and obtaining ideal monoenergetic data of the calibration die body under the ideal monoenergetics by a preset method; training a corresponding neural network according to the multi-energy data, wherein the trained neural network reflects the mapping relation between the multi-energy data and the ideal mono-energy data; and decomposing the substance to be decomposed based on the projection data of the substance to be decomposed in the specified energy window and the trained neural network.
The computing device 900 in the embodiment of the present invention may perform steps S110 to S140, steps S201 to S203, steps S301 to S304, steps S1401 to S1402, steps S401 to S402, steps S501 to S504, and steps S1411 to S1413 of the substance decomposition method in the above-described embodiment of the present invention, thereby implementing the substance decomposition method and apparatus described in conjunction with fig. 1 to 8.
According to the material decomposition system provided by the embodiment of the invention, the material decomposition is carried out through the trained neural network, so that the decomposition error can be effectively reduced.
In practical applications, the material decomposition method, device and system can accurately decompose materials containing K-edges, and have a good suppression effect on beam-hardening artifacts caused by the polychromatic spectrum and on ring artifacts caused by inconsistencies among the units of a photon counting detector.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this patent describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
As described above, only the specific embodiments of the present invention are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present invention, and these modifications or substitutions should be covered within the scope of the present invention.