CN113436109B - An ultrafast and high-quality plane wave ultrasound imaging method based on deep learning - Google Patents
- Publication number
- CN113436109B (application CN202110774364.5A)
- Authority
- CN
- China
- Prior art keywords
- data
- plane wave
- ultrasound
- synthetic aperture
- deep network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
Abstract
Description
Technical Field
The present invention relates to the technical field of medical ultrasound imaging, and in particular to an ultrafast, high-quality plane-wave ultrasound imaging method based on deep learning.
Background Art
The different acquisition schemes used by ultrasound imaging equipment each have advantages and disadvantages in imaging speed (frame rate), imaging quality (resolution and signal-to-noise ratio), and system complexity (cost), so a scheme must be selected and traded off according to the application scenario. Plane-wave ultrasound, in which all array elements transmit and all array elements receive, is an acquisition scheme that achieves ultrafast imaging, but at the expense of imaging quality.
Ultrasound equipment currently on the market uses a line-by-line scanning mode, which improves imaging quality but reduces imaging speed. Reference [1] (Z. Zhou, Y. Wang, Y. Guo, X. Jiang and Y. Qi, "Ultrafast Plane Wave Imaging With Line-Scan-Quality Using an Ultrasound-Transfer Generative Adversarial Network," IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 4, pp. 943-956, April 2020, doi:10.1109/JBHI.2019.2950334) proposed a deep learning method that uses line-by-line scanned ultrasound images as learning labels to improve the quality of plane-wave ultrasound images. In the data set established in [1], however, the plane-wave images and the line-by-line scanned images were collected by two separate systems, so the two kinds of images are not strictly paired; in general, a deep network trained for image mapping on unpaired samples performs worse than a model trained on paired samples.
In reference [2] (Jensen, J.A., Nikolov, S.I., Gammelmark, K.L., & Pedersen, M.H. (2006). Synthetic aperture ultrasound imaging. Ultrasonics, 44, e5-e15), synthetic aperture ultrasound transmits from one array element at a time while all elements receive; after every element has transmitted and all signals have been recorded, beamforming sums the data of all channels to form the image. This enables dynamic focusing of both the transmit and receive apertures and, compared with the line-by-line scanning that dominates the current market, is an effective technique for improving ultrasound image quality (resolution and contrast). The disadvantages of synthetic aperture ultrasound are the huge amount of data to be transferred and the heavy imaging computation, which result in slow imaging and a low frame rate.
As shown in reference [3] (R. Ali, C.D. Herickhoff, D. Hyun, J.J. Dahl and N. Bottenus, "Extending Retrospective Encoding for Robust Recovery of the Multistatic Data Set," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 67, no. 5, pp. 943-956, May 2020, doi:10.1109/TUFFC.2019.2961875), because synthetic aperture ultrasound records, for every single transmitting element, the signals received simultaneously by all elements, the data corresponding to any other acquisition scheme can be conveniently generated from synthetic aperture data given the parameters of that scheme.
The method proposed by the present invention uses synthetic aperture ultrasound data to generate plane-wave data, constructs a paired data set, and trains a deep network model that maps the RF data of plane-wave imaging to the RF data of synthetic aperture imaging. The proposed method thus retains the ultrafast imaging speed of plane-wave ultrasound while using deep learning to improve imaging quality.
Summary of the Invention
The purpose of the present invention is to provide an ultrafast, high-quality plane-wave ultrasound imaging method based on deep learning, characterized by comprising the following steps:
Step 1: construct a paired RF data set. Use an ultrasound platform to collect three-dimensional channel data of synthetic aperture ultrasound, and build a paired synthetic aperture/plane-wave RF data set.
Step 2: train a deep network model. Construct a deep network and a loss function, and train the network on the paired RF data set from step 1 to obtain the deep network model.
Step 3: deploy the deep network. Feed the plane-wave ultrasound RF data obtained in real time into the deep network model trained in step 2; the network output is the enhanced ultrasound RF data.
Step 1 comprises the following sub-steps:
Step 11: for each sample, sum the synthetic aperture channel data d_j(x_t, x_r, t) over all transmit elements to obtain the two-dimensional plane-wave channel data p_j(x_r, t) = Σ_{x_t} d_j(x_t, x_r, t), where j = 1…N, N is the number of samples, x_t is the coordinate of the transmit element, x_r is the coordinate of the receive element, and t is the two-way propagation time of the acoustic wave.
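Step 11 is a pure array reduction: the synthetic-aperture data cube is summed over the transmit-element axis. A minimal NumPy sketch (the array layout d[j, x_t, x_r, t] and the shapes are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def synthesize_plane_wave(d):
    """Collapse synthetic-aperture channel data d[j, xt, xr, t] into
    plane-wave channel data p[j, xr, t] by summing over the
    transmit-element axis (step 11)."""
    return d.sum(axis=1)

# Tiny example: 2 samples, 4 transmit elements, 8 receive elements, 16 time samples
rng = np.random.default_rng(0)
d = rng.standard_normal((2, 4, 8, 16))
p = synthesize_plane_wave(d)
print(p.shape)  # (2, 8, 16)
```

The sum is linear, which is exactly why retrospective plane-wave data can be generated from a synthetic aperture acquisition without re-scanning.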
Step 12: process d_j(x_t, x_r, t) with the synthetic aperture beamformer B_1 to obtain the synthetic aperture RF data o_j(x, t) = B_1{d_j(x_t, x_r, t)}, where (x, t) are the coordinates of the imaging point.
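The patent's beamformer B_1 is defined in a separate patent (CN202110539280.3) and is not reproduced here; as a stand-in, a generic delay-and-sum synthetic-aperture beamformer illustrates the kind of operation B_1 performs. All names, shapes, and parameter values below are assumptions:

```python
import numpy as np

def das_synthetic_aperture(d, xt, xr, z, x, c=1540.0, fs=40e6):
    """Generic delay-and-sum beamformer for one synthetic-aperture
    sample d[tx, rx, t] (a stand-in sketch, not the patent's B_1).
    Returns an image over the (z, x) grid."""
    n_t = d.shape[2]
    img = np.zeros((len(z), len(x)))
    a_idx = np.arange(len(xt))[:, None]
    b_idx = np.arange(len(xr))[None, :]
    for iz, zp in enumerate(z):
        for ix, xp in enumerate(x):
            # two-way travel time: transmit element -> point -> receive element
            d_tx = np.sqrt((xt - xp) ** 2 + zp ** 2)          # (n_tx,)
            d_rx = np.sqrt((xr - xp) ** 2 + zp ** 2)          # (n_rx,)
            tau = (d_tx[:, None] + d_rx[None, :]) / c         # (n_tx, n_rx)
            idx = np.clip(np.rint(tau * fs).astype(int), 0, n_t - 1)
            # sample every channel at its delay and sum coherently
            img[iz, ix] = d[a_idx, b_idx, idx].sum()
    return img
```

Because every transmit/receive pair is delayed to each pixel individually, both apertures are dynamically focused, which is the quality advantage of synthetic aperture imaging described above.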
Step 13: process p_j(x_r, t) with the plane-wave beamformer B_2 to obtain the plane-wave RF data i_j(x, t) = B_2{p_j(x_r, t)}.
Step 14: after all samples have been processed according to steps 11 to 13, the paired RF data set D = {i_j(x, t), o_j(x, t), j = 1…N} is obtained, where N is the number of samples.
The loss function in step 2 is:

L(W) = (1/N) Σ_{j=1}^{N} ||f(i_j(x, t), W) − o_j(x, t)||²

where f(i_j(x, t), W) denotes the forward computation of the deep network and W denotes the network parameters.
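The loss compares the network output f(i_j(x, t), W) against the synthetic-aperture label o_j(x, t); a mean-squared-error sketch (assuming an L2 penalty, with hypothetical array arguments standing in for the network output and label):

```python
import numpy as np

def mse_loss(pred, target):
    """Mean squared error between the network output f(i_j; W)
    and the synthetic-aperture label o_j, averaged over all pixels."""
    return np.mean((pred - target) ** 2)

pred = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.array([[1.0, 2.0], [3.0, 2.0]])
print(mse_loss(pred, target))  # 1.0
```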
The beneficial effects of the present invention are:
The proposed method not only retains the ultrafast imaging speed of plane-wave ultrasound but also uses deep learning to improve imaging quality.
Brief Description of the Drawings
Fig. 1 is the overall flow chart of the present invention;
Fig. 2(a) is the flow chart of the paired data set construction; Fig. 2(b) is the flow chart of the deep network training; Fig. 2(c) is the flow chart of the deep network deployment;
Fig. 3 shows the deep network used;
where "k3n16s1" denotes a convolution kernel of size 3×3 with 16 channels and stride 1, "×2" denotes that the operation is repeated twice, and the other labels follow the same convention;
Fig. 4 shows RF data pairs from the training set: (a) 16 plane-wave RF data samples; (b) the corresponding synthetic aperture RF data samples;
Fig. 5 shows a test-set example of the present invention: (a) the B-mode image of the plane-wave RF data input to the deep network; (b) the B-mode image of the RF data output by the deep network; (c) the B-mode image of the synthetic aperture ultrasound.
Detailed Description of the Embodiments
The present invention proposes an ultrafast, high-quality plane-wave ultrasound imaging method based on deep learning; it is further described below with reference to the accompanying drawings and a specific embodiment.
Fig. 1 is the overall flow chart of the present invention; Fig. 2(a), 2(b), and 2(c) are the flow charts of the paired data set construction, the deep network training, and the deep network deployment, respectively. The method can be described as follows:
1) Build the paired data set:
a) Use the ultrasound platform to collect three-dimensional synthetic aperture channel data from multiple body parts of multiple subjects, denoted d_j(x_t, x_r, t), where j = 1…N, N is the number of samples, x_t is the coordinate of the transmit element, x_r is the coordinate of the receive element, and t is the two-way propagation time of the acoustic wave.
b) For each sample, sum the synthetic aperture channel data d_j(x_t, x_r, t) over all transmit elements to obtain the two-dimensional plane-wave channel data p_j(x_r, t) = Σ_{x_t} d_j(x_t, x_r, t).
c) Process d_j(x_t, x_r, t) with the synthetic aperture beamformer B_1 of "Lu Wenkai, An Efficient Synthetic Aperture Ultrasound Imaging Method, 2021-05-18, China, CN202110539280.3" to obtain the RF data o_j(x, t) = B_1{d_j(x_t, x_r, t)}.
d) Process p_j(x_r, t) with the plane-wave beamformer B_2 of M. Albulayli and D. Rakhmatov, "Fourier Domain Depth Migration for Plane-Wave Ultrasound Imaging," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 65, no. 8, pp. 1321-1333, Aug. 2018, doi:10.1109/TUFFC.2018.2837000, to obtain the RF data i_j(x, t) = B_2{p_j(x_r, t)}.
e) After all samples have been processed, the paired data set D = {i_j(x, t), o_j(x, t), j = 1…N} is obtained, where N is the number of samples.
2) Train the deep network model:
a) Compared with the synthetic aperture RF data o_j(x, t), the plane-wave RF data i_j(x, t) is degraded by relatively severe crosstalk noise. For this image-enhancement problem we construct a deep network and a loss function; Fig. 3 shows the network structure used in this experiment. As shown in Fig. 3, we adopt a two-dimensional U-Net, which consists of an encoding path and a decoding path with skip connections between them for extracting features at different scales. Other reasonable end-to-end network structures could be used instead. The loss function of the training process is:
L(W) = (1/N) Σ_{j=1}^{N} ||f(i_j(x, t), W) − o_j(x, t)||²

where f(·) denotes the forward computation of the deep network and W denotes the network parameters; Adam or another reasonable optimizer is used to optimize the network parameters during training.
b) Train the network on the paired data set D built in step 1 to obtain the deep network model f(i(x, t), W). Fig. 4 shows sample pairs from the training set.
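The layer shorthand of Fig. 3 (e.g. "k3n16s1": 3×3 kernel, 16 channels, stride 1) determines how spatial sizes evolve through the U-Net encoder and decoder. The standard convolution size arithmetic can be sketched as follows (the padding values are assumptions, since the patent does not state them):

```python
def conv_out_size(n_in, kernel, stride, pad):
    """Spatial output size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n_in + 2 * pad - kernel) // stride + 1

# a "k3n16s1" layer with 'same' padding (pad=1) preserves the spatial size:
print(conv_out_size(64, 3, 1, 1))  # 64
# a stride-2 downsampling step in a U-Net encoder halves it:
print(conv_out_size(64, 2, 2, 0))  # 32
```

With 'same'-padded k3s1 layers, the 64×64 training patches keep their size through each stage, and only the down/upsampling steps change resolution.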
3) Deploy the deep network:
a) Feed the plane-wave RF data i(x, t) obtained in real time into the deep network model f(i(x, t), W) trained in step 2; the network output is the enhanced ultrasound RF data.
b) Convert the enhanced RF data, by envelope detection and logarithmic compression, into a B-mode image for display.
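Sub-step b) can be sketched with the conventional envelope-detection and log-compression chain; the patent's exact display formula is not reproduced in this text, so everything below (including the dynamic range) is an assumption:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal along the last axis
    (equivalent to scipy.signal.hilbert)."""
    n = x.shape[-1]
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x, axis=-1) * h, axis=-1)

def rf_to_bmode(rf, dynamic_range_db=60.0):
    """Envelope detection, normalization, log compression, and clipping
    to a display dynamic range -- the conventional RF-to-B-mode chain."""
    env = np.abs(analytic_signal(rf))
    env = env / env.max()
    bmode = 20.0 * np.log10(env + 1e-12)
    return np.clip(bmode, -dynamic_range_db, 0.0)
```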
Three-dimensional synthetic aperture channel data were acquired from multiple body parts of 12 subjects, and the channel data corresponding to plane-wave transmission were generated from the synthetic aperture channel data; synthetic aperture imaging and plane-wave imaging were then performed separately, yielding 605 paired samples in total, of which 424 were used for training, 60 for validation, and 121 for testing. Each sample image is 3100×256. For training, 10000 patches of size 64×64 were randomly cropped from the training set; the batch size was set to 64, the number of training iterations to 100, and the initial learning rate to 0.0001, with the Adam optimizer. The experimental platform comprised an Intel(R) Core(TM) i9-9820X CPU @ 3.30 GHz, 64 GB RAM, a GeForce RTX 2080 Ti, and a GeForce RTX 3090. Fig. 5 shows the results of the present invention on the test set: 5(a) is the B-mode image of the plane-wave RF data input to the deep network; 5(b) is the B-mode image of the RF data output by the network; 5(c) is the B-mode image of the synthetic aperture ultrasound (the ground truth for Fig. 5(b)). As can be seen from Fig. 5, the quality of the B-mode image obtained by the present invention is clearly better than that of the plane-wave B-mode image. Table 1 gives the performance metrics on the test set.
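The patch-based training-set preparation described above (random 64×64 crops from 3100×256 paired images) can be sketched as follows; the function name and paired-cropping details are illustrative assumptions:

```python
import numpy as np

def random_paired_patches(inp, label, n_patches=4, size=64, seed=0):
    """Cut matching random size×size patches from a paired
    (plane-wave, synthetic-aperture) RF image pair, mirroring the
    training-set preparation (counts and shapes here are illustrative)."""
    rng = np.random.default_rng(seed)
    h, w = inp.shape
    out = []
    for _ in range(n_patches):
        r = rng.integers(0, h - size + 1)  # top-left corner, shared by both images
        c = rng.integers(0, w - size + 1)
        out.append((inp[r:r + size, c:c + size], label[r:r + size, c:c + size]))
    return out

pairs = random_paired_patches(np.zeros((3100, 256)), np.ones((3100, 256)))
print(len(pairs), pairs[0][0].shape)  # 4 (64, 64)
```

Using the same crop coordinates for input and label keeps the samples strictly paired, which is the key advantage over the unpaired data set of reference [1].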
Table 1. Comparison of test-set performance metrics
As the table shows, after enhancement by the proposed method, the SSIM, PSNR, and SNR of both the RF images and the B-mode images are improved. Table 2 gives the time taken by the deep network to process one frame on different GPU cards.
Table 2. Deep network processing speed on different GPU cards (unit: frames per second)
As can be seen from Table 2, the processing speed for raw RF data on the GeForce RTX 3090 reaches 130 frames per second, and appropriate downsampling of the raw RF data can increase the processing speed further.
In summary, the technique of the present invention retains the ultrafast imaging speed of plane-wave ultrasound while using deep learning to substantially improve imaging quality.
This embodiment is only a preferred specific embodiment of the present invention, but the protection scope of the present invention is not limited to it; any change or substitution that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the claims.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110774364.5A CN113436109B (en) | 2021-07-08 | 2021-07-08 | An ultrafast and high-quality plane wave ultrasound imaging method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110774364.5A CN113436109B (en) | 2021-07-08 | 2021-07-08 | An ultrafast and high-quality plane wave ultrasound imaging method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113436109A CN113436109A (en) | 2021-09-24 |
CN113436109B true CN113436109B (en) | 2022-10-14 |
Family
ID=77759700
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110774364.5A Active CN113436109B (en) | 2021-07-08 | 2021-07-08 | An ultrafast and high-quality plane wave ultrasound imaging method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113436109B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780329A (en) * | 2016-12-07 | 2017-05-31 | Huazhong University of Science and Technology | An ultrasound plane-wave imaging method based on inverse perspective transformation |
CN109965905A (en) * | 2019-04-11 | 2019-07-05 | 复旦大学 | A deep learning-based imaging method for contrast region detection |
CN110074813A (en) * | 2019-04-26 | 2019-08-02 | 深圳大学 | Ultrasonic image reconstruction method and system |
WO2020252463A1 (en) * | 2019-06-14 | 2020-12-17 | Mayo Foundation For Medical Education And Research | Super-resolution microvessel imaging using separated subsets of ultrasound data |
CN112528731A (en) * | 2020-10-27 | 2021-03-19 | 西安交通大学 | Plane wave beam synthesis method and system based on double-regression convolutional neural network |
CN112771374A (en) * | 2018-10-08 | 2021-05-07 | 洛桑联邦理工学院 | Image reconstruction method based on training nonlinear mapping |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3569154A1 (en) * | 2018-05-15 | 2019-11-20 | Koninklijke Philips N.V. | Ultrasound processing unit and method, and imaging system |
- 2021-07-08: application CN202110774364.5A filed; patent CN113436109B granted and active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780329A (en) * | 2016-12-07 | 2017-05-31 | Huazhong University of Science and Technology | An ultrasound plane-wave imaging method based on inverse perspective transformation |
CN112771374A (en) * | 2018-10-08 | 2021-05-07 | 洛桑联邦理工学院 | Image reconstruction method based on training nonlinear mapping |
CN109965905A (en) * | 2019-04-11 | 2019-07-05 | 复旦大学 | A deep learning-based imaging method for contrast region detection |
CN110074813A (en) * | 2019-04-26 | 2019-08-02 | 深圳大学 | Ultrasonic image reconstruction method and system |
WO2020252463A1 (en) * | 2019-06-14 | 2020-12-17 | Mayo Foundation For Medical Education And Research | Super-resolution microvessel imaging using separated subsets of ultrasound data |
CN112528731A (en) * | 2020-10-27 | 2021-03-19 | 西安交通大学 | Plane wave beam synthesis method and system based on double-regression convolutional neural network |
Non-Patent Citations (4)
Title |
---|
Accelerated plane-wave destruction; Zhonghuan Chen et al.; Geophysics; 2013; pp. 1-16 *
Image Quality Enhancement Using a Deep Neural Network for Plane Wave Medical Ultrasound Imaging; Yanxing Qi et al.; IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control; April 2021; pp. 926-934 *
Ultrasound image reconstruction from plane wave radio-frequency data by self-supervised deep neural network; Jingke Zhang et al.; Medical Image Analysis; February 2021; pp. 1-18 *
Phase correction method for transcranial ultrasound plane-wave imaging; Song Yalong et al.; Applied Acoustics; January 2021; pp. 1-10 *
Also Published As
Publication number | Publication date |
---|---|
CN113436109A (en) | 2021-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3187113B1 (en) | Method and device for ultrasonic imaging by synthetic focusing | |
Yoon et al. | Efficient B-mode ultrasound image reconstruction from sub-sampled RF data using deep learning | |
CN110074813B (en) | Ultrasonic image reconstruction method and system | |
CN113994367A (en) | Method and system for generating synthetic elastography images | |
CN102148987B (en) | Compressed Sensing Image Reconstruction Method Based on Prior Model and l0 Norm | |
JP7515183B2 (en) | How to maintain image quality in ultrasound imaging at low cost, low size and low power | |
CN109584164B (en) | Medical image super-resolution three-dimensional reconstruction method based on two-dimensional image transfer learning | |
WO2020206755A1 (en) | Ray theory-based method and system for ultrasound ct image reconstruction | |
CN112912758A (en) | Method and system for adaptive beamforming of ultrasound signals | |
CN104739451B (en) | Elastic image imaging method, device and supersonic imaging apparatus | |
Chen et al. | ApodNet: Learning for high frame rate synthetic transmit aperture ultrasound imaging | |
Goudarzi et al. | Ultrasound beamforming using mobilenetv2 | |
Yoon et al. | Deep learning for accelerated ultrasound imaging | |
Indhumathi et al. | Hybrid pixel based method for multimodal image fusion based on Integration of Pulse Coupled Neural Network (PCNN) and Genetic Algorithm (GA) using Empirical Mode Decomposition (EMD) | |
CN113436109B (en) | An ultrafast and high-quality plane wave ultrasound imaging method based on deep learning | |
CN110554393B (en) | High-contrast minimum variance imaging method based on deep learning | |
CN115471580A (en) | A physically intelligent high-definition magnetic resonance diffusion imaging method | |
US20180284249A1 (en) | Ultrasound imaging system and method for representing rf signals therein | |
CN103654868B (en) | The formation method of ultrasonic diagnostic equipment and system | |
CN106859695B (en) | Q-frame T-aperture composite emission imaging method and system applied to ultrasonic probe | |
Toffali et al. | Improving the quality of monostatic synthetic-aperture ultrasound imaging through deep-learning-based beamforming | |
Khan et al. | Pushing the limit of unsupervised learning for ultrasound image artifact removal | |
CN110490869B (en) | Ultrasonic image contrast and transverse resolution optimization method | |
EP4579279A1 (en) | Methods, systems, and storage medium for data processing | |
Mahurkar et al. | Iteratively-reweighted beamforming for high-resolution ultrasound imaging |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |