
CN112699948A - Ultrasonic breast lesion classification method and device and storage medium - Google Patents


Info

Publication number
CN112699948A
CN112699948A (application CN202011644067A; granted as CN112699948B)
Authority
CN
China
Prior art keywords
image
classification
data
breast
ultrasound breast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011644067.0A
Other languages
Chinese (zh)
Other versions
CN112699948B (en)
Inventor
甘从贵
过易
赵明昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Chison Medical Technologies Co Ltd
Original Assignee
Wuxi Chison Medical Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Chison Medical Technologies Co Ltd filed Critical Wuxi Chison Medical Technologies Co Ltd
Priority to CN202011644067.0A priority Critical patent/CN112699948B/en
Publication of CN112699948A publication Critical patent/CN112699948A/en
Application granted granted Critical
Publication of CN112699948B publication Critical patent/CN112699948B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30068 Mammography; Breast
    • G06T2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Processing (AREA)

Abstract



The present application relates to a classification method, device and storage medium for ultrasonic breast lesions, belonging to the technical field of deep learning. The method includes: dividing a target ultrasonic breast image or video (frame by frame, in the video case) into n1*n2 data blocks; converting each data block into p1*p2*c-dimensional vector data; merging the vector data corresponding to the n1*n2 data blocks to obtain an n1n2×p1p2c two-dimensional data matrix; generating, according to the position of each data block in the target ultrasonic breast image, a position encoding vector corresponding to that position, and adding the position encoding vector to the two-dimensional data matrix to obtain the data matrix to be processed; and inputting the data matrix to be processed into an image classification network to obtain the lesion property classification of the target ultrasonic breast lesion. This can solve the problem of low efficiency when ultrasonic breast images are classified manually, and improves the accuracy and efficiency of classifying ultrasonic breast images.


Description

Ultrasonic breast lesion classification method and device and storage medium
Technical Field
The application relates to a classification method, a classification device and a storage medium for an ultrasonic breast lesion, and belongs to the technical field of deep learning.
Background
Breast cancer is one of the leading causes of mortality among women, and early screening is an important means of preventing breast disease and prolonging survival.
In existing screening approaches, after an ultrasonic breast image of the human body is acquired, medical personnel analyze the image to determine the type of breast disease.
However, such manual analysis of ultrasound breast images is slow and inefficient.
Disclosure of Invention
The application provides a classification method, a classification device and a storage medium for an ultrasonic breast lesion, which can solve the problem of low efficiency of manually analyzing an ultrasonic breast image. The application provides the following technical scheme:
in a first aspect, there is provided a method of classifying an ultrasound breast lesion, the method comprising:
acquiring target ultrasonic breast information to be classified, wherein the target ultrasonic breast information is a target ultrasonic breast image or a target ultrasonic breast video, and the target ultrasonic breast video comprises at least two frames of target ultrasonic breast images;
for each frame of the target ultrasound breast image,
segmenting the target ultrasound breast image into n1*n2 data blocks; wherein n1 is the number of data blocks divided in the image height direction, n2 is the number of data blocks divided in the image width direction, and n1, n2 are positive integers;
converting each data block into p1*p2*c-dimensional vector data; wherein n1=H/p1, n2=W/p2; H is the height of the input image, W is the width of the input image, p1 is the height of a divided data block, and p2 is the width of a divided data block;
merging the vector data corresponding to the n1*n2 data blocks to obtain an n1n2×p1p2c two-dimensional data matrix;
generating a position coding vector corresponding to the position according to the position of each data block in the target ultrasonic breast image, and adding the position coding vector into the two-dimensional data matrix to obtain a data matrix to be processed;
And inputting the data matrix to be processed into a pre-trained image classification network to obtain the focus property classification corresponding to the target ultrasonic mammary gland image.
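The segmentation and merging steps above can be sketched as follows; this is a minimal illustration, not part of the application (the function name and the 224×224 image with 16×16 blocks are illustrative choices), and H, W are assumed divisible by p1, p2:

```python
import numpy as np

def image_to_patch_matrix(image, p1, p2):
    """Split an H x W x c image into an (n1*n2) x (p1*p2*c) matrix,
    where n1 = H / p1 and n2 = W / p2."""
    H, W, c = image.shape
    n1, n2 = H // p1, W // p2
    # Cut the image into n1*n2 non-overlapping p1 x p2 blocks, then
    # flatten each block into one row vector of length p1*p2*c.
    blocks = image.reshape(n1, p1, n2, p2, c).transpose(0, 2, 1, 3, 4)
    return blocks.reshape(n1 * n2, p1 * p2 * c)

# Example: a 224 x 224 single-channel image with 16 x 16 data blocks
# yields a 196 x 256 two-dimensional data matrix.
img = np.zeros((224, 224, 1))
matrix = image_to_patch_matrix(img, 16, 16)
```

Here 196 = n1*n2 = 14*14 and 256 = p1*p2*c = 16*16*1, matching the n1n2×p1p2c shape described in the method.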
Optionally, the image classification network includes a multi-head attention module, a feed-forward neural module, and a multi-layer fully-connected classification module;
the multi-head attention module comprises three fully-connected networks, an activation function layer and a multi-dimensional logistic regression layer, wherein the input of each fully-connected network is the data matrix to be processed, and the output of each fully-connected network is characteristic data with preset dimensionality; after the multiplication of characteristic data output by two preset fully-connected networks and the division by a preset scale factor, a logistic regression result is obtained through calculation of a multidimensional logistic regression layer; multiplying the logistic regression result with the characteristic data of the other fully-connected network to obtain an output result of the multi-head attention module; the other fully connected network is a fully connected network different from the preset two fully connected networks in the three network branches;
the feedforward neural module comprises a fully-connected network, a linear rectification activation function connected with the fully-connected network and layer normalization; the output result of the multi-head attention module is subjected to full-connection network, linear rectification activation function connected with the full-connection network and layer normalization to obtain the output result of the feedforward neural module;
the multilayer fully-connected classification module receives the output result of the feedforward neural module and then performs fully-connected layer processing; and carrying out layer normalization processing on the processed data to obtain the lesion property classification.
Optionally, if the target ultrasound breast image is an image in the target ultrasound breast video, the method further includes:
after lesion property classification is obtained according to each frame of target ultrasonic breast image, lesion property classification corresponding to the target ultrasonic breast video is determined and obtained according to lesion property classification obtained from each frame of target ultrasonic breast image in the target ultrasonic breast video.
Optionally, the determining, according to the lesion property classification obtained from each frame of target ultrasound breast image in the target ultrasound breast video, a lesion property classification corresponding to the target ultrasound breast video includes:
if no image with the focus property classification as malignant exists in each frame of target ultrasonic breast image of the target ultrasonic breast video, counting the focus property classification with the largest number in the focus property classifications corresponding to each frame of target ultrasonic breast image, and determining the focus property classification obtained through counting as the focus property classification corresponding to the target ultrasonic breast video;
and if the focus property classification is a malignant image in each frame of target ultrasonic breast image of the target ultrasonic breast video, determining the focus property classification corresponding to the target ultrasonic breast video as malignant.
Optionally, the lesion property classification comprises: benign type and malignant type; alternatively, it comprises at least one of benign type, malignant type, inflammatory type, adenopathy type, proliferative type, ductal ectasia type, early-stage invasive cancer, non-invasive cancer, lobular adenocarcinoma, ductal adenocarcinoma, medullary carcinoma, scirrhous carcinoma, simple carcinoma, carcinoma in situ, early-stage cancer, invasive cancer, undifferentiated cancer, poorly differentiated cancer, and highly differentiated cancer.
Optionally, the number of the multi-head attention module and the feedforward neural module is plural.
Optionally, the target ultrasound breast image is a whole ultrasound breast image or a breast lesion region image.
Optionally, the image classification network is obtained by random activation training based on weights.
In a second aspect, there is provided an apparatus for classifying an ultrasound breast lesion, the apparatus comprising:
the information acquisition unit is used for acquiring target ultrasonic breast information to be classified, wherein the target ultrasonic breast information is a target ultrasonic breast image or a target ultrasonic breast video, and the target ultrasonic breast video comprises at least two frames of target ultrasonic breast images;
an image segmentation unit, for segmenting each frame of the target ultrasound breast image into n1*n2 data blocks; wherein n1 is the number of data blocks divided in the image height direction, n2 is the number of data blocks divided in the image width direction, and n1, n2 are positive integers;
a data conversion unit, for converting each data block into p1*p2*c-dimensional vector data; wherein n1=H/p1, n2=W/p2; H is the height of the input image, W is the width of the input image, p1 is the height of a divided data block, and p2 is the width of a divided data block;
a vector merging unit, for merging the vector data corresponding to the n1*n2 data blocks to obtain an n1n2×p1p2c two-dimensional data matrix;
the matrix generating unit is used for generating a position coding vector corresponding to the position according to the position of each data block in the target ultrasonic breast image, and adding the position coding vector into the two-dimensional data matrix to obtain a data matrix to be processed;
and a lesion classification unit, for inputting the data matrix to be processed into a pre-trained image classification network to obtain the lesion property classification corresponding to the target ultrasonic breast image.
Optionally, the image classification network includes a multi-head attention module, a feed-forward neural module, and a multi-layer fully-connected classification module;
the multi-head attention module comprises three fully-connected networks, an activation function layer and a multi-dimensional logistic regression layer, wherein the input of each fully-connected network is the data matrix to be processed, and the output of each fully-connected network is characteristic data with preset dimensionality; after multiplying the characteristic data output by two preset fully-connected networks and dividing the multiplied characteristic data by a preset scale factor, obtaining a logistic regression result through multilayer logistic regression calculation; multiplying the logistic regression result with the characteristic data of the other fully-connected network to obtain an output result of the multi-head attention module; the other fully connected network is a fully connected network different from the preset two fully connected networks in the three network branches;
the output of the multi-head attention module is as follows:
Attention(Q, K, V) = softmax(QK^T/√d)V
wherein Q, K and V are results of full connection of the input data blocks respectively, and d is a scale factor.
The feedforward neural module comprises a fully-connected network, a linear rectification activation function connected with the fully-connected network and layer normalization; the output result of the multi-head attention module is subjected to full-connection network, linear rectification activation function connected with the full-connection network and layer normalization to obtain the output result of the feedforward neural module;
the output of the feedforward neural network is as follows:
z_out = LN(RELU(MLP(RELU(MLP(z_in))))), wherein z_in is the input of the feedforward neural module, z_out is the output of the feedforward neural module, LN is the layer normalization operation, RELU is the linear rectification activation function, and MLP is a fully connected layer.
The multilayer fully-connected classification module receives the output result of the feedforward neural module and then performs fully-connected layer processing; and carrying out layer normalization processing on the processed data to obtain the lesion property classification.
In a third aspect, there is provided an apparatus for ultrasound classification of breast lesions, the apparatus comprising a processor and a memory; the memory has stored therein a program that is loaded and executed by the processor to implement the method of classifying an ultrasound breast lesion provided by the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, in which a program is stored which, when being executed by a processor, is adapted to carry out the method for classifying an ultrasound breast lesion provided by the first aspect.
The beneficial effects of this application include at least the following: each frame of the target ultrasound breast image is segmented into n1*n2 data blocks; each data block is converted into p1*p2*c-dimensional vector data; the vector data corresponding to the n1*n2 data blocks are merged to obtain an n1n2×p1p2c two-dimensional data matrix; a position coding vector is generated according to the position of each data block in the target ultrasonic breast image and added to the two-dimensional data matrix to obtain a data matrix to be processed; and the data matrix to be processed is input into a pre-trained image classification network to obtain the lesion property classification corresponding to the target ultrasonic breast image. This can solve the problem of low classification efficiency when ultrasonic breast images are classified manually; because automatic classification is realized by the image classification model, the accuracy and efficiency of classifying ultrasonic breast images can be improved.
In addition, the image classification is realized by combining a full-connection network with other calculation modes instead of convolution operation by setting the image classification network, so that the calculation amount and difficulty of the model can be reduced, and the calculation efficiency of the model is improved.
In addition, after the image is divided into a plurality of data blocks, each data block is converted into vector data and combined to be input into the image classification network, so that the calculation amount of the input image classification network can be reduced, and the model calculation efficiency can be further improved.
In addition, the generalization capability of the image classification network can be improved by setting the number of the multi-head attention module and the feedforward neural module to be a plurality.
In addition, the generalization capability of the image classification network can be improved by obtaining the image classification network based on weight random activation training.
The foregoing description is only an overview of the technical solutions of the present application. In order that the technical solutions of the present application may be more clearly understood and implemented according to the content of the description, preferred embodiments of the present application are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a flowchart of a method for classifying an ultrasound breast lesion provided in one embodiment of the present application;
FIG. 2 is a schematic diagram of an image classification model provided by an embodiment of the present application;
fig. 3 is a block diagram of an ultrasound breast lesion classification apparatus provided in an embodiment of the present application;
fig. 4 is a block diagram of an ultrasound breast lesion classification apparatus according to still another embodiment of the present application.
Detailed Description
The following detailed description of embodiments of the present application will be described in conjunction with the accompanying drawings and examples. The following examples are intended to illustrate the present application but are not intended to limit the scope of the present application.
Optionally, in the present application, an execution subject of each embodiment is taken as an example of an electronic device with computing capability, where the electronic device may be a terminal or a server, and the terminal may be an ultrasound imaging device, a computer, a mobile phone, a tablet computer, and the like, and the embodiment does not limit the type of the electronic device.
The classification method of the ultrasound breast lesion provided by the present application is described below.
Fig. 1 is a flowchart of a classification method of an ultrasound breast lesion provided in an embodiment of the present application. The method at least comprises the following steps:
step 101, target ultrasonic breast information to be classified is obtained, wherein the target ultrasonic breast information is a target ultrasonic breast image or a target ultrasonic breast video, and the target ultrasonic breast video comprises at least two frames of target ultrasonic breast images.
Optionally, the target ultrasound breast image is a whole ultrasound breast image or a breast lesion region image.
Step 102, for each frame of target ultrasonic breast image, dividing the target ultrasonic breast image into n1*n2 data blocks; wherein n1 is the number of data blocks divided in the image height direction, n2 is the number of data blocks divided in the image width direction, and n1, n2 are positive integers.
In one example, the target ultrasonic breast image is segmented by a preset dimension p1×p2. The values of p1 and p2 corresponding to different data blocks may be the same or different; p1 and p2 can be set by a user or by default in the electronic device, and this embodiment does not limit the values of p1 and p2.
Step 103, converting each data block into p1*p2*c-dimensional vector data, where p1, p2 are the dimensions of each data block.
Wherein n1=H/p1, n2=W/p2; H is the height of the input image, W is the width of the input image, p1 is the height of a divided data block, and p2 is the width of a divided data block; n1 is the number of data blocks divided in the image height direction, and n2 is the number of data blocks divided in the image width direction.
In the embodiment, one target ultrasonic breast image is divided into a plurality of data blocks, and each data block is converted into vector data, so that the data volume of the input image classification model can be compressed, and the calculation efficiency is improved.
Optionally, the ways of converting each data block into p1*p2*c-dimensional vector data include but are not limited to: converting the data block into a corresponding feature vector by using a neural network to obtain the vector data; or taking each pixel value in the data block as the vector data. This embodiment does not limit the manner of obtaining the vector data.
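Both conversion options might look as follows; the dimensions and the randomly initialised `W_embed` are illustrative stand-ins for a trained embedding, not values fixed by the application:

```python
import numpy as np

# One data block of size p1 x p2 x c.
p1, p2, c, d_model = 16, 16, 1, 64
block = np.zeros((p1, p2, c))

# Option 1: take each pixel value in the data block as the vector data.
pixel_vector = block.reshape(-1)

# Option 2: convert the block into a feature vector with a (here randomly
# initialised, in practice trained) linear embedding network.
W_embed = np.random.default_rng(0).standard_normal((p1 * p2 * c, d_model))
feature_vector = block.reshape(-1) @ W_embed
```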
Step 104, merging the vector data corresponding to the n1*n2 data blocks to obtain an n1n2×p1p2c two-dimensional data matrix.
Step 105, generating a position coding vector corresponding to the position according to the position of each data block in the target ultrasonic breast image, and adding the position coding vector into the two-dimensional data matrix to obtain a data matrix to be processed.
The position-encoding vector is used to indicate the position of the data block in the target ultrasound breast image.
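The application does not fix a particular encoding scheme; one common choice, shown here as an assumption, is a sinusoidal code with one row per data block, added directly to the two-dimensional data matrix:

```python
import numpy as np

def positional_encoding(n_positions, dim):
    """Sinusoidal position codes: even columns use sin, odd columns cos."""
    pos = np.arange(n_positions)[:, None]
    i = np.arange(dim)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / dim)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

# Add one code row per data block to the n1*n2 x (p1*p2*c) matrix of step 104.
matrix = np.zeros((196, 256))
to_process = matrix + positional_encoding(196, 256)
```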
Step 106, inputting the data matrix to be processed into a pre-trained image classification network to obtain the lesion property classification corresponding to the target ultrasonic breast image.
Wherein, the lesion property classification is used for indicating the corresponding pathological property of the target ultrasonic breast image.
Referring to fig. 2, the image classification network includes a multi-headed attention module 21, a feed-forward neural module 22, and a multi-layered fully-connected classification module 23.
The multi-headed attention module 21 includes three fully connected networks, an activation function layer and a multidimensional logistic regression layer. Of course, the multi-head attention module 21 may also include other network structures, and the embodiment is not listed here. The input of each full-connection network is a data matrix to be processed, and the output is characteristic data with preset dimensionality; multiplying feature data output by two preset fully-connected networks, dividing the multiplied feature data by a preset scale factor, and then obtaining a logistic regression result through multi-dimensional logistic regression calculation; multiplying the logistic regression result with the characteristic data of the other fully-connected network to obtain an output result of the multi-head attention module; the other full-connection network is a full-connection network which is different from the preset two full-connection networks in the three network branches;
in one example, the output of the multi-head attention module is:
Attention(Q, K, V) = softmax(QK^T/√d)V
wherein Q, K and V are results of full connection of the input data blocks respectively, and d is a scale factor.
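In code, this output can be sketched as below; the scale factor is assumed to enter as √d, as in the standard scaled dot-product attention formulation, and the shapes are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V, d):
    """softmax(Q K^T / sqrt(d)) V: multiply the outputs of two fully
    connected branches, divide by the scale factor, apply the
    multi-dimensional logistic regression (softmax), then multiply
    with the third branch."""
    scores = Q @ K.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ V

# Illustrative 4-token, 4-dimensional example.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 4)) for _ in range(3))
out = attention(Q, K, V, 4)
```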
The feedforward neural module 22 includes a fully connected network, a linear rectification activation function connected to the fully connected network, and layer normalization; the output result of the multi-head attention module is subjected to full-connection network, linear rectification activation function connected with the full-connection network and layer normalization to obtain the output result of the feedforward neural module.
In one example, the output of the feedforward neural module 22 is:
z_out = LN(RELU(MLP(RELU(MLP(z_in))))), wherein z_in is the input of the feedforward neural module, z_out is the output of the feedforward neural module, LN is the layer normalization operation, RELU is the linear rectification activation function, and MLP is a fully connected layer.
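A numpy sketch of this computation (biases omitted for brevity, weight shapes illustrative):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalise each row to zero mean and unit variance (LN).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def relu(x):
    return np.maximum(x, 0.0)

def feed_forward(z_in, W1, W2):
    """z_out = LN(RELU(MLP(RELU(MLP(z_in))))), each MLP a plain
    fully connected layer."""
    return layer_norm(relu(relu(z_in @ W1) @ W2))

rng = np.random.default_rng(0)
z_in = rng.standard_normal((3, 8))
z_out = feed_forward(z_in, rng.standard_normal((8, 16)),
                     rng.standard_normal((16, 8)))
```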
The multi-layer full-connection classification module 23 receives the output result of the feedforward neural module and then performs full-connection layer processing; and carrying out layer normalization processing on the processed data to obtain focus property classification.
Wherein the logistic regression calculation can be implemented by softmax in the multi-head attention module 21.
In the feedforward neural module 22, the number of network units formed by a fully-connected network and a linear rectification activation function is one or more; two such units are illustrated in fig. 2.
Optionally, the lesion property classification comprises: benign type and malignant type; alternatively, it comprises at least one of benign type, malignant type, inflammatory type, adenopathy type, proliferative type, ductal ectasia type, early-stage invasive cancer, non-invasive cancer, lobular adenocarcinoma, ductal adenocarcinoma, medullary carcinoma, scirrhous carcinoma, simple carcinoma, carcinoma in situ, early-stage cancer, invasive cancer, undifferentiated cancer, poorly differentiated cancer, and highly differentiated cancer. In other embodiments, the lesion property classification may be divided into other types, and the classification manner is not limited in this embodiment.
Optionally, the number of the multi-head attention module and the feedforward neural module is multiple, so that the generalization of the model can be improved.
In this embodiment, the image classification network is obtained by training the initial neural network model using training data. The training data comprises a sample data matrix corresponding to the sample ultrasound mammary gland image and a classification label corresponding to the sample data matrix. In the training process, the sample data matrix is input into the initial neural network model to obtain a model result; and calculating the difference between the model result and the classification label by using a preset loss function, and performing iterative training on the initial neural network model according to the calculation result to finally obtain the image classification network. Illustratively, the image classification network is obtained by randomly activating training based on the weights, so that the generalization of the model can be further improved.
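The phrase "weight random activation" is not elaborated further in the text; one possible reading, sketched here purely as an assumption, is a DropConnect-style scheme in which only a random subset of weights is active at each training step:

```python
import numpy as np

def randomly_activate(W, keep_prob, rng):
    """DropConnect-style sketch: keep each weight with probability
    keep_prob and rescale, so only a random subset of weights is
    active during a training step (a regularisation that can improve
    generalization)."""
    mask = rng.random(W.shape) < keep_prob
    return W * mask / keep_prob

rng = np.random.default_rng(0)
W = np.ones((4, 4))
W_active = randomly_activate(W, 0.5, rng)  # entries are 0.0 or 2.0
```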
The type of the classification label corresponds to the output type of the image classification network, and the network structure of the image classification network is the same as that of the initial neural network model.
Optionally, if the target ultrasound breast image is an image in the target ultrasound breast video, the method further includes: after lesion property classification is obtained according to each frame of target ultrasonic breast image, the property classification corresponding to the target ultrasonic breast video is determined according to the lesion property classification obtained from each frame of target ultrasonic breast image in the target ultrasonic breast video.
The method for determining the lesion property classification corresponding to the target ultrasonic breast video according to the lesion property classification obtained from each frame of target ultrasonic breast image in the target ultrasonic breast video comprises the following steps: if no image with the focus property classification as malignant exists in each frame of target ultrasonic breast image of the target ultrasonic breast video, counting the focus property classification with the largest number in the focus property classifications corresponding to each frame of target ultrasonic breast image, and determining the focus property classification obtained through counting as the focus property classification corresponding to the target ultrasonic breast video; and if the focus property classification is a malignant image in each frame of target ultrasonic breast image of the target ultrasonic breast video, determining the focus property classification corresponding to the target ultrasonic breast video as malignant.
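This decision rule can be sketched directly; the label strings are illustrative:

```python
from collections import Counter

def video_classification(frame_labels):
    """Video-level lesion classification from per-frame results: if any
    frame is classified malignant, the video is malignant; otherwise
    return the most frequent per-frame classification."""
    if "malignant" in frame_labels:
        return "malignant"
    return Counter(frame_labels).most_common(1)[0][0]
```

For example, a video whose frames are classified benign, benign, inflammatory is classified benign overall, while a single malignant frame makes the whole video malignant.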
In summary, in the classification method of the ultrasound breast lesion provided in this embodiment, for each frame of the target ultrasound breast image, the target ultrasound breast image is segmented into n1*n2 data blocks; each data block is converted into p1*p2*c-dimensional vector data; the vector data corresponding to the n1*n2 data blocks are merged to obtain an n1n2×p1p2c two-dimensional data matrix; a position coding vector corresponding to the position is generated according to the position of each data block in the target ultrasonic breast image and added to the two-dimensional data matrix to obtain a data matrix to be processed; and the data matrix to be processed is input into a pre-trained image classification network to obtain the lesion property classification corresponding to the target ultrasonic breast image. This can solve the problem of low classification efficiency when ultrasonic breast images are classified manually; because automatic classification is realized by the image classification model, the accuracy and efficiency of classifying ultrasonic breast images can be improved.
In addition, because the image classification network performs classification using fully connected networks combined with other computations rather than convolution operations, the amount and difficulty of model computation are reduced and the computational efficiency of the model is improved.
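The fully-connected computation referred to here is spelled out in claim 2: three fully connected networks project the data matrix, the outputs of two of them are multiplied and divided by a preset scale factor, normalized by a multi-dimensional logistic regression (a softmax), and the result is multiplied with the output of the third. A single-head NumPy sketch under those assumptions, with random weights purely for illustration:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_block(x, Wq, Wk, Wv):
    """Single-head attention built from three fully connected projections:
    softmax((x·Wq)(x·Wk)^T / sqrt(d)) @ (x·Wv)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scale = np.sqrt(q.shape[-1])            # the preset scale factor
    weights = softmax(q @ k.T / scale)      # the "multi-dimensional logistic regression"
    return weights @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((196, 256))         # an n1n2 × p1p2c data matrix
Wq, Wk, Wv = (rng.standard_normal((256, 64)) for _ in range(3))
out = attention_block(x, Wq, Wk, Wv)
print(out.shape)                            # (196, 64)
```

Every operation here is a matrix product or an element-wise function, which is what allows the network to avoid convolution entirely.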
In addition, because the image is divided into multiple data blocks that are converted into vector data and merged before being input into the image classification network, the amount of computation entering the network is reduced, further improving model efficiency.
In addition, providing a plurality of multi-head attention modules and feedforward neural modules improves the generalization capability of the image classification network.
In addition, obtaining the image classification network through training with random weight activation also improves its generalization capability.
Fig. 3 is a block diagram of an ultrasound breast lesion classification apparatus according to an embodiment of the present application. The device at least comprises the following modules: an information acquisition unit 310, an image segmentation unit 320, a data conversion unit 330, a vector merging unit 340, a matrix generation unit 350, and a lesion classification unit 360.
The information acquiring unit 310 is configured to acquire target ultrasound breast information to be classified, where the target ultrasound breast information is a target ultrasound breast image or a target ultrasound breast video, and the target ultrasound breast video includes at least two frames of target ultrasound breast images;
an image segmentation unit 320, configured to segment each frame of target ultrasound breast image into n1*n2 data blocks; wherein n1 is the number of data blocks divided in the image height direction, n2 is the number of data blocks divided in the image width direction, and n1, n2 are positive integers;
a data conversion unit 330, configured to convert each data block into p1*p2*c-dimensional vector data; wherein n1=H/p1 and n2=W/p2; H is the height of the input image, W is the width of the input image, p1 is the height of the divided data blocks, and p2 is the width of the divided data blocks;
a vector merging unit 340, configured to merge the vector data corresponding to the n1*n2 data blocks to obtain an n1n2×p1p2c two-dimensional data matrix;
a matrix generating unit 350, configured to generate a position coding vector corresponding to the position according to the position of each data block in the target ultrasound breast image, and add the position coding vector to the two-dimensional data matrix to obtain a to-be-processed data matrix;
and the lesion classification unit 360 is configured to input the data matrix to be processed into a pre-trained image classification network, so as to obtain a lesion property classification corresponding to the target ultrasound breast image.
For relevant details, refer to the method embodiments described above.
It should be noted that the classification apparatus for ultrasound breast lesions provided in the above embodiment is illustrated only by the division of functional modules described above; in practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the classification apparatus provided in the above embodiment belongs to the same concept as the embodiment of the classification method for ultrasound breast lesions; its specific implementation process is detailed in the method embodiment and is not repeated here.
Fig. 4 is a block diagram of an ultrasound breast lesion classification apparatus according to an embodiment of the present application. The apparatus comprises at least a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 401 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 401 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 402 is used to store at least one instruction for execution by processor 401 to implement the method of classifying an ultrasound breast lesion provided by the method embodiments herein.
In some embodiments, the ultrasound breast lesion classification device may further include: a peripheral interface and at least one peripheral. The processor 401, memory 402 and peripheral interface may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface via a bus, signal line, or circuit board. Illustratively, peripheral devices include, but are not limited to: radio frequency circuit, touch display screen, audio circuit, power supply, etc.
Of course, the classification device for the ultrasound breast lesion may also include fewer or more components, which is not limited in this embodiment.
Optionally, the present application further provides a computer readable storage medium, in which a program is stored, the program being loaded and executed by a processor to implement the method for classifying an ultrasound breast lesion of the above-mentioned method embodiment.
Optionally, the present application further provides a computer program product comprising a computer-readable storage medium in which a program is stored; the program is loaded and executed by a processor to implement the method for classifying an ultrasound breast lesion of the above method embodiment.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination is described; nevertheless, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application; although their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art may make several variations and improvements without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A classification method for ultrasound breast lesions, characterized in that the method comprises:
acquiring target ultrasound breast information to be classified, the target ultrasound breast information being a target ultrasound breast image or a target ultrasound breast video, the target ultrasound breast video comprising at least two frames of target ultrasound breast images;
for each frame of target ultrasound breast image, segmenting the target ultrasound breast image into n1*n2 data blocks; wherein n1 is the number of data blocks divided in the image height direction, n2 is the number of data blocks divided in the image width direction, and n1, n2 are positive integers;
converting each data block into p1*p2*c-dimensional vector data; wherein n1=H/p1 and n2=W/p2; H is the height of the input image, W is the width of the input image, p1 is the height of the divided data blocks, and p2 is the width of the divided data blocks;
merging the vector data corresponding to the n1*n2 data blocks to obtain an n1n2×p1p2c two-dimensional data matrix;
generating, according to the position of each data block in the target ultrasound breast image, a position coding vector corresponding to the position, and adding the position coding vector to the two-dimensional data matrix to obtain a data matrix to be processed;
inputting the data matrix to be processed into a pre-trained image classification network to obtain the lesion property classification corresponding to the target ultrasound breast image.
2. The method according to claim 1, characterized in that:
the image classification network comprises a multi-head attention module, a feedforward neural module and a multi-layer fully connected classification module;
the multi-head attention module comprises three fully connected networks, an activation function layer and a multi-dimensional logistic regression layer; the input of each fully connected network is the data matrix to be processed, and the output is feature data of preset dimensions; the feature data output by two preset fully connected networks are multiplied and divided by a preset scale factor, and a logistic regression result is then computed by the multi-dimensional logistic regression layer; the logistic regression result is multiplied by the feature data of the other fully connected network to obtain the output result of the multi-head attention module, the other fully connected network being the one of the three fully connected networks different from the two preset fully connected networks;
the feedforward neural module comprises a fully connected network, a linear rectification activation function connected to the fully connected network, and layer normalization; the output result of the multi-head attention module passes through the fully connected network, the linear rectification activation function and the layer normalization to obtain the output result of the feedforward neural module;
after receiving the output result of the feedforward neural module, the multi-layer fully connected classification module processes it through fully connected layers, and the processed data is subjected to layer normalization to obtain the lesion property classification.
3. The method according to claim 1, characterized in that if the target ultrasound breast image is an image in the target ultrasound breast video, the method further comprises: after the lesion property classification is obtained for each frame of target ultrasound breast image, determining the lesion property classification corresponding to the target ultrasound breast video according to the lesion property classifications obtained from the frames of target ultrasound breast images in the target ultrasound breast video.
4. The method according to claim 3, characterized in that determining the lesion property classification corresponding to the target ultrasound breast video according to the lesion property classifications obtained from the frames of target ultrasound breast images in the target ultrasound breast video comprises:
if no image whose lesion property is classified as malignant exists among the frames of target ultrasound breast images of the target ultrasound breast video, counting the most frequent lesion property classification among the classifications corresponding to the frames, and determining the counted lesion property classification as the lesion property classification corresponding to the target ultrasound breast video;
if an image whose lesion property is classified as malignant exists among the frames of target ultrasound breast images of the target ultrasound breast video, determining the lesion property classification corresponding to the target ultrasound breast video as malignant.
5. The method according to any one of claims 1 to 4, characterized in that the lesion property classification comprises: a benign type and a malignant type; or at least one of a benign type, a malignant type, an inflammation type, an adenosis type, a hyperplasia type, a ductal dilatation type, early invasive carcinoma, invasive carcinoma, non-invasive carcinoma, lobular adenocarcinoma, ductal adenocarcinoma, medullary carcinoma, scirrhous carcinoma, simple carcinoma, carcinoma in situ, early carcinoma, undifferentiated carcinoma, poorly differentiated carcinoma, moderately differentiated carcinoma and well-differentiated carcinoma.
6. The method according to any one of claims 1 to 4, characterized in that there are a plurality of the multi-head attention modules and a plurality of the feedforward neural modules.
7. The method according to any one of claims 1 to 4, characterized in that the target ultrasound breast image is an entire ultrasound breast image or an image of a breast lesion region.
8. A classification apparatus for ultrasound breast lesions, characterized in that the apparatus comprises:
an information acquisition unit, configured to acquire target ultrasound breast information to be classified, the target ultrasound breast information being a target ultrasound breast image or a target ultrasound breast video, the target ultrasound breast video comprising at least two frames of target ultrasound breast images;
an image segmentation unit, configured to segment each frame of target ultrasound breast image into n1*n2 data blocks; wherein n1 is the number of data blocks divided in the image height direction, n2 is the number of data blocks divided in the image width direction, and n1, n2 are positive integers;
a data conversion unit, configured to convert each data block into p1*p2*c-dimensional vector data; wherein n1=H/p1 and n2=W/p2; H is the height of the input image, W is the width of the input image, p1 is the height of the divided data blocks, and p2 is the width of the divided data blocks;
a vector merging unit, configured to merge the vector data corresponding to the n1*n2 data blocks to obtain an n1n2×p1p2c two-dimensional data matrix;
a matrix generation unit, configured to generate, according to the position of each data block in the target ultrasound breast image, a position coding vector corresponding to the position, and add the position coding vector to the two-dimensional data matrix to obtain a data matrix to be processed;
a lesion classification unit, configured to input the data matrix to be processed into a pre-trained image classification network to obtain the lesion property classification corresponding to the target ultrasound breast image.
9. A classification apparatus for ultrasound breast lesions, characterized in that the apparatus comprises a processor and a memory; a program is stored in the memory, and the program is loaded and executed by the processor to implement the classification method for ultrasound breast lesions according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a program is stored in the storage medium, and when executed by a processor, the program implements the classification method for ultrasound breast lesions according to any one of claims 1 to 7.
CN202011644067.0A 2020-12-31 2020-12-31 Classification method, device and storage medium for ultrasonic breast lesions Active CN112699948B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011644067.0A CN112699948B (en) 2020-12-31 2020-12-31 Classification method, device and storage medium for ultrasonic breast lesions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011644067.0A CN112699948B (en) 2020-12-31 2020-12-31 Classification method, device and storage medium for ultrasonic breast lesions

Publications (2)

Publication Number Publication Date
CN112699948A true CN112699948A (en) 2021-04-23
CN112699948B CN112699948B (en) 2025-01-17

Family

ID=75514294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011644067.0A Active CN112699948B (en) 2020-12-31 2020-12-31 Classification method, device and storage medium for ultrasonic breast lesions

Country Status (1)

Country Link
CN (1) CN112699948B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114091507A (en) * 2021-09-02 2022-02-25 北京医准智能科技有限公司 Ultrasonic focus area detection method and device, electronic equipment and storage medium
CN114587416A (en) * 2022-03-10 2022-06-07 山东大学齐鲁医院 Diagnosis system of gastrointestinal submucosal tumor based on deep learning multi-target detection
CN114862842A (en) * 2022-06-06 2022-08-05 北京医准智能科技有限公司 Image processing apparatus, electronic device, and medium
WO2024239701A1 (en) * 2023-05-23 2024-11-28 中日友好医院(中日友好临床医学研究所) Hyperuricemia and gouty nephropathy classification method and apparatus, and electronic device

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120197896A1 (en) * 2008-02-25 2012-08-02 Georgetown University System and method for detecting, collecting, analyzing, and communicating event-related information
EP3321855A1 (en) * 2016-11-15 2018-05-16 Houzz, Inc. Aesthetic search engine
CN108427951A (en) * 2018-02-08 2018-08-21 腾讯科技(深圳)有限公司 Image processing method, device, storage medium and computer equipment
US20180349743A1 (en) * 2017-05-30 2018-12-06 Abbyy Development Llc Character recognition using artificial intelligence
WO2019012296A1 (en) * 2017-07-14 2019-01-17 The Francis Crick Institute Limited Analysis of hla alleles in tumours and the uses thereof
CN109727243A (en) * 2018-12-29 2019-05-07 无锡祥生医疗科技股份有限公司 Breast ultrasound image recognition analysis method and system
CN110189323A (en) * 2019-06-05 2019-08-30 深圳大学 A breast ultrasound image lesion segmentation method based on semi-supervised learning
US20190290246A1 (en) * 2018-03-23 2019-09-26 China Medical University Hospital Assisted detection model of breast tumor, assisted detection system thereof, and method for assisted detecting breast tumor
CN110321920A (en) * 2019-05-08 2019-10-11 腾讯科技(深圳)有限公司 Image classification method, device, computer readable storage medium and computer equipment
CN110472688A (en) * 2019-08-16 2019-11-19 北京金山数字娱乐科技有限公司 The method and device of iamge description, the training method of image description model and device
US20190350533A1 (en) * 2018-05-15 2019-11-21 Konica Minolta, Inc. Ultrasound diagnosis apparatus
CN110599476A (en) * 2019-09-12 2019-12-20 腾讯科技(深圳)有限公司 Disease grading method, device, equipment and medium based on machine learning
WO2020019671A1 (en) * 2018-07-23 2020-01-30 哈尔滨工业大学(深圳) Breast lump detection and classification system and computer-readable storage medium
WO2020077962A1 (en) * 2018-10-16 2020-04-23 杭州依图医疗技术有限公司 Method and device for breast image recognition
CN111095263A (en) * 2017-06-26 2020-05-01 纽约州立大学研究基金会 Systems, methods, and computer-accessible media for virtual pancreatography
CN111222513A (en) * 2019-12-31 2020-06-02 深圳云天励飞技术有限公司 License plate number recognition method, device, electronic device and storage medium
WO2020107156A1 (en) * 2018-11-26 2020-06-04 深圳先进技术研究院 Automated classification method and device for breast medical ultrasound images
CN111369562A (en) * 2020-05-28 2020-07-03 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and storage medium
WO2020215557A1 (en) * 2019-04-24 2020-10-29 平安科技(深圳)有限公司 Medical image interpretation method and apparatus, computer device and storage medium
WO2020228519A1 (en) * 2019-05-10 2020-11-19 腾讯科技(深圳)有限公司 Character recognition method and apparatus, computer device and storage medium

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120197896A1 (en) * 2008-02-25 2012-08-02 Georgetown University System and method for detecting, collecting, analyzing, and communicating event-related information
EP3321855A1 (en) * 2016-11-15 2018-05-16 Houzz, Inc. Aesthetic search engine
US20180349743A1 (en) * 2017-05-30 2018-12-06 Abbyy Development Llc Character recognition using artificial intelligence
CN111095263A (en) * 2017-06-26 2020-05-01 纽约州立大学研究基金会 Systems, methods, and computer-accessible media for virtual pancreatography
WO2019012296A1 (en) * 2017-07-14 2019-01-17 The Francis Crick Institute Limited Analysis of hla alleles in tumours and the uses thereof
CN108427951A (en) * 2018-02-08 2018-08-21 腾讯科技(深圳)有限公司 Image processing method, device, storage medium and computer equipment
US20190290246A1 (en) * 2018-03-23 2019-09-26 China Medical University Hospital Assisted detection model of breast tumor, assisted detection system thereof, and method for assisted detecting breast tumor
US20190350533A1 (en) * 2018-05-15 2019-11-21 Konica Minolta, Inc. Ultrasound diagnosis apparatus
WO2020019671A1 (en) * 2018-07-23 2020-01-30 哈尔滨工业大学(深圳) Breast lump detection and classification system and computer-readable storage medium
WO2020077962A1 (en) * 2018-10-16 2020-04-23 杭州依图医疗技术有限公司 Method and device for breast image recognition
WO2020107156A1 (en) * 2018-11-26 2020-06-04 深圳先进技术研究院 Automated classification method and device for breast medical ultrasound images
CN109727243A (en) * 2018-12-29 2019-05-07 无锡祥生医疗科技股份有限公司 Breast ultrasound image recognition analysis method and system
WO2020215557A1 (en) * 2019-04-24 2020-10-29 平安科技(深圳)有限公司 Medical image interpretation method and apparatus, computer device and storage medium
CN110321920A (en) * 2019-05-08 2019-10-11 腾讯科技(深圳)有限公司 Image classification method, device, computer readable storage medium and computer equipment
WO2020224406A1 (en) * 2019-05-08 2020-11-12 腾讯科技(深圳)有限公司 Image classification method, computer readable storage medium, and computer device
WO2020228519A1 (en) * 2019-05-10 2020-11-19 腾讯科技(深圳)有限公司 Character recognition method and apparatus, computer device and storage medium
CN110189323A (en) * 2019-06-05 2019-08-30 深圳大学 A breast ultrasound image lesion segmentation method based on semi-supervised learning
CN110472688A (en) * 2019-08-16 2019-11-19 北京金山数字娱乐科技有限公司 The method and device of iamge description, the training method of image description model and device
CN110599476A (en) * 2019-09-12 2019-12-20 腾讯科技(深圳)有限公司 Disease grading method, device, equipment and medium based on machine learning
CN111222513A (en) * 2019-12-31 2020-06-02 深圳云天励飞技术有限公司 License plate number recognition method, device, electronic device and storage medium
CN111369562A (en) * 2020-05-28 2020-07-03 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PANEK, L. et al., "Classification of Niederreiter–Rosenbloom–Tsfasman block codes", IEEE Transactions on Information Theory, vol. 56, no. 10, 13 September 2010, pages 5207-5216 *
ZHENG, Yuanjie et al., "A survey of breast image diagnosis based on artificial intelligence", Journal of Shandong Normal University (Natural Science Edition), 15 June 2020, pages 1-3 *


Also Published As

Publication number Publication date
CN112699948B (en) 2025-01-17

Similar Documents

Publication Publication Date Title
CN112699948A (en) Ultrasonic breast lesion classification method and device and storage medium
Yu et al. Melanoma recognition in dermoscopy images via aggregated deep convolutional features
Nahid et al. Involvement of machine learning for breast cancer image classification: a survey
Shi et al. Histopathological image classification with color pattern random binary hashing-based PCANet and matrix-form classifier
Kolouri et al. Optimal mass transport: Signal processing and machine-learning applications
Gnanasekaran et al. Deep learning algorithm for breast masses classification in mammograms
Lyu et al. Using multi-level convolutional neural network for classification of lung nodules on CT images
Zhang et al. New convolutional neural network model for screening and diagnosis of mammograms
CN113239951B (en) Classification method, device and storage medium for ultrasonic breast lesions
Dharejo et al. Multimodal-boost: Multimodal medical image super-resolution using multi-attention network with wavelet transform
Tsivgoulis et al. An improved SqueezeNet model for the diagnosis of lung cancer in CT scans
Li et al. Automatic recognition and classification system of thyroid nodules in CT images based on CNN
Halder et al. Atrous convolution aided integrated framework for lung nodule segmentation and classification
Zhang et al. Unsupervised intrinsic image decomposition using internal self-similarity cues
CN110163095B (en) Loop detection method, loop detection device and terminal equipment
Lam et al. Content-based image retrieval for pulmonary computed tomography nodule images
Wu et al. Histopathological image classification using random binary hashing based PCANet and bilinear classifier
Bai et al. Applying graph convolution neural network in digital breast tomosynthesis for cancer classification
Ahn The compact 3D convolutional neural network for medical images
Chen et al. Image retrieval based on quadtree classified vector quantization
Ozaltin et al. OzNet: a new deep learning approach for automated classification of COVID-19 computed tomography scans
CN118364423B (en) Medical data fusion method and system based on multimodal feature filling
Nair et al. Automated identification of breast cancer type using novel multipath transfer learning and ensemble of classifier
CN116188346B (en) Image quality enhancement method and device for endoscope image
Dodia et al. A novel bi-level lung cancer classification system on CT scans

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant