
CN111368672A - Construction method and device for genetic disease facial recognition model - Google Patents


Info

Publication number
CN111368672A
Authority
CN
China
Prior art keywords
training
face
model
genetic disease
face region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010118850.7A
Other languages
Chinese (zh)
Inventor
王建峰
康健
程伟
李鑫
付文
赵荔君
宋泽坤
梁波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Chaoyun Life Intelligence Industry Research Institute Co ltd
Original Assignee
Suzhou Chaoyun Life Intelligence Industry Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Chaoyun Life Intelligence Industry Research Institute Co ltd
Priority to CN202010118850.7A
Publication of CN111368672A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/50: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a construction method and device for a genetic disease facial recognition model. The construction method trains a deep learning neural network on a large number of first face images with a first loss function to obtain a pre-training model; in response to an update operation on the network parameters of the pre-training model, an updated pre-training model is obtained; the updated pre-training model is then trained on a small number of second face images with a second loss function until the second loss function reaches its minimum, yielding a genetic disease facial recognition model that can detect genetic disease risk from a collected face image. The detection accuracy of the model is thus improved even when data from genetic disease patients are scarce.

Description

Construction method and device for genetic disease facial recognition model
Technical Field
The application relates to the fields of artificial intelligence and genetic counseling, and in particular to a method and device for constructing a genetic disease facial recognition model.
Background
Genetic diseases are diseases caused by changes in genetic material or controlled by pathogenic genes. Conventionally they are identified by genetic testing and analysis of the family disease history, which is costly and time-consuming, and the accuracy of the result usually depends on the professional level and clinical experience of the doctor.
With the rapid development of artificial intelligence, machine vision now touches many industries, and the traditional medical industry has a particularly prominent demand for it; more and more machine vision techniques are being applied in the medical field. A genetic disease facial recognition model based on machine vision is therefore needed.
Disclosure of Invention
In view of the above, there is a need for a method and device for constructing a genetic disease facial recognition model, together with a computer device and a storage medium, that can improve detection accuracy without the intervention of a doctor.
A method of constructing a genetic disease facial recognition model, the method comprising:
acquiring a first training data set and a second training data set, wherein the first training data set comprises a plurality of first face images, the second training data set comprises second face images of patients with genetic diseases, and each second face image carries a genetic disease type label;
training a deep learning neural network with the first loss function using the first training data set to obtain a pre-training model;
responding to an update operation on the network parameters in the pre-training model, and acquiring the updated pre-training model;
and training the updated pre-training model with the second loss function using the second training data set until the second loss function reaches its minimum, so as to obtain the genetic disease facial recognition model.
In one embodiment, the first loss function is a softmax loss function and the second loss function is an a-softmax loss function.
In one embodiment, the deep learning neural network adopts Resnet64 as the backbone network, and an SE module is added to each residual network layer of Resnet64 to form an SE-Resnet64 network structure; training the deep learning neural network using the first loss function then comprises: training the SE-Resnet64 network structure using the first loss function.
In one embodiment, the SE-Resnet64 network structure includes 4 residual modules connected in sequence, which contain 10, 18, 24 and 10 residual network layers respectively, with 64, 128, 256 and 512 channels respectively.
In one embodiment, the deep learning neural network adopts Densenet169 as the backbone network, and an Attention module is added after the maximum pooling layer of Densenet169 to form an A-Densenet169 network structure; training the deep learning neural network using the first loss function then comprises: training the A-Densenet169 network structure using the first loss function.
In one embodiment, responding to an update operation on the network parameters in the pre-training model includes: acquiring a weight-freezing operation on set layers of the pre-training model, and acquiring setting operations on the number of categories output by the fully connected layer and on the model learning rate; and updating the network parameters of the pre-training model according to the freezing and setting operations to obtain the updated pre-training model.
A method of genetic disease risk prediction, the method comprising:
acquiring image data to be predicted;
identifying face regions in the image data to be predicted with a multitask convolutional neural network, and detecting the face confidence of each face region;
extracting the face regions whose face confidence meets a set value, and performing image quality detection on them;
performing genetic disease facial recognition, with the genetic disease facial recognition model constructed by the above method, on the face regions whose image quality meets the requirements, and outputting the risk prediction results of the face regions for the various genetic disease types.
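A minimal illustration of the confidence-threshold step above (the function name, detection format and threshold value are assumptions for illustration, not from the patent):

```python
def filter_faces(detections, threshold=0.9):
    """Keep only face regions whose detection confidence meets the set value.

    detections: list of (box, confidence) pairs, where box is
    (x1, y1, x2, y2) in pixel coordinates.
    """
    return [(box, conf) for box, conf in detections if conf >= threshold]


if __name__ == "__main__":
    dets = [((10, 10, 90, 110), 0.98),   # clear frontal face
            ((200, 40, 240, 80), 0.55),  # low-confidence false positive
            ((300, 20, 380, 120), 0.93)]
    kept = filter_faces(dets, threshold=0.9)
    print(len(kept))  # 2
```

Only the regions that survive this filter would proceed to the image quality check and, ultimately, the recognition model.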
In one embodiment, the image quality detection of the face region includes: performing moire detection on the face region; if no moire is detected, the image quality of the face region meets the requirement, otherwise it does not.
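The patent does not specify how moire is detected; one common heuristic looks for strong isolated peaks in the image's frequency spectrum, since moire is a periodic pattern. A minimal NumPy sketch under that assumption (the function name and `peak_ratio` threshold are illustrative and would be tuned on real data):

```python
import numpy as np

def has_moire(gray, peak_ratio=20.0):
    """Heuristic moire check: moire shows up as strong isolated peaks in
    the 2-D Fourier spectrum away from the DC (low-frequency) region.

    gray: 2-D float array (grayscale face region).
    Returns True if a suspiciously strong periodic peak is found.
    """
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    # Zero out the low-frequency block around DC, which is large for any image.
    r = max(2, min(h, w) // 8)
    spec[cy - r:cy + r, cx - r:cx + r] = 0.0
    # A dominant isolated peak relative to the spectrum's median suggests moire.
    med = np.median(spec) + 1e-9
    return bool(spec.max() / med > peak_ratio)
```

A face recaptured from a screen tends to trip this check, while natural face texture and noise do not.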
In one embodiment, after outputting the risk prediction results of the face region for the various genetic disease types, the method further comprises: generating and outputting corresponding face thermodynamic diagrams (heat maps) according to those risk prediction results.
An apparatus for constructing a genetic disease facial recognition model, comprising:
the training data acquisition module is used for acquiring a first training data set and a second training data set, wherein the first training data set comprises a plurality of first face images, the second training data set comprises second face images of patients with genetic diseases, and each second face image carries a genetic disease type label;
the first training module is used for training the deep learning neural network with the first loss function using the first training data set to obtain a pre-training model;
the model updating module is used for responding to an update operation on the network parameters in the pre-training model and acquiring the updated pre-training model;
and the second training module is used for training the updated pre-training model with the second loss function using the second training data set until the second loss function reaches its minimum, so as to obtain the genetic disease facial recognition model.
A genetic disease risk prediction device comprising:
the device comprises a to-be-predicted image acquisition module, a to-be-predicted image acquisition module and a to-be-predicted image acquisition module, wherein the to-be-predicted image acquisition module is used for acquiring data of a to-be-predicted image;
the first detection module is used for identifying a face region in image data to be predicted by adopting a multitask convolutional neural network and detecting the face confidence coefficient of the face region;
the second detection module is used for extracting a face region with the face confidence coefficient meeting a set value and carrying out image quality detection on the face region;
and the prediction module is used for performing genetic disease facial recognition on the face region with the image quality meeting the requirements by the genetic disease facial recognition model constructed by the method and outputting risk prediction results of the face region on various genetic disease types.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the method as described above when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as set forth above.
According to the construction method for the genetic disease facial recognition model, the genetic disease risk prediction method, the devices, the computer equipment and the storage media described above, a deep learning neural network is trained on a large number of first face images with a first loss function to obtain a pre-training model; in response to an update operation on the network parameters of the pre-training model, an updated pre-training model is obtained; a small number of second face images are then used to train the updated pre-training model with a second loss function until the second loss function reaches its minimum, yielding a genetic disease facial recognition model that can detect genetic disease risk from a collected face image. The detection accuracy of the model can thus be improved even when data from genetic disease patients are scarce.
Drawings
FIG. 1 is a schematic flow chart showing a method for constructing a genetic disease facial recognition model according to an embodiment;
FIG. 2 is a block diagram of the structure of a genetic disease facial recognition model based on the SE-Resnet64 network in one embodiment;
FIG. 3 is a schematic diagram of the SE-Resnet residual network layer in FIG. 2;
FIG. 4(A) is a block diagram showing the structure of an A-Densenet169 network-based genetic disease facial recognition model in one embodiment;
FIG. 4(B) is a schematic diagram of the internal structure of the Attention module in FIG. 4 (A);
FIG. 5 is a diagram showing an environment where the method for predicting risk of a genetic disease is applied in one embodiment;
FIG. 6 is a schematic flow chart of a method for predicting risk of genetic disease according to one embodiment;
FIG. 7 is a diagram illustrating the structure of a P-Net network according to one embodiment;
FIG. 8 is a schematic diagram of the structure of an R-Net network in one embodiment;
FIG. 9 is a diagram illustrating the structure of an O-Net network according to one embodiment;
FIG. 10 is a block diagram showing an example of the construction of an apparatus for constructing a genetic disease facial recognition model;
FIG. 11 is a block diagram showing the structure of a genetic disease risk prediction apparatus according to an embodiment;
FIG. 12 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The construction method of the genetic disease facial recognition model can be applied to a server or a terminal, wherein the server can be realized by an independent server or a server cluster consisting of a plurality of servers. The terminal may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices. In an embodiment, the present application provides a method for constructing a genetic disease facial recognition model, as shown in fig. 1, which is described by taking the method as an example of being applied to a server, and specifically may include the following steps:
Step 102, a first training data set and a second training data set are obtained.
The first training data set comprises a plurality of first face images, which are face images of normal people without genetic diseases; the second training data set comprises second face images of patients with genetic diseases, each carrying a genetic disease type label. Specifically, the genetic disease type label may be information on the genetic disease type of the patient in the second face image, such as Down syndrome, Angelman syndrome or Cornelia de Lange syndrome.
In general, a large amount of training data (usually on the order of tens of thousands of samples or more) is required to build a model, so that it can learn from the training data and acquire accurate recognition and classification ability. However, the amount of data from genetic disease patients is very small: often only on the order of a hundred patients of a given type are available for data collection, and deep learning on a small data set easily overfits, leaving the model without good generalization ability. Therefore, in this embodiment, a transfer learning approach is used to construct the genetic disease facial recognition model: a large first training data set trains the base network to obtain a pre-training model, and a small second training data set then trains the pre-training model, so that the model can accurately extract facial genetic disease features even though the training set is scarce. Accordingly, the first face images in the first training data set are only normal face images, i.e. faces of normal people without genetic diseases; the second face images in the second training data set include faces of patients with various types of genetic diseases; and the number of first face images is much larger than the number of second face images.
Step 104, training the deep learning neural network with the first loss function using the first training data set to obtain a pre-training model.
The deep learning neural network can be implemented with Resnet (a residual network) or Densenet (a densely connected convolutional network). Specifically, the Resnet or Densenet is trained with the first training data set and the first loss function, so that the network learns the facial features that distinguish the different face images in the first training data set, producing a facial feature extraction pre-training model that is highly sensitive to facial features.
Step 106, responding to an update operation on the network parameters in the pre-training model, and acquiring the updated pre-training model.
The update operation is an operation instruction that modifies and updates the network parameters of the pre-training model; the network parameters may be network weights, the number of classes, the learning rate, and so on. Specifically, because the amount of genetic patient data is very small and deep learning on a small data set easily overfits, leaving the network without good generalization ability, in this embodiment the network parameters of the pre-training model are fine-tuned, while keeping the model's bottom-layer parameters unchanged, before the small data set is used for training. This optimizes the model's network parameters and improves the effect of subsequently training the pre-training model on the small data set.
Step 108, training the updated pre-training model with the second loss function using the second training data set until the second loss function reaches its minimum, so as to obtain the genetic disease facial recognition model.
Specifically, the updated pre-training model is trained with the second training data set and the second loss function until the second loss function reaches its minimum, giving the genetic disease facial recognition model. The network thereby learns the facial features of the different types of genetic disease face images in the second training data set: it learns to distinguish the facial features of the diseases well, while clustering face images of the same genetic disease type as tightly as possible (keeping the intra-class distance minimal), thus achieving the effect of increasing the inter-class distance and reducing the intra-class distance.
According to this construction method, a deep learning neural network is trained on a large number of first face images with the first loss function to obtain a pre-training model; in response to an update operation on the network parameters of the pre-training model, an updated pre-training model is obtained; a small number of second face images are then used to train the updated pre-training model with the second loss function until it reaches its minimum, yielding a genetic disease facial recognition model that can detect genetic disease risk from a collected face image. The detection accuracy of the model can thus be improved even when data from genetic disease patients are scarce.
In one embodiment, the first loss function may specifically adopt the softmax loss function, and the second loss function the A-Softmax loss function. Softmax separates classes well in feature space, i.e. it can classify two different genetic diseases well by their facial features. Compared with softmax, A-Softmax additionally optimizes the intra-class distance, so that feature points of the same class lie closer together in feature space: for two facial images that both belong to Down syndrome, for example, the model becomes better at judging that the two faces' features are close, and also better at judging that faces of different diseases are dissimilar. Because images of the same disease share similar features, A-Softmax makes each disease's images more compact. Based on this, softmax is used in the pre-training stage, so that facial attributes of the people in the training set are well distinguished, and A-Softmax is used in the fine-tuning stage to optimize the facial attributes that distinguish diseases, thereby increasing the inter-class distance and reducing the intra-class distance. Genetic disease recognition must distinguish the facial features of the various diseases while clustering faces of the same disease as tightly as possible (minimal intra-class distance), so combining the two classifiers to train the model greatly improves its classification ability.
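For illustration only, the distinction can be sketched in NumPy (this is a simplified toy, not the patent's implementation): with L2-normalised class weights, the softmax logit for class j is ||x||·cos θ_j, and A-Softmax (SphereFace-style) replaces cos θ_y for the true class with cos(m·θ_y), demanding a larger angular margin and hence tighter same-class clusters.

```python
import numpy as np

def softmax_loss(logits, y):
    """Cross-entropy with softmax for a single sample."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[y])

def a_softmax_loss(x, W, y, m=4):
    """Simplified A-Softmax loss for one sample.
    W: (num_classes, dim) class weights, rows L2-normalised here;
    x: (dim,) feature vector.  The true-class logit uses cos(m * theta)
    instead of cos(theta).  The piecewise monotonic extension of
    cos(m * theta) used in the full formulation is omitted; this toy is
    valid while m * theta_y stays below pi.
    """
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    xnorm = np.linalg.norm(x)
    cos = Wn @ x / xnorm
    logits = xnorm * cos                       # plain softmax logits
    theta_y = np.arccos(np.clip(cos[y], -1.0, 1.0))
    logits[y] = xnorm * np.cos(m * theta_y)    # angular margin on the true class
    return softmax_loss(logits, y)
```

For a feature slightly off its class direction, the margin version yields a larger loss than plain softmax on the same sample, which is exactly the pressure toward compact within-disease clusters described above.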
In one embodiment, the deep learning neural network may adopt Resnet64 (a 64-layer residual network) as the backbone network, with SE (squeeze-and-excitation) modules added to each residual network layer of Resnet64 to form the SE-Resnet64 network structure shown in fig. 2. The structure comprises, connected in sequence, a preliminary feature extraction layer (7 × 7 Conv + Pooling), 4 SE-Resnet residual modules and a final fully connected layer (FC). In this embodiment, every two SE-Resnet residual network layers form one SE-Resnet residual block, and the 4 SE-Resnet residual modules contain 5, 9, 12 and 5 SE-Resnet residual blocks in sequence. The numbers of channels of the residual modules are 64, 128, 256 and 512 respectively, so that low-order features are gradually abstracted into high-order features and deeper, richer features are extracted. The input image first passes through the convolution layer (7 × 7 Conv) to obtain a preliminary feature map, enters the residual layers (SE-Resnet) for feature extraction after the pooling layer (Pooling), and finally, after an average pooling layer (Avg Pooling), outputs the detection result (syndrome) through the fully connected layer (FC).
Each SE-Resnet residual block comprises one layer of Resnet plus an SE block. The squeeze-and-excitation block (SE block) is lightweight and occupies few resources. Through the squeeze operation it compresses the facial features of the image, turning each two-dimensional feature channel into a single real number that to some extent has a global receptive field, with the output dimension matching the number of input feature channels. The global information extraction capability of the SE block therefore lets the network make full use of global information from the shallow layers, improving its extraction of the global facial features of genetic diseases. Compared with conventional face recognition, genetic disease recognition needs more global disease information in addition to the local features of a specific individual, so the added SE blocks help the network weight informative facial features more heavily and uninformative ones less.
Specifically, each SE-Resnet residual block has the structure shown in fig. 3, comprising, connected in sequence, a residual layer (Residual), a global pooling layer (Global pooling), a first fully connected layer (FC1), a first activation layer (ReLU), a second fully connected layer (FC2) and a second activation layer (Sigmoid). With this structure the weight assigned to normal facial feature parts can be reduced and the weight assigned to diseased facial feature parts increased; from the model's perspective, weights are assigned to feature channels according to the needs of classification. A genetically diseased face has many features, including subtle features of the eyes, eye distance, hair, nose and cheekbones, each of which can be viewed as one or more feature channels and thus abstracted as a feature block. Weight assignment then proceeds roughly as follows: a Down syndrome facial image, for example, shows a wide eye distance and a collapsed nasal bridge; if the SE block assigns a high weight to the eye-distance features, that assignment benefits classification, the classifier feeds this back as a decrease in the loss function, indicating that the model's classification has improved, and the network therefore retains the information that a high weight should be given to the eye-distance channel. That is, through learning, the network discovers that eye distance is a feature beneficial to classification.
By analogy, the whole model performs this weight-assignment self-learning over all channels of the image, and ultimately assigns high weights quite accurately to the diseased facial features of a genetic disease and low weights to facial features irrelevant to classification. Facial feature extraction is thereby refined from coarse global extraction to precise extraction, which also improves the accuracy of the model's predictions.
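The squeeze, excitation and scale sequence described above can be sketched in NumPy as a hand-rolled illustration (random weights standing in for learned ones; this is not the patent's trained network):

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation recalibration of a feature map.
    x: (H, W, C) feature map.
    w1: (C, C//r) and w2: (C//r, C) are the two FC layers (r = reduction ratio).
    Returns the feature map with each channel rescaled by a learned weight.
    """
    # Squeeze: global average pooling turns each channel into one real number.
    z = x.mean(axis=(0, 1))                       # (C,)
    # Excitation: FC -> ReLU -> FC -> Sigmoid gives per-channel weights in (0, 1).
    s = np.maximum(z @ w1 + b1, 0.0)              # ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2 + b2)))      # Sigmoid
    # Scale: reweight channels, e.g. boosting diseased-feature channels.
    return x * s                                  # broadcast over H and W
```

The output shape equals the input shape, so (as in the SE-Resnet64 structure) the block slots into a residual layer without changing the surrounding architecture.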
In one embodiment, the deep learning neural network may instead adopt Densenet169 (a 169-layer densely connected convolutional network) as the backbone network, with an Attention module (attention mechanism) added after the maximum pooling layer of Densenet169 to form the A-Densenet169 network structure shown in fig. 4(A). Specifically, as shown in fig. 4(A), the structure comprises a preliminary feature extraction layer (7 × 7 Conv), a maximum pooling layer (Pooling), an attention mechanism layer (Attention Block), 4 dense convolution blocks (Dense Block1, Dense Block2, Dense Block3 and Dense Block4), 3 transition layers (Transition1, Transition2 and Transition3), a global pooling layer (7 × 7 Global Avg Pool) and a final fully connected layer (FC). The 4 dense convolution blocks contain 6, 12, 32 and 32 Densenet convolutional layers respectively, and each Densenet convolutional layer has convolution kernels of two scales, 1 × 1 and 3 × 3.
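The defining DenseNet property, that each convolutional layer consumes the channel-wise concatenation of all preceding feature maps, can be sketched as follows (toy dimensions; the 1 × 1 convolutions are modelled as matrix multiplies and all names are illustrative):

```python
import numpy as np

def dense_block(x, layer_weights):
    """Toy dense block: layer i consumes the channel-wise concatenation of
    the block input and all previous layers' outputs (DenseNet connectivity).
    x: (H, W, C0); layer_weights[i]: (C_in_i, growth), acting as a 1x1 conv.
    """
    features = [x]
    for w in layer_weights:
        inp = np.concatenate(features, axis=-1)   # all preceding feature maps
        out = np.maximum(inp @ w, 0.0)            # 1x1 conv + ReLU
        features.append(out)
    return np.concatenate(features, axis=-1)

# With C0 = 4 input channels, a growth rate of 2 and 3 layers, the block
# output has 4 + 3 * 2 = 10 channels.
```

Each layer's input width grows by the growth rate, which is why the real Dense Blocks (6, 12, 32 and 32 layers) interleave transition layers to keep channel counts manageable.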
In the traditional deep neural network model, extraction of genetic disease facial features is not targeted: features of all parts of the face are extracted indiscriminately, whereas most facial manifestations of a genetic disease are feature changes in particular organs. That is, for a given genetic disease the eyes, nose, mouth and so on have independent symptom phenotypes that do not necessarily co-occur, so applying the same feature extraction to affected and unaffected parts reduces the model's ability to recognize the disease's characteristics. Therefore, to let the model accurately extract the affected facial features of genetic diseases, an Attention module is added. As shown in fig. 4(B), the Attention module includes three 1 × 1 convolution kernels Wθ, Wφ and Wg arranged at the input end, a 1 × 1 convolution kernel arranged at the output end, and an operation module between the input and output ends. Specifically:
The network input is X = (H, W, 1024). X passes through the 1 × 1 convolutions Wθ and Wφ respectively, which reduce the number of channels and give the θ-convolved and φ-convolved outputs. These two outputs are reshaped into two-dimensional (HW, channels) matrices and matrix-multiplied (one of them transposed) to obtain an (HW, HW) output, on which a softmax operation yields the attention map; after this operation, the correlation between each pixel in the feature map and the pixels at all other positions has been found. The g branch likewise undergoes a 1 × 1 convolution and a shape transformation, and is then matrix-multiplied with the softmax output to obtain an (H, W, 512) result, applying the channel attention mechanism at the corresponding position of every feature map across all channels: each output position value is a weighted average over all other positions, and the softmax operation further highlights their commonality. Finally, a 1 × 1 convolution restores the original number of channels, so the input and output scales are exactly the same. Because its input size equals its output size, the module is a fixed block that can be inserted into any network, and its new weights are learned during transfer learning, so introducing the new module does not prevent the pre-training weights from being used.
Based on this, by using the Attention mechanism the model can learn a weight distribution over the features and thereby distinguish the facial features that benefit classification, so that the model's concentration on different parts of the input feature map differs. For example, Down syndrome, a genetic facial disease, is characterized by a wide inter-eye distance, a low and flat nasal root and small palpebral fissures, so its main features are concentrated around the eyes. After the attention module is added, the model devotes more weight and learning capacity to the eye features, so the extraction of facial features is better and the accuracy of model classification is improved.
In one embodiment, responding to the operation of updating the network parameters in the pre-training model specifically includes: acquiring a network-weight freezing operation for a set layer in the pre-training model, acquiring a setting operation for the number of categories output by the fully connected layer and for the model learning rate, and then updating the network parameters of the pre-training model according to the freezing and setting operations to obtain the updated pre-training model. The number of categories output by the fully connected layer may be set according to the number of genetic disease types contained in the training data of the second training data set; for example, if the second training data set contains training data for 3 different genetic disease types, the number of output categories may be set to 3. The model learning rate may be set slightly larger than the learning rate of an ordinary face recognition model. For example, if the deep learning neural network adopts the SE-Resnet64 network structure shown in fig. 2, then after it is trained with the first training data set, the weights of the 10-layer SE-Resnet residual network in the first SE-Resnet residual module of the trained network may be frozen. If the deep learning neural network adopts the A-Densenet169 network structure shown in fig. 4(a), then after it is trained with the first training data set, the weights of the 6-layer Densenet convolutional network in the first dense convolutional block (Densenet Block1) of the trained network may be frozen. This achieves the effect of optimizing the model parameters.
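As an illustration of the freeze-and-retrain update, the sketch below skips gradient updates for frozen layers; the layer names, sizes and the 3-class head are hypothetical, and a real implementation would rely on a deep-learning framework's gradient-freezing mechanism:

```python
import numpy as np

def sgd_step(params, grads, frozen, lr):
    """One SGD update that skips layers whose weights are frozen.

    `params`/`grads` map layer names to arrays; `frozen` is the set of
    layer names whose pre-trained weights must stay fixed.
    """
    return {name: (p if name in frozen else p - lr * grads[name])
            for name, p in params.items()}

# Hypothetical 3-"layer" model: freeze the first block, retrain the rest,
# with the head sized to the number of genetic disease classes (3 here).
params = {"block1": np.ones(4), "block2": np.ones(4), "fc_3cls": np.ones(3)}
grads = {k: np.full_like(v, 0.5) for k, v in params.items()}
new = sgd_step(params, grads, frozen={"block1"}, lr=0.1)
```

After the step, `block1` keeps its pre-trained weights while the unfrozen layers move by `lr * grad`.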
In one embodiment, the present application further provides a genetic disease risk prediction method, which can be applied to the application environment shown in fig. 5. In this embodiment, the terminal 502 may be various devices having an image capturing function, such as but not limited to various smart phones, tablet computers, cameras, and portable image capturing devices, and the server 504 may be implemented by an independent server or a server cluster formed by a plurality of servers. Specifically, the terminal 502 is configured to acquire data of a to-be-predicted image, and send the acquired data of the to-be-predicted image to the server 504 through a network, and the server 504 detects the face confidence and the image quality of the data of the to-be-predicted image, and further performs genetic disease face identification on a face image meeting requirements, so as to output a risk prediction result of a face region on each genetic disease type to the terminal 502.
In one embodiment, as shown in fig. 6, a method for predicting genetic disease risk is provided, which is illustrated by applying the method to the server in fig. 5, and includes the following steps:
step 602, acquiring data of a picture to be predicted.
The to-be-predicted image data are the image data that require genetic disease risk prediction, usually a captured face image.
And step 604, identifying a face region in the image data to be predicted by adopting a multitask convolutional neural network, and detecting the face confidence of the face region.
The multi-task convolutional neural network (MTCNN) adopts three cascaded networks and is used to detect the face region in the input image data and to detect facial key points. That is, the MTCNN locates the face region in the to-be-predicted image data to identify the optimal face region (for example, one containing the five sense organs) and detects the key points in the face region — that is, it checks whether the face region is a spurious image — and thereby outputs the face confidence.
And 606, extracting the face region with the face confidence coefficient meeting the set value, and detecting the image quality of the face region.
The set value may be a preset threshold of human face confidence. Specifically, only when the face confidence of the face region meets the set threshold, the image quality of the face region is further detected, in this embodiment, a quality detection classification model may be used to detect the image quality of the face region.
And step 608, performing genetic disease facial recognition on the face region with the image quality meeting the requirements by using the genetic disease facial recognition model, and outputting a risk prediction result of the face region on each genetic disease type.
Specifically, the genetic disease face recognition model constructed by the method is used for carrying out genetic disease face recognition on the face region with the image quality meeting the requirements, so that risk prediction results of the face region on various genetic disease types are obtained, and the results are output to the terminal.
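One plausible way to turn the recognition model's per-class scores into the risk prediction result over genetic disease types is a softmax normalization; the normalization choice and the disease names below are assumptions for illustration:

```python
import numpy as np

def risk_prediction(logits, disease_types):
    """Convert raw per-class scores into a per-disease risk result via
    softmax (an assumption; the patent does not fix the normalization)."""
    z = np.exp(logits - np.max(logits))   # shift for numerical stability
    probs = z / z.sum()
    return dict(zip(disease_types, probs))

result = risk_prediction(np.array([2.0, 0.5, 0.1]),
                         ["down_syndrome", "disease_b", "disease_c"])
```

The resulting dictionary is what the server would send back to the terminal as the risk prediction for each genetic disease type.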
According to the genetic disease risk prediction method, a multi-task convolutional neural network identifies the face region in the to-be-predicted image data and detects its face confidence; the face region whose face confidence meets a set value is then extracted and its image quality detected; and the genetic disease facial recognition model constructed by the above method performs genetic disease facial recognition on the face region whose image quality meets the requirements, outputting the risk prediction results of the face region for each genetic disease type. Because the output of each layer depends on the calculation of the previous layer, the accuracy of the genetic disease risk prediction results is greatly improved.
In one embodiment, performing image quality detection on the face region specifically includes: performing moiré detection on the face region through a binary moiré-classification detection model; if no moiré is detected, the image quality of the face region is determined to meet the requirement, otherwise it is determined not to meet the requirement. It can be understood that, to ensure the accuracy of the genetic disease risk prediction result, the detection of step 608 is not performed for face regions that do not meet the image quality requirement.
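The patent uses a trained binary classification model for moiré detection; as a rough illustrative stand-in, moiré interference can be flagged by its unusually strong high-frequency spectral energy. The comparison below is a heuristic sketch, not the patent's method:

```python
import numpy as np

def high_freq_ratio(img):
    """Fraction of spectral energy outside a low-frequency disc.

    A crude stand-in for a trained moire classifier: moire interference
    shows up as strong periodic energy far from the spectrum's center.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    low = power[r < min(h, w) / 8].sum()
    return 1.0 - low / power.sum()

y, x = np.mgrid[0:64, 0:64]
smooth = x / 64.0                                  # clean gradient image
moire = smooth + 0.5 * np.sin(2 * np.pi * x / 3)   # fine stripes overlaid
```

A deployed system would learn the decision boundary from labeled data instead of hand-picking a radius or threshold.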
Specifically, the three cascaded networks in MTCNN are P-Net (Proposal Network), R-Net (Refine Network) and O-Net (Output Network).
P-Net is constructed as a fully convolutional network. As shown in fig. 7, its input is a 12 × 12 picture, so the generated training data (obtained by generating bounding boxes and then cropping them into 12 × 12 pictures) need to be converted into a 12 × 12 × 3 structure before training.
The input passes through 10 convolution kernels of 3 × 3 and a 2 × 2 max pooling (stride 2) operation to generate 10 feature maps of 5 × 5. These pass through 16 convolution kernels of 3 × 3 × 10 to generate 16 feature maps of 3 × 3, and then through 32 convolution kernels of 3 × 3 × 16 to generate 32 feature maps of 1 × 1.
From the 32 feature maps of 1 × 1, 2 convolution kernels of 1 × 1 × 32 generate 2 feature maps of 1 × 1 for classification; 4 convolution kernels of 1 × 1 × 32 generate 4 feature maps of 1 × 1 for judging the regression box; and 10 convolution kernels of 1 × 1 × 32 generate 10 feature maps of 1 × 1 for judging the facial contour points.
In summary, initial feature extraction and bounding-box proposal are performed by convolution, the window size is adjusted automatically, and candidate windows are filtered by non-maximum suppression. The feature result is fed into three convolutional layers: a face classifier judges whether the region is a human face, while bounding-box regression and a facial key-point locator propose the face part, and multiple windows that may contain a face are output. Finally, the preliminarily screened face regions are input into R-Net for processing.
The structure of R-Net is a convolutional neural network. As shown in fig. 8, compared with P-Net a fully connected layer is added to the network structure. Its training data mainly consist of the regression boxes and facial-contour key points generated by P-Net. R-Net further selects and adjusts the face-region windows input from the previous layer, filters the windows with high precision, and optimizes the face region.
The specific process comprises the following steps: the model inputs 24 × 24 pictures, and generates 28 11 × 11 feature maps after passing through 28 convolution kernels of 3 × 3 and max posing of 3 × 3(stride 2); 48 signatures of 4 × 4 were generated after 48 convolution kernels of 3 × 28 and max firing of 3 × 3(stride 2); after passing through 64 convolution kernels of 2 x 48, 64 feature maps of 3 x 3 were generated; converting the 3 x 64 feature map into a 128-sized fully connected layer; converting the regression frame classification problem into a full connection layer with the size of 2; converting the position regression problem of the bounding box into a full connection layer with the size of 4; face contour keypoints are converted into fully connected layers of size 10.
For O-Net, the structure is a more complex convolutional neural network. As shown in fig. 9, compared with R-Net it has one more convolutional layer, so the network takes in more input features. It identifies the face region under stronger supervision and regresses the facial features, performing face discrimination, face-region bounding-box regression and facial feature localization, and finally outputs the top-left and bottom-right coordinates of the face region together with its five facial feature points.
The specific process comprises the following steps: the model input was a 48 × 3 picture, which was transformed into 32 23 × 23 signatures by 32 convolution kernels of 3 × 3 and max firing of 3 × 3(stride 2); after passing through 64 convolution kernels of 3 × 32 and max posing of 3 × 3(stride 2), the feature maps are converted into 64 feature maps of 10 × 10; after passing through 64 convolution kernels of 3 × 64 and max posing of 3 × 3(stride 2), the feature maps are converted into 64 feature maps of 4 × 4; converting into 128 characteristic maps of 3 × 3 through 128 convolution kernels of 2 × 64; converting into a full link layer with 256 sizes through a full link operation; and generating regression frame classification features with the size of 2, regression features with the size of 4 and regression features with the size of 10 for the position of the face contour. And the identified optimal face region is clipped, and the face confidence is output.
In one embodiment, after outputting the risk prediction results of the face region for each genetic disease type, the method further includes: generating and outputting a face heat map according to those risk prediction results. Specifically, because the facial features of a genetic disease are usually feature changes of particular organs, a face heat map corresponding to the detected face region can be generated based on the risk prediction result; in the heat map, the color of the facial features carrying large weight in the prediction result is more intuitive and conspicuous, so the output result is easier to interpret.
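Rendering the face thermodynamic diagram (heat map) amounts to normalizing the per-region weights so that the most strongly weighted facial features get the hottest color; a minimal sketch, where min-max normalization is an assumption rather than the patent's stated scheme:

```python
import numpy as np

def face_heat_map(weights):
    """Normalize per-region attention/risk weights to [0, 1] so the
    strongest-weighted facial regions render in the hottest color."""
    w = np.asarray(weights, dtype=float)
    span = w.max() - w.min()
    if span == 0:
        return np.zeros_like(w)   # flat weights -> uniform (cold) map
    return (w - w.min()) / span

heat = face_heat_map([[0.1, 0.9], [0.5, 0.3]])
```

The normalized array would then be mapped through a color scale and overlaid on the detected face region.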
It should be understood that, although the steps in the flowcharts of figs. 1-9 are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in figs. 1-9 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily executed sequentially but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 10, there is provided an apparatus for constructing a genetic disease facial recognition model, including: a training data acquisition module 1001, a first training module 1002, a model update module 1003, and a second training module 1004, wherein:
a training data acquisition module 1001 configured to acquire a first training data set and a second training data set, where the first training data set includes a plurality of first face images, the second training data set includes a second face image of a patient with a genetic disease, and the second face image is provided with a genetic disease type tag;
the first training module 1002 is configured to train a deep learning neural network by using a first loss function using a first training data set to obtain a pre-training model;
a model updating module 1003, configured to respond to an update operation on a network parameter in the pre-training model, and obtain an updated pre-training model;
the second training module 1004 is configured to train the updated pre-training model with the second loss function by using the second training data set until the second loss function reaches a minimum value, so as to obtain a genetic disease facial recognition model.
In one embodiment, the first loss function employs a softmax loss function and the second loss function employs an a-softmax loss function.
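For reference, the A-Softmax (SphereFace) loss replaces the true-class target logit cos θ of the ordinary softmax loss with the angular-margin function ψ(θ) = (−1)^k·cos(mθ) − 2k for θ ∈ [kπ/m, (k+1)π/m]. The sketch below, with an illustrative margin m = 4, checks numerically that ψ is monotone decreasing and never exceeds cos θ, which is what makes the margin penalize intra-class angles:

```python
import numpy as np

def psi(theta, m):
    """Piecewise A-Softmax target: (-1)^k * cos(m*theta) - 2k,
    where k indexes the interval [k*pi/m, (k+1)*pi/m] containing theta."""
    k = np.floor(theta * m / np.pi).astype(int)
    return (-1.0) ** k * np.cos(m * theta) - 2.0 * k

thetas = np.linspace(0.0, np.pi, 200)
margin = psi(thetas, m=4)      # A-Softmax target logit
plain = np.cos(thetas)         # ordinary softmax target logit
```

Because ψ(θ) ≤ cos θ with equality only at θ = 0, the network must pull same-class features to smaller angles to reach the same loss, sharpening class separation during fine-tuning.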
In one embodiment, the deep learning neural network adopts Resnet64 as a backbone network, and adds an SE module in each layer of the residual network of Resnet64 to form an SE-Resnet64 network structure; the first training module 1002 is specifically configured to train the SE-Resnet64 network structure using the first loss function.
In one embodiment, the SE-Resnet64 network structure includes 4 residual modules connected in sequence, each residual module includes 10-layer, 18-layer, 24-layer and 10-layer residual networks, and the number of channels corresponding to each residual module is 64, 128, 256 and 512.
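The SE module mentioned above reweights channels through a squeeze-and-excitation step; a minimal NumPy sketch, in which the reduction ratio and random weights are illustrative rather than the patent's configuration:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation: global-average-pool each channel,
    pass through a two-layer bottleneck (ReLU then sigmoid), and
    rescale the input channels by the resulting weights."""
    s = x.mean(axis=(0, 1))                 # squeeze: (C,) channel descriptor
    e = np.maximum(s @ w1, 0.0)             # excitation FC1 + ReLU
    e = 1.0 / (1.0 + np.exp(-(e @ w2)))     # excitation FC2 + sigmoid, in (0, 1)
    return x * e                            # channel-wise rescaling

rng = np.random.default_rng(1)
c, ratio = 8, 4                             # illustrative channel count / reduction
x = rng.standard_normal((6, 6, c))
out = se_block(x,
               rng.standard_normal((c, c // ratio)),
               rng.standard_normal((c // ratio, c)))
```

Since the sigmoid gates lie strictly in (0, 1), the block can only attenuate channels, letting the network emphasize informative ones relative to the rest.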
In one embodiment, the deep learning neural network adopts Densenet169 as a backbone network, and an Attention module is added after the maximum pooling layer of Densenet169 to form an A-Densenet169 network structure; the first training module 1002 is specifically configured to train the A-Densenet169 network structure using the first loss function.
In one embodiment, the model update module 1003 is specifically configured to: acquiring network weight freezing operation on a set layer in a pre-training model and acquiring setting operation on the number of categories and the model learning rate output by a full connection layer in the pre-training model; and updating the network parameters of the pre-training model according to the freezing operation and the setting operation to obtain the updated pre-training model.
For specific limitations of the construction device of the genetic disease facial recognition model, reference may be made to the above limitations of the construction method of the genetic disease facial recognition model, which are not described herein again. The various modules in the above construction device for the genetic disease facial recognition model can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, as shown in fig. 11, there is provided a genetic disease risk prediction apparatus including: a data to be predicted acquisition module 1101, a first detection module 1102, a second detection module 1103, and a prediction module 1104, wherein:
a data to be predicted acquisition module 1101, configured to acquire data to be predicted;
the first detection module 1102 is configured to identify a face region in image data to be predicted by using a multitask convolutional neural network, and detect a face confidence of the face region;
the second detection module 1103 is configured to extract a face region whose face confidence meets a set value, and perform image quality detection on the face region;
and the prediction module 1104 is used for performing genetic disease facial recognition on the face region with the image quality meeting the requirement by using the genetic disease facial recognition model constructed by the method, and outputting a risk prediction result of the face region on each genetic disease type.
In one embodiment, the second detecting module 1103 is specifically configured to: and (3) carrying out moire detection on the face region, if no moire is detected, determining that the image quality of the face region meets the requirement, otherwise, determining that the image quality of the face region does not meet the requirement.
In one embodiment, the above apparatus further comprises: a face heat map generation module, configured to generate and output a corresponding face heat map according to the risk prediction results of the face region for each genetic disease type.
For the specific definition of the genetic disease risk prediction device, reference may be made to the above definition of the genetic disease risk prediction method, which is not described herein again. The modules in the genetic disease risk prediction device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 12. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing a training data set or image data to be predicted. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of constructing a genetic disease facial recognition model or a method of genetic disease risk prediction.
Those skilled in the art will appreciate that the architecture shown in fig. 12 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a first training data set and a second training data set, wherein the first training data set comprises a plurality of first face images, the second training data set comprises a second face image of a patient with a genetic disease, and the second face image is provided with a genetic disease type label;
training a deep learning neural network by using a first loss function by using a first training data set to obtain a pre-training model;
responding to the updating operation of the network parameters in the pre-training model, and acquiring the updated pre-training model;
and training the updated pre-training model by using a second loss function by using a second training data set until the second loss function reaches the minimum value, so as to obtain the genetic disease face recognition model.
In one embodiment, the deep learning neural network adopts Resnet64 as a backbone network and adds an SE module to each layer of the residual network of Resnet64 to form an SE-Resnet64 network structure, and the processor when executing the computer program further implements the following step: training the SE-Resnet64 network structure using the first loss function.
In one embodiment, the deep learning neural network adopts densenert 169 as a backbone network, and adds an attention module after the maximum pooling layer of the densenert 169 to form an a-densenert 169 network structure, so that the processor when executing the computer program further realizes the following steps: the A-Densenet169 network structure is trained using a first loss function.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring network weight freezing operation on a set layer in a pre-training model and acquiring setting operation on the number of categories and the model learning rate output by a full connection layer in the pre-training model; and updating the network parameters of the pre-training model according to the freezing operation and the setting operation to obtain the updated pre-training model.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring data of a picture to be predicted; adopting a multitask convolutional neural network to identify a face region in image data to be predicted, and detecting a face confidence coefficient of the face region; extracting a face region with the face confidence coefficient meeting a set value, and carrying out image quality detection on the face region; and carrying out genetic disease facial recognition on the face region with the image quality meeting the requirements by using the constructed genetic disease facial recognition model, and outputting a risk prediction result of the face region on each genetic disease type.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and (3) carrying out moire detection on the face region, if no moire is detected, determining that the image quality of the face region meets the requirement, otherwise, determining that the image quality of the face region does not meet the requirement.
In one embodiment, the processor, when executing the computer program, further performs the steps of: generating a face heat map according to the risk prediction results of the face region for each genetic disease type and outputting the face heat map.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a first training data set and a second training data set, wherein the first training data set comprises a plurality of first face images, the second training data set comprises a second face image of a patient with a genetic disease, and the second face image is provided with a genetic disease type label;
training a deep learning neural network by using a first loss function by using a first training data set to obtain a pre-training model;
responding to the updating operation of the network parameters in the pre-training model, and acquiring the updated pre-training model;
and training the updated pre-training model by using a second loss function by using a second training data set until the second loss function reaches the minimum value, so as to obtain the genetic disease face recognition model.
In one embodiment, the deep learning neural network employs Resnet64 as a backbone network, and adds SE modules to each layer of residual network of Resnet64 to form a SE-Resnet64 network structure, and the computer program when executed by the processor further implements the steps of: the SE-Resnet64 network structure is trained using a first loss function.
In one embodiment, the deep learning neural network employs denet 169 as a backbone network, and adds an attention module after the maximum pooling layer of denet 169 to form an a-denet 169 network structure, and the computer program when executed by the processor further implements the following steps: the A-Densenet169 network structure is trained using a first loss function.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring network weight freezing operation on a set layer in a pre-training model and acquiring setting operation on the number of categories and the model learning rate output by a full connection layer in the pre-training model; and updating the network parameters of the pre-training model according to the freezing operation and the setting operation to obtain the updated pre-training model.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring data of a picture to be predicted; adopting a multitask convolutional neural network to identify a face region in image data to be predicted, and detecting a face confidence coefficient of the face region; extracting a face region with the face confidence coefficient meeting a set value, and carrying out image quality detection on the face region; and carrying out genetic disease facial recognition on the face region with the image quality meeting the requirements by using the constructed genetic disease facial recognition model, and outputting a risk prediction result of the face region on each genetic disease type.
In one embodiment, the computer program when executed by the processor further performs the steps of: and (3) carrying out moire detection on the face region, if no moire is detected, determining that the image quality of the face region meets the requirement, otherwise, determining that the image quality of the face region does not meet the requirement.
In one embodiment, the computer program when executed by the processor further performs the steps of: generating a face heat map according to the risk prediction results of the face region for each genetic disease type and outputting the face heat map.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (13)

1. A method for constructing a genetic disease facial recognition model, the method comprising:
acquiring a first training data set and a second training data set, wherein the first training data set comprises a plurality of first face images, the second training data set comprises a second face image of a patient with a genetic disease, and the second face image is provided with a genetic disease type label;
training a deep learning neural network by using a first loss function by using the first training data set to obtain a pre-training model;
responding to the updating operation of the network parameters in the pre-training model, and acquiring an updated pre-training model;
and training the updated pre-training model by using a second loss function by using the second training data set until the second loss function reaches a minimum value to obtain a genetic disease face recognition model.
2. The method of claim 1, wherein the first loss function employs a softmax loss function, and wherein the second loss function employs an a-softmax loss function.
3. The method according to claim 1, wherein the deep learning neural network adopts Resnet64 as a backbone network, and adds SE modules in each layer of residual network of the Resnet64 to form a SE-Resnet64 network structure; the training of the deep learning neural network using the first loss function includes:
the SE-Resnet64 network structure is trained using a first loss function.
4. The method according to claim 3, wherein the SE-Resnet64 network structure comprises 4 residual modules connected in sequence, each residual module comprises 10-layer, 18-layer, 24-layer and 10-layer residual networks, and the number of channels corresponding to each residual module is 64, 128, 256 and 512.
5. The method according to claim 1, wherein the deep learning neural network adopts Densenet169 as a backbone network, and adds an Attention module after the maximum pooling layer of the Densenet169 to form an A-Densenet169 network structure; the training of the deep learning neural network using the first loss function includes:
the A-Densenet169 network structure is trained using a first loss function.
6. The method of any of claims 1 to 5, wherein the responding to the update operation of the network parameters in the pre-trained model comprises:
acquiring a network weight freezing operation on set layers in the pre-training model, and acquiring a setting operation on the number of output categories of the fully connected layer in the pre-training model and on the model learning rate;
and updating the network parameters of the pre-training model according to the freezing operation and the setting operation to obtain the updated pre-training model.
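Claim 6's update operation amounts to freezing the weights of chosen layers and reconfiguring the classifier head's class count and the learning rate. The sketch below uses a dict as a hypothetical stand-in for a real framework's parameter tree (in PyTorch this would be setting `requires_grad=False` and replacing the final linear layer); all layer names and values are invented.

```python
def update_pretrained(model, freeze_layers, num_classes, learning_rate):
    for name in freeze_layers:
        model["layers"][name]["trainable"] = False       # network weight freezing
    model["layers"]["fc"]["out_features"] = num_classes  # genetic-disease class count
    model["learning_rate"] = learning_rate               # fine-tuning learning rate
    return model

pretrained = {
    "layers": {
        "conv1":  {"trainable": True},
        "block1": {"trainable": True},
        "fc":     {"trainable": True, "out_features": 10000},  # face-ID classes
    },
    "learning_rate": 0.1,
}
updated = update_pretrained(pretrained, ["conv1", "block1"],
                            num_classes=12, learning_rate=1e-4)
```

Freezing the early layers keeps the generic face features learned on the first data set, while the small learning rate adapts only the head to the much smaller genetic-disease data set.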
7. A method for predicting the risk of a genetic disorder, the method comprising:
acquiring data of a picture to be predicted;
adopting a multitask convolutional neural network to identify a face region in the picture data to be predicted, and detecting a face confidence of the face region;
extracting the face region whose face confidence meets a set threshold, and performing image quality detection on the face region;
and performing genetic disease facial recognition on the face region whose image quality meets the requirement by using the genetic disease facial recognition model constructed by the method of any one of claims 1 to 6, and outputting risk prediction results of the face region for each genetic disease type.
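The claim 7 flow (detect faces, keep only confident detections, gate on image quality, then run recognition) can be sketched as a small pipeline. The stage callables (`detect_faces`, `quality_ok`, `recognise`), the 0.9 confidence threshold, and the disease labels are hypothetical stand-ins; the claim does not fix any of these concrete choices.

```python
def predict_risks(image, detect_faces, quality_ok, recognise, conf_threshold=0.9):
    results = []
    for face, confidence in detect_faces(image):
        if confidence < conf_threshold:   # drop low-confidence detections
            continue
        if not quality_ok(face):          # e.g. moire detected
            continue
        results.append(recognise(face))   # per-disease risk scores
    return results

# Toy stand-ins for the three stages:
out = predict_risks(
    "image-bytes",
    detect_faces=lambda img: [("face_a", 0.95), ("face_b", 0.40)],
    quality_ok=lambda face: True,
    recognise=lambda face: {"disease_1": 0.10, "disease_2": 0.02},
)
```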
8. The method of claim 7, wherein the detecting the image quality of the face region comprises:
performing moire detection on the face region; if no moire is detected, determining that the image quality of the face region meets the requirement; otherwise, determining that the image quality of the face region does not meet the requirement.
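One plausible reading of this moire check, sketched naively: moire fringes appear as strong high-frequency periodic components, so the high-frequency share of an intensity profile's spectrum can be compared against a threshold. The naive DFT, the 1-D profiles, and the threshold below are all illustrative; a real detector would analyse the full 2-D image.

```python
import math

def has_moire(row, hf_ratio_threshold=0.3):
    n = len(row)
    mean = sum(row) / n
    centred = [v - mean for v in row]          # remove the DC component
    energy = [0.0] * (n // 2)
    for k in range(1, n // 2):                 # one-sided spectrum, naive DFT
        re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(centred))
        im = sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(centred))
        energy[k] = re * re + im * im
    total = sum(energy) or 1.0
    high = sum(energy[n // 4:])                # upper half of the spectrum
    return high / total > hf_ratio_threshold

smooth  = [math.sin(2 * math.pi * i / 32) for i in range(32)]        # slow variation
fringed = [math.sin(2 * math.pi * 12 * i / 32) for i in range(32)]   # fine fringes
```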
9. The method according to claim 7 or 8, wherein after the outputting of the risk prediction results of the face region for each genetic disease type, the method further comprises:
generating and outputting a corresponding face thermodynamic diagram according to the risk prediction results of the face region for each genetic disease type.
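The heat-map output step of claim 9 can be sketched as mapping per-region risk scores to heat intensities. The facial regions and the min-max scaling to [0, 1] are illustrative assumptions; the patent does not fix the colouring or normalisation scheme.

```python
def risk_heatmap(region_scores):
    """Map {region: risk score} to {region: heat in [0, 1]} by min-max scaling."""
    lo, hi = min(region_scores.values()), max(region_scores.values())
    span = (hi - lo) or 1.0                 # guard against identical scores
    return {region: (score - lo) / span for region, score in region_scores.items()}

heat = risk_heatmap({"eyes": 0.80, "nose": 0.20, "mouth": 0.50})
```

A rendering layer would then map each heat value to a colour and overlay it on the detected face region.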
10. An apparatus for constructing a genetic disease facial recognition model, comprising:
the training data acquisition module is used for acquiring a first training data set and a second training data set, wherein the first training data set comprises a plurality of first face images, the second training data set comprises second face images of patients suffering from genetic diseases, and each second face image carries a genetic disease type label;
the first training module is used for training the deep learning neural network on the first training data set using a first loss function to obtain a pre-training model;
the model updating module is used for responding to the updating operation of the network parameters in the pre-training model and acquiring the updated pre-training model;
and the second training module is used for training the updated pre-training model on the second training data set using a second loss function until the second loss function reaches a minimum value, so as to obtain a genetic disease face recognition model.
11. A genetic disease risk prediction device, comprising:
the to-be-predicted picture acquisition module is used for acquiring data of a picture to be predicted;
the first detection module is used for identifying a face region in the picture data to be predicted by adopting a multitask convolutional neural network, and detecting a face confidence of the face region;
the second detection module is used for extracting the face region whose face confidence meets a set threshold and performing image quality detection on the face region;
and the prediction module is used for performing genetic disease facial recognition on the face region whose image quality meets the requirement by using the genetic disease facial recognition model constructed by the method according to any one of claims 1 to 6, and outputting risk prediction results of the face region for each genetic disease type.
12. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 9 when executing the computer program.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 9.
CN202010118850.7A 2020-02-26 2020-02-26 Construction method and device for genetic disease facial recognition model Pending CN111368672A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010118850.7A CN111368672A (en) 2020-02-26 2020-02-26 Construction method and device for genetic disease facial recognition model


Publications (1)

Publication Number Publication Date
CN111368672A true CN111368672A (en) 2020-07-03

Family

ID=71211507

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010118850.7A Pending CN111368672A (en) 2020-02-26 2020-02-26 Construction method and device for genetic disease facial recognition model

Country Status (1)

Country Link
CN (1) CN111368672A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107291737A (en) * 2016-04-01 2017-10-24 腾讯科技(深圳)有限公司 Nude picture detection method and device
CN109215010A (en) * 2017-06-29 2019-01-15 沈阳新松机器人自动化股份有限公司 A kind of method and robot face identification system of picture quality judgement
CN108154503A (en) * 2017-12-13 2018-06-12 西安交通大学医学院第附属医院 A kind of leucoderma state of an illness diagnostic system based on image procossing
CN109615016A (en) * 2018-12-20 2019-04-12 北京理工大学 A Target Detection Method Based on Pyramid Input Gain Convolutional Neural Network
CN109858471A (en) * 2019-04-03 2019-06-07 深圳市华付信息技术有限公司 Biopsy method, device and computer equipment based on picture quality
CN110363075A (en) * 2019-06-03 2019-10-22 陈丙涛 Suspicious ill face detection system based on big data server
CN110415815A (en) * 2019-07-19 2019-11-05 银丰基因科技有限公司 Auxiliary diagnosis system for genetic diseases based on deep learning and facial biometric information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QIN YIFAN (秦艺帆): "Tutorial on Map Spatio-Temporal Big Data Crawling and Planning Analysis" (地图时空大数据爬取与规划分析教程), Southeast University Press, page 177 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111816304A (en) * 2020-07-22 2020-10-23 北京聚道科技有限公司 A method and system for establishing aided decision-making for hereditary diseases
CN112016450A (en) * 2020-08-27 2020-12-01 京东方科技集团股份有限公司 Training method and device of machine learning model and electronic equipment
CN112016450B (en) * 2020-08-27 2023-09-05 京东方科技集团股份有限公司 Training method and device of machine learning model and electronic equipment
US20230326016A1 (en) * 2020-09-08 2023-10-12 Kang Zhang Artificial intelligence for detecting a medical condition using facial images
CN112434552A (en) * 2020-10-13 2021-03-02 广州视源电子科技股份有限公司 Neural network model adjusting method, device, equipment and storage medium
CN112434552B (en) * 2020-10-13 2024-12-06 广州视源电子科技股份有限公司 Neural network model adjustment method, device, equipment and storage medium
CN112101490A (en) * 2020-11-20 2020-12-18 支付宝(杭州)信息技术有限公司 Thermodynamic diagram conversion model training method and device
CN112598707A (en) * 2020-12-23 2021-04-02 南京稻子菱机电设备有限公司 Real-time video stream object detection and tracking method
CN112766596A (en) * 2021-01-29 2021-05-07 苏州思萃融合基建技术研究所有限公司 Building energy consumption prediction model construction method, energy consumption prediction method and device
CN112766596B (en) * 2021-01-29 2024-04-16 苏州思萃融合基建技术研究所有限公司 Construction method of building energy consumption prediction model, energy consumption prediction method and device
CN113658095A (en) * 2021-07-09 2021-11-16 浙江大学 Engineering drawing review and recognition processing method and device for manual instrument drawing
CN113705685B (en) * 2021-08-30 2023-08-01 平安科技(深圳)有限公司 Disease feature recognition model training, disease feature recognition method, device and equipment
CN113705685A (en) * 2021-08-30 2021-11-26 平安科技(深圳)有限公司 Disease feature recognition model training method, disease feature recognition device and disease feature recognition equipment
CN113936314A (en) * 2021-10-13 2022-01-14 浙江核新同花顺网络信息股份有限公司 A method and system for facial expression recognition
WO2023161956A1 (en) * 2022-02-28 2023-08-31 Saanvi Mehra Device and method for assessing a subject for down syndrome
CN114612746A (en) * 2022-03-12 2022-06-10 北京工业大学 A method for plaque recognition in intravascular ultrasound images based on multi-model fusion
CN114708536A (en) * 2022-04-08 2022-07-05 中国科学院自动化研究所 Small sample target detection method, system and device
CN115019367A (en) * 2022-06-07 2022-09-06 苏州超云生命智能产业研究院有限公司 Genetic disease facial recognition device and method
CN115761833A (en) * 2022-10-10 2023-03-07 荣耀终端有限公司 Face recognition method, electronic device, program product, and medium
CN115761833B (en) * 2022-10-10 2023-10-24 荣耀终端有限公司 Face recognition method, electronic equipment and medium
CN115565051A (en) * 2022-11-15 2023-01-03 浙江芯昇电子技术有限公司 Lightweight face attribute recognition model training method, recognition method and device
CN115565051B (en) * 2022-11-15 2023-04-18 浙江芯昇电子技术有限公司 Lightweight face attribute recognition model training method, recognition method and device

Similar Documents

Publication Publication Date Title
CN111368672A (en) Construction method and device for genetic disease facial recognition model
CN109902546B (en) Face recognition method, device and computer readable medium
CN110555481B (en) Portrait style recognition method, device and computer readable storage medium
WO2021227726A1 (en) Methods and apparatuses for training face detection and image detection neural networks, and device
CN111274916B (en) Face recognition method and face recognition device
US12039440B2 (en) Image classification method and apparatus, and image classification model training method and apparatus
CN110222717B (en) Image processing method and device
WO2021218238A1 (en) Image processing method and image processing apparatus
CN113221663B (en) A real-time sign language intelligent recognition method, device and system
CN115050064B (en) Human face liveness detection method, device, equipment and medium
CN112801018A (en) Cross-scene target automatic identification and tracking method and application
CN111754396B (en) Face image processing method, device, computer equipment and storage medium
CN110414344B (en) A video-based person classification method, intelligent terminal and storage medium
CN112446270A (en) Training method of pedestrian re-identification network, and pedestrian re-identification method and device
CN109522925B (en) An image recognition method, device and storage medium
CN112597941A (en) Face recognition method and device and electronic equipment
CN110929622A (en) Video classification method, model training method, device, equipment and storage medium
CN111476806B (en) Image processing method, image processing device, computer equipment and storage medium
CN110503076B (en) Video classification method, device, equipment and medium based on artificial intelligence
CN111291809A (en) Processing device, method and storage medium
CN112084917A (en) Living body detection method and device
CN112668366A (en) Image recognition method, image recognition device, computer-readable storage medium and chip
CN110222718B (en) Image processing method and device
CN112070044A (en) Video object classification method and device
CN112395979A (en) Image-based health state identification method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200703