CN107463932B - Method for extracting picture features by using binary bottleneck neural network - Google Patents
- Publication number
- CN107463932B CN107463932B CN201710568350.1A CN201710568350A CN107463932B CN 107463932 B CN107463932 B CN 107463932B CN 201710568350 A CN201710568350 A CN 201710568350A CN 107463932 B CN107463932 B CN 107463932B
- Authority
- CN
- China
- Prior art keywords
- layer
- binary
- neural network
- picture
- pictures
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method for extracting picture features with a binary bottleneck neural network, belonging to the technical field of video processing. A binary bottleneck neural network is established that automatically maps a picture into a feature vector of binary bits. When the similarity of two pictures needs to be compared, only their binary feature vectors need to be compared, by computing the Hamming distance between them: the smaller the Hamming distance, the more similar the two pictures. This solves the technical problem of extracting a binary feature vector from a picture. The binary feature sequence computed by the method allows image similarity to be evaluated quickly, which is of substantial value for similarity retrieval of pictures and videos.
Description
Technical Field
The invention belongs to the technical field of video processing, and particularly relates to a method for extracting picture features by using a binary bottleneck neural network.
Background
Image data is typical unstructured data, which makes querying, retrieval, and similarity comparison over an image database difficult, for several reasons: 1) image data has high dimensionality; an ordinary high-definition image has a resolution of roughly 2 million pixels, and an ultra-high-definition image can reach 8 million pixels; 2) the semantics contained in an image are hard to obtain directly from the data. For example, an image may contain a car: a human observer sees this at a glance, but a computer can only recognize that the image contains a car through complex algorithms such as artificial intelligence.
To make images easier to query, retrieve, and compare, extracting image features is a common approach at present. The SIFT or SURF algorithm is typically used to extract local feature points of an image.
SIFT and SURF features are similar: both describe the distribution of pixel values in the local region around a feature point. Each SIFT feature point corresponds to a 128-dimensional descriptor, while SURF is faster to compute and each of its feature points corresponds to a 64-dimensional descriptor.
Both SIFT and SURF are manually designed feature extraction methods, and both reduce the data dimensionality to some extent. The similarity of two images can be compared via their SIFT or SURF features. However, the resulting feature vectors are still high-dimensional and cannot meet the requirements of fast image retrieval.
Disclosure of Invention
The invention aims to provide a method for extracting picture features by using a binary bottleneck neural network, which solves the technical problem of extracting binary feature vectors of pictures.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for extracting picture features using a binary bottleneck neural network, comprising the steps of:
step 1: establishing a binary bottleneck neural network, wherein the binary bottleneck neural network comprises an input layer, a hidden layer, an output layer and a mirror layer; the hidden layers comprise a first hidden layer, a second hidden layer and a third hidden layer;
step 2: after pictures are acquired by the camera, they are uniformly processed, i.e. enlarged or reduced to the resolution suitable for processing in the binary bottleneck neural network;
when uniformly processing pictures in 8-bit coding format, since their pixel values range over 0-255, every pixel value is divided by 255, normalizing it into the range 0-1;
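The uniform processing step can be sketched as follows; the function name `preprocess`, the nearest-neighbour resizing, and the target size are illustrative choices, not part of the patent:

```python
import numpy as np

def preprocess(image: np.ndarray, size: tuple) -> np.ndarray:
    """Resize an 8-bit image to the network's fixed input resolution
    and normalize pixel values from [0, 255] to [0, 1].

    Nearest-neighbour index selection is used only to keep the sketch
    dependency-free; any interpolation method would do.
    """
    h, w = size
    src_h, src_w = image.shape[:2]
    rows = np.arange(h) * src_h // h   # source row for each target row
    cols = np.arange(w) * src_w // w   # source column for each target column
    resized = image[rows][:, cols]
    return resized.astype(np.float64) / 255.0
```

After this step every input value lies in [0, 1], matching the range of the network's sigmoid-style state values.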
and step 3: inputting the pictures subjected to unified processing into the input layer, wherein the pixel values of the pictures subjected to unified processing are used as state values of the input layer;
step 4: the hidden layer obtains the state value of the input layer and computes according to the following Formula 1:

y = σ(W·x + b)   (Formula 1)

in the formula, the vector x represents the state value of the input layer, W represents the weight matrix from the input layer to the hidden layer, b represents the bias value of the hidden layer, y represents the state value of the hidden layer, and σ denotes the logistic sigmoid function σ(z) = 1/(1 + e^(−z));
there can be multiple hidden layers; each hidden layer treats the adjacent previous layer as its input layer, and its state value is obtained through Formula 1;
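Formula 1 applied layer by layer can be sketched as below; the sigmoid activation and the names `forward_hidden`, `weights`, `biases` are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_hidden(x, weights, biases):
    """Propagate the input-layer state through the stacked hidden layers.

    Formula 1 (y = sigmoid(W x + b)) is applied repeatedly: each hidden
    layer takes the previous layer's state as its input.
    """
    y = x
    for W, b in zip(weights, biases):
        y = sigmoid(W @ y + b)
    return y
```

With three weight/bias pairs this realizes the first, second, and third hidden layers of the network described above.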
and 5: inputting the state value of the hidden layer into the output layer, and calculating the neuron activation probability of the output layer, wherein the calculation formula 2 is as follows:
where the vector j represents the state value of the third hidden layer, k represents the bias value of the output layer, the index i represents the i-th element of the output layer, P represents the probability that the i-th element of the output layer is active, P (O)i1) represents OiProbability of 1, OiOnly two values are taken, namely 1 or 0, wherein 1 is taken to represent activation, 0 is taken to represent non-activation, and the probability of activation of the ith neuron of the output layer is given by formula 2;
in its computation the neural network first calculates the activation probability P of each output-layer neuron and then samples randomly according to P, finally obtaining the activation state of the output neurons; in this way the network maps any picture into a binary sequence code of fixed length, i.e. the binary feature vector of the picture;
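The sampling step can be sketched as follows; the name `binary_code` and the comparison against uniform random draws are an illustrative Bernoulli-sampling implementation, not the patent's own code:

```python
import numpy as np

def binary_code(p, rng=None):
    """Sample the activation state of the binary output layer.

    `p` holds the per-neuron activation probabilities from Formula 2;
    each output bit is drawn independently as a Bernoulli trial, giving
    the fixed-length binary feature vector of the picture.
    """
    rng = np.random.default_rng() if rng is None else rng
    return (rng.random(len(p)) < np.asarray(p)).astype(np.uint8)
```

For a deterministic code at retrieval time one could instead threshold the probabilities at 0.5; the random-sampling variant follows the text.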
step 6: when the similarity between picture N and picture M needs to be compared, the binary sequence codes of picture N and picture M are first computed by the method of steps 1 to 5; denote the binary sequence code of picture N by B_N and that of picture M by B_M;
then the Hamming distance H(B_N, B_M) between B_N and B_M is calculated: the smaller the Hamming distance, the higher the similarity between picture N and picture M;
step 7: a mirror layer is arranged after the output layer; the mirror layer mirrors the hidden layers and the input layer with the output layer as the mirror plane: the last layer of the mirror layer has the same number of neurons as the input layer, and the penultimate layer of the mirror layer has the same number of neurons as the first hidden layer.
The Hamming distance is the number of bit positions in which the two binary sequences differ, i.e., the number of 1s in the exclusive-or of the two sequences.
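As a sketch, with codes packed into Python integers (an assumed representation; the patent does not fix one):

```python
def hamming_distance(b_n: int, b_m: int) -> int:
    """Hamming distance between two binary sequence codes:
    XOR the codes, then count the 1 bits in the result."""
    return bin(b_n ^ b_m).count("1")
```

Because this reduces similarity comparison to an XOR and a popcount, it is far cheaper than comparing high-dimensional real-valued descriptors.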
When step 7 is executed, although the mirror layer has the same neuron counts as the input and hidden layers, its connection weights are different. A picture is input at the input layer and, after passing through the binary bottleneck layer, is reconstructed at the mirror layer; the intermediate network introduces a certain error, so the weights of the binary bottleneck neural network must be trained, the goal of the weight training being to minimize this error. In the binary bottleneck neural network, the maximum amount of information that can be transmitted is determined by the number of binary neurons, which form the information-transmission bottleneck of the whole network.
The method for extracting picture features using a binary bottleneck neural network solves the technical problem of extracting the binary feature vector of a picture. It computes the binary feature sequence of an image and achieves very good performance without relying on features manually designed from researchers' experience. The binary feature sequence can be used to compute image similarity quickly, which is of substantial value for similarity retrieval of pictures and videos.
Drawings
FIG. 1 is a schematic diagram of a binary bottleneck neural network of the present invention.
Detailed Description
As shown in Fig. 1, the method for extracting picture features using a binary bottleneck neural network comprises the following steps:
step 1: establishing a binary bottleneck neural network, wherein the binary bottleneck neural network comprises an input layer, a hidden layer, an output layer and a mirror layer; the hidden layers comprise a first hidden layer, a second hidden layer and a third hidden layer;
step 2: after pictures are acquired by the camera, they are uniformly processed, i.e. enlarged or reduced to the resolution suitable for processing in the binary bottleneck neural network;
when uniformly processing pictures in 8-bit coding format, since their pixel values range over 0-255, every pixel value is divided by 255, normalizing it into the range 0-1;
and step 3: inputting the pictures subjected to unified processing into the input layer, wherein the pixel values of the pictures subjected to unified processing are used as state values of the input layer;
step 4: the hidden layer obtains the state value of the input layer and computes according to the following Formula 1:

y = σ(W·x + b)   (Formula 1)

in the formula, the vector x represents the state value of the input layer, W represents the weight matrix from the input layer to the hidden layer, b represents the bias value of the hidden layer, y represents the state value of the hidden layer, and σ denotes the logistic sigmoid function.
The same formula applies to the 1st hidden layer, the 2nd hidden layer, and any further hidden layers: first the state of the input layer is taken as x and Formula 1 yields the state of the 1st hidden layer; then the state of the 1st hidden layer is taken as x to compute the state of the 2nd hidden layer, and so on for the remaining hidden layers.
And 5: inputting the state value of the hidden layer into the output layer, and calculating the neuron activation probability of the output layer, wherein the calculation formula 2 is as follows:
where the vector j represents the state value of the third hidden layer, k represents the bias value of the output layer, the index i represents the i-th element of the output layer, P represents the probability that the i-th element of the output layer is active, P (O)i1) represents OiProbability of 1, OiOnly two values are taken, namely 1 or 0, wherein 1 is taken to represent activation, 0 is taken to represent non-activation, and the probability of activation of the ith neuron of the output layer is given by formula 2;
in its computation the neural network first calculates the activation probability P of each output-layer neuron and then samples randomly according to P, finally obtaining the activation state of the output neurons; in this way the network maps any picture into a binary sequence code of fixed length, i.e. the binary feature vector of the picture;
step 6: when the similarity between picture N and picture M needs to be compared, the binary sequence codes of picture N and picture M are first computed by the method of steps 1 to 5; denote the binary sequence code of picture N by B_N and that of picture M by B_M;
then the Hamming distance H(B_N, B_M) between B_N and B_M is calculated: the smaller the Hamming distance, the higher the similarity between picture N and picture M;
step 7: a mirror layer is arranged after the output layer; the mirror layer mirrors the hidden layers and the input layer with the output layer as the mirror plane: the last layer of the mirror layer has the same number of neurons as the input layer, the penultimate layer of the mirror layer has the same number of neurons as the first hidden layer, and so on.
The Hamming distance is the number of bit positions in which the two binary sequences differ, i.e., the number of 1s in the exclusive-or of the two sequences.
When step 7 is executed, although the mirror layer has the same neuron counts as the input and hidden layers, its connection weights are different. A picture is input at the input layer and, after passing through the binary bottleneck layer, is reconstructed at the mirror layer; the intermediate network introduces a certain error, so the weights of the binary bottleneck neural network must be trained, the goal of the weight training being to minimize this error. In the binary bottleneck neural network, the maximum amount of information that can be transmitted is determined by the number of binary neurons, which form the information-transmission bottleneck of the whole network.
The input layer and hidden layers can be regarded as a lossy encoder of the image, the binary neurons as the image's code, and the mirror layer as the decoder.
The goal of training the neural network is to find weights such that, over the whole training set, the difference between the image output by the last layer of the mirror layer and the input image is as small as possible. That is, images enter at the input layer and are reproduced at the mirror layer; the intermediate network introduces a certain error, and weight training aims to minimize this error.
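A minimal training sketch under simplifying assumptions: a single real-valued sigmoid bottleneck stands in for the sampled binary units (which would need an estimator such as straight-through, beyond this illustration), the loss is the squared reconstruction error, and all names are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, W_enc, b_enc, W_dec, b_dec, lr=0.1):
    """One gradient step of the encoder/decoder ("mirror") training.

    Forward: encode x to the bottleneck state h, decode h to the
    mirror-layer reconstruction x_hat.  Backward: gradient descent on
    the mean squared reconstruction error, updating weights in place.
    """
    h = sigmoid(W_enc @ x + b_enc)        # bottleneck (encoder) state
    x_hat = sigmoid(W_dec @ h + b_dec)    # mirror-layer reconstruction
    err = x_hat - x
    loss = float(np.mean(err ** 2))

    # Backpropagate through the two sigmoid layers.
    d_out = 2.0 * err / x.size * x_hat * (1.0 - x_hat)
    d_hid = (W_dec.T @ d_out) * h * (1.0 - h)
    W_dec -= lr * np.outer(d_out, h)
    b_dec -= lr * d_out
    W_enc -= lr * np.outer(d_hid, x)
    b_enc -= lr * d_hid
    return loss
```

Repeating the step over the training set drives the reconstruction error down, which is exactly the training objective stated above.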
The method for extracting picture features using a binary bottleneck neural network solves the technical problem of extracting the binary feature vector of a picture. It computes the binary feature sequence of an image and achieves very good performance without relying on features manually designed from researchers' experience. The binary feature sequence can be used to compute image similarity quickly, which is of substantial value for similarity retrieval of pictures and videos.
Claims (3)
1. A method for extracting picture features using a binary bottleneck neural network, characterized in that the method comprises the following steps:
step 1: establishing a binary bottleneck neural network, wherein the binary bottleneck neural network comprises an input layer, a hidden layer, an output layer and a mirror layer; the hidden layers comprise a first hidden layer, a second hidden layer and a third hidden layer;
step 2: after pictures are acquired by the camera, they are uniformly processed, i.e. enlarged or reduced to the resolution suitable for processing in the binary bottleneck neural network;
when uniformly processing pictures in 8-bit coding format, since their pixel values range over 0-255, every pixel value is divided by 255, normalizing it into the range 0-1;
and step 3: inputting the pictures subjected to unified processing into the input layer, wherein the pixel values of the pictures subjected to unified processing are used as state values of the input layer;
step 4: the hidden layer obtains the state value of the input layer and computes according to the following Formula 1:

y = σ(W·x + b)   (Formula 1)

in the formula, the vector x represents the state value of the input layer, W represents the weight matrix from the input layer to the hidden layer, b represents the bias value of the hidden layer, y represents the state value of the hidden layer, and σ denotes the logistic sigmoid function;
each hidden layer treats the adjacent previous layer as its input layer, and its state value is obtained through Formula 1;
and 5: inputting the state value of the hidden layer into the output layer, and calculating the neuron activation probability of the output layer, wherein the calculation formula 2 is as follows:
where the vector j represents the state value of the third hidden layer, k represents the bias value of the output layer, the index i represents the i-th element of the output layer, P represents the probability that the i-th element of the output layer is active, P (O)i1) represents OiProbability of 1, OiOnly two values are taken, namely 1 or 0, wherein 1 is taken to represent activation, 0 is taken to represent non-activation, and the probability of activation of the ith neuron of the output layer is given by formula 2;
in its computation the neural network first calculates the activation probability P of each output-layer neuron and then samples randomly according to P, finally obtaining the activation state of the output neurons; in this way the network maps any picture into a binary sequence code of fixed length, i.e. the binary feature vector of the picture;
step 6: when the similarity between picture N and picture M needs to be compared, the binary sequence codes of picture N and picture M are first computed by the method of steps 1 to 5; denote the binary sequence code of picture N by B_N and that of picture M by B_M;
then the Hamming distance H(B_N, B_M) between B_N and B_M is calculated: the smaller the Hamming distance, the higher the similarity between picture N and picture M;
step 7: a mirror layer is arranged after the output layer; the mirror layer mirrors the hidden layers and the input layer with the output layer as the mirror plane: the last layer of the mirror layer has the same number of neurons as the input layer, and the penultimate layer of the mirror layer has the same number of neurons as the first hidden layer.
2. The method for extracting picture features using a binary bottleneck neural network as claimed in claim 1, characterized in that: the Hamming distance is the number of bit positions in which the two binary sequences differ, i.e., the number of 1s in the exclusive-or of the two sequences.
3. The method for extracting picture features using a binary bottleneck neural network as claimed in claim 1, characterized in that: when step 7 is executed, although the mirror layer has the same neuron counts as the input and hidden layers, its connection weights are different; a picture is input at the input layer and, after passing through the binary bottleneck layer, is reconstructed at the mirror layer; the intermediate network introduces a certain error, so the weights of the binary bottleneck neural network must be trained, the goal of the weight training being to minimize this error; in the binary bottleneck neural network, the maximum amount of information that can be transmitted is determined by the number of binary neurons, which form the information-transmission bottleneck of the whole network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710568350.1A CN107463932B (en) | 2017-07-13 | 2017-07-13 | Method for extracting picture features by using binary bottleneck neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107463932A CN107463932A (en) | 2017-12-12 |
CN107463932B (en) | 2020-07-10
Family
ID=60544162
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710568350.1A Active CN107463932B (en) | 2017-07-13 | 2017-07-13 | Method for extracting picture features by using binary bottleneck neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107463932B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11966835B2 (en) * | 2018-06-05 | 2024-04-23 | Nvidia Corp. | Deep neural network accelerator with fine-grained parallelism discovery |
US11769040B2 (en) | 2018-09-10 | 2023-09-26 | Nvidia Corp. | Scalable multi-die deep learning system |
CN109299306B (en) * | 2018-12-14 | 2021-09-07 | 央视国际网络无锡有限公司 | Image retrieval method and device |
US11270197B2 (en) | 2019-03-12 | 2022-03-08 | Nvidia Corp. | Efficient neural network accelerator dataflows |
CN113554145B (en) * | 2020-04-26 | 2024-03-29 | 伊姆西Ip控股有限责任公司 | Method, electronic device and computer program product for determining output of neural network |
CN111709913B (en) * | 2020-05-21 | 2023-04-18 | 四川虹美智能科技有限公司 | Method, device and system for detecting deteriorated food in refrigerator |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101795344A (en) * | 2010-03-02 | 2010-08-04 | 北京大学 | Digital hologram compression method and system, decoding method and system, and transmission method and system |
CN104732243A (en) * | 2015-04-09 | 2015-06-24 | 西安电子科技大学 | SAR target identification method based on CNN |
CN105631296A (en) * | 2015-12-30 | 2016-06-01 | 北京工业大学 | Design method of safety face verification system based on CNN (convolutional neural network) feature extractor |
CN106251292A (en) * | 2016-08-09 | 2016-12-21 | 央视国际网络无锡有限公司 | A kind of photo resolution method for improving |
CN106909924A (en) * | 2017-02-18 | 2017-06-30 | 北京工业大学 | A kind of remote sensing image method for quickly retrieving based on depth conspicuousness |
Non-Patent Citations (2)
Title |
---|
"ANN构造设计中基于GA优选神经元激活函数类型" [GA-based selection of neuron activation function types in ANN construction design]; Wang Zhongyu et al.; Computer Engineering and Applications (计算机工程与应用); 2004-08-11; pp. 46-49 *
"Deep Adaptive Network: An Efficient Deep Neural Network with Sparse Binary Connections"; Xichuan Zhou et al.; published online: https://arxiv.org/abs/1604.06154; 2016-04-19; pp. 1-10 *
Also Published As
Publication number | Publication date |
---|---|
CN107463932A (en) | 2017-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107463932B (en) | Method for extracting picture features by using binary bottleneck neural network | |
Varga et al. | Fully automatic image colorization based on Convolutional Neural Network | |
CN109711422B (en) | Image data processing, model building method, device, computer equipment and storage medium | |
CN106845478B (en) | A kind of secondary licence plate recognition method and device of character confidence level | |
CN106228177A (en) | Daily life subject image recognition methods based on convolutional neural networks | |
WO2020155614A1 (en) | Image processing method and device | |
Gu et al. | Blind image quality assessment via learnable attention-based pooling | |
CN109903299B (en) | Registration method and device for heterogenous remote sensing image of conditional generation countermeasure network | |
CN111127308A (en) | Mirror image feature rearrangement repairing method for single sample face recognition under local shielding | |
CN112861976B (en) | Sensitive image identification method based on twin graph convolution hash network | |
CN112580502B (en) | SICNN-based low-quality video face recognition method | |
CN114821058A (en) | Image semantic segmentation method and device, electronic equipment and storage medium | |
CN115424051B (en) | A Method for Panoramic Stitching Image Quality Evaluation | |
US20220164533A1 (en) | Optical character recognition using a combination of neural network models | |
Golestaneh et al. | No-reference image quality assessment via feature fusion and multi-task learning | |
CN109299306B (en) | Image retrieval method and device | |
CN109003247B (en) | A Method of Removing Mixed Noise in Color Image | |
Zhang et al. | Adapting convolutional neural networks on the shoeprint retrieval for forensic use | |
CN114494934A (en) | An Unsupervised Moving Object Detection Method Based on Information Reduction Rate | |
CN110751271B (en) | Image traceability feature characterization method based on deep neural network | |
Salem et al. | Semantic image inpainting using self-learning encoder-decoder and adversarial loss | |
CN114897711A (en) | Method, device and equipment for processing images in video and storage medium | |
El Alami et al. | Color face recognition by using quaternion and deep neural networks | |
CN115988260A (en) | Image processing method and device and electronic equipment | |
CN113554569A (en) | Face image restoration system based on double memory dictionaries |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |