CN106485268B - Image identification method and device - Google Patents
- Publication number
- CN106485268B (application CN201610854506.8A)
- Authority
- CN
- China
- Prior art keywords
- scanned
- target
- image
- image features
- result
- Prior art date
- Legal status: Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Multimedia (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- Analysing Materials By The Use Of Radiation (AREA)
Abstract
The application relates to an image identification method and device. The method includes: acquiring a scanned image; performing feature extraction on the scanned image to obtain extracted image features; performing target detection using the extracted image features and a target detection model based on a deep convolutional multilayer neural network to obtain candidate targets; and identifying the candidate targets using the extracted image features and a target classification model based on a deep convolutional multilayer neural network to obtain an image identification result. The feature extraction specifically includes: acquiring the image features of each layer of the deep convolutional multilayer neural network, fusing the per-layer image features, and taking the fused image features as the extracted image features. The method and device can improve the accuracy and efficiency of target detection.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image recognition method and apparatus.
Background
In customs supervision areas such as trade ports, railway stations and airport terminals, security inspection equipment is often required to check items carried by passengers and determine whether they are dangerous or smuggled goods. How to detect dangerous or smuggled goods rapidly and accurately during security inspection has become an urgent problem.
In the prior art, when a passenger carries an item through a security inspection device, an X-ray scanned image appears on a display screen connected to the device, and a security inspector identifies dangerous or smuggled items by manually observing the image on the screen. This manual identification method suffers from heavy workload, low efficiency and low accuracy.
Disclosure of Invention
In order to solve the above technical problems, the present application provides an image identification method and device that improve the accuracy and efficiency of detection by identifying the image automatically.
According to a first aspect of embodiments of the present application, there is provided an image recognition method, the method including: acquiring a scanned image; performing feature extraction on the scanned image to obtain extracted image features; performing target detection using the extracted image features and a target detection model based on a deep convolutional multilayer neural network to obtain candidate targets; and identifying the candidate targets using the extracted image features and a target classification model based on a deep convolutional multilayer neural network to obtain an image identification result. The feature extraction of the scanned image specifically includes: acquiring the image features of each layer of the deep convolutional multilayer neural network, fusing the per-layer image features, and taking the fused image features as the extracted image features.
Optionally, before the feature extraction is performed on the scanned image, the method further includes: preprocessing the scanned image, and setting different colors for scanned article images of different categories based on the scanned article classification result.
Optionally, the preprocessing the scanned image, and setting different colors for the scanned article images of different categories based on the scanned article classification result includes: acquiring the atomic number of a scanned article, and acquiring the density of the scanned article based on the atomic number; determining the classification of the scanned articles according to the density of the scanned articles to obtain a classification result of the scanned articles; setting different colors for different categories of scanned item images based on the scanned item classification results.
Optionally, the performing feature extraction on the scanned image, and obtaining the extracted image feature includes: determining a target candidate region based on the color features of the scanned item; and performing feature extraction processing in the target candidate region to obtain the extracted image features.
Optionally, the performing target detection by using the extracted image features and a target detection model based on a deep convolutional multi-layer neural network, and obtaining a candidate target includes: performing target detection by using the extracted image features and based on a plurality of deep convolution multilayer neural network target detection models to obtain a plurality of detection results; and fusing the plurality of detection results to obtain a final detection result as a candidate target.
Optionally, the fusing of the plurality of detection results to obtain a final detection result as a candidate target includes: fusing the plurality of detection results based on the confidence calculation results to obtain the final detection result.
Optionally, the method further comprises: judging whether dangerous goods or smuggled goods exist or not based on the image identification result; and if the dangerous goods or the smuggled goods exist, outputting prompt information.
Optionally, the method further comprises: comparing the image recognition result with an article list to obtain a comparison result; and outputting the comparison result.
According to a second aspect of embodiments of the present application, there is provided an image recognition apparatus, the apparatus including: an image acquisition module, configured to acquire a scanned image; a feature extraction module, configured to perform feature extraction on the scanned image to obtain extracted image features, where the feature extraction specifically includes acquiring the image features of each layer of a deep convolutional multilayer neural network, fusing the per-layer image features, and taking the fused image features as the extracted image features; a target detection module, configured to perform target detection using the extracted image features and a target detection model based on the deep convolutional multilayer neural network to obtain candidate targets; and a target classification module, configured to identify the candidate targets using the extracted image features and a target classification model based on the deep convolutional multilayer neural network to obtain an image recognition result.
Optionally, the apparatus further comprises: and the preprocessing module is used for preprocessing the scanned images and setting different colors for the scanned article images of different categories based on the scanned article classification result.
Optionally, the preprocessing module specifically includes: the density acquisition unit is used for acquiring the atomic number of the scanned item and acquiring the density of the scanned item based on the atomic number; the classification unit is used for determining the classification of the scanned articles according to the density of the scanned articles and obtaining the classification result of the scanned articles; a color setting unit for setting different colors for the scanned article images of different categories based on the scanned article classification result.
Optionally, the feature extraction module is specifically configured to determine a target candidate region based on a color feature of the scanned item; and performing feature extraction processing in the target candidate region to obtain the extracted image features.
Optionally, the target detection module specifically includes: the multi-model detection unit is used for carrying out target detection on the basis of a plurality of deep convolution multilayer neural network target detection models by utilizing the extracted image characteristics to obtain a plurality of detection results; and the result fusion unit is used for fusing the plurality of detection results to obtain a final detection result as a candidate target.
Optionally, the result fusion unit is specifically configured to fuse the multiple detection results based on the confidence degree calculation result to obtain a final detection result.
Optionally, the apparatus further comprises: the judging module is used for judging whether dangerous goods or smuggled goods exist or not based on the image identification result; and the first output module is used for outputting prompt information if the dangerous goods or the smuggled goods are judged to exist.
Optionally, the apparatus further comprises: the comparison module is used for comparing the image identification result with an article list to obtain a comparison result; and the second output module is used for outputting the comparison result.
According to a third aspect of embodiments of the present application, there is provided an apparatus for image recognition, including a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, and include instructions for:
acquiring a scanned image; performing feature extraction on the scanned image to obtain extracted image features; performing target detection using the extracted image features and a target detection model based on a deep convolutional multilayer neural network to obtain candidate targets; and identifying the candidate targets using the extracted image features and a target classification model based on a deep convolutional multilayer neural network to obtain an image identification result, where the feature extraction specifically includes: acquiring the image features of each layer of the deep convolutional multilayer neural network, fusing the per-layer image features, and taking the fused image features as the extracted image features.
Optionally, the processor is specifically further configured to execute the one or more programs including instructions for: preprocessing the scanned image, and setting different colors for scanned article images of different categories based on the scanned article classification result.
Optionally, the processor is specifically further configured to execute the one or more programs including instructions for: acquiring the atomic number of a scanned article, and acquiring the density of the scanned article based on the atomic number; determining the classification of the scanned articles according to the density of the scanned articles to obtain a classification result of the scanned articles; setting different colors for different categories of scanned item images based on the scanned item classification results.
Optionally, the processor is specifically further configured to execute the one or more programs including instructions for: determining a target candidate region based on the color features of the scanned item; and performing feature extraction processing in the target candidate region to obtain the extracted image features.
Optionally, the processor is specifically further configured to execute the one or more programs including instructions for: performing target detection by using the extracted image features and based on a plurality of deep convolution multilayer neural network target detection models to obtain a plurality of detection results; and fusing the plurality of detection results to obtain a final detection result as a candidate target.
Optionally, the processor is specifically further configured to execute the one or more programs including instructions for: fusing the plurality of detection results based on the confidence calculation results to obtain a final detection result.
Optionally, the processor is specifically further configured to execute the one or more programs including instructions for: judging whether dangerous goods or smuggled goods exist or not based on the image identification result; and if the dangerous goods or the smuggled goods exist, outputting prompt information.
Optionally, the processor is specifically further configured to execute the one or more programs including instructions for: comparing the image recognition result with an article list to obtain a comparison result; and outputting the comparison result.
The image recognition method and device provided by the embodiments of the present application extract features from the scanned image and use the extracted features to detect and classify targets with a deep convolutional multilayer neural network target detection model and target classification model, so that the image recognition result is obtained automatically and detection efficiency is improved. In addition, during feature extraction, the image features of each layer of the deep convolutional multilayer neural network are obtained separately and fused, and the fused image features are taken as the extracted image features, so that the obtained features are more accurate and the accuracy and efficiency of image detection and classification are effectively improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
Fig. 1 is a flowchart of an image recognition method according to an embodiment of the present application;
fig. 2 is a schematic diagram of image fusion processing provided in the embodiment of the present application;
FIG. 3 is a schematic diagram of a multi-model fusion process provided in an embodiment of the present application;
FIG. 4 is a flowchart of an image recognition method according to another embodiment of the present application;
Fig. 5 is a schematic diagram of an image recognition apparatus according to an embodiment of the present application;
fig. 6 is a block diagram of an image recognition apparatus according to another embodiment of the present application.
Detailed Description
The application aims to provide an image identification method and an image identification device, which can improve the accuracy and efficiency of detection by automatically identifying an image.
In order to make the objects, features and advantages of the present application more apparent and understandable, the technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings. It is apparent that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments derived by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
As shown in fig. 1, a flowchart of an image recognition method according to an embodiment of the present application may specifically include:
s101, acquiring a scanning image.
The scanning image may be an X-ray image acquired by an X-ray security check device.
And S102, performing feature extraction on the scanned image to obtain the extracted image features.
In a specific implementation, before feature extraction, the scanned image may be preprocessed. The preprocessing may include setting different colors for scanned article images of different categories based on the scanned article classification result. Specifically, the atomic number of the scanned article may be obtained, and the density of the scanned article is derived from the atomic number; the classification of the scanned article is then determined according to its density to obtain a scanned article classification result; and different colors are set for the scanned article images of different categories based on that result. For example, the preprocessing can divide scanned objects into organic and inorganic matter, providing prior information for image target detection and improving target detection accuracy. Two reference materials, mild steel (representing inorganic matter) and organic glass (representing organic matter), can be used to configure substances of different densities with effective atomic numbers in the range (7, 25), and a lookup table is established by linear interpolation. When a scanned image is acquired, high-energy and low-energy X-rays irradiate the object to obtain the atomic numbers of different objects, and the density of each object is obtained from the lookup table according to its atomic number. Whether an object is inorganic or organic can then be determined from its density, and different colors are assigned accordingly: for example, inorganic matter may be shown in blue and organic matter in orange. In this way, the scanned image is given color information. Of course, different colors may also be set for different articles according to different density values; this is not limited here.
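The atomic-number-to-density lookup and color assignment described above can be sketched as follows. The calibration values, the density threshold and the function names are illustrative assumptions; the patent does not disclose the actual table:

```python
from bisect import bisect_left

# Hypothetical calibration table mapping effective atomic number (range 7-25)
# to density in g/cm^3, as would be built from reference materials such as
# mild steel and organic glass. These values are illustrative placeholders.
Z_TABLE = [7.0, 10.0, 13.0, 16.0, 19.0, 22.0, 25.0]
RHO_TABLE = [1.0, 1.6, 2.7, 3.5, 5.0, 6.4, 7.8]

ORGANIC_DENSITY_THRESHOLD = 2.0  # assumed organic/inorganic cut-off


def density_from_z(z):
    """Linearly interpolate density from the effective atomic number."""
    if z <= Z_TABLE[0]:
        return RHO_TABLE[0]
    if z >= Z_TABLE[-1]:
        return RHO_TABLE[-1]
    i = bisect_left(Z_TABLE, z)
    z0, z1 = Z_TABLE[i - 1], Z_TABLE[i]
    r0, r1 = RHO_TABLE[i - 1], RHO_TABLE[i]
    return r0 + (r1 - r0) * (z - z0) / (z1 - z0)


def pseudo_color(z):
    """Classify by density and assign the display color described above:
    orange for organic matter, blue for inorganic matter."""
    return "orange" if density_from_z(z) < ORGANIC_DENSITY_THRESHOLD else "blue"
```

With this sketch, a low-Z (organic) object maps to orange and a high-Z (inorganic) object to blue, matching the color convention in the text.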
In some embodiments, when feature extraction is performed on the scanned image to obtain the extracted image features, a target candidate region may be determined based on the color features of the scanned article, and feature extraction is then performed within the target candidate region to obtain the extracted image features. For example, if the dangerous item to be detected is an inorganic material such as a knife or a gun, which appears in blue, the color set in advance for inorganic materials, then the blue regions are taken as target candidate regions during feature extraction, and feature extraction can be performed only on the images of those regions, improving the efficiency and accuracy of image processing.
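A minimal sketch of selecting a target candidate region from the preset colors, assuming the preprocessed scan is represented as a 2-D grid of color labels (the function name and representation are illustrative, not the patent's implementation):

```python
def candidate_region(pixels, target_color="blue"):
    """Return the bounding box (min_row, min_col, max_row, max_col) of all
    pixels whose preset color matches the material class of interest, or
    None if that color is absent. `pixels` is a 2-D grid of color labels."""
    coords = [(r, c)
              for r, row in enumerate(pixels)
              for c, color in enumerate(row)
              if color == target_color]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))
```

Feature extraction would then be restricted to the returned box instead of the full image, which is the efficiency gain described above.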
In some embodiments, performing feature extraction on the scanned image to obtain the extracted image features specifically includes: acquiring the image features of each layer of the deep convolutional multilayer neural network, fusing the per-layer image features, and taking the fused image features as the extracted image features. It should be noted that, to improve the accuracy of image recognition, a multi-level feature fusion scheme is adopted when extracting image features: the shallow-layer and deep-layer image features of the neural network are fused and used as the final image features. The fused features improve detection accuracy and offer clear advantages for identifying and classifying small objects. Fig. 2 shows a schematic diagram of the image fusion processing provided by the present application, where conv1 through conv5 denote the first through fifth layers of the neural network. In specific processing, assuming the deep convolutional multilayer neural network has 5 layers, the image features of each of the 5 layers are extracted and then fused, and the fused image features serve as the finally extracted image features.
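One plausible reading of the multi-layer fusion step, sketched with NumPy: each layer's feature map is average-pooled to a common spatial size and the results are concatenated along the channel axis. The pooling grid, the fusion operator (concatenation) and the function name are assumptions; the patent does not fix the exact fusion operator:

```python
import numpy as np


def fuse_layer_features(feature_maps, out_hw=(4, 4)):
    """Fuse per-layer CNN feature maps (e.g. conv1..conv5) into one tensor.
    Each map has shape (C, H, W); every map is average-pooled onto a common
    out_hw grid, then all pooled maps are concatenated along channels."""
    oh, ow = out_hw
    fused = []
    for fmap in feature_maps:
        c, h, w = fmap.shape
        pooled = np.empty((c, oh, ow))
        for i in range(oh):
            for j in range(ow):
                # crude adaptive average pooling: one bin per output cell
                hs, he = i * h // oh, max((i + 1) * h // oh, i * h // oh + 1)
                ws, we = j * w // ow, max((j + 1) * w // ow, j * w // ow + 1)
                pooled[:, i, j] = fmap[:, hs:he, ws:we].mean(axis=(1, 2))
        fused.append(pooled)
    return np.concatenate(fused, axis=0)
```

Fusing a shallow map of 8 channels with a deep map of 16 channels therefore yields a 24-channel tensor at the common spatial resolution.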
S103, performing target detection by using the extracted image features and a target detection model based on the deep convolution multilayer neural network to obtain candidate targets.
In a specific implementation, the deep convolutional multilayer neural network target detection model and target classification model are established in advance, for example by training with sample pictures. In an initial training stage, a pre-trained deep convolutional neural network model may be selected for initialization, and parameter fine-tuning is then performed with pre-collected X-ray sample images to generate the target detection network model and the target classification network model, respectively. The pre-trained initial network model may be a ZF network model or a VGG network model (both deep learning neural network models). Training the initial model with the pre-collected X-ray sample images yields the deep convolutional multilayer neural network target detection model and target classification model. In some embodiments, to improve algorithm performance, the detection model and the classification model share convolutional features. That is, feature extraction may be performed only once, and the extracted features are applied to the target detection model and the target classification model respectively, which improves the processing efficiency of the algorithm.
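The convolution-feature-sharing idea above can be sketched as follows. This is a minimal illustration, not the patent's implementation; `extract`, `detect_head` and `classify_head` are hypothetical callables standing in for the shared backbone and the two model heads:

```python
def recognize(image, extract, detect_head, classify_head):
    """Convolution-feature sharing: the backbone runs exactly once per
    image, and the same feature tensor feeds both the detection head
    (which proposes candidate targets) and the classification head
    (which labels each candidate)."""
    feats = extract(image)            # single feature-extraction pass
    candidates = detect_head(feats)   # candidate targets from shared feats
    return [classify_head(feats, c) for c in candidates]
```

Because `extract` is called once rather than once per head, the per-image cost of the backbone is halved relative to running two independent networks.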
In some embodiments, performing target detection using the extracted image features and a target detection model based on a deep convolutional multilayer neural network to obtain candidate targets includes: performing target detection using the extracted image features with a plurality of deep convolutional multilayer neural network target detection models to obtain a plurality of detection results; and fusing the plurality of detection results to obtain a final detection result as the candidate target. For example, when training the target detection model, a plurality of different deep convolutional multilayer neural network target detection models, for example 3, may be obtained from different training samples. The target is then detected with the 3 trained models to obtain 3 detection results. Fig. 3 is a schematic diagram of the multi-model fusion process provided in the embodiment of the present application: for the same scanned image, a first detection result is obtained with target detection Model 1, a second with Model 2, and a third with Model 3. The 3 detection results are then fused, and the fused result is taken as the final output. Here, fusing the plurality of detection results to obtain a final detection result as a candidate target includes: fusing the plurality of detection results based on the confidence calculation results to obtain the final detection result.
For example, each detection result corresponds to a confidence calculation result. If the confidence of the first detection result is 0.9, that of the second is 0.8, and that of the third is 0.7, the detection result with the highest confidence (0.9) is taken as the final detection result. Of course, the fused detection result may also be obtained in other manners, which are not limited here.
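The highest-confidence fusion rule in this example can be sketched as follows; the tuple layout `(label, bbox, confidence)` is an illustrative assumption:

```python
def fuse_detections(results):
    """Fuse detections from several detection models by keeping the result
    with the highest confidence, as in the example above. Each result is a
    (label, bbox, confidence) tuple from one model."""
    return max(results, key=lambda r: r[2])
```

With confidences 0.9, 0.8 and 0.7 from three models, the rule keeps the 0.9 detection as the final result; other fusion rules (e.g. weighted voting) could be substituted without changing the surrounding pipeline.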
And S104, identifying the candidate target by using the extracted image features and a target classification model based on the deep convolution multilayer neural network to obtain an image identification result.
As mentioned previously, the sample picture can be used to pre-establish a deep convolution multi-layer neural network target classification model. And then, identifying the candidate target by using the extracted image features and a deep convolution based multilayer neural network target classification model to obtain an image identification result. In specific implementation, the extracted image features are input into the deep convolutional multi-layer neural network target classification model, that is, a recognition result can be obtained, and the recognition result is used for identifying the classification of an article, such as whether the article is a knife or a gun.
Referring to fig. 4, a flowchart of an image recognition method according to another embodiment of the present application is provided, where the method may include:
s401, acquiring a scanning image.
S402, preprocessing the scanned image.
In particular implementations, different colors may be set for different categories of scanned item images based on the scanned item classification results. Therefore, through image preprocessing, organic matters and inorganic matters are distinguished, prior information is provided for image target detection, and target detection accuracy is improved.
And S403, performing feature extraction on the scanned image to obtain the extracted image features.
S404, performing target detection by using the extracted image features and based on a plurality of deep convolution multilayer neural network target detection models to obtain candidate targets.
S405, identifying the candidate target by using the extracted image features and based on a plurality of deep convolution multilayer neural network target classification models to obtain an image identification result.
S406, judging whether dangerous goods or smuggled goods exist or not based on the image recognition result. If yes, the process proceeds to S409, and prompt information is output.
S407, if not, comparing the image recognition result with an article list to obtain a comparison result.
S408, outputting the comparison result.
If the comparison result shows that the image recognition result matches the article list, the procedure ends. If the comparison result shows that the image recognition result does not match the article list, alarm information is output and a manual recheck procedure is entered.
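The article-list comparison of S407 and S408 might look like the following sketch; the item representation, return values and function name are assumptions for illustration:

```python
def check_against_manifest(recognized_items, manifest):
    """Compare recognized items against the declared article list. Items
    seen in the scan but absent from the manifest do not match, so an
    alarm is raised and a manual recheck is triggered; otherwise the
    shipment is cleared."""
    undeclared = sorted(set(recognized_items) - set(manifest))
    if undeclared:
        return ("alarm", undeclared)  # mismatch: manual recheck
    return ("clear", [])              # match: automatic clearance
```

Matching recognized items against clearance data in this way is what enables the automatic clearance described below.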
And S409, outputting alarm prompt information.
The image identification method provided by the embodiments of the present application can automatically detect and identify the scanned image and rapidly detect various dangerous, prohibited or smuggled goods, effectively relieving the workload of security personnel and improving the accuracy of contraband detection. At the same time, matching the identified items against clearance data enables automatic clearance and effectively increases clearance speed. In addition, the application distinguishes organic from inorganic matter through image preprocessing, providing prior information for subsequent image target detection and improving target detection accuracy. When extracting features, the shallow and deep features are fused as the final image features, which further improves detection accuracy and the identification and classification of small objects appearing in the scanned image. Finally, the application detects and classifies targets with multiple models in parallel and produces the final result through a suitable rule, effectively improving accuracy.
The above is a detailed description of the image recognition method provided in the embodiments of the present application, and the following is a detailed description of the image recognition apparatus provided in the present application.
Fig. 5 is a schematic diagram of an image recognition apparatus according to an embodiment of the present application.
An image recognition device 500, the device 500 comprising:
an image obtaining module 501, configured to obtain a scanned image.
A feature extraction module 502, configured to perform feature extraction on the scanned image to obtain extracted image features. The feature extraction specifically includes: acquiring the image features of each layer of the deep convolutional multilayer neural network, fusing the image features of the layers, and taking the fused image features as the extracted image features.
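The layer-wise fusion performed by module 502 can be illustrated with a minimal, framework-free sketch: each layer's feature map is resampled to a common spatial size and the channels are concatenated. Nearest-neighbour resampling and plain nested lists are simplifying assumptions; an actual implementation would fuse framework tensors inside the network.

```python
def fuse_layer_features(layer_maps, target_size):
    """Fuse feature maps from several network layers.

    Each layer is a list of 2-D channels (lists of rows). Every channel is
    resampled to `target_size` (h, w) by nearest-neighbour sampling, and
    all channels are concatenated into one fused feature stack.
    """
    def resize(channel, h, w):
        src_h, src_w = len(channel), len(channel[0])
        return [[channel[r * src_h // h][c * src_w // w] for c in range(w)]
                for r in range(h)]

    h, w = target_size
    fused = []
    for layer in layer_maps:
        for channel in layer:
            fused.append(resize(channel, h, w))
    return fused
```

The fused stack preserves the fine spatial detail of shallow layers alongside the semantic content of deep layers, which is the property the description credits for better small-object recognition.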
A target detection module 503, configured to perform target detection using the extracted image features and a target detection model based on a deep convolutional multilayer neural network, to obtain candidate targets.
A target classification module 504, configured to identify the candidate targets using the extracted image features and a target classification model based on a deep convolutional multilayer neural network, to obtain an image identification result.
In some embodiments, the apparatus further comprises: a preprocessing module, configured to preprocess the scanned image and set different colors for scanned-article images of different categories based on the scanned-article classification result.
In some embodiments, the preprocessing module specifically includes: a density acquisition unit, configured to acquire the atomic number of a scanned article and obtain the density of the scanned article based on the atomic number; a classification unit, configured to determine the category of the scanned article according to its density and obtain the scanned-article classification result; and a color setting unit, configured to set different colors for scanned-article images of different categories based on the scanned-article classification result.
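The preprocessing chain (atomic number, then material category, then pseudo-color) can be sketched as follows. The thresholds and color values are illustrative assumptions only — the patent does not specify calibration values, and this sketch maps the effective atomic number directly to a category rather than via an explicit density step:

```python
def classify_material(effective_z):
    """Map an effective atomic number to a coarse material category.

    Threshold values are assumptions for illustration; real dual-energy
    scanners use calibrated lookup tables.
    """
    if effective_z < 10:
        return "organic"      # low-Z: explosives, drugs, food, plastics
    if effective_z <= 18:
        return "mixed"        # intermediate: light metals, composites
    return "inorganic"        # high-Z: metals

# Pseudo-color per category (a common security-scanner convention;
# the exact RGB values here are assumptions).
PSEUDO_COLORS = {
    "organic": (255, 160, 0),    # orange
    "mixed": (0, 200, 0),        # green
    "inorganic": (0, 90, 255),   # blue
}

def colorize(effective_z):
    """Return the display color for a scanned article."""
    return PSEUDO_COLORS[classify_material(effective_z)]
```

The resulting pseudo-colors are what later supplies the prior information used to restrict the target candidate region.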
In some embodiments, the feature extraction module is specifically configured to determine a target candidate region based on color features of the scanned item; and performing feature extraction processing in the target candidate region to obtain the extracted image features.
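Determining a target candidate region from the color features, as the feature extraction module does in this embodiment, can be sketched as a color threshold followed by a bounding box. The tolerance value and the plain-list image representation are illustrative assumptions:

```python
def color_candidate_region(image, target_color, tol=30):
    """Return the bounding box (rmin, cmin, rmax, cmax) of pixels whose
    RGB value is within `tol` of `target_color`, or None if no pixel
    matches. `image` is a list of rows of (r, g, b) tuples.
    """
    rows, cols = [], []
    for r, row in enumerate(image):
        for c, px in enumerate(row):
            if all(abs(a - b) <= tol for a, b in zip(px, target_color)):
                rows.append(r)
                cols.append(c)
    if not rows:
        return None
    return min(rows), min(cols), max(rows), max(cols)
```

Feature extraction would then be limited to this box, so that, for example, only the orange (organic) region of a pseudo-colored scan is searched for suspect items.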
In some embodiments, the target detection module specifically includes: a multi-model detection unit, configured to perform target detection using the extracted image features with a plurality of deep convolutional multilayer neural network target detection models, to obtain a plurality of detection results; and a result fusion unit, configured to fuse the plurality of detection results into a final detection result serving as the candidate target.
In some embodiments, the result fusion unit is specifically configured to fuse the plurality of detection results based on the confidence calculation result to obtain a final detection result.
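The confidence-based fusion performed by the result fusion unit can be read as a cross-model non-maximum suppression: detections from all models are pooled, sorted by confidence, and overlapping lower-confidence boxes are suppressed. This greedy rule is one plausible sketch of the fusion, not the patent's stated algorithm; the box format and field names are assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse_detections(model_outputs, iou_thresh=0.5):
    """Merge detections from several models: overlapping boxes are grouped
    and only the highest-confidence detection per group is kept."""
    dets = sorted((d for out in model_outputs for d in out),
                  key=lambda d: d["conf"], reverse=True)
    kept = []
    for d in dets:
        if all(iou(d["box"], k["box"]) < iou_thresh for k in kept):
            kept.append(d)
    return kept
```

Because every surviving box is backed by the most confident model that proposed it, agreement between models raises no duplicates while disagreements are resolved by confidence.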
In some embodiments, the apparatus further comprises: a judging module, configured to judge, based on the image identification result, whether dangerous or smuggled goods exist; and a first output module, configured to output prompt information if dangerous or smuggled goods are judged to exist.
In some embodiments, the apparatus further comprises: a comparison module, configured to compare the image identification result with an article list to obtain a comparison result; and a second output module, configured to output the comparison result.
The functions of the modules may correspond to the processing steps of the image recognition method described in detail in fig. 1 and 4, and are not described herein again.
Referring to fig. 6, a block diagram of an apparatus for image recognition according to another embodiment of the present application is shown. The apparatus comprises: at least one processor 601 (e.g., a CPU), a memory 602, and at least one communication bus 603 for enabling communication among these components. The processor 601 is configured to execute executable modules, such as computer programs, stored in the memory 602. The memory 602 may include a high-speed Random Access Memory (RAM) and may further include a non-volatile memory, such as at least one disk memory. One or more programs are stored in the memory and configured to be executed by the one or more processors 601, the one or more programs including instructions for:
acquiring a scanning image; performing feature extraction on the scanned image to obtain extracted image features; performing target detection by using the extracted image features and a target detection model based on a deep convolution multilayer neural network to obtain a candidate target; identifying the candidate target by using the extracted image features and a deep convolution-based multilayer neural network target classification model to obtain an image identification result; the feature extraction of the scanned image is performed to obtain extracted image features, which specifically include: and acquiring image features of each layer of the deep convolutional multilayer neural network, fusing the image features of each layer, and acquiring the fused image features as extracted image features.
In some embodiments, processor 601 is specifically configured to execute the one or more programs including instructions for:
and preprocessing the scanned images, and setting different colors for the scanned article images of different categories based on the scanned article classification result.
In some embodiments, processor 601 is specifically configured to execute the one or more programs including instructions for:
acquiring the atomic number of a scanned article, and acquiring the density of the scanned article based on the atomic number; determining the classification of the scanned articles according to the density of the scanned articles to obtain a classification result of the scanned articles; setting different colors for different categories of scanned item images based on the scanned item classification results.
In some embodiments, processor 601 is specifically configured to execute the one or more programs including instructions for:
determining a target candidate region based on the color features of the scanned item; and performing feature extraction processing in the target candidate region to obtain the extracted image features.
In some embodiments, processor 601 is specifically configured to execute the one or more programs including instructions for:
performing target detection by using the extracted image features and based on a plurality of deep convolution multilayer neural network target detection models to obtain a plurality of detection results; and fusing the plurality of detection results to obtain a final detection result as a candidate target.
In some embodiments, processor 601 is specifically configured to execute the one or more programs including instructions for:
and fusing the plurality of detection results based on the confidence degree calculation result to obtain a final detection result.
In some embodiments, processor 601 is specifically configured to execute the one or more programs including instructions for:
judging, based on the image identification result, whether dangerous goods or smuggled goods exist; and if dangerous goods or smuggled goods exist, outputting prompt information.
In some embodiments, processor 601 is specifically configured to execute the one or more programs including instructions for:
comparing the image recognition result with an article list to obtain a comparison result; and outputting the comparison result.
Those of skill would further appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, a software module executed by a processor, or a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above-mentioned embodiments, objects, technical solutions and advantages of the present application are described in further detail, it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present application, and are not intended to limit the scope of the present application, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present application should be included in the scope of the present application.
Claims (15)
1. An image recognition method, characterized in that the method comprises:
acquiring a scanning image and acquiring color information of a scanned object corresponding to the scanning image, wherein the color information is used for representing an object type of the scanned object, and the object type comprises organic matter or inorganic matter;
determining a target candidate region from the scanned image according to the color information, and performing feature extraction in the target candidate region to obtain extracted image features;
performing target detection by using the extracted image features and a target detection model based on a deep convolution multilayer neural network to obtain a candidate target;
identifying the candidate target by using the extracted image features and a deep convolution-based multilayer neural network target classification model to obtain an image identification result;
wherein, the feature extraction in the target candidate region is performed to obtain extracted image features specifically as follows: and acquiring image features of each layer of the deep convolutional multilayer neural network, fusing the image features of each layer, and acquiring the fused image features as extracted image features.
2. The method according to claim 1, wherein after acquiring the scanning image and before acquiring the color information of the scanned object corresponding to the scanning image, the method further comprises:
and preprocessing the scanned images, and setting different colors for the scanned article images of different categories based on the scanned article classification result.
3. The method of claim 2, wherein preprocessing the scanned image and setting different colors for different categories of scanned item images based on scanned item classification results comprises:
acquiring the atomic number of a scanned article, and acquiring the density of the scanned article based on the atomic number;
determining the classification of the scanned articles according to the density of the scanned articles to obtain a classification result of the scanned articles;
setting different colors for different categories of scanned item images based on the scanned item classification results.
4. The method of claim 1, wherein the performing target detection by using the extracted image features and a deep convolutional multi-layer neural network-based target detection model to obtain candidate targets comprises:
performing target detection by using the extracted image features and based on a plurality of deep convolution multilayer neural network target detection models to obtain a plurality of detection results;
and fusing the plurality of detection results to obtain a final detection result as a candidate target.
5. The method of claim 4, wherein the fusing the plurality of detection results to obtain a final detection result as a candidate target comprises:
and fusing the plurality of detection results based on the confidence degree calculation result to obtain a final detection result.
6. The method of claim 1, further comprising:
judging whether dangerous goods or smuggled goods exist based on the image identification result;
and if the dangerous goods or the smuggled goods exist, outputting prompt information.
7. The method according to claim 1 or 5, characterized in that the method further comprises:
comparing the image recognition result with an article list to obtain a comparison result;
and outputting the comparison result.
8. An image recognition apparatus, characterized in that the apparatus comprises:
an image acquisition module, configured to acquire a scanning image and acquire color information of a scanned object corresponding to the scanning image, wherein the color information is used for representing an object type of the scanned object, and the object type comprises organic matter or inorganic matter;
the characteristic extraction module is used for determining a target candidate region from the scanned image according to the color information and extracting characteristics in the target candidate region to obtain extracted image characteristics; wherein, the feature extraction in the target candidate region is performed to obtain extracted image features specifically as follows: acquiring image features of each layer of a deep convolution multilayer neural network, performing fusion processing on the image features of each layer, and acquiring the fused image features as extracted image features;
the target detection module is used for carrying out target detection by utilizing the extracted image characteristics and a target detection model based on the deep convolution multilayer neural network to obtain a candidate target;
and the target classification module is used for identifying the candidate target by using the extracted image features and a deep convolution based multilayer neural network target classification model to obtain an image identification result.
9. The apparatus of claim 8, further comprising:
and the preprocessing module is used for preprocessing the scanned images and setting different colors for the scanned article images of different categories based on the scanned article classification result.
10. The apparatus according to claim 9, wherein the preprocessing module specifically comprises:
the density acquisition unit is used for acquiring the atomic number of the scanned item and acquiring the density of the scanned item based on the atomic number;
the classification unit is used for determining the classification of the scanned articles according to the density of the scanned articles and obtaining the classification result of the scanned articles;
a color setting unit for setting different colors for the scanned article images of different categories based on the scanned article classification result.
11. The apparatus according to claim 8, wherein the target detection module specifically comprises:
the multi-model detection unit is used for carrying out target detection on the basis of a plurality of deep convolution multilayer neural network target detection models by utilizing the extracted image characteristics to obtain a plurality of detection results;
and the result fusion unit is used for fusing the plurality of detection results to obtain a final detection result as a candidate target.
12. The apparatus according to claim 11, wherein the result fusion unit is specifically configured to:
and fusing the plurality of detection results based on the confidence degree calculation result to obtain a final detection result.
13. The apparatus of claim 8, further comprising:
the judging module is used for judging whether dangerous goods or smuggled goods exist based on the image identification result;
and the first output module is used for outputting prompt information if the dangerous goods or the smuggled goods are judged to exist.
14. The apparatus of claim 8 or 12, further comprising:
the comparison module is used for comparing the image identification result with an article list to obtain a comparison result; and the second output module is used for outputting the comparison result.
15. An apparatus for image recognition, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for:
acquiring a scanning image and acquiring color information of a scanned object corresponding to the scanning image, wherein the color information is used for representing an object type of the scanned object, and the object type comprises organic matter or inorganic matter;
determining a target candidate region from the scanned image according to the color information, and performing feature extraction in the target candidate region to obtain extracted image features; wherein, the feature extraction in the target candidate region is performed to obtain extracted image features specifically as follows: acquiring image features of each layer of a deep convolution multilayer neural network, performing fusion processing on the image features of each layer, and acquiring the fused image features as extracted image features;
performing target detection by using the extracted image features and a target detection model based on a deep convolution multilayer neural network to obtain a candidate target;
and identifying the candidate target by using the extracted image features and a deep convolution-based multilayer neural network target classification model to obtain an image identification result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610854506.8A CN106485268B (en) | 2016-09-27 | 2016-09-27 | Image identification method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106485268A CN106485268A (en) | 2017-03-08 |
CN106485268B true CN106485268B (en) | 2020-01-21 |
Family
ID=58268114
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610854506.8A Active CN106485268B (en) | 2016-09-27 | 2016-09-27 | Image identification method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106485268B (en) |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106960186B (en) * | 2017-03-17 | 2020-02-07 | 王宇宁 | Ammunition identification method based on deep convolutional neural network |
CN108229523B (en) * | 2017-04-13 | 2021-04-06 | 深圳市商汤科技有限公司 | Image detection method, neural network training method, device and electronic equipment |
CN107273936B (en) * | 2017-07-07 | 2020-09-11 | 广东工业大学 | GAN image processing method and system |
CN107563290A (en) * | 2017-08-01 | 2018-01-09 | 中国农业大学 | A kind of pedestrian detection method and device based on image |
CN107463906A (en) * | 2017-08-08 | 2017-12-12 | 深图(厦门)科技有限公司 | The method and device of Face datection |
CN107463965B (en) * | 2017-08-16 | 2024-03-26 | 湖州易有科技有限公司 | Deep learning-based fabric attribute picture acquisition and recognition method and recognition system |
CN109557114B (en) * | 2017-09-25 | 2021-07-16 | 清华大学 | Inspection method and inspection apparatus and computer readable medium |
CN109583266A (en) * | 2017-09-28 | 2019-04-05 | 杭州海康威视数字技术股份有限公司 | A kind of object detection method, device, computer equipment and storage medium |
CN107909093B (en) * | 2017-10-27 | 2021-02-02 | 浙江大华技术股份有限公司 | Method and equipment for detecting articles |
CN107871122A (en) * | 2017-11-14 | 2018-04-03 | 深圳码隆科技有限公司 | Safety check detection method, device, system and electronic equipment |
CN110119734A (en) * | 2018-02-06 | 2019-08-13 | 同方威视技术股份有限公司 | Cutter detecting method and device |
CN108647559A (en) * | 2018-03-21 | 2018-10-12 | 四川弘和通讯有限公司 | A kind of danger recognition methods based on deep learning |
CN108510116B (en) * | 2018-03-29 | 2020-06-30 | 哈尔滨工业大学 | Case and bag space planning system based on mobile terminal |
CN109001833A (en) * | 2018-06-22 | 2018-12-14 | 天和防务技术(北京)有限公司 | A kind of Terahertz hazardous material detection method based on deep learning |
CN109034245B (en) * | 2018-07-27 | 2021-02-05 | 燕山大学 | Target detection method using feature map fusion |
CN111103629A (en) * | 2018-10-25 | 2020-05-05 | 杭州海康威视数字技术股份有限公司 | Target detection method and device, NVR (network video recorder) equipment and security check system |
CN111241893B (en) * | 2018-11-29 | 2023-06-16 | 阿里巴巴集团控股有限公司 | Identification recognition method, device and system |
CN109799544B (en) * | 2018-12-28 | 2021-03-19 | 深圳市重投华讯太赫兹科技有限公司 | Intelligent detection method and device applied to millimeter wave security check instrument and storage device |
CN109816037B (en) * | 2019-01-31 | 2021-05-25 | 北京字节跳动网络技术有限公司 | Method and device for extracting feature map of image |
CN109978827A (en) * | 2019-02-25 | 2019-07-05 | 平安科技(深圳)有限公司 | Violated object recognition methods, device, equipment and storage medium based on artificial intelligence |
CN111856445B (en) * | 2019-04-11 | 2023-07-04 | 杭州海康威视数字技术股份有限公司 | Target detection method, device, equipment and system |
CN110245564B (en) * | 2019-05-14 | 2024-07-09 | 平安科技(深圳)有限公司 | Pedestrian detection method, system and terminal equipment |
CN110222641B (en) * | 2019-06-06 | 2022-04-19 | 北京百度网讯科技有限公司 | Method and apparatus for recognizing image |
CN112185077A (en) * | 2019-07-01 | 2021-01-05 | 云丁网络技术(北京)有限公司 | Intelligent reminding method, device and system and camera equipment |
CN110459225B (en) * | 2019-08-14 | 2022-03-22 | 南京邮电大学 | Speaker recognition system based on CNN fusion characteristics |
CN110781911B (en) * | 2019-08-15 | 2022-08-19 | 腾讯科技(深圳)有限公司 | Image matching method, device, equipment and storage medium |
CN110909604B (en) * | 2019-10-23 | 2024-04-19 | 深圳市重投华讯太赫兹科技有限公司 | Security check image detection method, terminal equipment and computer storage medium |
CN112730468B (en) * | 2019-10-28 | 2022-07-01 | 同方威视技术股份有限公司 | Article detection device and method for detecting article |
CN110942453A (en) * | 2019-11-21 | 2020-03-31 | 山东众阳健康科技集团有限公司 | CT image lung lobe identification method based on neural network |
CN110796127A (en) * | 2020-01-06 | 2020-02-14 | 四川通信科研规划设计有限责任公司 | Embryo prokaryotic detection system based on occlusion sensing, storage medium and terminal |
CN111340775B (en) * | 2020-02-25 | 2023-09-29 | 湖南大学 | Parallel method, device and computer equipment for acquiring ultrasonic standard section |
CN112215095A (en) * | 2020-09-24 | 2021-01-12 | 西北工业大学 | Contraband detection method, device, processor and security inspection system |
CN114241017A (en) * | 2021-11-16 | 2022-03-25 | 沈阳先进医疗设备技术孵化中心有限公司 | Image registration method and device, storage medium and computer equipment |
CN114549900A (en) * | 2022-02-23 | 2022-05-27 | 智慧航安(北京)科技有限公司 | Article classification method, device and system |
CN114693612A (en) * | 2022-03-16 | 2022-07-01 | 深圳大学 | Knee joint bone tumor detection method based on deep learning and related device |
CN119515234A (en) * | 2025-01-20 | 2025-02-25 | 山东港口烟台港集团有限公司 | A port customs control method and system based on artificial intelligence |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102013098A (en) * | 2010-10-11 | 2011-04-13 | 公安部第一研究所 | Method for removing organic and inorganic substances in security inspection images |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105160361A (en) * | 2015-09-30 | 2015-12-16 | 东软集团股份有限公司 | Image identification method and apparatus |
CN105320945A (en) * | 2015-10-30 | 2016-02-10 | 小米科技有限责任公司 | Image classification method and apparatus |
CN105740758A (en) * | 2015-12-31 | 2016-07-06 | 上海极链网络科技有限公司 | Internet video face recognition method based on deep learning |
CN105631482A (en) * | 2016-03-03 | 2016-06-01 | 中国民航大学 | Convolutional neural network model-based dangerous object image classification method |
CN105809164B (en) * | 2016-03-11 | 2019-05-14 | 北京旷视科技有限公司 | Character recognition method and device |
- 2016-09-27: application CN201610854506.8A filed; granted as patent CN106485268B (Active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102013098A (en) * | 2010-10-11 | 2011-04-13 | 公安部第一研究所 | Method for removing organic and inorganic substances in security inspection images |
Non-Patent Citations (2)
Title |
---|
"深度卷积神经网络在计算机视觉中的应用研究综述";卢宏涛等;《数据采集与处理》;20160131;第31卷(第1期);第12页第5节,图16 * |
"目标检测 Faster RCNN算法详解";shenxiaolu1984;《CSDN 博客》;20160421;第1-3页 * |
Also Published As
Publication number | Publication date |
---|---|
CN106485268A (en) | 2017-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106485268B (en) | Image identification method and device | |
US10885618B2 (en) | Inspection apparatus, data generation apparatus, data generation method, and data generation program | |
CN110020647B (en) | Contraband target detection method and device and computer equipment | |
US10878283B2 (en) | Data generation apparatus, data generation method, and data generation program | |
CN109902643B (en) | Intelligent security inspection method, device and system based on deep learning and electronic equipment thereof | |
EP3834129B1 (en) | Systems and methods for image processing | |
US20230162342A1 (en) | Image sample generating method and system, and target detection method | |
CN108154168B (en) | Comprehensive cargo inspection system and method | |
US10163200B2 (en) | Detection of items in an object | |
US20190156139A1 (en) | Inspection methods and systems | |
KR20190075707A (en) | Method for sorting products using deep learning | |
CN109978892B (en) | Intelligent security inspection method based on terahertz imaging | |
US20220244194A1 (en) | Automated inspection method for a manufactured article and system for performing same | |
CN106874845B (en) | Image recognition method and device | |
JP6764709B2 (en) | X-ray automatic judgment device, X-ray automatic judgment method | |
CN118169547B (en) | Single-use circuit detection method and system for electric anastomat | |
Mery et al. | Image processing for fault detection in aluminum castings | |
CN114037939B (en) | Dangerous goods identification method, dangerous goods identification device, electronic equipment and storage medium | |
CN111539251B (en) | Security check article identification method and system based on deep learning | |
CN118279304B (en) | Abnormal recognition method, device and medium for special-shaped metal piece based on image processing | |
JP7422023B2 (en) | X-ray image processing device and X-ray image processing method | |
CN115081469A (en) | Article category identification method, device and equipment based on X-ray security inspection equipment | |
Todoroki et al. | Automated knot detection with visual post-processing of Douglas-fir veneer images | |
CN112037243A (en) | Passive terahertz security inspection method, system and medium | |
KR102325017B1 (en) | Method for identifying cargo based on deep-learning and apparatus performing the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||