CN111680549B - A paper pattern identification method - Google Patents
A paper pattern identification method
- Publication number
- CN111680549B CN202010348238.9A CN202010348238A
- Authority
- CN
- China
- Prior art keywords
- paper
- texture image
- identified
- image
- paper texture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/80—Recognising image objects characterised by unique random patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Inspection Of Paper Currency And Valuable Securities (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a paper texture identification method comprising the following steps: S1. Under a transmitted light source, capture micron-precision microscopic images of the internal fibers of a reference paper document and of a paper document to be identified, to serve as the reference paper texture image and the paper texture image to be identified, respectively; S2. Extract and match feature points of the reference paper texture image and the paper texture image to be identified, and generate feature point matching pairs; S3. Estimate the transformation matrix between the reference paper texture image and the paper texture image to be identified from the feature point matching pairs, and obtain the regions of interest of the two images; S4. Enhance the fiber texture of each region of interest; S5. Measure similarity based on the enhanced texture structure and output the identification result. The method not only withstands translation, rotation and scaling of the paper texture image caused by illumination changes or by operator and equipment deviations during acquisition, but also withstands abnormal conditions such as stains on the paper surface.
Description
Technical field
The invention relates to the field of object identification, and in particular to a paper texture identification method based on the microscopic texture structure of the paper itself.
Background art
With the rapid development of computer hardware and computer vision technology, image-processing-based techniques have attracted wide attention in the field of anti-counterfeiting identification of items. For important paper documents such as contracts, bills and event tickets, the lack of distinctive identifying marks, together with advances in printing technology and printing precision, has greatly reduced the difficulty of forging them; both the technical threshold and the cost of counterfeiting are low, which has made such documents a prime target for forgers.
Most traditional methods mark and protect important documents by signatures, seals, printed anti-counterfeiting labels and the like. Although these methods are low-cost and easy to implement, they are vulnerable to imitation and offer poor anti-counterfeiting performance. Other anti-counterfeiting technologies use special anti-counterfeiting paper or special inks, or physically or chemically add random fibers or random-bubble anti-counterfeiting labels to items; however, their cost and technical barriers are high, which hinders wide adoption, and they are generally used only for the anti-counterfeiting packaging of expensive goods. In recent years, thanks to the rapid development of image acquisition, image processing and computing, identification methods based on the naturally unclonable characteristics of the paper itself have become a topic of intense interest.
In particular, patent CN 102073865 A proposes an anti-counterfeiting method that uses the fiber texture of the paper itself, achieving anti-counterfeiting identification without any additional treatment of the paper. However, that method judges the authenticity of the paper from pattern-recognition results on extracted texture feature points, and the extracted feature points depend on the position of the paper at each acquisition, the relative position of the acquisition device and the paper, and the optical magnification of the acquisition device. To guarantee accurate identification, the acquisition conditions must therefore be highly consistent across captures, so the method copes poorly with translation, rotation and scaling of the paper. Moreover, when contaminants such as handwriting or water stains appear on the paper surface, the extracted feature points change, leading to inaccurate identification results.
Patent CN 102955930 A describes an anti-counterfeiting method that identifies a substance by its own physical characteristics. Compared with patent CN 102073865 A, the difference is that the substance is light-transmissive, and pattern recognition is performed on images of the physical characteristics acquired optically after transmission. Transmitted light is used so that the physical characteristics can be imaged at low cost, but the method still suffers from the problems of patent CN 102073865 A.
Patent CN 110599665 A describes a paper texture identification method which, compared with patent CN 102073865 A, obtains a decontaminated feature-region image of the paper texture to be verified through a Yolo-v2 model before extracting feature points. Because the final identification is still based on the feature vectors of feature points in the acquired paper texture image, the result is affected by the position and size of the contaminated area, the spatial distribution of the area remaining after decontamination, and the distribution of feature points in that remaining area.
In summary, existing paper texture identification methods based on the naturally unclonable characteristics of paper have two obvious limitations. First, they cannot cope well with the translation, rotation and scaling of the paper texture image caused by operator and equipment deviations during acquisition. Second, they cannot cope with dirt on the paper surface, so the identification result is strongly affected by stains.
Summary of the invention
The technical problem to be solved by the present invention is that, in view of the shortcomings of the prior art, the present invention provides a new paper texture identification method that combines a paper texture acquisition scheme with a paper texture identification algorithm.
The technical solution adopted by the present invention to solve this problem is to construct a paper texture identification method comprising the following steps:
S1. Under a transmitted light source, capture micron-precision microscopic images of the internal fibers of the reference paper document and of the paper document to be identified, to serve as the reference paper texture image and the paper texture image to be identified, respectively;
S2. Extract and match feature points of the reference paper texture image and the paper texture image to be identified, and generate feature point matching pairs;
S3. Estimate the transformation matrix between the reference paper texture image and the paper texture image to be identified from the feature point matching pairs, and obtain the regions of interest of the reference paper texture image and of the paper texture image to be identified, respectively;
S4. Enhance the fiber texture of each region of interest;
S5. Measure similarity based on the enhanced texture structure and output the identification result.
Preferably, in the paper texture identification method of the present invention, in step S1,
when acquiring the reference paper texture image, the coordinate position, relative to the center of the reference paper document, of the point on the document corresponding to the center of the acquired image is recorded at the same time;
when acquiring the paper texture image to be identified, the image of the corresponding region is acquired according to the recorded coordinate position.
Preferably, in the paper texture identification method of the present invention, when acquiring the paper texture image to be identified, the recorded coordinate position lies within the field of view of the microscopic imaging.
Preferably, in the paper texture identification method of the present invention, the microscopic images of the reference paper document and of the paper document to be identified are captured with accurate focus.
Preferably, in the paper texture identification method of the present invention, step S2 comprises:
S2-1. Perform image preprocessing on the reference paper texture image and the paper texture image to be identified;
S2-2. Extract texture feature points from the preprocessed reference paper texture image and paper texture image to be identified, compute the feature vectors of the feature points, match the feature points according to the feature vectors, and generate feature point matching pairs.
Preferably, in the paper texture identification method of the present invention, matching feature points according to feature vectors comprises:
matching feature points according to the similarity of the feature vectors corresponding to the extracted feature points.
Preferably, in the paper texture identification method of the present invention, matching feature points according to the similarity of the feature vectors corresponding to the extracted feature points comprises:
extracting the feature points of the reference paper texture image and of the paper texture image to be identified, computing the similarity of the corresponding feature vectors, and selecting a threshold; when the computed similarity is greater than the threshold, a feature point matching pair is generated.
Preferably, in the paper texture identification method of the present invention, the feature points are SURF or SIFT feature points.
Preferably, in the paper texture identification method of the present invention, step S3 comprises:
S3-1. Eliminate erroneous matching pairs from the feature point matching pairs obtained in step S2, and estimate the transformation matrix between the reference paper texture image and the paper texture image to be identified from the remaining valid matching pairs;
S3-2. Transform the paper texture image to be identified using the transformation matrix;
S3-3. Overlay the transformed paper texture image to be identified on the reference paper texture image, and crop an inscribed rectangle of the overlapping part from each of the two paper texture images as the regions of interest.
Preferably, in the paper texture identification method of the present invention,
step S3-1 comprises: using the MSAC algorithm to eliminate erroneous matching pairs from the feature point matching pairs obtained in step S2, and, taking the reference paper texture image as the standard, estimating the affine transformation matrix from the paper texture image to be identified to the reference paper texture image from the remaining valid matching pairs;
step S3-2 comprises: applying the affine transformation matrix to the paper texture image to be identified;
step S3-3 comprises: overlaying the transformed paper texture images and selecting a threshold S; if the ratio of the area of the overlapping part after transformation to the total area of the reference paper texture image or of the paper texture image to be identified is greater than S, the largest inscribed rectangle of the overlapping part is cropped as the region of interest.
Preferably, in the paper texture identification method of the present invention, step S3 further comprises:
if the number of valid feature point matching pairs in step S3-1 is too small to estimate the transformation matrix, or the area of the region of interest is too small, using the reference paper texture image and the paper texture image to be identified preprocessed in step S2-1 as the regions of interest.
Preferably, in the paper texture identification method of the present invention, step S4 comprises:
using Gabor filters at multiple orientations to enhance the fiber texture of each of the two regions of interest, the Gabor filter at each orientation generating an amplitude response matrix and a phase-angle response matrix; the amplitude response matrices and phase-angle response matrices across orientations are then summed, finally yielding one corresponding amplitude response matrix and one phase-angle response matrix.
Preferably, in the paper texture identification method of the present invention, before step S4 the method comprises:
resizing the regions of interest of the reference paper texture image and of the paper texture image to be identified to the same size according to a predefined image size parameter.
Preferably, in the paper texture identification method of the present invention, step S5 comprises:
mapping the enhanced texture structures of the regions of interest of the paper texture image to be identified and of the reference paper texture image into a binary (0/1) space, and using the Hamming distance to generate a similarity index that measures the similarity of the bitstreams generated from the reference paper texture image and the paper texture image to be identified.
Preferably, in the paper texture identification method of the present invention, mapping the enhanced texture structure into the binary space and using the Hamming distance to generate a similarity index that measures the similarity of the bitstreams generated from the reference paper texture image and the paper texture image to be identified comprises:
computing the means of the final amplitude response matrix and phase-angle response matrix of the region of interest of the reference paper texture image and of the paper texture image to be identified, respectively, and comparing each entry of a response matrix with its mean, entries greater than the mean taking the value 1 and entries smaller than the mean taking the value 0; unrolling the two binary matrices by rows or by columns and concatenating the resulting bitstreams to form the digital paper textures of the reference paper texture image and of the paper texture image to be identified, respectively;
computing the Hamming distance between the digital paper textures of the reference paper texture image and of the paper texture image to be identified, and taking the ratio of the Hamming distance to the total length of the digital paper texture as the similarity index between the reference paper texture and the paper texture to be identified;
selecting a threshold t; if the similarity index is greater than the threshold t, identification fails, and if the similarity index is smaller than the threshold t, identification succeeds.
The present invention proposes a paper texture identification method based on the micron-precision microstructure of the fibers inside the paper, mainly using the random interwoven microfiber texture inside the paper for identification. Compared with the prior art, the proposed method has at least the following beneficial effects: (1) the acquisition scheme of microscopy with transmitted illumination directly captures micron-precision fiber-structure texture images within the range from the paper surface down to 50 microns below it, and the fiber texture is stable and insensitive to illumination changes and to dirt on the paper surface; (2) using the micron-precision fiber texture features of the paper for identification guarantees the stability of the proposed calibration method, which estimates the transformation matrix between paper texture images from feature points, so the method withstands translation, rotation and scaling of the paper texture image caused by illumination changes or by operator and equipment deviations during acquisition; (3) because the micron-precision fiber texture features are not easily lost when the paper surface is dirty, the proposed identification algorithm can still perform calibration and identification in the presence of stains, so the method withstands unfavorable conditions such as stains on the paper surface.
Brief description of the drawings
The present invention will be further described below with reference to the accompanying drawings and embodiments, in which:
Figure 1 is a flowchart of the paper texture identification method of the present invention;
Figure 2 is a flowchart of obtaining the regions of interest of the reference paper texture image and the paper texture image to be identified according to the present invention;
Figure 3 is a schematic diagram of an example of obtaining the regions of interest of the reference paper texture image and the paper texture image to be identified according to the present invention;
Figure 4 is a schematic diagram of an example of using Gabor filters at multiple orientations to enhance the fiber texture of a region of interest.
Detailed description of the embodiments
For a clearer understanding of the technical features, objects and effects of the present invention, specific embodiments of the present invention are now described in detail with reference to the accompanying drawings.
As shown in Figure 1, the present invention constructs a paper texture identification method based on the microscopic texture structure of the fibers inside the paper. It is easy to implement, requires no additional printing operations, withstands paper translation, rotation and image scaling during acquisition, and guarantees accurate identification even when various contaminants appear on the paper surface. The method comprises the following steps:
Step S1: Under a transmitted light source, capture micron-precision microscopic images of the internal fibers of the reference paper document and of the paper document to be identified, to serve as the reference paper texture image and the paper texture image to be identified, respectively.
Specifically, in step S1, the reference paper texture image must be acquired and stored in advance for subsequent identification. The paper texture image to be identified is matched against the reference paper texture image: when the two images are acquired from the same region of the same sheet of paper, the match succeeds; when they are acquired from different regions of the same sheet or from different sheets, the match fails.
The same region of the same sheet means that the acquired reference paper texture image and the paper texture image to be identified contain a common area on the same paper document; it is not required that the regions covered by the two acquisitions coincide exactly, i.e., translation and rotation caused by manual acquisition are allowed.
Step S1 specifically comprises: when acquiring the reference paper texture image, simultaneously recording the coordinate position, relative to the center of the reference paper document, of the point on the document corresponding to the center of the acquired image; when acquiring the paper texture image to be identified, acquiring the image of the corresponding region according to the recorded coordinate position.
Moreover, when acquiring the paper texture image to be identified, it is only necessary to ensure that the recorded coordinate position relative to the center of the reference paper document lies within the field of view of the microscopic imaging.
In addition, when capturing the microscopic images of the reference paper document and of the paper document to be identified, it is only necessary to ensure accurate focus at each acquisition; the magnifications need not be exactly the same, i.e., the reference paper texture image and the paper texture image to be identified may be captured at different magnifications. Therefore, in this embodiment, the microscopic images of the reference paper document and of the paper document to be identified are captured with accurate focus.
The acquisition scheme of microscopy with transmitted illumination described in step S1 captures micron-precision images of the fibers inside the paper; the texture features are stable and insensitive to abnormal conditions such as illumination changes and dirt on the paper surface.
Step S2: Extract and match feature points of the reference paper texture image and the paper texture image to be identified, and generate feature point matching pairs.
Specifically, step S2 comprises:
Step S2-1: Perform image preprocessing on the reference paper texture image and the paper texture image to be identified.
The purpose of the preprocessing is to simplify the data of the reference paper texture image and the paper texture image to be identified and to give them the same size and data format, which effectively avoids identification errors caused by inconsistent formats. If the acquired image has multiple channels, one channel is selected for processing. Specifically, the reference paper texture image and the paper texture image to be identified are converted to grayscale and resized to the same size; in this embodiment, they are converted to grayscale and resized to 640×640.
Step S2-2: Extract texture feature points from the preprocessed reference paper texture image and paper texture image to be identified, compute the feature vectors of the feature points, match the feature points according to the feature vectors, and generate feature point matching pairs.
Feature points generally refer to points that are invariant to rotation, translation or affine transformation. Feature matching effectively copes with translation, rotation and shear of the acquired images, and is insensitive to illumination changes, noise and viewpoint changes, improving the robustness of the algorithm.
In this embodiment, matching feature points according to feature vectors comprises: matching feature points according to the similarity of the feature vectors corresponding to the extracted feature points. Specifically, the feature points of the reference paper texture image and of the paper texture image to be identified are extracted, the similarity of the corresponding feature vectors is computed, and a threshold is selected; when the computed similarity is greater than the threshold, a feature point matching pair is generated. In some embodiments, the feature points are SURF or SIFT feature points. In this embodiment, SURF feature points are used, which effectively cope with translation, rotation, scaling and illumination changes during acquisition. A minimal code sketch of this preprocessing and matching step is given below.
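The following sketch illustrates how steps S2-1 and S2-2 could be implemented with OpenCV. It is a minimal illustration, not the patented implementation: SIFT stands in for SURF (SURF is absent from default OpenCV builds), and the Lowe ratio value of 0.7 is an assumed stand-in for the descriptor-similarity threshold, which the patent does not fix.

```python
# Minimal sketch of steps S2-1/S2-2 with OpenCV (illustrative only).
# SIFT stands in for SURF; the 0.7 ratio threshold is an assumption.
import cv2
import numpy as np

def preprocess(img, size=(640, 640)):
    """Grayscale a (possibly multi-channel) capture and resize it to a fixed size."""
    if img.ndim == 3:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.resize(img, size)

def match_feature_points(ref_img, query_img, ratio=0.7):
    """Detect keypoints, describe them, and keep pairs with similar descriptors."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref_img, None)
    kp_qry, des_qry = sift.detectAndCompute(query_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des_qry, des_ref, k=2)
    # Lowe's ratio test plays the role of the descriptor-similarity threshold in S2-2.
    good = [m[0] for m in raw if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    pts_qry = np.float32([kp_qry[m.queryIdx].pt for m in good])
    pts_ref = np.float32([kp_ref[m.trainIdx].pt for m in good])
    return pts_qry, pts_ref
```

The matched coordinate arrays are used only to estimate the transformation in step S3, not for identification itself.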
Compared with the prior art, the feature point matching of step S2 serves to prepare for estimating the transformation matrix in step S3 and for obtaining the regions of interest of the reference paper texture image and the paper texture image to be identified; it is not used to identify the paper texture directly from the feature vectors of the feature points.
Step S3: Estimate the transformation matrix between the reference paper texture image and the paper texture image to be identified from the feature point matching pairs, and obtain the regions of interest of the reference paper texture image and of the paper texture image to be identified, respectively.
Specifically, step S3 comprises:
Step S3-1: Eliminate erroneous matching pairs from the feature point matching pairs obtained in step S2, and estimate the transformation matrix between the reference paper texture image and the paper texture image to be identified from the remaining valid matching pairs, so that after the transformation the corresponding valid matching pairs of the two images coincide in spatial position as closely as possible.
Step S3-2: Transform the paper texture image to be identified using the transformation matrix.
Step S3-3: Overlay the transformed paper texture image to be identified on the reference paper texture image, and crop an inscribed rectangle of the overlapping part from each of the two paper texture images as the region of interest, thereby achieving paper texture calibration. There are two regions of interest: one from the reference paper texture image and one from the paper texture image to be identified.
The purpose of eliminating matching pairs in step S3-1 is to reduce the influence of noise on feature point selection and to remove erroneous matching pairs caused by noise, guaranteeing the accuracy of the transformation matrix estimated in step S3-1 between the paper texture image to be identified and the reference paper texture image.
When the reference paper texture image and the paper texture image to be identified are acquired from the same region of the same sheet, the transformation matrix of step S3-1 automatically corrects the translation, rotation and scaling of the paper texture to be identified relative to the reference paper texture.
When the reference paper texture image and the paper texture image to be identified are acquired from the same region of the same sheet, the regions of interest obtained in step S3-3 correspond to the common area of the two acquisitions.
In this embodiment, step S3-1 preferably comprises: using the MSAC algorithm to eliminate erroneous matching pairs from the feature point matching pairs obtained in step S2, and, taking the reference paper texture image as the standard, estimating the affine transformation matrix from the paper texture image to be identified to the reference paper texture image from the remaining valid matching pairs.
Step S3-2 comprises: applying the affine transformation matrix to the paper texture image to be identified, so that after the transformation the corresponding valid matching pairs of the two images coincide in spatial position as closely as possible.
Step S3-3 comprises: overlaying the transformed paper texture image to be identified on the reference paper texture image so that the two images share a large similar area, and selecting a threshold S; if the ratio of the area of the overlapping part after transformation to the total area of the reference paper texture image or of the paper texture image to be identified is greater than S, the largest inscribed rectangle of the overlapping part is cropped as the region of interest. In this embodiment, S = 20%. A code sketch of this alignment and overlap check follows.
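The sketch below illustrates steps S3-1 to S3-3, reusing the matched points from the previous sketch. It is an approximation under stated assumptions: OpenCV does not expose MSAC, so RANSAC-based cv2.estimateAffine2D stands in for the outlier rejection described above, and the largest-inscribed-rectangle crop is simplified to an overlap mask plus the S = 20% area check.

```python
# Sketch of steps S3-1..S3-3 (illustrative): RANSAC stands in for MSAC, and the
# largest-inscribed-rectangle crop is replaced by an overlap mask plus the S check.
import cv2
import numpy as np

def align_and_check_overlap(ref_img, query_img, pts_qry, pts_ref, s_thresh=0.20):
    if len(pts_qry) < 3:                    # three pairs are needed for an affine matrix
        return None                         # caller falls back to the preprocessed images
    M, inliers = cv2.estimateAffine2D(pts_qry, pts_ref, method=cv2.RANSAC)
    if M is None or int(inliers.sum()) < 3:
        return None
    h, w = ref_img.shape[:2]
    warped = cv2.warpAffine(query_img, M, (w, h))
    # Warp a mask of ones to find which reference pixels the query image covers.
    ones = np.ones(query_img.shape[:2], dtype=np.uint8)
    overlap = cv2.warpAffine(ones, M, (w, h)) > 0
    if overlap.mean() < s_thresh:           # overlap fraction smaller than S = 20%
        return None
    return warped, overlap                  # the regions of interest are cropped from these
```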
In some embodiments, step S3 further comprises: if the number of valid feature point matching pairs in step S3-1 is too small to estimate the transformation matrix, or the area of the region of interest is too small, using the reference paper texture image and the paper texture image to be identified preprocessed in step S2-1 as the regions of interest.
Specifically, in this embodiment, estimating an affine transformation matrix requires three feature point pairs; if fewer than three valid matching pairs remain in step S3-1, which is insufficient to estimate the affine transformation matrix, or if the area ratio of the overlapping part of the transformed reference paper texture and the paper texture to be identified is less than 20%, the reference paper texture image and the paper texture image to be identified preprocessed in step S2-1 are used as the regions of interest.
Compared with the prior art, by cropping regions of interest the present invention achieves registration between the acquired paper texture images without additional operations such as printing rectangles or watermarks.
Compared with the prior art, by cropping regions of interest the present invention ensures that paper texture identification does not rely on the extraction of global feature points; when local graffiti, water stains or printed text appear on the paper surface, the accuracy of identification is preserved without prior decontamination.
Step S3 uses the feature point matching pairs extracted in step S2 to obtain the regions of interest, which effectively copes with paper translation and rotation, environmental illumination changes, changes in the relative position of the paper and the image acquisition device, and changes in the optical magnification during acquisition. Compared with existing methods that identify the paper texture directly from the feature vectors of global feature points, the proposed method copes well with the discrepancies introduced by inconsistent manual operation during acquisition.
The procedure of step S3 for obtaining the regions of interest of the reference paper texture image and the paper texture image to be identified is shown in Figure 2; a schematic example (in which the reference paper texture and the paper texture to be identified are acquired from the same region of the same sheet) is shown in Figure 3.
The stability of the paper texture calibration of steps S2 and S3, which estimates the inter-image transformation from feature points of microscopic paper-fiber images, is guaranteed by the stability of the micron-precision microscopic texture features of the internal fibers acquired in step S1, so the calibration algorithm withstands translation, rotation and scaling of the paper texture image caused by illumination changes or by operator and equipment deviations during acquisition.
Step S4: Enhance the fiber texture of each region of interest.
Specifically, step S4 comprises:
using Gabor filters at multiple orientations to enhance the fiber texture of the two regions of interest, the Gabor filter at each orientation generating an amplitude response matrix and a phase-angle response matrix; the amplitude response matrices and phase-angle response matrices across orientations are then summed, so that each paper texture image finally yields one amplitude response matrix and one phase-angle response matrix.
Most existing paper texture anti-counterfeiting techniques use single-orientation Gabor filtering for enhancement and then perform the final identification from the eigenvectors or singular values of the resulting matrix, which loses texture detail and degrades identification accuracy.
The multi-orientation Gabor enhancement described here takes into account the randomness of the arrangement and orientation of the microfiber texture on and inside the paper, preserving more of the texture information and improving identification accuracy.
Specifically, a multi-orientation Gabor filter bank is used to enhance the fiber texture of the regions of interest of the reference paper texture image and the paper texture image to be identified. The Gabor filter achieves optimal localization simultaneously in the spatial and frequency domains, so it describes well the local structural information associated with spatial frequency, spatial position and orientation selectivity. In this embodiment, Gabor filters in four orientations are applied to each paper texture; the spatial frequency of all four filters is set to 0.1 and the orientation parameters are set to 0°, 30°, 60° and 90°, respectively. Each paper texture therefore yields four amplitude responses and four phase-angle responses corresponding to the four Gabor filters, and the four amplitude responses and the four phase-angle responses are each summed to obtain the joint amplitude response and phase-angle response as the final response of that paper texture. Note that the amplitude and phase-angle response matrices have the same size as the input region of interest; to keep the identification results of different paper textures comparable, the regions of interest of the reference paper texture image and the paper texture image to be identified must be resized to the same size according to a predefined size parameter before enhancement, because the overlapping areas of different image pairs differ and the cropped regions of interest may therefore differ in size. In this embodiment, the regions of interest are resized to 32×32 before enhancement. A schematic example of using Gabor filters at multiple orientations to enhance the fiber texture of a region of interest is shown in Figure 4, and a code sketch is given below.
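The following sketch illustrates the four-orientation Gabor enhancement under stated assumptions: only the spatial frequency 0.1 (i.e., a wavelength of 10 px), the orientations 0°, 30°, 60°, 90° and the 32×32 region size come from this embodiment, while the kernel size, sigma and gamma are illustrative choices. Each orientation is filtered with the even and odd Gabor components to obtain an amplitude and a phase-angle response, which are then summed across orientations.

```python
# Sketch of step S4: four-orientation Gabor enhancement of a 32x32 region of interest.
# Wavelength 10 px (spatial frequency 0.1) and angles 0/30/60/90 degrees follow the
# embodiment; ksize, sigma and gamma are illustrative assumptions.
import cv2
import numpy as np

def gabor_responses(roi, wavelength=10.0, angles_deg=(0, 30, 60, 90),
                    ksize=31, sigma=4.0, gamma=0.5):
    roi = cv2.resize(roi, (32, 32)).astype(np.float64)
    amp_sum = np.zeros_like(roi)
    phase_sum = np.zeros_like(roi)
    for a in angles_deg:
        theta = np.deg2rad(a)
        # Even (cosine) and odd (sine) kernels together give the complex Gabor response.
        k_even = cv2.getGaborKernel((ksize, ksize), sigma, theta, wavelength, gamma, psi=0)
        k_odd = cv2.getGaborKernel((ksize, ksize), sigma, theta, wavelength, gamma, psi=np.pi / 2)
        re = cv2.filter2D(roi, cv2.CV_64F, k_even)
        im = cv2.filter2D(roi, cv2.CV_64F, k_odd)
        amp_sum += np.sqrt(re ** 2 + im ** 2)   # amplitude response at this orientation
        phase_sum += np.arctan2(im, re)         # phase-angle response at this orientation
    return amp_sum, phase_sum                   # responses summed over the four orientations
```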
Because the micron-precision microscopic texture features of the internal fibers acquired by this method are stable, remaining so under illumination changes and surface dirt, the identification algorithm based on the paper fiber texture image described in step S4 withstands abnormal conditions such as illumination changes and soiled paper.
Step S5: Measure similarity based on the enhanced texture structure and output the identification result.
Specifically, the method of measuring similarity from the enhanced texture structure in step S5 is as follows:
map the enhanced texture structures of the regions of interest of the paper texture image to be identified and of the reference paper texture image into a binary (0/1) space, and use the Hamming distance to generate a similarity index that measures the similarity of the bitstreams generated from the reference paper texture image and the paper texture image to be identified.
Preferably, mapping the enhanced texture structure into the binary space and using the Hamming distance to generate a similarity index that measures the similarity of the generated bitstreams comprises:
computing the means of the final amplitude response matrix and phase-angle response matrix of the regions of interest of the reference paper texture image and of the paper texture image to be identified, respectively, and comparing each entry of a response matrix with its mean, entries greater than the mean taking the value 1 and entries smaller than the mean taking the value 0; unrolling the two binary matrices by rows or by columns and concatenating the resulting bitstreams to form the digital paper textures of the reference paper texture image and of the paper texture image to be identified, respectively;
computing the Hamming distance between the digital paper textures of the reference paper texture image and of the paper texture image to be identified, and taking the ratio of the Hamming distance to the total length of the digital paper texture as the similarity index between the reference paper texture and the paper texture to be identified;
selecting a threshold t; if the similarity index is greater than the threshold t, identification fails, and if the similarity index is smaller than the threshold t, identification succeeds.
Specifically, the means of the final amplitude response and phase-angle response matrices of the region of interest are computed, and each entry of a response matrix is compared with its mean: entries greater than the mean take the value 1 and entries smaller than the mean take the value 0. In this embodiment, a 32×32 region of interest therefore yields two 32×32 binary matrices, one for the amplitude response and one for the phase-angle response; after unrolling the two matrices by rows or by columns and concatenating the resulting bitstreams, a bitstream of length 32×32×2 = 2048 is obtained, serving as the digital paper texture of the reference paper texture image or of the paper texture image to be identified.
In information coding, the Hamming distance is the number of positions at which two codewords of equal length differ. In this embodiment, the Hamming distance between the digital paper textures of the reference paper texture image and of the paper texture image to be identified is computed, and the ratio of the Hamming distance to the total digital paper texture length of 2048 is taken as the similarity index between the reference paper texture and the paper texture to be identified; for example, when the two digital paper textures differ in 80 positions, i.e., the Hamming distance is 80, the corresponding similarity index is 0.039. Note that if the reference paper texture image and the paper texture image to be identified are acquired from the same region of the same sheet, the similarity index between their digital paper textures is close to 0; if they are acquired from different sheets, or from different regions of the same sheet, randomness makes the similarity index close to 0.5. In this embodiment, the threshold is set to t = 0.25: identification fails when the similarity index is greater than 0.25 and succeeds when it is smaller than 0.25. The threshold can be finalized on the basis of extensive experimental results. A code sketch of this digitization and comparison is given below.
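A minimal sketch of step S5, assuming the summed amplitude and phase-angle responses from the Gabor sketch above: each response matrix is binarized against its own mean, the two 32×32 binary maps are flattened and concatenated into a 2048-bit digital paper texture, and the normalized Hamming distance is compared with t = 0.25.

```python
# Sketch of step S5: binarize the Gabor responses against their means, build the
# 2048-bit digital paper texture, and decide with the normalized Hamming distance.
import numpy as np

def digital_paper_texture(amp, phase):
    """Map the two 32x32 response matrices to a 2048-bit 0/1 vector."""
    bits_amp = (amp > amp.mean()).astype(np.uint8).ravel()
    bits_phase = (phase > phase.mean()).astype(np.uint8).ravel()
    return np.concatenate([bits_amp, bits_phase])    # length 32*32*2 = 2048

def identify(code_ref, code_qry, t=0.25):
    """Normalized Hamming distance; a value below t means the paper textures match."""
    similarity_index = np.count_nonzero(code_ref != code_qry) / code_ref.size
    return similarity_index < t
```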
本发明提出了一种基于纸张内部微米精度纤维显微结构的纸纹识别方法,主要利用纸张内部具备随机性的显微纤维交织的纹理结构进行纸纹识别。与现有技术相比,本发明提出的纸纹识别方法至少具备以下几个有益效果:(1)采用显微加透射照明的纸纹采集方案,可以直接采集纸张表面到以下50微米范围内的微米精度的纤维结构纹理图像,且纤维纹理稳定,对光照变化以及纸张表面脏污不敏感;(2)采用纸张微米精度的纤维纹理特征进行纸纹识别,保证了本发明提出的基于特征点预估纸纹变换矩阵的纸纹校准方法的稳定性,可以对抗纸纹采集过程中人为以及设备操作偏差带来的纸纹图像的平移、旋转以及缩放;(3)由于微米精度的纤维纹理特征在纸张表面出现脏污时不容易丢失,保证本发明提出的纸纹识别算法在赃物情况下仍能实现纸纹校准以及识别工作,使得本发明提出的纸纹识别方法可以对抗纸张表面赃物等不利情形。The present invention proposes a paper pattern identification method based on the micron-precision fiber microstructure inside the paper, which mainly uses the random microfiber interweaving texture structure inside the paper to identify the paper pattern. Compared with the existing technology, the paper pattern identification method proposed by the present invention has at least the following beneficial effects: (1) Using a paper pattern collection scheme using microscopy and transmitted illumination, it can directly collect the paper surface to the following 50 micron range Micron-precision fiber structure texture image, and the fiber texture is stable and insensitive to changes in illumination and paper surface dirt; (2) Using micron-precision fiber texture characteristics of paper for paper pattern recognition ensures that the feature point prediction method proposed by the present invention is Estimating the stability of the paper texture calibration method of the paper texture transformation matrix can resist the translation, rotation and scaling of the paper texture image caused by human and equipment operation deviations in the paper texture collection process; (3) Due to the fiber texture characteristics of micron precision in It is not easy to lose when the surface of the paper is dirty, ensuring that the paper pattern recognition algorithm proposed in the present invention can still achieve paper texture calibration and recognition work in the case of stolen goods, so that the paper pattern recognition method proposed in the present invention can resist unfavorable situations such as stolen goods on the paper surface. .
The present invention has been described by way of specific embodiments. Those skilled in the art will understand that various transformations and equivalent substitutions may be made to the present invention without departing from its scope. In addition, various modifications may be made to adapt the invention to a particular situation without departing from its scope. Therefore, the present invention is not limited to the specific embodiments disclosed, but is intended to cover all embodiments falling within the scope of the claims of the present invention.
Claims (12)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010348238.9A CN111680549B (en) | 2020-04-28 | 2020-04-28 | A paper pattern identification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010348238.9A CN111680549B (en) | 2020-04-28 | 2020-04-28 | A paper pattern identification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111680549A CN111680549A (en) | 2020-09-18 |
CN111680549B true CN111680549B (en) | 2023-12-05 |
Family
ID=72452212
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010348238.9A Active CN111680549B (en) | 2020-04-28 | 2020-04-28 | A paper pattern identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111680549B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022082431A1 (en) * | 2020-10-20 | 2022-04-28 | Beijing Tripmonkey Technology Limited | Systems and methods for extracting information from paper media based on depth information |
CN115439974A (en) * | 2022-09-30 | 2022-12-06 | 武汉工程大学 | Microstructure-based paper authenticity verification device and method |
CN117893502B (en) * | 2024-01-15 | 2025-02-25 | 广州市科帕电子科技有限公司 | Image detection method, device, equipment and storage medium |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9729326B2 (en) * | 2008-04-25 | 2017-08-08 | Feng Lin | Document certification and authentication system |
CN101604385A (en) * | 2009-07-09 | 2009-12-16 | 深圳大学 | Palmprint recognition method and palmprint recognition device |
KR101677561B1 (en) * | 2010-12-08 | 2016-11-18 | 한국전자통신연구원 | Image registration device and image registration method thereof |
JP2012234258A (en) * | 2011-04-28 | 2012-11-29 | Sony Corp | Image processing device, image processing method, and program |
CH710713B1 (en) * | 2015-02-13 | 2021-08-31 | Paper Dna Ag | Authentication method using surface paper texture. |
KR102696652B1 (en) * | 2017-01-26 | 2024-08-21 | 삼성전자주식회사 | Stero matching method and image processing apparatus |
WO2018220681A1 (en) * | 2017-05-29 | 2018-12-06 | オリンパス株式会社 | Image processing device, image processing method, and image processing program |
CN107292273B (en) * | 2017-06-28 | 2021-03-23 | 西安电子科技大学 | Double Gabor palmprint ROI matching method based on specific extended eight-neighborhood |
CN110738222B (en) * | 2018-07-18 | 2022-12-06 | 深圳兆日科技股份有限公司 | Image matching method and device, computer equipment and storage medium |
CN109598205A (en) * | 2018-11-09 | 2019-04-09 | 国网山东省电力公司淄博供电公司 | The method of Finger print characteristic abstract and compressed encoding based on Gabor transformation |
2020-04-28: CN CN202010348238.9A patent/CN111680549B/en active Active
Patent Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101894256A (en) * | 2010-07-02 | 2010-11-24 | 西安理工大学 | Iris Recognition Method Based on Odd Symmetrical 2D Log-Gabor Filter |
CN102073865A (en) * | 2010-12-24 | 2011-05-25 | 兆日科技(深圳)有限公司 | Anti-counterfeiting method and system using autologous fiber textures of paper |
CN102651074A (en) * | 2012-02-22 | 2012-08-29 | 大连理工大学 | Texture feature-based printed paper identification method |
CN103049905A (en) * | 2012-12-07 | 2013-04-17 | 中国人民解放军海军航空工程学院 | Method for realizing image registration of synthetic aperture radar (SAR) by using three components of monogenic signals |
CN103295241A (en) * | 2013-06-26 | 2013-09-11 | 中国科学院光电技术研究所 | Frequency domain significance target detection method based on Gabor wavelet |
WO2015160340A1 (en) * | 2014-04-16 | 2015-10-22 | Halliburton Energy Services, Inc. | Ultrasonic signal time-frequency decomposition for borehole evaluation or pipeline inspection |
CN103927527A (en) * | 2014-04-30 | 2014-07-16 | 长安大学 | Human face feature extraction method based on single training sample |
WO2016146265A1 (en) * | 2015-03-17 | 2016-09-22 | Zynaptiq Gmbh | Methods for extending frequency transforms to resolve features in the spatio-temporal domain |
CN104834909A (en) * | 2015-05-07 | 2015-08-12 | 长安大学 | Image characteristic description method based on Gabor synthetic characteristic |
CN106022391A (en) * | 2016-05-31 | 2016-10-12 | 哈尔滨工业大学深圳研究生院 | Hyperspectral image characteristic parallel extraction and classification method |
CN106875543A (en) * | 2017-01-25 | 2017-06-20 | 杭州视氪科技有限公司 | A kind of visually impaired people's bill acceptor system and recognition methods based on RGB D cameras |
WO2018192023A1 (en) * | 2017-04-21 | 2018-10-25 | 深圳大学 | Method and device for hyperspectral remote sensing image classification |
CN108596197A (en) * | 2018-05-15 | 2018-09-28 | 汉王科技股份有限公司 | A kind of seal matching process and device |
CN110599665A (en) * | 2018-06-13 | 2019-12-20 | 深圳兆日科技股份有限公司 | Paper pattern recognition method and device, computer equipment and storage medium |
CN110084754A (en) * | 2019-06-25 | 2019-08-02 | 江苏德劭信息科技有限公司 | A kind of image superimposing method based on improvement SIFT feature point matching algorithm |
CN110472479A (en) * | 2019-06-28 | 2019-11-19 | 广州中国科学院先进技术研究所 | A kind of finger vein identification method based on SURF feature point extraction and part LBP coding |
CN110941989A (en) * | 2019-10-18 | 2020-03-31 | 北京达佳互联信息技术有限公司 | Image verification method, image verification device, video verification method, video verification device, equipment and storage medium |
CN110930398A (en) * | 2019-12-09 | 2020-03-27 | 嘉兴学院 | Log-Gabor similarity-based full-reference video quality evaluation method |
Non-Patent Citations (3)
Title |
---|
Shihab Hamad Khaleefah, Mohammad Faidzul Nasrudin, Salama A. Mostafa. Fingerprinting of Deformed Paper Images Acquired by Scanners. 2015 IEEE Student Conference on Research and Development (SCOReD), 2016, pp. 393-397. * |
Geometric distortion correction algorithm for watermarked images based on SIFT transform; Li Zhenhong, Wu Huizhong; Computer Engineering and Design, (12): 231-233. * |
Frequent bill fraud cases: paper texture puts "an extra lock" on bills; Xiang Nan; China Anti-Counterfeiting Report, Issue 03: p. 108. * |
Also Published As
Publication number | Publication date |
---|---|
CN111680549A (en) | 2020-09-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111680549B (en) | A paper pattern identification method | |
CN110598699A (en) | Anti-counterfeiting bill authenticity distinguishing system and method based on multispectral image | |
Saber et al. | A survey on image forgery detection using different forensic approaches | |
CN103761799B (en) | A kind of bill anti-counterfeit method based on texture image feature and device | |
CN101582162B (en) | Art identifying method based on texture analysis | |
BRPI0812547B1 (en) | METHOD AND DEVICE FOR PREVENTING FAKE COPIES OF PRINTED DOCUMENTS | |
CN106778823A (en) | A kind of readings of pointer type meters automatic identifying method | |
CN101894260A (en) | Recognition method of counterfeit seal based on random generation of feature lines based on matching feature points | |
CN102542660A (en) | Bill anti-counterfeiting identification method based on bill watermark distribution characteristics | |
CN102324134A (en) | Valuable document identification method and device | |
İmamoğlu et al. | Detection of copy-move forgery using krawtchouk moment | |
CN102156884A (en) | Straight segment detecting and extracting method | |
MX2008000768A (en) | Counterfeit deterrence using dispersed miniature security marks. | |
CN102609947B (en) | Forgery detection method for spliced and distorted digital photos | |
Zheng et al. | A system for identifying an anti-counterfeiting pattern based on the statistical difference in key image regions | |
Trung et al. | Blind inpainting forgery detection | |
CN102306415A (en) | Portable valuable file identification device | |
Iqbal et al. | Automatic signature extraction from document images using hyperspectral unmixing: Automatic signature extraction using hyperspectral unmixing | |
Rathee et al. | Feature fusion for fake Indian currency detection | |
CN106599923A (en) | Detecting method and detecting device for stamped anti-counterfeiting characteristic | |
CN109087234A (en) | Watermark embedding method and device in a kind of text image | |
Benhamza et al. | Image forgery detection review | |
Du et al. | Image copy-move forgery detection based on SIFT-BRISK | |
Zhao et al. | Effective digital image copy-move location algorithm robust to geometric transformations | |
US7676058B2 (en) | System and method for detection of miniature security marks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address | Address after: Room 01, 21st Floor, Building 1, Huigu Space, No. 206 Laowuhuang Road, Guandong Street, Wuhan Donghu New Technology Development Zone, Wuhan City, Hubei Province, 430223; Patentee after: Xiaophoton (Wuhan) Technology Co.,Ltd.; Patentee after: HUAZHONG University OF SCIENCE AND TECHNOLOGY; Address before: 430000 science and technology building, 243 Luoyu Road, Donghu Development Zone, Wuhan City, Hubei Province; Patentee before: CONVERGENCE TECHNOLOGY Co.,Ltd.; Patentee before: HUAZHONG University OF SCIENCE AND TECHNOLOGY |