CN102682266B - A Method of Reading Cylindrical Two-Dimensional Barcodes Based on Image Mosaic - Google Patents
- Publication number: CN102682266B (application CN201210152638.8A)
- Authority: CN (China)
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
- Classifications: Image Analysis; Image Processing
Abstract
The invention proposes a method for reading cylindrical two-dimensional barcodes based on image stitching. The steps include rotating the cylinder to capture a group of barcode images, correcting uneven illumination, enhancing barcode edge information, identifying the barcode images and the barcode position, dividing the barcode into modules, coarse registration, fine registration, and stitching, fusion, and barcode recognition. Unlike existing two-dimensional barcode readers, which recognize a barcode from a single image, the method captures the complete cylindrical barcode and eliminates cylindrical distortion and uneven illumination. The stitching and fusion algorithm, designed around the characteristics of Data Matrix barcode images, addresses the low accuracy and low efficiency typical of barcode image stitching, enabling fast and accurate reading of cylindrical two-dimensional barcodes.
Description
Technical Field

The invention relates to the technical field of cylindrical two-dimensional barcode reading, and in particular to a method for reading cylindrical two-dimensional barcodes based on image stitching.
Background

Fast and accurate reading of two-dimensional barcodes marked directly on product surfaces is the basis of product lifecycle management and information tracking, and is key to improving inventory-management efficiency and enabling information collection and real-time tracking during production. The two-dimensional Data Matrix barcode is currently the most common choice for permanent product marking because of its large encoding capacity, high density, and strong error-correction capability.

Current two-dimensional barcode reading methods use a CCD camera to capture a single image containing the barcode, process the image to remove the background and locate the barcode region, and then extract the barcode data. Existing readers can only handle barcode images that are complete, lie in a plane, are only slightly deformed, and are evenly illuminated, for example the handheld Dataman 7500 from Cognex (USA) and the enclosed MATRIX2000 reader from Germany. Cylindrical products, however, are common in practice, and a barcode marked on a cylinder differs greatly from a flat barcode, making it difficult to read, as follows:
1. A two-dimensional barcode marked on a cylinder is distorted by the curvature of the surface, changing the size ratios of the barcode.
2. A barcode image captured from a cylinder generally shows some degree of uneven illumination. If the surface is smooth, especially a metal surface, capture produces strong specular highlights that destroy the underlying information completely.
3. A single image captured from a cylinder is often incomplete, because part of the barcode is occluded by the surface itself or the capture angle is unsuitable.
4. All of these effects become more pronounced as the cylinder diameter decreases and the barcode marked on it grows larger.
These differences make two-dimensional barcodes on cylinders difficult or even impossible to read. Mature barcode readers generally achieve low read rates on cylindrical barcodes, which severely limits the use of two-dimensional barcode technology for identifying and tracking cylindrical products. Existing research takes two approaches to improving the read rate and accuracy of cylindrical barcodes. The first is to enhance the marking process or protect the mark in use. For example, the article "Optimization of Process Parameters for Laser Direct Marking of Two-dimensional Barcodes on Part Surfaces" by Xie Zhifeng et al. (China Mechanical Engineering, Issue 05, 2011) describes how to optimize marking parameters to improve mark quality. Such measures improve and preserve the contrast and quality of barcodes on metal surfaces, but they do not address the underlying difficulty of reading from a cylinder, so the results achieved are limited. The second is to combine auxiliary hardware with image fusion, but existing fusion methods ignore the specific characteristics of two-dimensional barcode images; applied directly to cylindrical barcode images, they are inefficient and error-prone. For example, the patent application 201110100489.6, "A reading device and reading method for two-dimensional barcodes marked directly on metal cylinders", rotates the cylinder to capture images and eliminates specular highlights in hardware, but its software processing is idealized and not practical.
Summary of the Invention

Technical Problem to Be Solved

To solve the problems of the prior art, the invention proposes a method for reading cylindrical two-dimensional barcodes based on image stitching. By studying the characteristics of Data Matrix barcode images marked on cylinders, a new image-stitching method is designed that overcomes the uneven illumination, cylindrical distortion, and incomplete capture encountered when reading cylindrical barcodes, so that a high-quality barcode image can be obtained quickly and accurately, the complete cylindrical surface can be captured, and the efficiency of cylindrical barcode reading is improved.
Technical Solution

The technical solution of the invention is as follows.

The method for reading cylindrical two-dimensional barcodes based on image stitching comprises the following steps:
Step 1: Continuously capture N images {Mvtemp_n | n = 0, ..., N-1} of the two-dimensional barcode; together, the N images contain the complete barcode information. Each image has width w and height h, and pix^n_(i,j) denotes the pixel value at column i, row j of the n-th image Mvtemp_n.
Step 2: Correct the uneven illumination of the images:

Step (2-1): Arbitrarily select from {Mvtemp_n} an image Mvtemp containing barcode information, and traverse upward from the middle row h/2 of Mvtemp, computing for each row j the vertical gradient Grad(j), the summed absolute difference between row j and the row above it; the maximum vertical gradient is reached at row y_up.

Step (2-2): Compute the illuminance array of the background region of Mvtemp, where the background region is the block from column 0 to column w-1 and from row y_up to row y_up+β, with β taken between 10 and h-y_up; Light(i) denotes the illuminance of column i of the background region, computed as the mean pixel value of that column within the background rows.

Step (2-3): Compute the average illuminance Light_avg of the background region of Mvtemp, the mean of Light(i) over all columns.

Step (2-4): Inversely correct the uneven illumination of every image in the sequence {Mvtemp_n} by scaling each pixel with the ratio Light_avg / Light(i) of its column.
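The illumination-correction pass of Step 2 can be sketched in Python as follows. The per-column background illuminance and the multiplicative inverse correction are reconstructions of formulas omitted from this text; the function name `correct_illumination` and the clipping to [0, 255] are assumptions, not the patent's exact procedure.

```python
import numpy as np

def correct_illumination(images, y_up, beta=10):
    """Inverse illumination correction sketch for Step 2.

    `images` is a list of equally sized grayscale arrays (h, w); the rows
    y_up .. y_up+beta of the reference image are assumed to be background.
    """
    out = []
    ref = images[0].astype(np.float64)
    # Step (2-2): per-column illuminance of the background band.
    light = ref[y_up:y_up + beta, :].mean(axis=0)
    light = np.maximum(light, 1e-6)          # guard against division by zero
    # Step (2-3): average illuminance of the background region.
    light_avg = light.mean()
    # Step (2-4): scale every column of every image by light_avg / light(i).
    for im in images:
        g = im.astype(np.float64) * (light_avg / light)
        out.append(np.clip(g, 0.0, 255.0))
    return out
```

On a synthetic image whose columns carry a linear illumination ramp, the corrected background becomes flat, which is the intended effect of the inverse correction.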
Step 3: Apply the Roberts operator to the image sequence {Mvtemp_n} to extract edge contour information, then add the edge strength back onto the original images to enhance the barcode edges.
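Step 3's edge enhancement can be sketched with the standard Roberts cross operator. Adding the gradient magnitude back onto the image and clipping to [0, 255] is an assumed reading of "adding the edge strength information to the original image"; the |g1| + |g2| magnitude approximation is a common Roberts variant.

```python
import numpy as np

def roberts_enhance(img):
    """Roberts cross edges added back onto the image (Step 3 sketch)."""
    f = img.astype(np.float64)
    gx = f[:-1, :-1] - f[1:, 1:]     # diagonal difference
    gy = f[:-1, 1:] - f[1:, :-1]     # anti-diagonal difference
    edge = np.zeros_like(f)
    edge[:-1, :-1] = np.abs(gx) + np.abs(gy)   # gradient magnitude
    return np.clip(f + edge, 0.0, 255.0)
```

Flat regions are left unchanged while module boundaries are amplified, which helps the later gradient-based localization steps.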
Step 4: Identify the barcode images and the barcode position:

Step (4-1): Compute the row-projection data of every image in {Mvtemp_n}. For the n-th image this yields a set of projection values Pro(j), the sum of the pixel values in row j, and the minimum value Pro_min of the set.

Step (4-2): Shift the projection data down by Pro_min as a whole, and compute the maximum and the mean of the shifted data. Apply weighted mean filtering and median filtering with a 5×1 sliding window to the shifted data, then apply threshold segmentation so that each value is snapped to either the plateau level or the baseline.

Step (4-3): Fit the processed projection data with a piecewise linear function with fitting variables a_1, b_1, a_2, b_2 and segmentation points x_1, x_2, using least squares, which yields a fitting error. In the n-th image, let y^n_up be the value of x_1 at which the fitting error is minimal, and let y^n_down be the value of x_2 at which the fitting error is minimal.

Step (4-4): Repeat steps (4-1) to (4-3) for every image in the sequence to obtain y^n_up and y^n_down for each image, giving the arrays {y^n_up} and {y^n_down}.

Step (4-5): Compute the vertical gradient of every image at its own y^n_up position, giving an array of gradient values, and compute the average of that array. Starting from n = 0, compare the values with the average in turn until the value at n = η exceeds it; starting from n = N-1, compare backward until the value at n = κ exceeds it. The images Mvtemp_η to Mvtemp_κ of the sequence are the barcode images.

Step (4-6): Compute the average y_s of the barcode heights over all barcode images. Take any one barcode image and compute its vertical gradient at the y_up position and at the y_down position. If the gradient at y_down is larger, crop every barcode image upward from its own y_down position, keeping the y_s rows above it as the new barcode image; if the gradient at y_up is larger, crop every barcode image downward from its own y_up position, keeping the y_s rows below it. Store the newly obtained barcode images in their original order as {NMvtemp_n | n = 0, ..., M-1}; the width w is unchanged and the height is y_s.
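The normalization of the projection profile in step (4-2) can be sketched as below. The weighted-mean template and the segmentation function are not reproduced in this text, so the [1, 2, 4, 2, 1]/10 weights and the half-maximum snap threshold are assumptions; only the shift-smooth-snap structure comes from the text.

```python
import numpy as np

def normalize_profile(pro):
    """Step (4-2) sketch: shift, smooth, and snap a convex projection."""
    p = np.asarray(pro, dtype=np.float64)
    p = p - p.min()                               # shift down by the minimum
    kernel = np.array([1, 2, 4, 2, 1]) / 10.0     # assumed 1x5 weighted-mean template
    p = np.convolve(p, kernel, mode="same")
    # 5-tap median filter (the 5x1 sliding window), edges padded by replication.
    q = np.pad(p, 2, mode="edge")
    p = np.array([np.median(q[i:i + 5]) for i in range(len(p))])
    # Assumed segmentation: snap each value to the plateau level or to zero.
    return np.where(p > p.max() / 2.0, p.max(), 0.0)
```

The result is a clean two-level profile whose plateau edges are the candidates for the piecewise-linear fit of step (4-3).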
Step 5: Determine the division of the barcode into modules:

Step (5-1): Arrange the images {NMvtemp_n} horizontally in order, vertically aligned, and merge them into one image NMvtemp of height y_s and width Mw. Compute the vertical gradient projection of NMvtemp, where NGrad(j) denotes the vertical gradient of row j of NMvtemp.

Step (5-2): From the set of candidate module divisions C = {L×L}, select a division l×l ∈ C, giving a set of module row split points H = {h_m}, the interior row boundaries of an l×l module grid of height y_s.

Step (5-3): Compute the vertical gradient of NMvtemp at each split point h_m in H, and take the average of these gradient values as the gradient value of the division l×l ∈ C.

Step (5-4): Repeat steps (5-2) and (5-3) to compute the gradient values of all divisions in C, and take the division p×p with the largest gradient value as the horizontal module division of the barcode.
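Steps (5-2) to (5-4) amount to scoring each candidate module grid by the image's row gradients at the grid's split points and keeping the best-scoring grid. A sketch, with an assumed candidate set and an assumed row-gradient formula (summed absolute row-to-row difference):

```python
import numpy as np

def best_module_division(merged, candidates=(10, 12, 14, 16)):
    """Pick the l x l division whose split points sit on the strongest
    row gradients of the merged image (Step 5 sketch)."""
    f = merged.astype(np.float64)
    ys = f.shape[0]
    # Gradient at each row boundary: summed |row s - row s-1|.
    row_grad = np.abs(np.diff(f, axis=0)).sum(axis=1)
    best, best_score = None, -1.0
    for l in candidates:
        # Interior row split points of an l x l module grid of height ys.
        splits = [round(m * ys / l) for m in range(1, l)]
        score = float(np.mean([row_grad[s - 1] for s in splits]))
        if score > best_score:
            best, best_score = l, score
    return best
```

A correctly chosen grid places every split point on a module boundary, where the row gradient peaks, so the true division scores highest.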
Step 6: Coarse registration of the barcode images:

Step (6-1): Convert each image NMvtemp_n of {NMvtemp_n} into a p×w data matrix X_n whose element x^n_(k,i) is the mean pixel value of column i within module row k, where M_s denotes the vertical module size, M_s = y_s / p.

Step (6-2): Step-overlap the data matrices X_n and X_(n+1) of adjacent images, with a step size δ_g of 1 to 5 columns, and compute the mean square error S²(g_n) of the elements in the overlapped region at each overlap, where S²(g_n) denotes the mean square error when the matrices overlap by g_n columns.

Step (6-3): Compute S²(g_n) for every overlap in the stepping process of X_n and X_(n+1), and take the overlap column counts corresponding to the three smallest mean square errors as the coarse registration positions between the adjacent images NMvtemp_n and NMvtemp_(n+1), recorded as the set G_n.

Step (6-4): Repeat steps (6-2) and (6-3) for every pair of adjacent images in the sequence to obtain the coarse registration position sequence {G_n}.
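Steps (6-2) and (6-3) can be sketched as a stepping-overlap search scored by mean square error. The normalization of S² and the helper name `coarse_register` are assumptions; the structure (step the overlap, score each position, keep the three best) follows the text.

```python
import numpy as np

def coarse_register(xa, xb, step=3, keep=3):
    """Overlap the right edge of data matrix `xa` with the left edge of
    `xb` in increments of `step` columns; keep the `keep` overlap widths
    with the smallest mean square error (Step 6 sketch)."""
    w = xa.shape[1]
    scores = {}
    for g in range(step, w + 1, step):           # overlap g columns at a time
        diff = xa[:, w - g:] - xb[:, :g]
        scores[g] = float((diff ** 2).mean())    # S^2(g): mean square error
    # The `keep` overlap widths with the smallest error, best first.
    return sorted(scores, key=scores.get)[:keep]
```

Returning several candidates rather than one makes the search robust: the fine registration of Step 7 then refines around each candidate.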
Step 7: Fine registration of the barcode images:

Step (7-1): Using a classical similarity measure, compute the matching degree R_n(g_n) of the adjacent images NMvtemp_n and NMvtemp_(n+1) at each matching position g_n; the number of fine-tuning columns δ is taken as δ_g + 1.

Step (7-2): Take the maximum of R_n(g_n) as the best matching degree between the adjacent images NMvtemp_n and NMvtemp_(n+1), recorded as R_n = max{R_n(g_n)}, and take the position g_n corresponding to max{R_n(g_n)} as the fine registration position between the two images, recorded as C_n = g_n.

Step (7-3): Repeat steps (7-1) and (7-2) to compute the best matching degree and fine registration position of every pair of adjacent images in the sequence, obtaining the best-match array {R_n} and the fine registration position array {C_n}.
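The text calls the Step 7 measure a "classical similarity measure" without naming it; the sketch below assumes zero-mean normalized cross-correlation (NCC) and searches ±δ columns around each coarse overlap from Step 6.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation (assumed similarity measure)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def fine_register(xa, xb, coarse_positions, delta):
    """Step 7 sketch: refine each coarse overlap width within +-delta
    columns and return the best (overlap, matching degree) pair."""
    w = xa.shape[1]
    best_g, best_r = None, -2.0
    for g0 in coarse_positions:
        for g in range(max(1, g0 - delta), min(w, g0 + delta) + 1):
            r = ncc(xa[:, w - g:], xb[:, :g])
            if r > best_r:
                best_g, best_r = g, r
    return best_g, best_r
```

NCC is bounded in [-1, 1] and invariant to brightness offsets, which suits barcode strips whose residual illumination differences survive Step 2.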
Step 8: Image stitching, fusion, and barcode recognition:

Step (8-1): Traverse the best-match array {R_n} and take the two positions n_1 and n_2 with the smallest matching degrees as segmentation points, dividing the image sequence into three parts.

Step (8-2): Stitch and fuse the images of each part at their fine registration positions using the gradual-in, gradual-out weighted average method, giving the three composite images Part_0, Part_1, Part_2 with widths w_0, w_1, w_2 and height y_s; the fine registration positions between the three images are PC_0 and PC_1.

Step (8-3): Compute the horizontal gradient of the image NMvtemp_0, where HGrad(i) denotes the horizontal gradient of column i; the position x_l of the maximum value of the resulting array is the boundary between the barcode region and the blank region in NMvtemp_0.

Step (8-4): Likewise compute the horizontal gradient of the image NMvtemp_(M-1); the position x_r of its maximum is the boundary between the barcode region and the blank region in NMvtemp_(M-1).

Step (8-5): The width of the stitched result of Part_0, Part_1, Part_2 is w_m = x_l + y_s + (w - x_r) and the height is y_s. Create an image memory buffer of size w_m × y_s, place Part_0 at its left side and Part_2 at its right side, and test the condition |(w_m - w_0 - w_2) - (w_1 - PC_0 - PC_1)| ≤ 10. If the condition holds, place Part_1 into the buffer at the fine registration positions PC_0 and PC_1 relative to Part_0 and Part_2, blend the overlapping regions with the gradual-in, gradual-out weighted average, and go to step (8-7); otherwise go to step (8-6).

Step (8-6): In the buffer, place Part_1 between the two images, starting with a w-column overlap with Part_0, and step it to the right until it overlaps Part_2 by w columns. At each step, merge the regions where Part_1 overlaps Part_0 and Part_2 into a joint overlap region and compute its matching degree as in step (7-1). Take the position with the maximum joint matching degree as the registration position, place Part_1 there, and blend the overlapping regions with the gradual-in, gradual-out weighted average.

Step (8-7): Crop the stitched barcode image in the buffer by width, keeping the columns x_l to x_l + y_s, to obtain a new y_s × y_s Data Matrix barcode image with module size M_s × M_s. Read the barcode information from the new image with a decoding system that decodes and corrects errors according to the Data Matrix decoding principle and the Reed-Solomon error-correction algorithm.
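The "gradual-in, gradual-out" weighted average used throughout Step 8 is a linear cross-fade over the overlapping columns. A minimal sketch, assuming two grayscale strips of equal height and a known overlap width:

```python
import numpy as np

def blend_stitch(left, right, overlap):
    """Gradual-in, gradual-out stitching of two strips (Step 8 sketch):
    in the overlap, the left weight falls linearly from 1 to 0 while the
    right weight rises from 0 to 1; other columns are copied through."""
    h, wl = left.shape
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap))
    out[:, :wl - overlap] = left[:, :wl - overlap]   # left-only columns
    out[:, wl:] = right[:, overlap:]                 # right-only columns
    t = np.linspace(0.0, 1.0, overlap)               # cross-fade weights
    out[:, wl - overlap:wl] = (1 - t) * left[:, wl - overlap:] + t * right[:, :overlap]
    return out
```

The cross-fade suppresses visible seams at the registration positions, which keeps module edges smooth for the decoder.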
Beneficial Effects

The cylindrical two-dimensional barcode reading method proposed by the invention replaces the single-image recognition principle of existing readers: it captures the complete cylindrical barcode and eliminates cylindrical distortion and uneven illumination. The stitching and fusion algorithm, designed around the characteristics of Data Matrix barcode images, addresses the low accuracy and low efficiency typical of barcode image stitching and achieves fast, accurate reading of cylindrical barcodes. According to the inventors' preliminary statistics, reading a cylindrical two-dimensional barcode takes 2 to 4 seconds on average without the method and only 0.4 to 0.8 seconds with it, a 3- to 5-fold speedup, together with a greatly improved read rate.
Brief Description of the Drawings

Fig. 1: Flow chart of the invention;
Fig. 2: Actual convex data profile;
Fig. 3: Normalized convex data profile;
Fig. 4: Processing stages of the embodiment.
Detailed Description of the Embodiments

The invention is described below with reference to a specific embodiment.

In this example the object is a Φ6 cylindrical metal part, and a sequence of images is captured with an MV1300 camera under the reading fixture. The camera parameters are: shutter speed 10 µs, gain 60, high-speed acquisition mode. The overall flow of the technical solution is shown in Fig. 1.
Step 1: Under the reading fixture, narrow the baffle slit until the captured barcode images contain no specular highlights, then rotate the cylinder and continuously capture N images {Mvtemp_n} of the barcode, which together contain the complete barcode information. Each image has width w = 97 and height h = 384, and pix^n_(i,j) denotes the pixel value at column i, row j of the n-th image Mvtemp_n. In this embodiment eight images are captured, as shown in Fig. 4(a).
Step 2: Correct the uneven illumination of the images:

This step uses the illuminance variation of the background region of the image to correct the uneven illumination of the barcode region, as follows:

Step (2-1): Arbitrarily select from {Mvtemp_n} an image Mvtemp containing barcode information, and traverse upward from the middle row h/2 of Mvtemp, computing for each row j the vertical gradient Grad(j); the maximum vertical gradient is reached at row y_up.

Step (2-2): Compute the illuminance array of the background region of Mvtemp, where the background region is the block from column 0 to column w-1 and from row y_up to row y_up+β, with β taken between 10 and h-y_up; Light(i) denotes the illuminance of column i of the background region.

Step (2-3): Compute the average illuminance Light_avg of the background region of Mvtemp.

Step (2-4): Inversely correct the uneven illumination of every image in the sequence {Mvtemp_n}.
In this embodiment the pixels of an image column lie on the same arc of the cylinder; under the parallel strip-shaped white light sources on both sides, illumination is uniform along the vertical direction of the image, while the horizontal illuminance decreases gradually from both sides toward the middle. Based on this regularity, the illuminance variation of the blank region of the image is used to correct the uneven illumination of the barcode region. The rectangular blank region computed in the fourth image of the sequence spans 15 to 35 vertically and 0 to 233 horizontally, and its average illuminance is computed from it. The inversely corrected image is shown in Fig. 4(b).
Step 3: Apply the Roberts operator to the image sequence {Mvtemp_n} to extract edge contour information, then add the edge strength back onto the original images to enhance the barcode edges. The enhanced image is shown in Fig. 4(c).
Step 4: Identify the barcode images and the barcode position:

This step identifies which images contain a barcode, which do not, and the vertical position of the barcode within the images that contain one, by fitting the gray-level projection of each image, as follows:

Step (4-1): Compute the row-projection data of every image in {Mvtemp_n}. For the n-th image this yields a set of projection values Pro(j) and their minimum Pro_min. Because the projection of the barcode region differs from that of the background, the data form a raised, convex-shaped profile, referred to as convex data, as shown in Fig. 2.

Step (4-2): To reduce the amount of computation, shift the projection data down by Pro_min as a whole, and compute the maximum and the mean of the shifted data. Apply weighted mean filtering and median filtering with a 5×1 sliding window to the shifted data, further suppressing noise and contamination, then apply threshold segmentation to remove the jagged edges of the convex profile. This normalization greatly reduces the irregularity of the convex data, as shown in Fig. 3.

Step (4-3): Fit the processed projection data with a piecewise linear function with fitting variables a_1, b_1, a_2, b_2 and segmentation points x_1, x_2, using least squares, which yields a fitting error. In the n-th image, let y^n_up be the value of x_1 at which the fitting error is minimal, and let y^n_down be the value of x_2 at which the fitting error is minimal.

Step (4-4): Repeat steps (4-1) to (4-3) to obtain y^n_up and y^n_down for every image in the sequence, giving the arrays {y^n_up} and {y^n_down}.

Step (4-5): Compute the vertical gradient of every image at its own y^n_up position, giving an array of gradient values, and compute the average of that array. Starting from n = 0, compare the values with the average in turn until the value at n = η exceeds it; starting from n = N-1, compare backward until the value at n = κ exceeds it. The images Mvtemp_η to Mvtemp_κ of the sequence are the barcode images.

Step (4-6): Compute the average y_s of the barcode heights over all barcode images. Take any one barcode image and compute its vertical gradient at the y_up position and at the y_down position. If the gradient at y_down is larger, crop every barcode image upward from its own y_down position, keeping the y_s rows above it; if the gradient at y_up is larger, crop every barcode image downward from its own y_up position, keeping the y_s rows below it. Store the new barcode images in their original order as {NMvtemp_n}; the width w is unchanged and the height is y_s.
In this embodiment, step 4 identifies which images contain a barcode and the vertical barcode position within them by fitting the gray-level projection of each image. Computing, normalizing, and fitting the projected convex data yields the position of minimum fitting error in each image.
步骤5:确定条码模块的划分方式:Step 5: Determine the division method of the barcode module:
本步骤根据图像竖直梯度投影,通过判断最佳预划分梯度的方式实现条码水平模块划分,过程如下:In this step, according to the vertical gradient projection of the image, the horizontal module division of the barcode is realized by judging the optimal pre-division gradient. The process is as follows:
Step (5-1): Arrange the images {NMvtemp_n} horizontally in order, vertically aligned, and merge them into a single image NMvtemp of height y_s and width Mw. Compute the vertical gradient projection of NMvtemp:

Grad(j) = Σ_{i=0}^{Mw−1} | pix(i, j) − pix(i, j−1) |,  1 ≤ j ≤ y_s − 1

where Grad(j) denotes the vertical gradient of row j in image NMvtemp, and pix(i, j) denotes the pixel value at column i, row j of NMvtemp.
Step (5-2): From the set of candidate two-dimensional barcode module divisions C = {L×L}, select a division l×l ∈ C and obtain the set of vertical module split points H = {h_m | h_m = m · y_s / l, m = 1, …, l−1}.
Step (5-3): Compute the vertical gradient value Grad(h_m) of image NMvtemp at every module split point h_m ∈ H, and take the average of these gradient values as the gradient score of the division l×l ∈ C.
Step (5-4): Repeat steps (5-2) to (5-3) to compute the gradient score of every division in C, and take the division p×p with the largest score as the horizontal module division of the two-dimensional barcode.
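Steps (5-1) to (5-4) can be sketched as follows. This is a simplified NumPy sketch: the candidate set, the rounding of split rows, and the function name are assumptions, since the patent's formulas for Grad(j) and H survive only in part:

```python
import numpy as np

def best_division(img, candidates=(10, 12, 14)):
    """Score each candidate division l x l by the mean vertical gradient at
    its module split rows and return the best l (steps (5-2)-(5-4)).
    `img` is the merged strip NMvtemp as a 2-D array (y_s rows, Mw cols)."""
    ys = img.shape[0]
    # Row-wise vertical gradient projection: G[j] = sum_i |pix(i, j+1) - pix(i, j)|
    G = np.abs(np.diff(img.astype(float), axis=0)).sum(axis=1)
    best_l, best_score = None, -1.0
    for l in candidates:
        splits = [round(m * ys / l) for m in range(1, l)]  # assumed split rows
        score = float(np.mean([G[h - 1] for h in splits]))
        if score > best_score:
            best_l, best_score = l, score
    return best_l
```

A correct division places every split row on a module boundary, where the gradient projection peaks, so the true division collects the largest average score.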
In this embodiment, the horizontal module division is determined from the vertical gradient projection waveform of the image by scoring each candidate pre-division as described above. From the resulting gradient scores, 12×12 is the best horizontal module division of this two-dimensional barcode; the division result is shown in Figure 4(e).
Step 6: Coarse registration of the barcode images:
After the steps above, the offset between images exists only in the horizontal direction; the horizontal registration between images is carried out in two stages, coarse and fine. In this step each image is converted into a data matrix according to the horizontal module division, and the coarse registration between adjacent images is obtained by matching the data matrices. The process is as follows:
Step (6-1): Convert each image NMvtemp_n in {NMvtemp_n} into a p×w data matrix:

X_n(k, i) = (1 / M_s) Σ_{j = k·M_s}^{(k+1)·M_s − 1} pix_n(i, j),  0 ≤ k < p, 0 ≤ i < w

where X_n denotes the data matrix of image NMvtemp_n, X_n(k, i) is the element in row k, column i of that matrix, and M_s is the vertical module size, M_s = y_s / p. In step 6 and the following steps, pix_n(i, j) denotes the pixel value at column i, row j of the n-th image NMvtemp_n.
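The conversion of step (6-1) can be sketched as follows. The patent's exact conversion formula was lost in reproduction, so a per-module column mean is assumed here; the function name is hypothetical:

```python
import numpy as np

def to_data_matrix(img, p):
    """Reduce a barcode strip of height y_s to a p x w data matrix by
    averaging each column over each module row of height M_s = y_s / p
    (step (6-1), under the per-module-mean assumption stated above)."""
    ys, w = img.shape
    Ms = ys // p
    # group the rows into p module rows of height Ms, then average each group
    return img[:Ms * p].astype(float).reshape(p, Ms, w).mean(axis=1)
```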
Step (6-2): Step the data matrices X_n and X_{n+1} of two adjacent images over each other in overlap increments, with the step column count δ_g ranging from 1 to 5 columns; at each overlap compute the mean square error S²(g_n) of the corresponding elements in the overlap region, where S²(g_n) denotes the mean square error when the data matrices overlap by g_n columns.
Step (6-3): Compute all S²(g_n) produced while stepping X_n over X_{n+1}, and take the overlap column counts corresponding to the three smallest mean square errors as the coarse registration positions between the adjacent images NMvtemp_n and NMvtemp_{n+1}, recording them.
Step (6-4): Repeat steps (6-2) to (6-3) to coarsely register every pair of adjacent images in {NMvtemp_n}, obtaining the sequence of coarse registration positions.
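Steps (6-2) and (6-3) can be sketched as follows. Interpreting S² as the mean squared element-wise difference between the overlapping columns is an assumption, since the patent's formula was lost; the function name and the `max_overlap` parameter are hypothetical:

```python
import numpy as np

def coarse_register(Xa, Xb, max_overlap=40):
    """Step the right edge of X_n over the left edge of X_{n+1} and return
    the three overlap widths with the smallest mean square error of the
    corresponding elements (steps (6-2)/(6-3))."""
    scores = {}
    for g in range(1, max_overlap + 1):
        a = Xa[:, -g:].astype(float)   # trailing g columns of X_n
        b = Xb[:, :g].astype(float)    # leading g columns of X_{n+1}
        scores[g] = float(np.mean((a - b) ** 2))
    return sorted(scores, key=scores.get)[:3]
```

Keeping the three best candidates rather than one makes the coarse stage robust to near-ties, which the fine registration of step 7 then resolves.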
Step 7: Fine registration of the barcode images:
In this step the best matching degree is found by fine-tuning around each coarse registration position, which finally determines the fine registration between two adjacent images. The specific process is as follows:
Step (7-1): Using a classical similarity measure, compute the matching degree R_n(g_n) of the two adjacent images NMvtemp_n and NMvtemp_{n+1} at matching position g_n. The number of fine-tuning columns δ is set to δ_g + 1.
Step (7-2): Take the maximum of R_n(g_n) as the best matching degree between the adjacent images NMvtemp_n and NMvtemp_{n+1}, denoted R_n = max{R_n(g_n)}, and take the position g_n at which the maximum is attained as the fine registration position between them, recorded as C_n = g_n.
Step (7-3): Repeat steps (7-1) to (7-2) to compute the best matching degree and the fine registration position of every pair of adjacent images in the sequence, obtaining the array of best matching degrees and the array of fine registration positions.
In this embodiment, steps 6 and 7 build the image data matrices and determine the coarse registration between images by the minimum mean square error, then determine the best matching degree and the fine registration position between images by the classical similarity measure.
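The fine-registration search of steps (7-1) and (7-2) can be sketched as follows. The patent only says "classical similarity measure", so normalized cross-correlation (NCC) is assumed here; the function names are hypothetical:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches
    (an assumed instance of the 'classical similarity measure')."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def fine_register(img_a, img_b, coarse, delta):
    """Search overlap widths within +/- delta columns of each coarse
    registration position and return (best_overlap, best_score),
    as in steps (7-1)/(7-2)."""
    best_g, best_r = None, -np.inf
    for g0 in coarse:
        for g in range(max(1, g0 - delta), g0 + delta + 1):
            r = ncc(img_a[:, -g:], img_b[:, :g])
            if r > best_r:
                best_g, best_r = g, r
    return best_g, best_r
```

Because the coarse stage already narrows the search to a few candidate overlaps, the fine stage evaluates only a handful of pixel-level correlations per image pair, which keeps the registration fast.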
Step 8: Image stitching, fusion, and barcode recognition:
Based on the fine registration positions and matching degrees between the images, this step first stitches and fuses the images into three segments, then fills them into a pre-allocated two-dimensional barcode image buffer to complete the stitching and fusion, and finally reads the two-dimensional barcode information. The process is as follows:
Step (8-1): Traverse the array of best matching degrees; take the two positions n_1 and n_2 with the smallest matching degrees as segmentation points, and divide the image sequence into three parts.
With the best-matching-degree data of this embodiment, the three parts are {NMvtemp_0, NMvtemp_1}, {NMvtemp_2} and {NMvtemp_3, NMvtemp_4, NMvtemp_5}.
Step (8-2): Within each part, stitch and fuse the images at their fine registration positions using the fade-in/fade-out weighted average method, obtaining three composite images Part_0, Part_1 and Part_2, shown in Figure 4 (f1), (f2) and (f3). Their widths are w_0 = 172, w_1 = 97 and w_2 = 166, their height is y_s = 311, and the fine registration positions between the three images are PC_0 = 41 and PC_1 = 43.
Step (8-3): Compute the horizontal gradient of image NMvtemp_0, where the gradient of column i is the sum over all rows of the absolute difference between adjacent columns; the position x_l of the maximum of the resulting gradient array is the boundary between the blank region and the barcode region in NMvtemp_0. In this embodiment x_l = 12.
Step (8-4): Compute the horizontal gradient of image NMvtemp_{M−1} in the same way; the position x_r of the maximum of the resulting gradient array is the boundary between the barcode region and the blank region in NMvtemp_{M−1}. In this embodiment x_r = 76.
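The boundary search of steps (8-3) and (8-4) can be sketched as follows. A minimal NumPy sketch; the function name is hypothetical, and the returned index j marks the jump between columns j and j+1:

```python
import numpy as np

def barcode_boundary(img):
    """Index of the column pair with the largest horizontal gradient,
    taken as the boundary between the blank region and the barcode
    region (steps (8-3)/(8-4))."""
    Gx = np.abs(np.diff(img.astype(float), axis=1)).sum(axis=0)
    return int(np.argmax(Gx))
```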
Step (8-5): The stitched result of the three images Part_0, Part_1 and Part_2 has width w_m = x_l + y_s + w − x_r = 344 and height y_s. Allocate an image buffer of size w_m × y_s; place Part_0 at the left edge of the buffer and Part_2 at the right edge. Test the condition |(w_m − w_0 − w_2) − (w_1 − PC_0 − PC_1)| ≤ 10. If it holds, place Part_1 into the buffer at its fine registration positions PC_0 and PC_1 relative to Part_0 and Part_2, fuse the overlap regions with the fade-in/fade-out weighted average method, and go to step (8-7); otherwise go to step (8-6).
In this embodiment |(w_m − w_0 − w_2) − (w_1 − PC_0 − PC_1)| = 7 ≤ 10, so the condition holds: Part_1 is placed into the buffer at the fine registration positions PC_0 and PC_1, completing the stitching and fusion. The result is shown in Figure 4(g).
Step (8-6): In the image buffer, place Part_1 between the two outer images, starting with w columns overlapping Part_0, and step Part_1 to the right until it overlaps Part_2 by w columns. At each step, merge the regions where Part_1 overlaps Part_0 and Part_2 into a joint overlap region and compute the matching degree of that region, using the same method as step (7-1). Take the step position with the largest joint matching degree as the registration position, place Part_1 into the buffer at that position, and fuse the overlap regions with the fade-in/fade-out weighted average method.
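The fade-in/fade-out weighted average used throughout step 8 can be sketched as follows. A minimal sketch, assuming two strips whose trailing/leading `overlap` columns depict the same scene; the function name is hypothetical:

```python
import numpy as np

def blend(left, right, overlap):
    """Fade-in/fade-out weighted average stitching: inside the overlap the
    weight of the left strip falls linearly from 1 to 0 while the weight
    of the right strip rises from 0 to 1, hiding the seam."""
    w = np.linspace(1.0, 0.0, overlap)  # per-column weight of the left strip
    seam = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap], seam, right[:, overlap:]])
```

The linear cross-fade suppresses brightness steps at the seams, which matters here because residual illumination differences between the rotated captures would otherwise corrupt the module sampling during decoding.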
Step (8-7): Crop the stitched two-dimensional barcode image in the buffer by width, keeping the columns from x_l to x_l + y_s, to obtain a new y_s × y_s Data Matrix barcode image with module size M_s × M_s, as shown in Figure 4(h). Read the barcode information from the new Data Matrix image with the decoding system, which decodes it and corrects errors according to the Data Matrix decoding principle and the Reed-Solomon error correction algorithm.
Claims (1)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201210152638.8A CN102682266B (en) | 2012-05-17 | 2012-05-17 | A Method of Reading Cylindrical Two-Dimensional Barcodes Based on Image Mosaic |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201210152638.8A CN102682266B (en) | 2012-05-17 | 2012-05-17 | A Method of Reading Cylindrical Two-Dimensional Barcodes Based on Image Mosaic |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN102682266A CN102682266A (en) | 2012-09-19 |
| CN102682266B true CN102682266B (en) | 2014-06-11 |
Family
ID=46814167
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201210152638.8A Expired - Fee Related CN102682266B (en) | 2012-05-17 | 2012-05-17 | A Method of Reading Cylindrical Two-Dimensional Barcodes Based on Image Mosaic |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN102682266B (en) |
Families Citing this family (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP3104306B2 (en) | 2015-06-11 | 2023-11-01 | Scantrust SA | Two dimensional barcode |
| CN106203564B (en) * | 2016-06-23 | 2019-02-01 | 北京印刷学院 | A kind of generation of the two dimensional code on circle-prism assembly surface and acquisition method |
| CN106529365B (en) * | 2016-12-05 | 2019-09-06 | 广东工业大学 | automatic pricing machine |
| CN108345817A (en) * | 2018-02-06 | 2018-07-31 | 徐州智融图像科技有限公司 | A kind of recognition methods of cylindrical surface Quick Response Code |
| CN111553317B (en) * | 2020-05-14 | 2023-08-08 | 北京惠朗时代科技有限公司 | Anti-fake code acquisition method and device, computer equipment and storage medium |
| CN114936631B (en) * | 2021-04-26 | 2023-06-09 | 华为技术有限公司 | A model processing method and device |
| CN114882370A (en) * | 2022-07-07 | 2022-08-09 | 西安超嗨网络科技有限公司 | Intelligent commodity identification method and device, terminal and storage medium |
| CN117669609A (en) * | 2022-08-30 | 2024-03-08 | 台达电子国际(新加坡)私人有限公司 | Bar code reader |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102156849A (en) * | 2011-04-21 | 2011-08-17 | 西北工业大学 | Reading device and reading method of two-dimensional bar code marked on metal cylindrical surface directly |
| CN102354363A (en) * | 2011-09-15 | 2012-02-15 | 西北工业大学 | Identification method of two-dimensional barcode image on high-reflect light cylindrical metal |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7175090B2 (en) * | 2004-08-30 | 2007-02-13 | Cognex Technology And Investment Corporation | Methods and apparatus for reading bar code identifications |
- 2012-05-17: CN application CN201210152638.8A filed, granted as CN102682266B (status: not active, Expired - Fee Related)
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102156849A (en) * | 2011-04-21 | 2011-08-17 | 西北工业大学 | Reading device and reading method of two-dimensional bar code marked on metal cylindrical surface directly |
| CN102354363A (en) * | 2011-09-15 | 2012-02-15 | 西北工业大学 | Identification method of two-dimensional barcode image on high-reflect light cylindrical metal |
Also Published As
| Publication number | Publication date |
|---|---|
| CN102682266A (en) | 2012-09-19 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| C14 | Grant of patent or utility model | ||
| GR01 | Patent grant | ||
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20140611; Termination date: 20150517 |
| | EXPY | Termination of patent right or utility model | |