
CN102682266B - A Method of Reading Cylindrical Two-Dimensional Barcodes Based on Image Mosaic - Google Patents



Publication number: CN102682266B (application CN201210152638.8A; earlier publication CN102682266A)
Authority
CN
China
Prior art keywords: image, nmvtemp, sigma, barcode, pix
Legal status: Expired - Fee Related
Application number: CN201210152638.8A
Other languages: Chinese (zh)
Other versions: CN102682266A (en)
Inventors: 何卫平, 林清松, 雷蕾, 王伟, 刘涛
Current Assignee: Northwestern Polytechnical University
Original Assignee: Northwestern Polytechnical University
Application filed by Northwestern Polytechnical University
Priority to CN201210152638.8A
Publication of CN102682266A
Application granted
Publication of CN102682266B


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention proposes a method for reading cylindrical two-dimensional barcodes based on image mosaicing. The steps include rotating the cylinder to capture a group of two-dimensional barcode images, correcting uneven illumination, enhancing barcode edge information, identifying the barcode images and the barcode position, dividing the barcode into modules, coarse registration of the barcode images, fine registration of the barcode images, and image stitching, fusion and barcode recognition. Unlike existing two-dimensional barcode reading systems, which recognize a single image, the method captures the complete information of a cylindrical barcode and eliminates the distortion and uneven illumination introduced by the cylindrical surface. The stitching and fusion algorithm, designed around the characteristics of Data Matrix barcode images, resolves the low accuracy and low efficiency encountered when stitching two-dimensional barcode images, enabling fast and accurate reading of cylindrical two-dimensional barcodes.

Description

A Method of Reading Cylindrical Two-Dimensional Barcodes Based on Image Mosaic

Technical Field

The invention relates to the technical field of reading cylindrical two-dimensional barcodes, and in particular to a method for reading cylindrical two-dimensional barcodes based on image mosaicing.

Background

Fast and accurate reading of two-dimensional barcodes marked directly on product surfaces is the basis of product lifecycle management and information tracking, and is the key to improving inventory management efficiency and realizing real-time information collection and tracking in the production process. At present the two-dimensional Data Matrix barcode is most often chosen as the permanent two-dimensional barcode identifier of a product, because the Data Matrix barcode has a large encoding capacity, high density, and strong error correction capability.

Current two-dimensional barcode reading methods all use a CCD camera to capture a single image containing the barcode, then perform a series of processing steps on the image to remove the background, locate the barcode region, and finally extract the barcode data. Existing two-dimensional barcode reading systems can only process barcode images that are complete, flat, only slightly deformed, and evenly illuminated; examples include the handheld Dataman 7500 produced by Cognex in the United States and the enclosed code reader MATRIX2000 produced in Germany. In practice, however, cylindrical products are frequently encountered, and a two-dimensional barcode marked on a cylindrical surface differs greatly from a flat one, making it difficult to read, specifically as follows:

1. For a two-dimensional barcode marked on a cylinder, the curvature of the supporting surface introduces cylindrical distortion, changing the size proportions of the barcode.

2. A two-dimensional barcode image captured from a cylindrical surface generally exhibits some degree of uneven illumination. If the surface is relatively smooth, especially a metal cylinder, capture produces severe specular highlights that cause complete loss of information.

3. A single two-dimensional barcode image captured from a cylinder easily contains incomplete information, because of occlusion by the cylinder or an unsuitable capture angle.

4. The typical differences above become more pronounced as the diameter of the cylindrical product decreases and the size of the two-dimensional barcode marked on the cylinder increases.

These typical differences make two-dimensional barcodes on cylinders difficult, or even impossible, to read. Existing mature two-dimensional barcode readers generally achieve low read rates on cylindrical barcodes, which seriously degrades reading efficiency and thereby limits the application of two-dimensional barcode technology to identification and tracking of cylindrical products. In existing research, two kinds of measures are generally taken to increase the reading efficiency and accuracy of cylindrical two-dimensional barcodes. The first is enhancement during marking or protection during use. For example, the article "Optimization of Process Parameters for Laser Directly Marking Two-dimensional Barcodes on the Surface of Parts", published by Xie Zhifeng et al. in China Mechanical Engineering, Issue 05, 2011, describes how to optimize parameters to improve marking quality. Such measures focus on improving and maintaining the contrast and quality of two-dimensional barcodes on metal surfaces, but do not fundamentally resolve the difficulty of reading cylindrical barcodes, so the results achieved are poor. The second is hardware assistance combined with image fusion; however, existing image fusion methods do not account for the characteristics of two-dimensional barcode images, and applying them directly to cylindrical barcode image fusion yields low efficiency and a high error rate. For example, the patent with application number 201110100489.6, entitled "A reading device and reading method for two-dimensional barcodes directly marked on metal cylinders", realizes rotational capture of cylinder images in hardware and eliminates specular highlights, but its software processing is idealized and not practical.

Summary of the Invention

Technical Problem to Be Solved

To solve the problems of the prior art, the present invention proposes a method for reading cylindrical two-dimensional barcodes based on image mosaicing. By studying the characteristics of Data Matrix barcode images marked on cylinders, a new image stitching method is designed to overcome the uneven illumination, cylindrical distortion, and incomplete information capture encountered when reading cylindrical two-dimensional barcodes, so as to obtain high-quality barcode images quickly and accurately, realize collection of the full cylindrical surface information, and improve the efficiency of reading cylindrical two-dimensional barcodes.

Technical Solution

The technical scheme of the present invention is as follows:

The method for reading cylindrical two-dimensional barcodes based on image mosaicing is characterized by comprising the following steps:

Step 1: Continuously capture N images $\{Mvtemp_n\}_{n=0}^{N-1}$ of the two-dimensional barcode. The N images together contain all the information of the barcode. Each image has width w and height h, and $pix_{i,j}^{n}$ denotes the pixel value at column i, row j of the n-th image $Mvtemp_n$.

Step 2: Correct the uneven illumination of the images:

Step (2-1): Arbitrarily select one image Mvtemp containing barcode information from $\{Mvtemp_n\}$, and traverse upward from the middle row h/2 of the image to compute the vertical gradient of each row:

$$grad_j^{Mvtemp} = \sum_{i=0}^{w}\left(pix_{i,j}^{Mvtemp} - pix_{i,j+1}^{Mvtemp}\right),\quad j \in (h/2,\ h)$$

where $grad_j^{Mvtemp}$ denotes the vertical gradient of row j of the image Mvtemp; the maximum vertical gradient is attained at row $y_{up}$.

Step (2-2): Compute the illuminance array of the background region of the image Mvtemp:

$$I_i^{Mvtemp} = \frac{1}{\beta+1}\sum_{j=y_{up}}^{y_{up}+\beta} pix_{i,j}^{Mvtemp},\quad i \in [0,\ w)$$

where the background region of the image Mvtemp is the region from column 0 to column w−1 and from row $y_{up}$ to row $y_{up}+\beta$, with β taken in the range 10 to $h-y_{up}$; $I_i^{Mvtemp}$ denotes the illuminance of column i of the background region.

Step (2-3): Compute the average illuminance $\bar{I}^{Mvtemp}$ of the background region of the image Mvtemp:

$$\bar{I}^{Mvtemp} = \frac{1}{(\beta+1)\,w}\sum_{i=0}^{w-1}\sum_{j=y_{up}}^{y_{up}+\beta} pix_{i,j}^{Mvtemp}$$

Step (2-4): Inversely correct the uneven illumination of every image in the sequence $\{Mvtemp_n\}_{n=0}^{N-1}$:

$$pix_{i,j}^{n} = pix_{i,j}^{n}\cdot \bar{I}^{Mvtemp}/I_i^{Mvtemp},\quad n\in[0,N-1],\ i\in[0,w-1],\ j\in[0,h-1]$$
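The correction of steps (2-2) to (2-4) can be sketched as follows in Python. This is a hypothetical sketch, not the patent's implementation: images are nested lists of pixel rows (`img[j][i]` is column i, row j), the function name is illustrative, and the background band is normalised over its β+1 rows.

```python
def correct_illumination(images, y_up, beta):
    """Estimate per-column background illuminance from one reference image
    and inversely rescale every image in the sequence (steps 2-2 to 2-4)."""
    ref = images[0]                      # any image containing barcode info
    h, w = len(ref), len(ref[0])
    # per-column illuminance of the background band rows [y_up, y_up + beta]
    col_illum = [sum(ref[j][i] for j in range(y_up, y_up + beta + 1)) / (beta + 1)
                 for i in range(w)]
    mean_illum = sum(col_illum) / w      # average background illuminance
    # inverse correction: scale each column by mean / column illuminance
    return [[[img[j][i] * mean_illum / col_illum[i] for i in range(w)]
             for j in range(h)] for img in images]
```

Columns that were captured brighter than average are darkened and dimmer columns brightened, flattening the illumination profile across the cylinder.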

Step 3: Apply the Roberts operator to the image sequence $\{Mvtemp_n\}$ to extract edge contour information, then add the edge strength back onto the original images, enhancing the barcode edge information.
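A minimal sketch of this enhancement step: a Roberts cross gradient on a nested-list grayscale image, with the gradient magnitude added back onto the pixel. The function name and the `weight` parameter are assumptions of this sketch, not specified by the patent.

```python
def roberts_enhance(img, weight=1.0):
    """Roberts cross gradient magnitude added back onto the image (step 3)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]        # copy; border pixels stay unchanged
    for j in range(h - 1):
        for i in range(w - 1):
            gx = img[j][i] - img[j + 1][i + 1]      # diagonal difference
            gy = img[j][i + 1] - img[j + 1][i]      # anti-diagonal difference
            out[j][i] = img[j][i] + weight * (abs(gx) + abs(gy))
    return out
```

In a real pipeline the result would additionally be clipped to the valid pixel range (e.g. 0–255); that is omitted here for brevity.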

Step 4: Identify the barcode images and the barcode position:

Step (4-1): Compute the horizontal projection data of each image in the sequence $\{Mvtemp_n\}$:

$$avg_j^{n} = \frac{1}{w}\sum_{i=0}^{w} pix_{i,j}^{n},\quad j\in[0,\ h)$$

For the n-th image this yields a set of horizontal projection data $\{avg_j^{n}\}$; compute its minimum $avg_{min}^{n}$.

Step (4-2): Shift the horizontal projection data $\{avg_j^{n}\}$ downward as a whole by $avg_{min}^{n}$, then compute the maximum $avg_{max}^{n}$ and the mean $\overline{avg}^{n}$ of the shifted projection data. Apply weighted mean filtering and median filtering to the shifted projection data, where the weighted mean filter template is

$$\frac{1}{9}\times\begin{bmatrix}1 & 2 & 3 & 2 & 1\end{bmatrix}$$

and the median filter uses a 5×1 sliding window. Then apply threshold segmentation to the filtered horizontal projection data $\{avg_j^{n}\}$.

Step (4-3): Fit the horizontal projection data processed in step (4-2). The fitting function has the form:

$$y = \begin{cases} a_1 & (0 \le x < x_1)\\ b_1 & (x_1 \le x \le h/2)\\ a_2 & (h/2 < x \le x_2)\\ b_2 & (x_2 < x < h)\end{cases}$$

where $a_1, b_1, a_2, b_2$ are the fitting variables of the piecewise linear fitting function and $x_1, x_2$ are its segmentation points. Using least-squares fitting, the fitting errors are:

$$\hat S^2_{min}(x_1^n) = \sum_{i=0}^{h/2}\left(avg_i^n\right)^2 - \frac{\left(\sum_{i=0}^{x_1^n-1} avg_i^n\right)^2}{x_1^n} - \frac{\left(\sum_{i=x_1^n}^{h/2} avg_i^n\right)^2}{h/2 - x_1^n + 1}$$

$$\hat S^2_{min}(x_2^n) = \sum_{i=h/2+1}^{h-1}\left(avg_i^n\right)^2 - \frac{\left(\sum_{i=h/2+1}^{x_2^n} avg_i^n\right)^2}{x_2^n - h/2} - \frac{\left(\sum_{i=x_2^n+1}^{h-1} avg_i^n\right)^2}{h - x_2^n - 1}$$

For the n-th image, take the value $x_1^n$ at which the fitting error $\hat S^2_{min}(x_1^n)$ is minimal and let $y_u^n = x_1^n$; take the value $x_2^n$ at which the fitting error $\hat S^2_{min}(x_2^n)$ is minimal and let $y_d^n = x_2^n$.
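The closed-form fitting errors above are equivalent to scanning every candidate segmentation point of a two-level step fit and keeping the one with the least residual. A brute-force sketch under that reading (function name and interface are illustrative, not from the patent):

```python
def best_split(avg, lo, hi):
    """Least-squares two-level step fit of avg[lo:hi+1]: returns the split x
    minimising the residual error (the closed form of S^2_min in step 4-3)."""
    best_x, best_err = None, float('inf')
    for x in range(lo + 1, hi):          # candidate segmentation point
        left, right = avg[lo:x], avg[x:hi + 1]
        m1 = sum(left) / len(left)       # best constant for the left segment
        m2 = sum(right) / len(right)     # best constant for the right segment
        err = (sum((v - m1) ** 2 for v in left)
               + sum((v - m2) ** 2 for v in right))
        if err < best_err:
            best_x, best_err = x, err
    return best_x, best_err
```

Applied to the projection profile, the split in the upper half gives $y_u^n$ and the split in the lower half gives $y_d^n$.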

Step (4-4): Repeat steps (4-1) to (4-3) to obtain $y_u^n$ and $y_d^n$ for every image in the sequence $\{Mvtemp_n\}$, yielding the arrays $\{y_u^n\}$ and $\{y_d^n\}$.

Step (4-5): Compute, for each image in the sequence, the vertical gradient at its own $y_d^n$ position:

$$grad_{y_d^n}^{n} = \sum_{i=0}^{w-1}\left|pix_{i,\,y_d^n}^{n} - pix_{i,\,y_d^n+1}^{n}\right|$$

yielding the array $\{grad_{y_d^n}^{n}\}$; compute the average $\overline{grad}$ of this array. Starting from the entry n = 0 of the array and moving backward through it, compare each entry with $\overline{grad}$ until the entry at n = η exceeds $\overline{grad}$; starting from the entry n = N−1 and moving forward, compare each entry with $\overline{grad}$ until the entry at n = κ exceeds $\overline{grad}$. In the image sequence $\{Mvtemp_n\}$, the images $Mvtemp_\eta$ to $Mvtemp_\kappa$ are the barcode images.

Step (4-6): Compute the average $y_s$ of the $y_d^n - y_u^n$ values over all barcode images. Take any one of the barcode images and compute its vertical gradients at the $y_u$ position and at the $y_d$ position. If the vertical gradient at $y_u$ is larger than that at $y_d$, crop every barcode image upward from its own $y_d^n$ position, keeping the rows $[y_d^n - y_s,\ y_d^n]$ as the new barcode image; if the vertical gradient at $y_d$ is larger than that at $y_u$, crop every barcode image downward from its own $y_u^n$ position, keeping the rows $[y_u^n,\ y_u^n + y_s]$ as the new barcode image. Store the newly obtained barcode images in their original order as $\{NMvtemp_n\}_{n=0}^{M-1}$; the image width w is unchanged and the height is $y_s$.

Step 5: Determine the division of the barcode modules:

Step (5-1): Arrange $\{NMvtemp_n\}$ horizontally in order and merge them, vertically aligned, into a single image NMvtemp of height $y_s$ and width Mw. Compute the vertical gradient projection of the image NMvtemp:

$$ygrad_j^{NMvtemp} = \sum_{i=0}^{Mw-1}\left|pix_{i,j}^{NMvtemp} - pix_{i,j+1}^{NMvtemp}\right|,\quad j\in[0,\ y_s)$$

where $ygrad_j^{NMvtemp}$ denotes the vertical gradient of row j of the image NMvtemp.

Step (5-2): Select a division l×l ∈ C from the set of two-dimensional barcode module divisions C = {L×L}, obtaining a set of vertical module segmentation points:

$$H = \left\{h_m \,\middle|\, h_m = \frac{y_s}{l}\times m\right\},\quad m = 1,\dots,l-1$$

Step (5-3): Compute the vertical gradient value $ygrad_{h_m}^{NMvtemp}$ of the image NMvtemp at each module segmentation point $h_m$ in the set H, and take the average of all these gradient values as the gradient value of the division l×l ∈ C.

Step (5-4): Repeat steps (5-2) to (5-3) to compute the gradient values of all divisions in the set C, and take the division p×p with the largest gradient value as the horizontal module division of the two-dimensional barcode.
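Steps (5-2) to (5-4) amount to scoring each candidate module count by the average row gradient at its equally spaced split rows and keeping the best. A sketch under that reading (function name, rounding of split rows, and the candidate list are assumptions of this sketch):

```python
def best_module_division(ygrad, y_s, candidates):
    """Pick the module count l whose equally spaced split rows carry the
    largest average vertical gradient (steps 5-2 to 5-4)."""
    best_l, best_score = None, float('-inf')
    for l in candidates:
        rows = [round(y_s * m / l) for m in range(1, l)]  # the set H
        score = sum(ygrad[r] for r in rows) / len(rows)   # mean gradient at H
        if score > best_score:
            best_l, best_score = l, score
    return best_l
```

True module boundaries align with rows of strong gradient, so the correct division scores highest.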

Step 6: Coarse registration of the barcode images:

Step (6-1): Convert each image $NMvtemp_n$ in $\{NMvtemp_n\}$ into a p×w data matrix:

$$X^n = \left[\left(x_{k,i}^{n} = \frac{1}{M_s}\sum_{j=kM_s}^{(k+1)M_s} pix_{i,j}^{n}\right)_{i=0}^{w-1}\right]_{k=0}^{p-1}$$

where $X^n$ denotes the data matrix corresponding to the image $NMvtemp_n$, $x_{k,i}^{n}$ denotes the element in row k, column i of the matrix, and $M_s$ denotes the vertical size of a barcode module, $M_s = y_s/p$.

Step (6-2): Step the data matrices $X^n, X^{n+1}$ of two adjacent images in $\{NMvtemp_n\}$ over each other, with the step size $\delta_g$ in the range of 1 to 5 columns; at each overlap compute the mean square error $S^2(g_n)$ of the elements of the overlapping region:

$$S^2(g_n) = \frac{1}{p\,g_n}\left(\sum_{k=0}^{p-1}\sum_{i=0}^{g_n}\left(x_{k,i}^{n+1} - x_{k,\,w-i-g_n}^{n}\right)^2\right),\quad 1 \le g_n \le w$$

where $S^2(g_n)$ denotes the mean square error when the data matrices overlap by $g_n$ columns.

Step (6-3): Compute all $S^2(g_n)$ during the stepping of $X^n, X^{n+1}$, and record the overlap column counts $g_1^n, g_2^n, g_3^n$ corresponding to the three smallest mean square errors as the coarse registration positions between the adjacent images $NMvtemp_n$ and $NMvtemp_{n+1}$.

Step (6-4): Repeat steps (6-2) to (6-3), performing coarse registration on every pair of adjacent images in the sequence $\{NMvtemp_n\}$ to obtain the sequence of coarse registration positions.
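A sketch of the coarse registration of steps (6-2) and (6-3), assuming two data matrices as nested lists. The sketch tries every overlap width up to a bound and compares the overlapping columns in reading order (the patent's $w-i-g_n$ index mirrors the overlap; the alignment used here is an assumption of this sketch, as is the function name):

```python
def coarse_registration(Xa, Xb, max_overlap):
    """Score each overlap width g by the mean square difference of the
    g rightmost columns of Xa against the g leftmost columns of Xb,
    and return the three best overlaps (steps 6-2 and 6-3)."""
    p, w = len(Xa), len(Xa[0])
    scores = {}
    for g in range(1, max_overlap + 1):
        s = sum((Xb[k][i] - Xa[k][w - g + i]) ** 2
                for k in range(p) for i in range(g))
        scores[g] = s / (p * g)          # mean square error at overlap g
    return sorted(scores, key=scores.get)[:3]
```

The three returned overlap widths are the candidate positions handed to the fine registration of step 7.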

Step 7: Fine registration of the barcode images:

Step (7-1): Using the classical similarity measure, compute the matching degree $R_n(g_n)$ of two adjacent images $NMvtemp_n$ and $NMvtemp_{n+1}$ at the matching position $g_n$:

$$R_n(g_n) = \frac{\displaystyle\sum_{i=0}^{g_n-1}\sum_{j=0}^{y_s-1}\left(pix_{w-i-g_n,\,j}^{n}\times pix_{i,j}^{n+1}\right)}{\sqrt{\displaystyle\sum_{i=0}^{g_n-1}\sum_{j=0}^{y_s-1}\left(pix_{w-i-g_n,\,j}^{n}\right)^2}\,\sqrt{\displaystyle\sum_{i=0}^{g_n-1}\sum_{j=0}^{y_s-1}\left(pix_{i,j}^{n+1}\right)^2}},\quad g_n \in U(g_1^n,\delta)\cap U(g_2^n,\delta)\cap U(g_3^n,\delta)$$

The number of fine-tuning columns δ is taken as $\delta_g + 1$.

Step (7-2): Take the maximum of the matching degree $R_n(g_n)$ as the best matching degree between the adjacent images $NMvtemp_n$ and $NMvtemp_{n+1}$, recorded as $R_n = \max\{R_n(g_n)\}$, and take the position $g_n$ corresponding to $\max\{R_n(g_n)\}$ as the fine registration position between the two images, recorded as $C_n = g_n$.

Step (7-3): Repeat steps (7-1) to (7-2) to compute the best matching degree and the fine registration position for every pair of adjacent images in the sequence $\{NMvtemp_n\}$, obtaining the best-matching-degree array $\{R_n\}$ and the fine-registration-position array $\{C_n\}$.
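The similarity measure of step (7-1) is the standard normalised cross-correlation of the overlapping columns. A sketch, again aligning the g rightmost columns of the left image with the g leftmost columns of the right image in reading order (names and interface are illustrative):

```python
def ncc(img_a, img_b, g):
    """Normalised cross-correlation of the g rightmost columns of img_a
    against the g leftmost columns of img_b (the measure R_n of step 7-1)."""
    h, w = len(img_a), len(img_a[0])
    num = sq_a = sq_b = 0.0
    for j in range(h):
        for i in range(g):
            a = img_a[j][w - g + i]
            b = img_b[j][i]
            num += a * b
            sq_a += a * a
            sq_b += b * b
    return num / (sq_a ** 0.5 * sq_b ** 0.5)

def fine_registration(img_a, img_b, candidates):
    """Step 7-2: keep the candidate overlap with the highest matching degree."""
    return max(candidates, key=lambda g: ncc(img_a, img_b, g))
```

Identical overlapping regions score exactly 1.0, so the true overlap width stands out among the coarse candidates.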

Step 8: Image stitching, fusion and barcode recognition:

Step (8-1): Traverse the best-matching-degree array $\{R_n\}$ and take the two positions $n_1$ and $n_2$ with the smallest matching degrees as segmentation points, dividing the image sequence $\{NMvtemp_n\}$ into three parts.

Step (8-2): Stitch and fuse the images of each part at the fine registration positions between them, using the gradual-in/gradual-out weighted average method, to obtain the three composite images $Part_0$, $Part_1$, $Part_2$ of widths $w_0$, $w_1$, $w_2$ respectively and height $y_s$; the fine registration positions between the three images are $PC_0 = C_{n_1}$ and $PC_1 = C_{n_2}$.
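The gradual-in/gradual-out weighted average used here ramps the weight of the left image down while the right image ramps up across the overlap. A sketch for nested-list images (the exact ramp endpoints and the function name are assumptions of this sketch):

```python
def blend(img_a, img_b, overlap):
    """Stitch img_b onto the right of img_a, blending the `overlap` shared
    columns with linearly ramped weights (gradual-in/gradual-out, step 8-2)."""
    h = len(img_a)
    wa, wb = len(img_a[0]), len(img_b[0])
    out_w = wa + wb - overlap
    offset = wa - overlap                # column where img_b starts
    out = [[0.0] * out_w for _ in range(h)]
    for j in range(h):
        for i in range(out_w):
            if i < offset:               # left-only region
                out[j][i] = img_a[j][i]
            elif i >= wa:                # right-only region
                out[j][i] = img_b[j][i - offset]
            else:                        # overlap: ramp a -> b
                t = (i - offset + 1) / (overlap + 1)
                out[j][i] = (1 - t) * img_a[j][i] + t * img_b[j][i - offset]
    return out
```

The ramp hides small intensity mismatches at the seam that a hard cut would leave visible.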

Step (8-3): Compute the horizontal gradient of the image $NMvtemp_0$ in $\{NMvtemp_n\}$:

$$xgrad_i^{NMvtemp_0} = \sum_{j=0}^{y_s-1}\left(pix_{i+1,j}^{NMvtemp_0} - pix_{i,j}^{NMvtemp_0}\right),\quad i\in[0,\ w)$$

where $xgrad_i^{NMvtemp_0}$ denotes the horizontal gradient of column i of the image $NMvtemp_0$. This yields an array $\{xgrad_i^{NMvtemp_0}\}$; compute its maximum. The position $x_l$ of the maximum is the boundary between the barcode region and the blank region in the image $NMvtemp_0$.

Step (8-4): Compute the horizontal gradient of the image $NMvtemp_{M-1}$ in $\{NMvtemp_n\}$:

$$xgrad_i^{NMvtemp_{M-1}} = \sum_{j=0}^{y_s-1}\left(pix_{i,j}^{NMvtemp_{M-1}} - pix_{i+1,j}^{NMvtemp_{M-1}}\right),\quad i\in[0,\ w)$$

where $xgrad_i^{NMvtemp_{M-1}}$ denotes the horizontal gradient of column i of the image $NMvtemp_{M-1}$. This yields an array $\{xgrad_i^{NMvtemp_{M-1}}\}$; compute its maximum. The position $x_r$ of the maximum is the boundary between the barcode region and the blank region in the image $NMvtemp_{M-1}$.

Step (8-5): After stitching and fusing the three images $Part_0$, $Part_1$, $Part_2$, the width is $w_m = x_l + y_s + w - x_r$ and the height is $y_s$. Create an image memory buffer of size $w_m \times y_s$; place the image $Part_0$ at the left side of the buffer and the image $Part_2$ at the right side. Test the condition $|(w_m - w_0 - w_2) - (w_1 - PC_0 - PC_1)| \le 10$: if it is satisfied, place the image $Part_1$ into the buffer at the fine registration positions $PC_0$, $PC_1$ with respect to $Part_0$ and $Part_2$, complete the stitching and fusion of the overlapping regions with the gradual-in/gradual-out weighted average method, and go to step (8-7); if it is not satisfied, go to step (8-6).

Step (8-6): In the image memory buffer, place $Part_1$ between the two images, starting at the position where it overlaps $Part_0$ by w columns, then step $Part_1$ to the right until it overlaps $Part_2$ by w columns. During the stepping motion, merge the regions where $Part_1$ overlaps $Part_0$ and $Part_2$ respectively into a joint overlapping region and compute the matching degree of the joint overlapping region, using the same matching degree computation as in step (7-1). Take the position with the largest joint matching degree encountered during the stepping motion as the registration position, place the image $Part_1$ into the buffer at that position, and complete the stitching and fusion of the overlapping regions with the gradual-in/gradual-out weighted average method.

Step (8-7): Crop the two-dimensional barcode image obtained by stitching and fusion in the image memory buffer along its width, keeping the part from $x_l$ to $x_l + y_s$, to obtain a new $y_s \times y_s$ Data Matrix barcode image with module size $M_s \times M_s$. Use the decoding system to read the barcode information in the new Data Matrix image; the decoding system decodes it and corrects errors according to the decoding principle and the Reed-Solomon error correction algorithm.

Beneficial Effects

The cylindrical two-dimensional barcode reading technique based on image mosaicing proposed by the present invention changes the single-image recognition principle of existing two-dimensional barcode reading systems: it captures the complete information of the cylindrical barcode and eliminates the distortion and uneven illumination of the cylindrical surface. Designed around the characteristics of Data Matrix barcode images, the stitching and fusion algorithm resolves the low accuracy and low efficiency that arise when stitching two-dimensional barcode images, enabling fast and accurate reading of cylindrical two-dimensional barcodes. According to the inventors' preliminary statistics, reading a cylindrical two-dimensional barcode takes 2 to 4 seconds on average without the present reading method, but only 0.4 to 0.8 seconds with it, a 3- to 5-fold efficiency improvement, together with a greatly increased read rate.

Brief Description of the Drawings

Figure 1: flow chart of the present invention;

Figure 2: the actual convex-profile data;

Figure 3: the convex-profile data after normalization;

Figure 4: the processing stages in the embodiment;

Detailed Description of the Embodiments

The present invention is described below with reference to a specific embodiment:

This example uses a Φ6 cylindrical metal part; under the reading fixture, a sequence of images is captured with an MV1300 camera. The camera parameters are: shutter speed 10 μs, gain 60, acquisition speed set to high. The overall flow of the technical solution that achieves the above purpose is shown in Figure 1.

Step 1: Under the reading fixture, narrow the baffle gap until the captured barcode images contain no specular highlights, then rotate the cylinder and continuously capture N images of the two-dimensional barcode, $\{Mvtemp_n\}_{n=0}^{N-1}$. The N images together contain the complete information of the two-dimensional barcode. Each image has width w = 97 and height h = 384, and $pix_{i,j}^{n}$ denotes the pixel value at column i, row j of the n-th image $Mvtemp_n$. In this embodiment eight images were captured, as shown in Figure 4(a).

Step 2: Correct the uneven illumination of the images:

This step uses the illuminance variation of the background region of an image to correct the uneven illumination of the barcode region, as follows:

Step (2-1): Arbitrarily choose an image Mvtemp containing barcode information from $\{Mvtemp_n\}_{n=0}^{N-1}$ and traverse upward from the middle row h/2 of image Mvtemp, computing the vertical gradient:

$$\mathrm{grad}_j^{Mvtemp} = \sum_{i=0}^{w} \left( pix_{i,j}^{Mvtemp} - pix_{i,j+1}^{Mvtemp} \right), \quad j \in (h/2, h)$$

where $\mathrm{grad}_j^{Mvtemp}$ denotes the vertical gradient of row j of image Mvtemp; the maximum vertical gradient is attained at row $y_{up}$;

Step (2-2): Compute the illuminance array of the background region of image Mvtemp:

$$I_i^{Mvtemp} = \frac{1}{20} \sum_{j=y_{up}}^{y_{up}+\beta} pix_{i,j}^{Mvtemp}, \quad i \in [0, w)$$

where the background region of image Mvtemp is the area from column 0 to column w−1 and from row $y_{up}$ to row $y_{up}+\beta$, with β taken in the range 10 to $h-y_{up}$; $I_i^{Mvtemp}$ denotes the illuminance of column i of the background region of image Mvtemp;

Step (2-3): Compute the average illuminance $\bar{I}^{Mvtemp}$ of the background region of image Mvtemp:

$$\bar{I}^{Mvtemp} = \frac{1}{(\beta+1)w} \sum_{i=0}^{w-1} \sum_{j=y_{up}}^{y_{up}+\beta} pix_{i,j}^{Mvtemp}$$

Step (2-4): Inversely correct the uneven illumination of every image in the sequence $\{Mvtemp_n\}_{n=0}^{N-1}$:

$$pix_{i,j}^{n} = pix_{i,j}^{n} \cdot \bar{I}^{Mvtemp} / I_i^{Mvtemp}, \quad n \in [0, N-1],\; i \in [0, w-1],\; j \in [0, h-1]$$

In this embodiment the pixels of each image column lie on the same arc of the cylinder, so under the parallel strip white light sources on both sides the illumination is uniform vertically, while horizontally the illuminance decreases gradually from the two sides toward the middle. Based on this regularity, the embodiment uses the illuminance variation of a blank region of the image to correct the uneven illumination of the barcode region. The rectangular blank region computed in the fourth image of the sequence spans 15 to 35 vertically and 0 to 233 horizontally, from which the average illuminance $\bar{I}^{Mvtemp}$ is obtained; the inversely corrected images are shown in Figure 4(b).
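The column-wise correction of step 2 can be sketched in a few lines. This is an illustrative sketch rather than the patented implementation: the function name is hypothetical, the column illuminance is taken as the mean over the background strip (instead of the fixed 1/20 factor in the formula above), and 8-bit grayscale images are assumed.

```python
import numpy as np

def correct_illumination(images, ref, y_up, beta):
    """Rescale every column of every image by mean_illuminance / column_illuminance,
    where the illuminances are measured on a background strip (rows y_up..y_up+beta)
    of a reference image ref. Images are h x w uint8 arrays."""
    strip = ref[y_up:y_up + beta + 1, :].astype(np.float64)
    col_illum = strip.mean(axis=0)            # I_i: per-column illuminance
    mean_illum = strip.mean()                 # I-bar: average illuminance
    corrected = []
    for img in images:
        out = img.astype(np.float64) * (mean_illum / col_illum)[None, :]
        corrected.append(np.clip(out, 0, 255).astype(np.uint8))
    return corrected
```

With a reference image whose background strip is brighter on some columns than others, the correction boosts the darker columns and attenuates the brighter ones, flattening the horizontal illuminance profile.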

Step 3: Apply the Roberts operator to the image sequence $\{Mvtemp_n\}_{n=0}^{N-1}$ to extract the edge contour information, then add the edge strength information back into the original images to enhance the barcode edges; the enhanced images are shown in Figure 4(c).
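A minimal sketch of the Roberts-operator enhancement of step 3, under the assumption (not stated in the specification) that the edge magnitude is the sum of absolute diagonal differences and that it is added back with saturation:

```python
import numpy as np

def roberts_enhance(img):
    """Roberts cross-gradient edge magnitude, added back onto the image."""
    f = img.astype(np.int32)
    g1 = np.abs(f[:-1, :-1] - f[1:, 1:])   # diagonal difference
    g2 = np.abs(f[:-1, 1:] - f[1:, :-1])   # anti-diagonal difference
    edge = np.zeros_like(f)
    edge[:-1, :-1] = g1 + g2               # edge strength map
    return np.clip(f + edge, 0, 255).astype(np.uint8)
```

Flat regions are left untouched, while pixels adjacent to a module boundary are pushed brighter, sharpening the transitions that the later gradient projections rely on.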

Step 4: Identify the barcode images and barcode positions:

By fitting the grayscale projection of each image, this step identifies which images contain a barcode, which do not, and the vertical position of the barcode within the images that contain one, as follows:

Step (4-1): Compute the transverse (row-wise) projection data $avg_j^n$ of every image in the sequence $\{Mvtemp_n\}_{n=0}^{N-1}$:

$$avg_j^{n} = \frac{1}{w} \sum_{i=0}^{w} pix_{i,j}^{n}, \quad j \in [0, h)$$

For the n-th image this yields a set of transverse projection data $\{avg_j^n\}$; compute its minimum $avg_{\min}^n$. Because the projection data of the barcode region differ from those of the background, $\{avg_j^n\}$ takes a convex (plateau-like) shape, referred to as convex data, as shown in Figure 2.

Step (4-2): To reduce the amount of computation, translate the transverse projection data $\{avg_j^n\}$ downward as a whole by $avg_{\min}^n$; compute the maximum $avg_{\max}^n$ and the mean $\overline{avg}^n$ of the translated data. Apply weighted mean filtering and median filtering to the translated projection data to further suppress noise, contamination and similar disturbances, the weighted mean filtering template being:

$$\frac{1}{9} \times \begin{bmatrix} 1 & 2 & 3 & 2 & 1 \end{bmatrix}$$

Median filtering uses a 5×1 sliding window. Threshold-segment the filtered projection data to remove the jagged edges of the convex profile.

Through the normalization of step (4-2), the irregularity of the convex data is greatly reduced, as shown in Figure 3.

Step (4-3): Fit the transverse projection data $\{avg_j^n\}$ processed in step (4-2) with a function of the form:

$$y = \begin{cases} a_1 & (0 \le x < x_1) \\ b_1 & (x_1 \le x \le h/2) \\ a_2 & (h/2 < x \le x_2) \\ b_2 & (x_2 < x < h) \end{cases}$$

where $a_1, b_1, a_2, b_2$ are the fitted variables of the piecewise linear fitting function and $x_1, x_2$ are its segmentation points. Least-squares fitting gives the fitting errors:

$$\hat{S}^2_{\min}(x_1^n) = \sum_{i=0}^{h/2} (avg_i^n)^2 - \frac{\left(\sum_{i=0}^{x_1^n-1} avg_i^n\right)^2}{x_1^n} - \frac{\left(\sum_{i=x_1^n}^{h/2} avg_i^n\right)^2}{h/2 - x_1^n + 1}$$

$$\hat{S}^2_{\min}(x_2^n) = \sum_{i=h/2+1}^{h-1} (avg_i^n)^2 - \frac{\left(\sum_{i=h/2+1}^{x_2^n} avg_i^n\right)^2}{x_2^n - h/2} - \frac{\left(\sum_{i=x_2^n+1}^{h-1} avg_i^n\right)^2}{h - x_2^n - 1}$$

In the n-th image, take the $x_1^n$ at which the fitting error $\hat{S}^2_{\min}(x_1^n)$ is minimized and denote it $y_d^n$; take the $x_2^n$ at which $\hat{S}^2_{\min}(x_2^n)$ is minimized and denote it $y_u^n$.

Step (4-4): Repeat steps (4-1) through (4-3) to obtain $y_d^n$ and $y_u^n$ for every image in the sequence $\{Mvtemp_n\}_{n=0}^{N-1}$, giving the arrays $\{y_d^n\}$ and $\{y_u^n\}$.

Step (4-5): For each image in the sequence $\{Mvtemp_n\}_{n=0}^{N-1}$, compute the vertical gradient at its own $y_d^n$ position:

$$\mathrm{grad}_{y_d^n}^{n} = \sum_{i=0}^{w-1} \left| pix_{i,\,y_d^n}^{n} - pix_{i,\,y_d^n+1}^{n} \right|$$

giving the array $\{\mathrm{grad}_{y_d^n}^{n}\}$; compute its mean $\overline{\mathrm{grad}}$. Compare the entries of the array with $\overline{\mathrm{grad}}$ starting from n = 0 and proceeding in increasing n, until the first entry greater than $\overline{\mathrm{grad}}$ is reached, at n = η; compare likewise starting from n = N−1 and proceeding in decreasing n, until the first entry greater than $\overline{\mathrm{grad}}$ is reached, at n = κ. Within the sequence, the images $Mvtemp_\eta$ through $Mvtemp_\kappa$ are the barcode images;

Step (4-6): Compute the mean $y_s$ of the values $y_u^n - y_d^n$ over all barcode images. Take any one barcode image and compute its vertical gradients at the $y_d$ and $y_u$ positions. If the vertical gradient at $y_d$ is greater than that at $y_u$, crop every barcode image upward from its own $y_d$ position, keeping the region of height $y_s$ below $y_d$ as the new barcode image; if the vertical gradient at $y_u$ is greater, crop every barcode image downward from its own $y_u$ position, keeping the region of height $y_s$ above $y_u$ as the new barcode image. Store the newly obtained barcode images in their original order as $\{NMvtemp_n\}_{n=0}^{M-1}$; the image width w is unchanged and the height is $y_s$.

In this embodiment, step 4 identifies which images contain a barcode, which do not, and the vertical position of the barcode in each barcode image by fitting the grayscale projection of every image. Computing, normalizing and fitting the projected convex data gives the minimum-fitting-error positions in each image: $\{y_d^0=153, y_u^0=196\}$, $\{y_d^1=34, y_u^1=345\}$, $\{y_d^2=34, y_u^2=345\}$, $\{y_d^3=35, y_u^3=346\}$, $\{y_d^4=31, y_u^4=342\}$, $\{y_d^5=29, y_u^5=341\}$, $\{y_d^6=31, y_u^6=343\}$, $\{y_d^7=150, y_u^7=200\}$. The gradient test shows that the first and the last image contain no two-dimensional barcode. Cropping the images at the fitted positions gives image height $y_s = 311$ with the width w unchanged, which eliminates the vertical offset between images; the cropped images are shown in Figure 4(d).
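The least-squares breakpoint search of step (4-3) amounts to fitting two constant segments to the projection profile and minimizing the residual; the error formula above is exactly the residual after subtracting each segment's mean contribution. A sketch for one half of the profile (the function name is hypothetical):

```python
def best_breakpoint(avg):
    """Fit two constant segments [0, x1) and [x1, n) to the projection
    profile avg by least squares and return the breakpoint x1 with the
    smallest residual error."""
    n = len(avg)
    total_sq = sum(v * v for v in avg)
    best_x, best_err = None, float("inf")
    for x1 in range(1, n):                  # at least one sample per segment
        s_left = sum(avg[:x1])
        s_right = sum(avg[x1:])
        # residual = sum(a^2) - (sum left)^2/len - (sum right)^2/len
        err = total_sq - s_left ** 2 / x1 - s_right ** 2 / (n - x1)
        if err < best_err:
            best_x, best_err = x1, err
    return best_x
```

On an idealized profile that is flat background followed by a flat barcode plateau, the residual is zero exactly at the true transition, which is what makes the fitted $y_d$ and $y_u$ usable as crop positions.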

Step 5: Determine the division of the barcode modules:

Based on the vertical gradient projection of the image, this step determines the horizontal module division of the barcode by evaluating candidate pre-divisions and selecting the one with the best gradient, as follows:

Step (5-1): Arrange the images $\{NMvtemp_n\}_{n=0}^{M-1}$ horizontally in order, vertically aligned, and merge them into a single image NMvtemp of height $y_s$ and width Mw. Compute the vertical gradient projection of NMvtemp:

$$y\mathrm{grad}_j^{NMvtemp} = \sum_{i=0}^{Mw-1} \left| pix_{i,j}^{NMvtemp} - pix_{i,j+1}^{NMvtemp} \right|, \quad j \in [0, y_s)$$

where $y\mathrm{grad}_j^{NMvtemp}$ denotes the vertical gradient of row j of image NMvtemp, and $pix_{i,j}^{NMvtemp}$ the pixel value at column i, row j of image NMvtemp;

Step (5-2): Select a division l×l ∈ C from the set of two-dimensional barcode module divisions C = {L×L}, giving a set of vertical module division points:

$$H = \left\{ h_m \,\middle|\, h_m = \frac{y_s}{l} \times m \right\}, \quad m = 1, \ldots, l-1$$

Step (5-3): Compute the vertical gradient value $y\mathrm{grad}_{h_m}^{NMvtemp}$ of image NMvtemp at each module division point $h_m$ of the set H, and take the average of all these gradient values as the gradient value of the division l×l ∈ C;

Step (5-4): Repeat steps (5-2) and (5-3) to compute the gradient values of all divisions in C, and take the division p×p with the largest gradient value as the horizontal module division of the two-dimensional barcode.

The set of module divisions used in this embodiment ranges from 8×8 to 24×24. Based on the vertical gradient projection waveform of the image, the horizontal module division is determined by evaluating the pre-division gradients. The gradient values of the candidate divisions are:

Division   8×8     10×10   12×12   14×14   16×16   18×18   20×20   22×22   24×24
Gradient   489.42  328.22  1079.0  333.15  367.33  446.29  349.10  343.33  640.69

From the table, 12×12 is the best horizontal module division of the two-dimensional barcode; the division result is shown in Figure 4(e).
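The selection in step 5 can be sketched as below: for every candidate l×l division, average the image's vertical gradient projection at the l−1 pre-division rows and keep the candidate with the largest average. This is a sketch under assumed array shapes; integer division is used for the division points, and the function name is hypothetical.

```python
import numpy as np

def best_division(nmvtemp, candidates=(8, 10, 12, 14, 16, 18, 20, 22, 24)):
    """Return the l whose l x l pre-division rows carry the strongest
    average vertical gradient in the merged image nmvtemp (ys x Mw)."""
    img = nmvtemp.astype(np.float64)
    ys = img.shape[0]
    # ygrad[j] = sum_i |pix[i, j] - pix[i, j+1]|
    ygrad = np.abs(img[:-1, :] - img[1:, :]).sum(axis=1)
    best_l, best_score = None, -1.0
    for l in candidates:
        rows = [ys * m // l for m in range(1, l)]   # division points h_m
        score = np.mean([ygrad[r] for r in rows])
        if score > best_score:
            best_l, best_score = l, score
    return best_l
```

When the true module pitch lines up with a candidate's division points, every point sits on a module boundary and the average gradient peaks, exactly as 12×12 dominates the table above.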

Step 6: Coarse registration of the barcode images:

After the preceding steps, the offset between images exists only in the horizontal direction; the horizontal registration between images is divided into a coarse stage and a fine stage. This step converts each image into a data matrix according to the divided horizontal barcode modules, then achieves coarse registration between adjacent images by matching the data matrices, as follows:

Step (6-1): Convert every image $NMvtemp_n$ of $\{NMvtemp_n\}_{n=0}^{M-1}$ into a p×w data matrix:

$$X^n = \left[ \left( x_{k,i}^{n} = \frac{1}{M_s} \sum_{j=kM_s}^{(k+1)M_s} pix_{i,j}^{n} \right)_{i=0}^{w-1} \right]_{k=0}^{p-1}$$

where $X^n$ is the data matrix corresponding to image $NMvtemp_n$, $x_{k,i}^n$ the element at row k, column i of the matrix, and $M_s$ the vertical module size, $M_s = y_s / p$. In step 6 and the following steps, $pix_{i,j}^n$ denotes the pixel value at column i, row j of the n-th image $NMvtemp_n$;

Step (6-2): Step-overlap the data matrices $X^n, X^{n+1}$ of two adjacent images of $\{NMvtemp_n\}_{n=0}^{M-1}$, with the step size $\delta_g$ ranging from 1 to 5 columns. For each overlap, compute the mean-square error of the elements in the overlap region:

$$S^2(g_n) = \frac{1}{p\,g_n} \left( \sum_{k=0}^{p-1} \sum_{i=0}^{g_n} \left( X_{k,i}^{n+1} - X_{k,\,w-i-g_n}^{n} \right)^2 \right), \quad 1 \le g_n \le w$$

where $S^2(g_n)$ is the mean-square error when the data matrices overlap by $g_n$ columns;

Step (6-3): Compute all $S^2(g_n)$ over the step-overlap of $X^n, X^{n+1}$, and take the overlap column counts corresponding to the three smallest mean-square errors as the coarse registration positions between the adjacent images $NMvtemp_n$ and $NMvtemp_{n+1}$, denoted $g_1^n, g_2^n, g_3^n$;

Step (6-4): Repeat steps (6-2) and (6-3) to coarsely register every pair of adjacent images in the sequence $\{NMvtemp_n\}_{n=0}^{M-1}$, obtaining the coarse registration position sequence.
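Steps (6-1)–(6-3) reduce each image to a module-averaged data matrix and score candidate overlaps by mean-square error. A simplified sketch (hypothetical function; it aligns the overlap columns directly rather than in the reversed index order of the formula, and it scores every overlap width):

```python
import numpy as np

def coarse_register(Xa, Xb, step=1, top_k=3):
    """Slide data matrix Xb over the right edge of Xa in column steps,
    score each overlap width g by the mean-square difference of the
    overlapped elements, and return the g values of the top_k smallest
    scores as coarse registration candidates."""
    p, w = Xa.shape
    scores = {}
    for g in range(1, w + 1, step):
        diff = Xb[:, :g] - Xa[:, w - g:]    # g rightmost cols of Xa vs g leftmost of Xb
        scores[g] = float((diff ** 2).mean())
    return sorted(scores, key=scores.get)[:top_k]
```

Working on the p×w module averages instead of the full y_s×w pixel arrays is what makes this stage cheap: the fine stage then only has to examine a few columns around each returned candidate.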

Step 7: Fine registration of the barcode images:

This step fine-tunes around each coarse registration position and computes the best matching degree, thereby determining the fine registration between two adjacent images, as follows:

Step (7-1): Using the classical similarity measure, compute the matching degree $R_n(g_n)$ of two adjacent images $NMvtemp_n$ and $NMvtemp_{n+1}$ at the matching position $g_n$:

$$R_n(g_n) = \frac{\displaystyle\sum_{i=0}^{g_n-1} \sum_{j=0}^{y_s-1} \left( pix_{w-i-g_n,\,j}^{n} \times pix_{i,j}^{n+1} \right)}{\sqrt{\displaystyle\sum_{i=0}^{g_n-1} \sum_{j=0}^{y_s-1} \left( pix_{w-i-g_n,\,j}^{n} \right)^2 \sum_{i=0}^{g_n-1} \sum_{j=0}^{y_s-1} \left( pix_{i,j}^{n+1} \right)^2}}, \quad g_n \in U(g_1^n,\delta) \cap U(g_2^n,\delta) \cap U(g_3^n,\delta)$$

The number of fine-tuning columns δ is taken as $\delta_g + 1$;

Step (7-2): Take the maximum of the matching degrees $R_n(g_n)$ as the best matching degree between the adjacent images $NMvtemp_n$ and $NMvtemp_{n+1}$, recorded as $R_n = \max\{R_n(g_n)\}$, and take the position $g_n$ corresponding to $\max\{R_n(g_n)\}$ as the fine registration position between $NMvtemp_n$ and $NMvtemp_{n+1}$, recorded as $C_n = g_n$;

Step (7-3): Repeat steps (7-1) and (7-2) to compute the best matching degree and the fine registration position of every pair of adjacent images in the sequence $\{NMvtemp_n\}_{n=0}^{M-1}$, obtaining the best-matching-degree array $\{R_n\}$ and the fine-registration-position array $\{C_n\}$.

In this embodiment, steps 6 and 7 build the image data matrices and determine the coarse registration between images by the minimum mean-square error, then determine the best matching degree and the fine registration position between every pair of adjacent images by the classical similarity measure.
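The classical similarity measure of step (7-1) is the normalized cross-correlation of the overlapping strips. A sketch that fine-tunes ±δ around each coarse candidate (function names hypothetical; overlap columns are aligned directly):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size regions."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def fine_register(img_a, img_b, candidates, delta):
    """Around each coarse candidate overlap g0, evaluate the NCC of the
    g-column overlap (right edge of img_a vs left edge of img_b) for
    g within +-delta, and return (best_g, best_score)."""
    w = img_a.shape[1]
    best = (None, -1.0)
    for g0 in candidates:
        for g in range(max(1, g0 - delta), min(w, g0 + delta) + 1):
            r = ncc(img_a[:, w - g:], img_b[:, :g])
            if r > best[1]:
                best = (g, r)
    return best
```

When the overlap is exact, the NCC reaches 1 by the Cauchy-Schwarz inequality, so the true overlap width stands out sharply among the fine-tuned candidates.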

Step 8: Image mosaicking, fusion and barcode recognition:

Based on the fine registration positions and matching degrees between images, this step first stitches and fuses the images into three segments, then fills them into a predefined two-dimensional barcode image memory buffer to complete the mosaic, and finally reads the two-dimensional barcode information. The process is as follows:

Step (8-1): Traverse the best-matching-degree array $\{R_n\}$; taking the two positions $n_1$ and $n_2$ with the smallest matching degrees as segmentation points, divide the image sequence $\{NMvtemp_n\}_{n=0}^{M-1}$ into three parts;

According to the best-matching-degree data, the three parts are $\{NMvtemp_0, NMvtemp_1\}$, $\{NMvtemp_2\}$ and $\{NMvtemp_3, NMvtemp_4, NMvtemp_5\}$.

Step (8-2): Stitch and fuse the images of each part at the fine registration positions between them using the fade-in/fade-out weighted average method, obtaining the three composite images $Part_0$, $Part_1$, $Part_2$ shown in Figure 4(f1), (f2), (f3). The widths of the three images are $w_0 = 172$, $w_1 = 97$, $w_2 = 166$, the height is $y_s = 311$, and the fine registration positions between the three images are $PC_0 = 41$ and $PC_1 = 43$;

Step (8-3): Compute the horizontal gradient of image $NMvtemp_0$ of $\{NMvtemp_n\}_{n=0}^{M-1}$:

$$x\mathrm{grad}_i^{NMvtemp_0} = \sum_{j=0}^{y_s-1} \left( pix_{i+1,j}^{NMvtemp_0} - pix_{i,j}^{NMvtemp_0} \right), \quad i \in [0, w)$$

where $x\mathrm{grad}_i^{NMvtemp_0}$ is the horizontal gradient of column i of image $NMvtemp_0$. This gives the array $\{x\mathrm{grad}_i^{NMvtemp_0}\}$; the position $x_l$ of its maximum is the boundary between the blank region and the barcode region in image $NMvtemp_0$; in this embodiment $x_l = 12$.

Step (8-4): Compute the horizontal gradient of image $NMvtemp_{M-1}$ of $\{NMvtemp_n\}_{n=0}^{M-1}$:

$$x\mathrm{grad}_i^{NMvtemp_{M-1}} = \sum_{j=0}^{y_s-1} \left( pix_{i,j}^{NMvtemp_{M-1}} - pix_{i+1,j}^{NMvtemp_{M-1}} \right), \quad i \in [0, w)$$

where $x\mathrm{grad}_i^{NMvtemp_{M-1}}$ is the horizontal gradient of column i of image $NMvtemp_{M-1}$. This gives the array $\{x\mathrm{grad}_i^{NMvtemp_{M-1}}\}$; the position $x_r$ of its maximum is the boundary between the barcode region and the blank region in image $NMvtemp_{M-1}$; in this embodiment $x_r = 76$.

Step (8-5): The width of the mosaic of the three images $Part_0$, $Part_1$, $Part_2$ is $w_m = (x_l + y_s + w - x_r) = 344$, with height $y_s$. Create an image memory buffer of size $w_m \times y_s$; place image $Part_0$ at the left of the buffer and image $Part_2$ at the right. Test the condition $|(w_m - w_0 - w_2) - (w_1 - PC_0 - PC_1)| \le 10$: if it is satisfied, place image $Part_1$ into the buffer at its fine registration positions $PC_0$, $PC_1$ relative to $Part_0$ and $Part_2$, fuse the overlap regions with the fade-in/fade-out weighted average method to complete the mosaic, and go to step (8-7); if it is not satisfied, go to step (8-6);

In this embodiment $|(w_m - w_0 - w_2) - (w_1 - PC_0 - PC_1)| = 7 \le 10$. The condition is satisfied, so image $Part_1$ is placed into the buffer at the fine registration positions $PC_0$, $PC_1$ relative to $Part_0$ and $Part_2$, completing the mosaic; the result is shown in Figure 4(g).
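The fade-in/fade-out weighted average used throughout step 8 blends the g overlapping columns with weights that ramp from the left image to the right image. A minimal sketch under the assumption of a simple linear ramp (the function name is hypothetical):

```python
import numpy as np

def blend_overlap(left, right, g):
    """Stitch two images whose rightmost/leftmost g columns overlap,
    mixing the overlap with linearly ramping weights."""
    lf = left.astype(np.float64)
    rf = right.astype(np.float64)
    w_l = lf.shape[1]
    t = (np.arange(g) + 1) / (g + 1)        # ramp toward the right image
    mixed = (1 - t)[None, :] * lf[:, w_l - g:] + t[None, :] * rf[:, :g]
    return np.hstack([lf[:, :w_l - g], mixed, rf[:, g:]])
```

Because the weights sum to one in every overlap column, the seam carries no brightness step, which keeps the module grid of the fused Data Matrix image decodable.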

Step (8-6): In the image memory buffer, place $Part_1$ between the two images, starting with w of its columns coincident with $Part_0$, then step $Part_1$ to the right until it coincides with $Part_2$ over w columns. During the stepping, merge the regions where $Part_1$ overlaps $Part_0$ and $Part_2$ into a joint overlap region and compute the matching degree of this joint region, using the same matching-degree computation as step (7-1). Take the position at which the joint-overlap matching degree is largest during the stepping as the registration position, place image $Part_1$ into the buffer at that position, and fuse the overlap regions with the fade-in/fade-out weighted average method to complete the mosaic;

Step (8-7): Crop the stitched two-dimensional barcode image in the image memory buffer by width, taking the portion from column $x_l$ to $x_l + y_s$, to obtain a new $y_s \times y_s$ Data Matrix barcode image whose module size is $M_s \times M_s$, as shown in Figure 4(h). Read the barcode information from the new Data Matrix image with the decoding system, which decodes it and corrects errors according to the decoding principle and the Reed-Solomon error-correction algorithm.

Claims (1)

1. A cylindrical two-dimensional barcode reading method based on image mosaicking, characterized by comprising the following steps:
Step 1: Continuously capture N images of the two-dimensional barcode, $\{Mvtemp_n\}_{n=0}^{N-1}$; the N images contain the complete information of the two-dimensional barcode. Each image has width w and height h, and $pix_{i,j}^{n}$ denotes the pixel value at column i, row j of the n-th image $Mvtemp_n$;
Step 2: Correct uneven illumination:

Step (2-1): Choose any image Mvtemp containing barcode information from $\{Mvtemp^n\}_{n=0}^{N-1}$ and, traversing upward from the middle row h/2, compute the longitudinal gradient of each row:

$$grad_j^{Mvtemp} = \sum_{i=0}^{w}\left(pix_{i,j}^{Mvtemp} - pix_{i,j+1}^{Mvtemp}\right), \quad j \in (h/2, h)$$

where $grad_j^{Mvtemp}$ is the longitudinal gradient of row j of image Mvtemp; the maximum longitudinal gradient is attained at row $y_{up}$.

Step (2-2): Compute the illuminance array of the background region of image Mvtemp:

$$I_i^{Mvtemp} = \frac{1}{20}\sum_{j=y_{up}}^{y_{up}+\beta} pix_{i,j}^{Mvtemp}, \quad i \in [0, w)$$

where the background region of Mvtemp is the region from column 0 to column w-1 and from row $y_{up}$ to row $y_{up}+\beta$, with $\beta$ taken in the range 10 to $h-y_{up}$; $I_i^{Mvtemp}$ is the illuminance of column i of the background region.

Step (2-3): Compute the average illuminance $\bar{I}^{Mvtemp}$ of the background region of image Mvtemp:

$$\bar{I}^{Mvtemp} = \frac{1}{(\beta+1)w}\sum_{i=0}^{w-1}\sum_{j=y_{up}}^{y_{up}+\beta} pix_{i,j}^{Mvtemp}$$

Step (2-4): Inversely correct the uneven illumination of every image in the sequence:

$$pix_{i,j}^{n} = pix_{i,j}^{n} \cdot \bar{I}^{Mvtemp} / I_i^{Mvtemp}, \quad n \in [0, N-1],\; i \in [0, w-1],\; j \in [0, h-1]$$
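The column-wise correction of steps (2-2) to (2-4) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function and variable names are invented, and the band average is used in place of the fixed 1/20 divisor.

```python
import numpy as np

def correct_illumination(images, ref, y_up, beta):
    """Column-wise illumination correction (steps 2-2 to 2-4).

    images: list of HxW grayscale arrays; ref: reference image Mvtemp;
    y_up, beta: the background band occupies rows [y_up, y_up + beta].
    All names here are illustrative assumptions."""
    band = ref[y_up:y_up + beta + 1, :].astype(float)  # background rows
    col_illum = band.mean(axis=0)                      # I_i per column
    mean_illum = band.mean()                           # average I-bar
    scale = mean_illum / col_illum                     # per-column gain
    return [np.clip(img * scale, 0, 255) for img in images]

# A flat scene under a linear illumination gradient is flattened out:
h, w = 40, 60
gradient = np.linspace(0.5, 1.5, w)
img = np.full((h, w), 100.0) * gradient
corrected = correct_illumination([img], img, y_up=0, beta=9)[0]
assert np.allclose(corrected.std(axis=1), 0, atol=1e-6)
```

The gain is estimated once from a background band of one reference image and then applied to the whole sequence, which matches the claim's "inverse correction" of every image by the same column profile.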
Step 3: Apply the Roberts operator to the image sequence $\{Mvtemp^n\}_{n=0}^{N-1}$ to extract edge-strength information, then add the edge-strength information back into the original images to enhance their edges.
Step 4: Identify the barcode images and the barcode position:

Step (4-1): Compute the transverse projection data of every image in the sequence $\{Mvtemp^n\}_{n=0}^{N-1}$:

$$avg_j^{n} = \frac{1}{w}\sum_{i=0}^{w} pix_{i,j}^{n}, \quad j \in [0, h)$$

For the n-th image this yields one group of transverse projection data $\{avg_j^n\}_{j=0}^{h-1}$; compute its minimum value $avg_{min}^n$.

Step (4-2): Translate the transverse projection data $\{avg_j^n\}_{j=0}^{h-1}$ downward as a whole by $avg_{min}^n$, and compute the maximum value $avg_{max}^n$ and mean value $avg_{mean}^n$ of the translated data. Apply a weighted-mean filter and a median filter to the translated data, where the weighted-mean filter template is

$$\frac{1}{9} \times [1\ 2\ 3\ 2\ 1]$$

and the median filter uses a 5 × 1 moving window. Then apply threshold segmentation to the filtered transverse projection data, using the segmentation function defined in the original claim (formula image not reproduced here).
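The projection smoothing of steps (4-1) and (4-2) can be sketched as below. This is a sketch under stated assumptions: the boundary handling (reflect padding) and the helper name are not specified by the claim.

```python
import numpy as np

def smooth_projection(avg):
    """Shift, weighted-mean filter (1/9 * [1 2 3 2 1]) and 5-point
    median filter for a transverse projection profile, per steps
    (4-1)/(4-2). Reflect padding at the ends is an assumption."""
    shifted = avg - avg.min()                  # translate down by minimum
    kernel = np.array([1, 2, 3, 2, 1]) / 9.0   # weighted-mean template
    padded = np.pad(shifted, 2, mode="reflect")
    weighted = np.convolve(padded, kernel, mode="valid")
    out = np.empty_like(weighted)
    for j in range(len(weighted)):             # 5x1 moving median window
        lo, hi = max(0, j - 2), min(len(weighted), j + 3)
        out[j] = np.median(weighted[lo:hi])
    return out

profile = np.array([10., 10., 10., 80., 80., 80., 80., 10., 10., 10.])
sm = smooth_projection(profile)
assert sm.min() >= 0 and len(sm) == len(profile)
```

Shifting by the minimum keeps the profile non-negative, and the two filters suppress single-row noise spikes before the threshold segmentation is applied.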
Step (4-3): Fit the transverse projection data processed in step (4-2) with a fitting function of the form

$$y = \begin{cases} a_1 & 0 \le x < x_1 \\ b_1 & x_1 \le x \le h/2 \\ a_2 & h/2 < x \le x_2 \\ b_2 & x_2 < x < h \end{cases}$$

where $a_1, b_1, a_2, b_2$ are the fitting variables of the linear fitting function and $x_1, x_2$ are its breakpoints. Using least-squares fitting, the fitting errors are:

$$\hat{S}^2_{min}(x_1^n) = \sum_{i=0}^{h/2}(avg_i^n)^2 - \frac{\left(\sum_{i=0}^{x_1^n-1} avg_i^n\right)^2}{x_1^n} - \frac{\left(\sum_{i=x_1^n}^{h/2} avg_i^n\right)^2}{h/2 - x_1^n + 1}$$

$$\hat{S}^2_{min}(x_2^n) = \sum_{i=h/2+1}^{h-1}(avg_i^n)^2 - \frac{\left(\sum_{i=h/2+1}^{x_2^n} avg_i^n\right)^2}{x_2^n - h/2} - \frac{\left(\sum_{i=x_2^n+1}^{h-2} avg_i^n\right)^2}{h - x_2^n - 1}$$

For the n-th image, take the $x_1^n$ at which the fitting error $\hat{S}^2_{min}(x_1^n)$ is minimal and let $y_u^n = x_1^n$; take the $x_2^n$ at which the fitting error $\hat{S}^2_{min}(x_2^n)$ is minimal and let $y_d^n = x_2^n$.
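The least-squares breakpoint search of step (4-3) reduces, on each half of the profile, to fitting a two-level step function. A minimal sketch over one generic segment (the function name and index bounds are assumptions, not from the claim):

```python
import numpy as np

def best_step_split(y):
    """Fit a two-level step function to profile y by least squares and
    return (split_index, error), per the error formula of step (4-3):
    SSE = sum(y^2) - (sum left)^2/len(left) - (sum right)^2/len(right)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    total_sq = (y ** 2).sum()
    best = (None, np.inf)
    for x in range(1, n):          # split: [0, x) constant, [x, n) constant
        left, right = y[:x], y[x:]
        err = total_sq - left.sum() ** 2 / x - right.sum() ** 2 / (n - x)
        if err < best[1]:
            best = (x, err)
    return best

# A clean step profile is split exactly at the jump, with zero error:
profile = np.array([5.0] * 6 + [90.0] * 6)
split, err = best_step_split(profile)
assert split == 6 and err < 1e-9
```

Because the optimal constants on each side are the segment means, the residual error collapses to the closed form above, which is why the claim's formula contains only sums of the projection data.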
Step (4-4): Repeat steps (4-1) to (4-3) to obtain $y_u^n$ and $y_d^n$ for every image in the sequence $\{Mvtemp^n\}_{n=0}^{N-1}$, thereby obtaining the arrays $\{y_u^n\}_{n=0}^{N-1}$ and $\{y_d^n\}_{n=0}^{N-1}$.
Step (4-5): For every image in the sequence $\{Mvtemp^n\}_{n=0}^{N-1}$, compute the longitudinal gradient at its own $y_d^n$ position:

$$grad_{y_d^n}^{n} = \sum_{i=0}^{w-1}\left|pix_{i,y_d^n}^{n} - pix_{i,y_d^n+1}^{n}\right|$$

This yields the array $\{grad_{y_d^n}^{n}\}_{n=0}^{N-1}$; compute its mean value $\overline{grad}$. Starting from the entry at n = 0, compare the entries of the array with $\overline{grad}$ one by one, moving backward, until an entry at n = η is greater than $\overline{grad}$; likewise, starting from the entry at n = N-1 and moving forward, compare entries with $\overline{grad}$ until an entry at n = κ is greater than $\overline{grad}$. In the image sequence $\{Mvtemp^n\}_{n=0}^{N-1}$, the images $Mvtemp^{η}$ through $Mvtemp^{κ}$ are the barcode images.
Step (4-6): Over all barcode images, compute the mean value $y_s$ of $(y_d^n - y_u^n)$. Pick any one barcode image and compute the longitudinal gradients at its $y_u$ position and its $y_d$ position. If the longitudinal gradient at the $y_d$ position is greater than that at the $y_u$ position, crop every barcode image upward from its own $y_d^n$ position, retaining the rows $[y_d^n - y_s, y_d^n]$ as the new barcode image; if the longitudinal gradient at the $y_u$ position is greater than that at the $y_d$ position, crop every barcode image downward from its own $y_u^n$ position, retaining the rows $[y_u^n, y_u^n + y_s]$ as the new barcode image. Save the new barcode images in their original order as $\{NMvtemp^n\}_{n=0}^{M-1}$; the image width w is unchanged and the height is $y_s$.
Step 5: Determine the division mode of the barcode modules:

Step (5-1): Arrange $\{NMvtemp^n\}_{n=0}^{M-1}$ transversely in order, aligned longitudinally, and merge them into a single image NMvtemp of height $y_s$ and width Mw. Compute the longitudinal gradient projection of image NMvtemp:

$$ygrad_j^{NMvtemp} = \sum_{i=0}^{Mw-1}\left|pix_{i,j}^{NMvtemp} - pix_{i,j+1}^{NMvtemp}\right|, \quad j \in [0, y_s)$$

where $ygrad_j^{NMvtemp}$ is the longitudinal gradient of row j of image NMvtemp and $pix_{i,j}^{NMvtemp}$ is the pixel value at column i, row j of NMvtemp.

Step (5-2): Choose a division mode l × l ∈ C from the set of two-dimensional barcode module division modes C = {L × L}, and obtain the group of longitudinal module division points:

$$H = \left\{ h_m \,\middle|\, h_m = \frac{y_s}{l} \times m \right\}_{m=1}^{l-1}$$

Step (5-3): Compute the longitudinal gradient $ygrad_{h_m}^{NMvtemp}$ of image NMvtemp at each module division point $h_m$ in the set H, and take the mean of all these longitudinal gradients as the gradient value of division mode l × l ∈ C.

Step (5-4): Repeat steps (5-2) to (5-3) to compute the gradient values of all division modes in the set C, and take the division mode p × p with the maximum gradient value as the transverse module division mode of the two-dimensional barcode.
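The grid-scoring idea of step 5 can be sketched as follows. This is an illustration under assumptions: the candidate set C, the function names, and the evaluation of the gradient on either side of each cut (to absorb the half-pixel offset of a discrete difference) are all choices of this sketch, not of the claim.

```python
import numpy as np

def pick_module_count(strip, candidates=(10, 12, 14, 16)):
    """Choose the module count p whose uniform grid of cut rows falls on
    the strongest horizontal edges (steps 5-1 to 5-4). strip is the
    merged y_s-high image NMvtemp; the candidate set is an assumption."""
    ys = strip.shape[0]
    # longitudinal gradient projection per row (length ys - 1)
    ygrad = np.abs(np.diff(strip.astype(float), axis=0)).sum(axis=1)

    def score(l):
        cuts = [round(ys / l * m) for m in range(1, l)]
        # check both sides of each cut: diff() sits between two rows
        return float(np.mean([max(ygrad[c - 1], ygrad[c]) for c in cuts]))

    return max(candidates, key=score)

# Synthetic strip with 12 alternating module rows: the 12-grid wins.
ys, w = 48, 30
rows = np.repeat([0, 255] * 6, ys // 12)
strip = np.tile(rows[:, None], (1, w))
assert pick_module_count(strip) == 12
```

Only the correct grid places every cut on a black/white module boundary, so its mean gradient dominates the other candidates.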
Step 6: Coarse registration of the barcode images:

Step (6-1): Convert every image $NMvtemp^n$ in $\{NMvtemp^n\}_{n=0}^{M-1}$ into a p × w data matrix:

$$X^n = \left[\left(x_{k,i}^{n} = \frac{1}{M_s}\sum_{j=kM_s}^{(k+1)M_s} pix_{i,j}^{n}\right)_{i=0}^{w-1}\right]_{k=0}^{p-1}$$

where $X^n$ is the data matrix corresponding to image $NMvtemp^n$, $x_{k,i}^{n}$ is the element at row k, column i of the matrix, and $M_s = y_s/p$ is the longitudinal size of a barcode module. In step 6 and subsequent steps, $pix_{i,j}^{n}$ denotes the pixel value at column i, row j of the n-th image $NMvtemp^n$.

Step (6-2): Overlap the data matrices $X^n, X^{n+1}$ of adjacent images step by step, with a stepping column count $\delta_g$ in the range of 1 to 5 columns. At each overlap, compute the mean square deviation $S^2(g_n)$ of the elements in the overlapping region:

$$S^2(g_n) = \frac{1}{p g_n}\left(\sum_{k=0}^{p-1}\sum_{i=0}^{g_n}\left(X_{k,i}^{n+1} - X_{k,w-i-g_n}^{n}\right)^2\right), \quad 1 \le g_n \le w$$

where $S^2(g_n)$ is the mean square deviation when the data matrices overlap by $g_n$ columns.

Step (6-3): Compute all $S^2(g_n)$ over the stepwise overlapping of $X^n, X^{n+1}$, and take the overlap column counts $g_1^n, g_2^n, g_3^n$ corresponding to the three smallest mean square deviations as the coarse registration positions between the adjacent images $NMvtemp^n$ and $NMvtemp^{n+1}$.

Step (6-4): Repeat steps (6-2) to (6-3) to coarsely register every pair of adjacent images in the sequence $\{NMvtemp^n\}_{n=0}^{M-1}$, obtaining the coarse registration position sequence $\{(g_1^n, g_2^n, g_3^n)\}_{n=0}^{M-2}$.
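The coarse search of steps (6-2)/(6-3) can be sketched as below. This is a sketch, not the patent's code: the function name is invented, and the overlap indexing follows the usual left-edge/right-edge convention rather than the claim's mirrored index.

```python
import numpy as np

def coarse_candidates(Xa, Xb, top=3):
    """Score every overlap width g between the right edge of X^n (Xa)
    and the left edge of X^{n+1} (Xb) with the mean squared difference
    S^2(g), keeping the `top` smallest (steps 6-2/6-3)."""
    p, w = Xa.shape
    s2 = {g: np.mean((Xb[:, :g] - Xa[:, w - g:]) ** 2)
          for g in range(1, w)}
    # three overlap widths with the smallest mean square deviation
    return sorted(sorted(s2, key=s2.get)[:top])

# Two strips cut from one random matrix with a true 7-column overlap:
rng = np.random.default_rng(0)
full = rng.uniform(0, 255, (10, 53))
Xa, Xb = full[:, :30], full[:, 23:]
res = coarse_candidates(Xa, Xb)
assert 7 in res
```

Working on the p × w module-row averages instead of full-resolution pixels keeps this exhaustive scan cheap, which is the point of doing it before the fine registration of step 7.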
Step 7: Fine registration of the barcode images:

Step (7-1): Using the classical similarity measure, compute the matching degree $R^n(g_n)$ of the adjacent images $NMvtemp^n$ and $NMvtemp^{n+1}$ at the matching position $g_n$:

$$R^n(g_n) = \frac{\sum_{i=0}^{g_n-1}\sum_{j=0}^{y_s-1}\left(pix_{w-i-g_n,j}^{n} \times pix_{i,j}^{n+1}\right)}{\sqrt{\sum_{i=0}^{g_n-1}\sum_{j=0}^{y_s-1}\left(pix_{w-i-g_n,j}^{n}\right)^2 \sum_{i=0}^{g_n-1}\sum_{j=0}^{y_s-1}\left(pix_{i,j}^{n+1}\right)^2}}, \quad g_n \in U(g_1^n, \delta) \cup U(g_2^n, \delta) \cup U(g_3^n, \delta)$$

where the fine-tuning column count δ is taken as $\delta_g + 1$.

Step (7-2): Take the maximum of the matching degrees $R^n(g_n)$ as the optimum matching degree between the adjacent images $NMvtemp^n$ and $NMvtemp^{n+1}$, denoted $R^n = \max\{R^n(g_n)\}$, and take the position $g_n$ corresponding to $\max\{R^n(g_n)\}$ as the fine registration position between them, denoted $C^n = g_n$.

Step (7-3): Repeat steps (7-1) to (7-2) to compute the optimum matching degree and fine registration position of every pair of adjacent images in the sequence $\{NMvtemp^n\}_{n=0}^{M-1}$, obtaining the optimum matching degree array $\{R^n\}_{n=0}^{M-2}$ and the fine registration position array $\{C^n\}_{n=0}^{M-2}$.
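The fine registration of step 7 can be sketched as below, reading the "classical similarity measure" as normalized cross-correlation. The helper names and the pixel-domain overlap indexing are assumptions of this sketch.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation R = sum(a*b)/sqrt(sum(a^2) sum(b^2)),
    the similarity measure of step (7-1)."""
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

def fine_register(img_a, img_b, candidates, delta=2):
    """Search a +/-delta neighborhood around each coarse candidate
    overlap and return (best_overlap, best_R), per steps (7-1)/(7-2)."""
    w = img_a.shape[1]
    best = (None, -np.inf)
    tried = set()
    for c in candidates:
        for g in range(max(1, c - delta), min(w, c + delta + 1)):
            if g in tried:          # neighborhoods may overlap
                continue
            tried.add(g)
            r = ncc(img_a[:, w - g:], img_b[:, :g])
            if r > best[1]:
                best = (g, r)
    return best

rng = np.random.default_rng(1)
full = rng.uniform(1, 255, (20, 60))
a, b = full[:, :40], full[:, 31:]   # true overlap of 9 columns
g, r = fine_register(a, b, candidates=[7, 10])
assert g == 9 and r > 0.999
```

Restricting the NCC evaluation to small neighborhoods of the three coarse candidates is what keeps the fine stage fast: the expensive per-pixel correlation runs only a handful of times instead of once per possible overlap.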
Step 8: Image mosaicking, fusion, and barcode recognition:

Step (8-1): Traverse the optimum matching degree array $\{R^n\}_{n=0}^{M-2}$ and take the two positions $n_1$ and $n_2$ with the smallest matching degree values as division points, dividing the image sequence $\{NMvtemp^n\}_{n=0}^{M-1}$ into three parts.

Step (8-2): Stitch and fuse the images of each part at their fine registration positions using the gradual fade-in/fade-out weighted-mean method, obtaining the three composite images $Part_0, Part_1, Part_2$ with widths $w_0, w_1, w_2$ respectively and height $y_s$; the fine registration positions between the three images are $PC_0 = C^{n_1}$ and $PC_1 = C^{n_2}$.

Step (8-3): Compute the transverse gradient of the image $NMvtemp^0$ in $\{NMvtemp^n\}_{n=0}^{M-1}$:

$$xgrad_i^{NMvtemp^0} = \sum_{j=0}^{y_s-1}\left(pix_{i+1,j}^{NMvtemp^0} - pix_{i,j}^{NMvtemp^0}\right), \quad i \in [0, w)$$

where $xgrad_i^{NMvtemp^0}$ is the transverse gradient of column i of $NMvtemp^0$. This yields the array $\{xgrad_i^{NMvtemp^0}\}$; the position $x_l$ of its maximum value is the boundary between the barcode region and the blank region in $NMvtemp^0$.

Step (8-4): Compute the transverse gradient of the image $NMvtemp^{M-1}$:

$$xgrad_i^{NMvtemp^{M-1}} = \sum_{j=0}^{y_s-1}\left(pix_{i+1,j}^{NMvtemp^{M-1}} - pix_{i,j}^{NMvtemp^{M-1}}\right), \quad i \in [0, w)$$

where $xgrad_i^{NMvtemp^{M-1}}$ is the transverse gradient of column i of $NMvtemp^{M-1}$. This yields the array $\{xgrad_i^{NMvtemp^{M-1}}\}$; the position $x_r$ of its maximum value is the boundary between the barcode region and the blank region in $NMvtemp^{M-1}$.

Step (8-5): After $Part_0, Part_1, Part_2$ are stitched and fused, the mosaic has width $w_m = x_l + y_s + w - x_r$ and height $y_s$. Create an image memory buffer of size $w_m \times y_s$; place image $Part_0$ at the left of the buffer and image $Part_2$ at the right. Test the condition $\left|(w_m - w_0 - w_2) - (w_1 - PC_0 - PC_1)\right| \le 10$: if it holds, place $Part_1$ into the buffer according to the fine registration positions $PC_0, PC_1$ with $Part_0$ and $Part_2$, complete the image fusion in the overlapping regions with the gradual fade-in/fade-out weighted-mean method, and go to step (8-7); if it does not hold, go to step (8-6).

Step (8-6): In the image memory buffer, place $Part_1$ at the starting position where it overlaps $Part_0$ by w columns, then step $Part_1$ to the right until it overlaps $Part_2$ by only w columns. During the stepping, take the regions where $Part_1$ overlaps $Part_0$ and $Part_2$ respectively as a joint overlapping region, and compute the matching degree of the joint overlapping region by the same method as in step (7-1). Take the position with the maximum joint matching degree over the stepping as the registration position, place $Part_1$ into the buffer at that position, and complete the image fusion in the overlapping regions with the gradual fade-in/fade-out weighted-mean method.

Step (8-7): Crop the stitched and fused two-dimensional barcode image in the image memory buffer by width, taking the portion from column $x_l$ to $x_l + y_s$, to obtain a new $y_s \times y_s$ Data Matrix two-dimensional barcode image with module size $M_s \times M_s$. Use the decoding system to read the barcode information in the new Data Matrix image; the decoding system decodes it and corrects errors according to the decoding principle and the Reed-Solomon error-correction algorithm.
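The gradual fade-in/fade-out weighted mean used throughout step 8 can be sketched as follows; the linear ramp and the function name are assumptions of this sketch.

```python
import numpy as np

def blend(left, right, overlap):
    """Mosaic two strips with a gradual fade-in/fade-out weighted mean:
    inside the `overlap` columns the weight ramps linearly from the
    left image to the right image (as in steps 8-2 and 8-5)."""
    h, wl = left.shape
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap))
    out[:, :wl] = left
    out[:, wl:] = right[:, overlap:]
    t = np.linspace(0, 1, overlap)                 # ramp weights
    out[:, wl - overlap:wl] = ((1 - t) * left[:, -overlap:]
                               + t * right[:, :overlap])
    return out

left = np.full((4, 10), 100.0)
right = np.full((4, 10), 200.0)
m = blend(left, right, overlap=4)
assert m.shape == (4, 16)
assert m[0, 0] == 100.0 and m[0, -1] == 200.0
assert np.all(np.diff(m[0]) >= 0)                  # smooth monotone seam
```

The ramp removes the visible seam a hard cut would leave, which matters here because a brightness step across the seam could flip a barcode module from white to black during thresholding.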
CN201210152638.8A 2012-05-17 2012-05-17 A Method of Reading Cylindrical Two-Dimensional Barcodes Based on Image Mosaic Expired - Fee Related CN102682266B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210152638.8A CN102682266B (en) 2012-05-17 2012-05-17 A Method of Reading Cylindrical Two-Dimensional Barcodes Based on Image Mosaic


Publications (2)

Publication Number Publication Date
CN102682266A CN102682266A (en) 2012-09-19
CN102682266B true CN102682266B (en) 2014-06-11

Family

ID=46814167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210152638.8A Expired - Fee Related CN102682266B (en) 2012-05-17 2012-05-17 A Method of Reading Cylindrical Two-Dimensional Barcodes Based on Image Mosaic

Country Status (1)

Country Link
CN (1) CN102682266B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3104306B2 (en) 2015-06-11 2023-11-01 Scantrust SA Two dimensional barcode
CN106203564B (en) * 2016-06-23 2019-02-01 北京印刷学院 A kind of generation of the two dimensional code on circle-prism assembly surface and acquisition method
CN106529365B (en) * 2016-12-05 2019-09-06 广东工业大学 automatic pricing machine
CN108345817A (en) * 2018-02-06 2018-07-31 徐州智融图像科技有限公司 A kind of recognition methods of cylindrical surface Quick Response Code
CN111553317B (en) * 2020-05-14 2023-08-08 北京惠朗时代科技有限公司 Anti-fake code acquisition method and device, computer equipment and storage medium
CN114936631B (en) * 2021-04-26 2023-06-09 华为技术有限公司 A model processing method and device
CN114882370A (en) * 2022-07-07 2022-08-09 西安超嗨网络科技有限公司 Intelligent commodity identification method and device, terminal and storage medium
CN117669609A (en) * 2022-08-30 2024-03-08 台达电子国际(新加坡)私人有限公司 Bar code reader

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156849A (en) * 2011-04-21 2011-08-17 西北工业大学 Reading device and reading method of two-dimensional bar code marked on metal cylindrical surface directly
CN102354363A (en) * 2011-09-15 2012-02-15 西北工业大学 Identification method of two-dimensional barcode image on high-reflect light cylindrical metal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7175090B2 (en) * 2004-08-30 2007-02-13 Cognex Technology And Investment Corporation Methods and apparatus for reading bar code identifications



Similar Documents

Publication Publication Date Title
CN102682266B (en) A Method of Reading Cylindrical Two-Dimensional Barcodes Based on Image Mosaic
CN101673338B (en) Fuzzy license plate identification method based on multi-angle projection
CN102354363B (en) Identification method of two-dimensional barcode image on high-reflect light cylindrical metal
CN107239778B (en) Efficient and accurate license plate recognition method
CN107153822A (en) A kind of smart mask method of the semi-automatic image based on deep learning
CN102646193A (en) A Segmentation Method of Circular Character Image
CN107687819A (en) A fast and high-precision method for extracting sub-pixels in the center of light bars
CN107506765B (en) License plate inclination correction method based on neural network
CN106056020A (en) Bar code detection and recognition method and system based on machine vision system
CN104809433A (en) Zebra stripe detection method based on maximum stable region and random sampling
CN119918423A (en) A method and system for automatic layout and drawing for lens laser cutting
CN112396629A (en) River course inspection tracking method based on infrared and visible light cooperation
CN110503051A (en) A precious wood recognition system and method based on image recognition technology
CN111488839B (en) A kind of target detection method and target detection system
CN120894544B (en) SPR response area identification method based on image semantic segmentation and time sequence alignment
CN116630924A (en) A Structured Road Recognition Method Based on Improved Sliding Window Algorithm
CN115170586A (en) Stripe threshold segmentation method based on supervised learning
CN119648525B (en) A Hyperspectral Image Super-Resolution Method and System Based on Latent Space Diffusion Model
CN120452029A (en) Fish size estimation system and method in complex underwater environment based on key point detection
CN115346048A (en) Semantic Segmentation Method of Remote Sensing Image Based on Boundary Point Selection Algorithm
CN103761520A (en) Document image non-parameter binaryzation method based on stroke width
CN119006498A (en) SegMamba-based ultra-lightweight photovoltaic module image segmentation method
CN119168967A (en) A method for detecting cable appearance defects
CN113298834B (en) Visual edge finding image processing method and device for metal plate
CN116958980A (en) A real-time scene text detection method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140611

Termination date: 20150517

EXPY Termination of patent right or utility model