
CN107978110A - Image-matching-based intelligent in-place recognition system and recognition method for electronic fences - Google Patents

Image-matching-based intelligent in-place recognition system and recognition method for electronic fences

Info

Publication number
CN107978110A
Authority
CN
China
Prior art keywords
image
fence
recognition
host computer
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711275338.8A
Other languages
Chinese (zh)
Inventor
张悦 (Zhang Yue)
孙胜利 (Sun Shengli)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Institute of Technical Physics of CAS
Original Assignee
Shanghai Institute of Technical Physics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Institute of Technical Physics of CAS filed Critical Shanghai Institute of Technical Physics of CAS
Priority to CN201711275338.8A
Publication of CN107978110A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00 - Burglar, theft or intruder alarms
    • G08B 13/02 - Mechanical actuation
    • G08B 13/12 - Mechanical actuation by the breaking or disturbance of stretched cords or wires
    • G08B 13/122 - Mechanical actuation by the breaking or disturbance of stretched cords or wires for a perimeter fence
    • G08B 13/124 - Mechanical actuation by the breaking or disturbance of stretched cords or wires for a perimeter fence with the breaking or disturbance being optically detected, e.g. optical fibers in the perimeter fence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/12 - Edge-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/60 - Type of objects
    • G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00 - Burglar, theft or intruder alarms
    • G08B 13/18 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189 - Actuation by interference with heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B 13/194 - Actuation by interference with heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B 13/19663 - Surveillance related processing done local to the camera
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30232 - Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image-matching-based intelligent in-place recognition system and recognition method for electronic fences. The system comprises front-end information acquisition equipment, an image acquisition card, and a host computer. The front-end information acquisition equipment comprises a CCD camera and a light source, is connected to the image acquisition card, and the image acquisition card is connected to the host computer. The host computer comprises a client, an image processing module, and a decision output module; the client is connected to the image processing module and to the CCD camera, and the image processing module outputs to the decision output module. Based on Hu invariant moments and an improved BP neural network, the system recognizes whether objects are in place by matching shape features. The image processing module performs electronic fence determination, image processing, Hu invariant moment extraction, and BP neural network training; the trained BP classifier classifies and recognizes objects in place and outputs the results to the decision output module. The system is simple in structure, suitable for most scenarios, and can quickly recognize objects inside the electronic fence.

Description

Image-matching-based intelligent in-place recognition system and recognition method for electronic fences

Technical Field

The invention relates to the fields of electronic fence monitoring and computer image processing, and in particular to an image-matching-based intelligent in-place recognition system and recognition method for electronic fences, suitable for in-place classification and recognition of objects in most scenarios.

Background Art

With the progress of science and technology, human production and lifestyles keep evolving, new technologies and products keep emerging, and the sheer volume of daily and production activities makes intelligent monitoring and recognition systems increasingly necessary. Real-time monitoring of a designated area, whether to ensure on-site safety or to recognize activities within the monitored region, cannot do without the support of electronic fence technology. A conventional electronic fence typically forms an infrared light curtain with photoelectric sensors and detects whether an object has intruded into the fence from the blocking and reflection of the infrared light, thereby triggering an alarm or a photograph. However, such a traditional electronic fence requires an on-site power supply and the laying of an electronic boundary, at considerable financial, material, and labor cost.

Target recognition systems are an important component of current and future weapon systems and an indispensable technical element of national production and social development. Modern industrial automation involves a wide variety of inspection, measurement, and target recognition applications, from military targets such as aircraft, missiles, and ships down to auto parts, electronic assembly lines, and electronic components. Traditional target recognition systems face the following problems:

(1) Large image data volumes: for detecting and recognizing the size and shape of parts in mass production, the system's matching computation load is very large;

(2) The target object's displacement, rotation, and scale change relative to the sensor, while the shooting angle and environment are hard to hold fixed. Many systems can no longer recognize a target correctly once it has been displaced, rotated, or rescaled;

(3) Applications ranging from residential-area monitoring and workshop production sites to mass production lines, and even military and aerospace product development, mostly require real-time online monitoring, recognition, and early warning, which demands very fast processing and recognition speeds.

Summary of the Invention

To overcome the defects of the above technologies, the present invention proposes an image-matching-based intelligent in-place recognition system for electronic fences. The system requires no boundary electronic fence to be laid on site, and achieves real-time, rapid classification and recognition of target objects that is invariant to rotation, displacement, and scale changes.

To achieve the above object, the technical solution of the present invention is as follows:

An image-matching-based intelligent in-place recognition system and recognition method for electronic fences, suitable for classifying and recognizing many kinds of targets in many scenarios, and especially suitable for online recognition of mass-produced parts or target objects on an assembly line. The electronic fence recognition system comprises a front-end detection device 1, an image acquisition card 2, and a host computer 3.

The host computer comprises a client, an image processing module, and a decision output module. The client is connected to the front-end detection device and the image processing module, and handles human-computer interaction such as starting the CCD camera and calibrating the electronic fence; the image processing module performs scene recognition, image processing, invariant moment extraction, BP neural network training, and in-place recognition of targets; the decision output module receives the recognition results from the image processing module and displays them.

The electronic fence area is determined by a scene bearing a specific marker. The host computer holds a pre-built scene library containing scene templates with such markers. The invention stipulates that scene markers be simple and easy to recognize; the templates in the scene library are simple color blocks, characters, painted lines, and the like, such as QR codes, the characters for "workshop" (车间), playground running tracks, and parking-lot spaces. The host computer performs scene recognition with simple methods such as QR code scanning, template matching, and character recognition, and the client directs the image processing module to draw a boundary on the live image to define the electronic fence area.
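As an illustration of the scene-marker matching step, the sketch below implements plain normalized cross-correlation template matching in numpy. The patent leaves the matching method open (QR scanning, template matching, or character recognition), so the function name, the brute-force search, and the acceptance threshold here are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def match_template(image, template, threshold=0.9):
    """Slide `template` over `image` and return the (row, col) of the best
    zero-mean normalized cross-correlation match, or None if no window
    scores at least `threshold`. A simple stand-in for scene-marker matching."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:          # flat window: no correlation defined
                continue
            score = (wz * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos if best_score >= threshold else None
```

In practice a library routine (e.g. an FFT-based correlation) would replace the double loop, but the acceptance logic is the same: locate the marker, then let the client anchor the fence boundary at that position.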

For the first time, the present invention applies a method fusing Hu invariant moments with an improved BP neural network model in an electronic fence recognition system, performing in-place recognition of objects by matching shape features. It achieves accurate, fast recognition of target objects inside the fence that is invariant to displacement, rotation, and scale changes, and can also classify and recognize several different kinds of target objects.

The image-matching-based intelligent in-place recognition method for electronic fences comprises the following steps:

1) When the system operates, the host computer client starts the CCD camera to capture images; the images are decoded by the image acquisition card and passed to the host computer's image processing module;

2) On first startup, after the image processing module receives the live image, it first matches it against the scene library pre-built in the host computer, using methods such as template matching, QR code scanning, or character recognition to identify the specific scene area and thereby determine the electronic fence area;

3) Once the specific scene marker is detected by matching, the host computer quickly designates that area as the electronic fence area, and the client draws a boundary line on the live image with the image processing software to define the fence boundary; subsequent images of the same area are delimited according to the fence boundary of the first image;

4) After the host computer's image processing module completes the calibration of the electronic fence, the image undergoes the following steps:

a) Preprocessing: the image captured by the CCD industrial camera is preprocessed; the grayscale image is smoothed and filtered to suppress noise interference.

b) Image segmentation: the image is segmented by methods such as edge detection or thresholding, separating the foreground (the target object) from the background.

c) Feature extraction: the system recognizes objects by matching their shapes, so the foreground image must undergo shape analysis and shape-feature extraction. Shape-based description divides into contour-based and region-based description. The present invention adopts the latter, using the seven region-based Hu invariant moments, which do not change under displacement, rotation, or scale transformations:

T1 = N20 + N02

T2 = (N20 - N02)² + 4N11²

T3 = (N30 - 3N12)² + (3N21 - N03)²

T4 = (N30 + N12)² + (N21 + N03)²

T5 = (N30 - 3N12)(N30 + N12)[(N30 + N12)² - 3(N21 + N03)²] + (3N21 - N03)(N21 + N03)[3(N30 + N12)² - (N21 + N03)²]

T6 = (N20 - N02)[(N30 + N12)² - (N21 + N03)²] + 4N11(N30 + N12)(N21 + N03)

T7 = (3N21 - N03)(N30 + N12)[(N30 + N12)² - 3(N21 + N03)²] - (N30 - 3N12)(N21 + N03)[3(N30 + N12)² - (N21 + N03)²]

where Npq is the normalized central moment of order (p + q). The seven invariant moments T1 to T7 are extracted and normalized, and the normalized feature values serve as the input vector to the BP neural network of step 7).
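The seven Hu moments can be computed directly from the normalized central moments. The numpy sketch below (function name and image conventions are ours, not the patent's) follows the standard Hu formulas and demonstrates the claimed translation invariance; it assumes a grayscale or binary image as a 2-D array.

```python
import numpy as np

def hu_moments(img):
    """Compute the seven Hu invariant moments T1..T7 of a 2-D image
    from its normalized central moments N_pq (plain-numpy sketch)."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xb, yb = (xs * img).sum() / m00, (ys * img).sum() / m00

    def N(p, q):  # normalized central moment of order (p + q)
        mu = (((xs - xb) ** p) * ((ys - yb) ** q) * img).sum()
        return mu / m00 ** (1 + (p + q) / 2.0)

    n20, n02, n11 = N(2, 0), N(0, 2), N(1, 1)
    n30, n03, n21, n12 = N(3, 0), N(0, 3), N(2, 1), N(1, 2)
    t1 = n20 + n02
    t2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    t3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    t4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    t5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    t6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    t7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([t1, t2, t3, t4, t5, t6, t7])
```

Because the central moments are taken about the centroid and divided by powers of m00, shifting or uniformly rescaling the object leaves the vector (nearly) unchanged, which is exactly the property the classifier relies on.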

5) The host computer builds a neural network. It first checks whether the BP neural network has already been trained to the required recognition accuracy; if not, go to step 6), and if a fully trained BP classifier already exists, go to step 8);

6) Set the parameters of the BP neural network and repeat step 1) to collect a number of sample images of the target object (e.g., 5000);

7) Apply steps 2) to 4) to the sample images to extract the core features, feed the shape invariant-moment feature vectors of the samples into the BP neural network for training, and adjust the network's connection weights according to the sample invariant-moment vectors until the network's output error falls below a set value;

8) Use the trained BP classifier to perform shape recognition on the captured images, judging by feature matching whether the target object appears inside the fence;

9) If the BP classifier recognizes a target object inside the electronic fence, the decision output device reports the recognition result and displays the image of the target object.

In step 5), an improved BP neural network is built. A BP neural network is a multilayer feed-forward network trained by error backpropagation; its basic idea is gradient descent, using gradient search to minimize the mean square error between the network's actual output and the expected output. The present invention improves the BP network model with the additional momentum method, carrying over the most recent weight (or threshold) change through a momentum factor. When the momentum factor is 0, the weight (or threshold) change is produced by plain gradient descent; when the momentum factor is 1, the new weight (or threshold) change is set equal to the most recent one. The weight adjustment formulas with the additional momentum factor are:

Δwij(k+1) = (1 - mc)·η·δi·Pj + mc·Δwij(k)

Δbi(k+1) = (1 - mc)·η·δi + mc·Δbi(k)

In the formulas, Δwij is the change in the connection weight between the i-th hidden-layer node and the j-th input-layer node; k is the training iteration; η is the learning rate; δi characterizes how closely the weight approaches a local minimum of the error surface, and Pj represents the gradient; Δbi is the threshold change of the i-th neuron; mc is the momentum factor, generally about 0.95. The added momentum term reduces oscillation during training while speeding up learning and improving convergence.
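The additional-momentum adjustment translates one-for-one into code. The helper below is a minimal sketch of the update rule (names are illustrative; `grad_step` stands for the plain gradient-descent step η·δi·Pj):

```python
import numpy as np

def momentum_update(grad_step, prev_delta, mc):
    """Additional-momentum update:
    delta(k+1) = (1 - mc) * grad_step + mc * delta(k).
    With mc = 0 this reduces to plain gradient descent;
    with mc = 1 it simply repeats the previous change."""
    return (1.0 - mc) * grad_step + mc * prev_delta
```

The same rule applies elementwise to whole weight matrices and bias vectors, so a training loop only needs to keep the previous delta arrays between iterations.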

Setting the node parameters of the BP neural network enables fast classification and recognition of different target objects. For the parameter settings in step 6), the number of input-layer nodes is determined by the number of feature-vector components, the number of output-layer nodes by the outputs the customer requires, and the number of hidden-layer nodes by the empirical formula m = √(l + n) + α, where m is the number of hidden-layer nodes, l and n are the numbers of input and output nodes respectively, and α is a constant between 1 and 10.
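The hidden-layer sizing rule described here is commonly given as m = √(l + n) + α with α between 1 and 10; assuming that form (the original page drops the formula itself), a minimal helper might read:

```python
import math

def hidden_nodes(l, n, alpha=5):
    """Empirical hidden-layer size m = sqrt(l + n) + alpha,
    rounded up to an integer node count (alpha assumed in [1, 10])."""
    return math.ceil(math.sqrt(l + n) + alpha)
```

For the seven Hu-moment inputs and a small number of output classes, this rule yields a hidden layer of roughly 4 to 13 nodes depending on α.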

The BP classifier operates in two phases, training and recognition. In the former, the network's connection weights are adjusted according to the sample images until the output error falls below a set value, i.e. step 7); the latter, recognition, i.e. step 8), involves only forward computation with no error backpropagation, achieving in-place recognition of objects inside the electronic fence.
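The forward-only recognition phase can be sketched as a plain two-layer network pass. Sigmoid activations are assumed here for illustration (the patent does not fix the activation function), and all names are ours:

```python
import numpy as np

def forward(x, w1, b1, w2, b2):
    """Recognition phase of a BP classifier: forward computation only,
    no backpropagation. `x` is the normalized Hu-moment feature vector."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    hidden = sig(x @ w1 + b1)   # hidden-layer activations
    return sig(hidden @ w2 + b2)  # per-class output scores
```

During deployment only this function runs per frame; the weight matrices stay frozen at the values found in step 7).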

Compared with the prior art, the system has the following advantages:

Scene recognition of the area bearing a specific marker is performed by the host computer's image processing software, and the client calibrates the boundary on the image, establishing a virtual electronic fence. No boundary electronic fence needs to be laid on site, saving financial, material, and human resources.

For the first time in an electronic fence system, the method fuses Hu invariant moments with a BP neural network, using Hu invariant moments to represent object shape features, enabling in-place recognition of target objects that is invariant to translation, rotation, and scale changes.

The system adopts an improved BP neural network model. Trained on a large number of sample images, it recognizes quickly, with little computation and little manual intervention, and is suitable for classifying and recognizing target objects in many situations, especially high-volume assembly lines, effectively improving recognition accuracy and speed.

Brief Description of the Drawings

Fig. 1 is a flowchart of the electronic fence recognition system of the present invention;

Fig. 2 is a system diagram of an embodiment of the present invention;

Fig. 3 is a structural diagram of the BP neural network algorithm;

Fig. 4 is a flowchart of the BP neural network algorithm;

Fig. 5 is a flowchart of Hu invariant moment extraction.

Detailed Description of the Embodiments

Embodiments of the present invention are described below with reference to the accompanying drawings.

Fig. 1 is the flowchart of the electronic fence recognition system of the present invention, and Fig. 2 is a system diagram of an embodiment. The image-matching-based intelligent in-place recognition system and recognition method are suitable for recognizing many kinds of targets in many scenarios, especially mass-produced parts or target objects on an assembly line. The electronic fence is a virtual one; the recognition system comprises a front-end detection device 1, an image acquisition card 2, and a host computer 3.

The front-end detection device comprises a CCD camera and a light source. The surface under inspection is wide, continuous, and of uniform color, and continuous, online, high-speed inspection is required, so a black-and-white line-scan CCD and a line light source are used. This embodiment uses a P2-22-04K30 black-and-white line-scan CCD camera made by DALSA Coreco of Canada and an LSL-450-34-R red LED converging line light source made by Dongguan Kangshida Automation Technology Co., Ltd., reducing the data processing load and raising the inspection speed. The front-end detection device is installed in the area to be inspected to capture live images.

The image acquisition card receives the images from the CCD camera, decodes them, and passes them to the host computer. The choice of acquisition card depends on the interface of the chosen CCD camera; if the camera has, for example, a USB interface, the card can be omitted. In this embodiment, to match the chosen CCD, a DALSA Xcelera-CL PX4 Dual image acquisition card receives the images from the CCD camera, decodes them, and passes them to the host computer.

The host computer comprises a client, an image processing module, and a decision output module. The client is connected to the front-end detection device and the image processing module, and handles human-computer interaction such as starting the CCD camera and calibrating the electronic fence; the image processing module performs scene recognition, image processing, and BP classifier training and recognition; the decision output module receives the recognition results from the image processing module and displays them.

The electronic fence area is determined by a scene bearing a specific marker. In this embodiment the scene is defined as a production workshop, and the marker as the characters for "workshop" (车间) (a conveyor-belt texture would also serve). The system can inspect the quantity and type of products on the workshop production line online, and can also perform defect detection through feature extraction and training of the BP neural network parameters.

The recognition method of this electronic fence recognition system is the first to apply, within an electronic fence system, a method based on fusing Hu invariant moments with a BP neural network, training a BP classifier with automatic recognition capability.

As shown in Fig. 1, the method of the embodiment of the present invention comprises the following steps:

1) When the system operates, the host computer client starts the CCD camera to capture images; the images are decoded by the image acquisition card and passed to the host computer's image processing module;

2) On first startup, after the image processing module receives the live image, it first matches it against the scene library pre-built in the host computer. In this embodiment the scene is a workshop and the scene marker is defined as the characters for "workshop" (车间) (or a conveyor-belt texture); character recognition is applied to the image to detect these characters and thereby recognize the scene.

3) Once the specific scene marker is detected by matching, the host computer quickly designates that area as the electronic fence area, and the client draws a boundary line on the workshop image with the image processing software to define the fence boundary; subsequent images of the same area are delimited according to the fence boundary of the first image;

4) After the host computer's image processing module completes the calibration of the electronic fence, the image undergoes the following steps:

a) Image preprocessing with Wiener filtering for denoising: the Wiener filter removes noise and blur from the image. The idea is to minimize the mean square error between the original image and the restored image, i.e.:

e² = min E{[f(x,y) - f̂(x,y)]²}

where f(x,y) is the original image, f̂(x,y) is the restored image, and E denotes the expectation, so that e² is the mean square error between the two.
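A local adaptive Wiener filter of this kind is available as `scipy.signal.wiener`; the following is a minimal sketch of step 4a) under the assumption that such a library routine is acceptable (the synthetic scene and noise level are illustrative, not from the patent):

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                        # synthetic scene
noisy = clean + rng.normal(0, 0.2, clean.shape)  # additive Gaussian noise
restored = wiener(noisy, mysize=5)               # local adaptive Wiener filter

mse = lambda a, b: float(np.mean((a - b) ** 2))
print(mse(noisy, clean), mse(restored, clean))
```

The restored image should have a lower mean square error against the clean scene than the noisy input, which is exactly the criterion the filter minimizes.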

b) Image segmentation: this embodiment uses edge detection based on the Sobel operator. The operator convolves the image with a horizontal and a vertical 3×3 template:

Gx = [[-1 0 +1], [-2 0 +2], [-1 0 +1]] * A    Gy = [[+1 +2 +1], [0 0 0], [-1 -2 -1]] * A

G = √(Gx² + Gy²)    θ = arctan(Gy/Gx)

where A is the original image, Gx and Gy are the horizontally and vertically edge-detected images, G is the gradient magnitude, and θ is the gradient direction. The Sobel operator computes Gx and Gy directly to detect the presence of edges and the transitions from dark to bright and from bright to dark. However, because only two directional templates are used, it can detect only vertical and horizontal edges and is not suitable for images with complex texture. The surface measured in this embodiment is wide and continuous, so this approach suits targets with simple edges such as parts and large objects.
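The Sobel responses above can be reproduced with a plain 2-D convolution. This sketch (using `scipy.ndimage` and a synthetic step edge) is illustrative, not the embodiment's code:

```python
import numpy as np
from scipy.ndimage import convolve

Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)   # horizontal template
Ky = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], float)   # vertical template

A = np.zeros((8, 8))
A[:, 4:] = 1.0                  # a vertical step edge between columns 3 and 4

Gx = convolve(A, Kx)
Gy = convolve(A, Ky)
G = np.hypot(Gx, Gy)            # gradient magnitude
theta = np.arctan2(Gy, Gx)      # gradient direction

print(G[4, 3], G[4, 4], G[4, 1])
```

The magnitude peaks on the two columns adjacent to the step and is zero in the flat regions, which is the edge map the segmentation step thresholds.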

c) Figure 5 shows the feature extraction flow chart. The electronic fence intelligent in-place recognition system based on image matching recognizes targets through shape matching, so shape features must be extracted. Shape descriptions divide into contour-based and region-based descriptions; the invention adopts the latter. The following seven region invariant moments, which do not change under translation, rotation, or scale transformation, are combinations of the normalized second- and third-order central moments:

T1 = N20 + N02

T2 = (N20 - N02)² + 4N11²

T3 = (N30 - 3N12)² + (3N21 - N03)²

T4 = (N30 + N12)² + (N21 + N03)²

T5 = (N30 - 3N12)(N30 + N12)[(N30 + N12)² - 3(N21 + N03)²] + (3N21 - N03)(N21 + N03)[3(N30 + N12)² - (N21 + N03)²]

T6 = (N20 - N02)[(N30 + N12)² - (N21 + N03)²] + 4N11(N30 + N12)(N21 + N03)

T7 = (3N21 - N03)(N30 + N12)[(N30 + N12)² - 3(N21 + N03)²] - (N30 - 3N12)(N21 + N03)[3(N30 + N12)² - (N21 + N03)²]

where Npq is the normalized central moment of order (p+q). The values computed from the formulas above differ only slightly under different translations, rotations, and scale changes, differences attributable to numerical error in computing on discrete images. Thus this embodiment achieves recognition that is invariant to translation, rotation, and scale: the seven invariant moments T1~T7 are extracted and normalized, and the normalized feature values serve as the input vector of the BP neural network in step 7).
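The seven moments are straightforward to compute directly from the definition of the normalized central moments. The numpy sketch below (hypothetical helper names, synthetic rectangular "part") also demonstrates the claimed translation invariance; it is an illustration of the standard Hu moments, not the patent's code:

```python
import numpy as np

def hu_moments(img):
    """T1..T7 from the normalized central moments N_pq of a gray/binary image."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (xs * img).sum() / m00
    ybar = (ys * img).sum() / m00
    def N(p, q):  # normalized central moment of order (p+q)
        mu = ((xs - xbar) ** p * (ys - ybar) ** q * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)
    N20, N02, N11 = N(2, 0), N(0, 2), N(1, 1)
    N30, N03, N21, N12 = N(3, 0), N(0, 3), N(2, 1), N(1, 2)
    T1 = N20 + N02
    T2 = (N20 - N02) ** 2 + 4 * N11 ** 2
    T3 = (N30 - 3 * N12) ** 2 + (3 * N21 - N03) ** 2
    T4 = (N30 + N12) ** 2 + (N21 + N03) ** 2
    T5 = ((N30 - 3 * N12) * (N30 + N12) * ((N30 + N12) ** 2 - 3 * (N21 + N03) ** 2)
          + (3 * N21 - N03) * (N21 + N03) * (3 * (N30 + N12) ** 2 - (N21 + N03) ** 2))
    T6 = ((N20 - N02) * ((N30 + N12) ** 2 - (N21 + N03) ** 2)
          + 4 * N11 * (N30 + N12) * (N21 + N03))
    T7 = ((3 * N21 - N03) * (N30 + N12) * ((N30 + N12) ** 2 - 3 * (N21 + N03) ** 2)
          - (N30 - 3 * N12) * (N21 + N03) * (3 * (N30 + N12) ** 2 - (N21 + N03) ** 2))
    return np.array([T1, T2, T3, T4, T5, T6, T7])

# A rectangular "part" and the same part translated within the frame.
shape = np.zeros((64, 64)); shape[10:30, 20:50] = 1.0
shifted = np.zeros((64, 64)); shifted[25:45, 5:35] = 1.0
print(np.allclose(hu_moments(shape), hu_moments(shifted)))  # True: translation-invariant
```

The central moments are taken about the centroid, so a pure pixel shift leaves all seven values unchanged, which is what makes them usable as position-independent input features for the BP classifier.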

5) The host computer builds a neural network. It first judges whether the BP neural network has already been trained to the required recognition accuracy; if not, go to step 6); if a trained, mature BP classifier already exists, go to step 8);

6) Set the parameters of the BP neural network: 7 input nodes, a hidden layer of between 4 and 13 nodes, and 2 output nodes. Repeat step 1) to collect 5000 sample images of the target objects;

7) Apply steps 2)~4) to the sample images to extract the core features; feed the shape invariant moment feature vectors into the BP neural network for training, adjusting the connection weights of the network according to the sample invariant moment vectors until the output error of the network falls below a set value;

8) Use the trained BP classifier to perform shape recognition on the captured images, judging by feature matching whether a target object appears inside the fence;

9) If the BP classifier recognizes a target object inside the electronic fence, the decision output device reports the recognition result and displays the image of the target object.

Step 5) of this system builds an improved BP neural network; Figure 3 shows the structure of the BP neural network algorithm, and Figure 4 shows the flow chart of the three-layer BP algorithm.

The traditional BP algorithm has many defects, such as a tendency to fall into local minima and slow convergence, so the invention adopts the additional-momentum method to improve the BP network model. On top of back-propagation, the additional-momentum method adds to each weight change a term proportional to the previous weight change, and generates the new weight change according to back-propagation. The activation function of the BP network is chosen as the sigmoid (S-shaped) function f(x) = 1/(1 + e^(-x)).

The weight adjustment formulas with the additional momentum factor are:

Δwij(k+1) = (1 - mc)·η·δi·Pj + mc·Δwij(k)

Δbi(k+1) = (1 - mc)·η·δi + mc·Δbi(k)

where Δwij is the change of the connection weight between hidden-layer node i and input-layer node j; k is the training iteration; η is the learning rate; δi measures how closely the weights approach a local minimum of the error surface; Pj is the gradient; Δbi is the change of the threshold of neuron i; and mc is the momentum factor, taken as 0.95. The added momentum term reduces oscillation during training while speeding up learning and improving convergence.
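The weight update rule above is easy to exercise on a toy error surface. The sketch below (hypothetical function name; η and mc are illustrative values, not the embodiment's 0.95) shows the momentum term still converging to the minimum:

```python
def momentum_step(dw_prev, grad, eta, mc):
    """Δw(k+1) = (1 - mc)·η·δ·P + mc·Δw(k), with the term δ·P taken as -grad."""
    return (1 - mc) * eta * (-grad) + mc * dw_prev

# Toy error surface E(w) = w^2, so dE/dw = 2w; minimum at w = 0.
w, dw = 5.0, 0.0
for _ in range(300):
    dw = momentum_step(dw, 2 * w, eta=0.5, mc=0.9)
    w += dw
print(abs(w))
```

The iterate spirals into the minimum: the (1 - mc) factor damps the fresh gradient step while mc carries over the previous direction, which is exactly how the method smooths oscillation on a real error surface.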

In the training program of the additional-momentum method, the condition for applying the momentum term is:

mc = 0.95 when E(k) < E(k-1); mc = 0 when E(k) > 1.04·E(k-1); otherwise mc is left unchanged,

where E(k) is the sum of squared errors at step k.

The design of the BP network classifier considers the input layer, hidden layer, and output layer: more hidden nodes give higher accuracy but also higher complexity. In this embodiment the recognized objects are two kinds of parts on the production line, so the number of output nodes is 2; the input layer has 7 neurons, the number of input nodes equaling the 7 core feature components, corresponding to the invariant moments T1~T7. The number of hidden-layer nodes is determined by the empirical formula m = √(l + n) + α, where m is the number of hidden nodes, l and n are the numbers of input and output nodes respectively, and α is a constant between 1 and 10, so m can be a number between 4 and 13. Therefore the nodes in step 6) are set as: 7 input nodes, 2 output nodes, and a hidden layer optimally chosen between 4 and 13 nodes.
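The empirical sizing formula can be checked directly; with this embodiment's l = 7 inputs and n = 2 outputs the candidate hidden-layer sizes are:

```python
import math

l, n = 7, 2  # input and output node counts in this embodiment
# m = sqrt(l + n) + alpha, alpha in 1..10
m_candidates = [round(math.sqrt(l + n)) + alpha for alpha in range(1, 11)]
print(m_candidates)  # [4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
```

Since √(7 + 2) = 3 exactly, the candidates are the integers 4 through 13, matching the range given in step 6).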

The BP classifier has two phases, training and working recognition. In the former, the connection weights of the network are adjusted from the sample images until the output error falls below a set value, i.e. step 7); in the latter, i.e. step 8), only forward computation is performed, without error back-propagation, achieving in-place recognition of objects inside the electronic fence.

The decision output device and the human-machine interface share the same operation display, so the recognition results of the system can be seen in real time while operating the client.

The above embodiment illustrates a specific implementation of the invention, but it is not the only embodiment and does not limit the protection scope of the invention.

Claims (4)

  1. An electronic fence intelligent in-place recognition system based on image matching, comprising a front-end information acquisition device (1), an image acquisition card (2) and a host computer (3), characterized in that the front-end information acquisition device (1) comprises a CCD industrial camera and a light source, placed in the electronic fence area; the images collected by the front-end information acquisition device (1) are decoded by the image acquisition card (2) and passed to the host computer (3).
  2. The electronic fence intelligent in-place recognition system based on image matching according to claim 1, characterized in that the host computer comprises a client 1), an image processing module 2) and a decision output module 3); the client 1) is connected to the image processing module 2) and to the CCD camera of the front-end acquisition device (1), and the image processing module 2) is connected to the decision output module 3); the host computer image processing module performs scene recognition on the region bearing the specific identifier, and the virtual electronic fence boundary is drawn by the client; the image processing module also performs preprocessing, image segmentation, feature extraction, BP neural network training and in-place recognition of targets.
  3. An electronic fence intelligent in-place recognition method using the electronic fence intelligent in-place recognition system based on image matching according to claim 1, characterized in that it comprises the following steps:
    1) when the system is running, the host computer client starts the CCD camera to capture images, which are decoded by the image acquisition card and passed to the host computer image processing module;
    2) on first startup, after the image processing module receives the scene image, it first matches it against the scene library established in advance on the host computer, using methods such as template matching, two-dimensional code scanning or character recognition to identify the specific scene region and thereby determine the electronic fence region;
    3) after the specific scene-area identifier is detected by matching, the host computer locates this region as the electronic fence region; the client draws a boundary line on the scene image with the image processing software to define the electronic fence boundary, and images of the same area captured afterwards are delimited according to the electronic fence boundary of the first image;
    4) after the host computer image processing module completes the calibration of the electronic fence, the image is processed as follows:
    4-1) preprocessing: the image collected by the CCD industrial camera is preprocessed; the gray-level image is smoothed to suppress noise interference;
    4-2) image segmentation: the image is segmented by methods such as edge detection or thresholding, separating the foreground, i.e. the target object, from the background;
    4-3) feature extraction: recognition is performed by matching the shape of the object, so shape analysis must be applied to the foreground image to extract shape features; shape descriptions divide into contour-based and region-based descriptions, and the seven region Hu invariant moments T1~T7, invariant to translation, rotation and scale, are used for recognition:
    T1 = N20 + N02
    T2 = (N20 - N02)² + 4N11²
    T3 = (N30 - 3N12)² + (3N21 - N03)²
    T4 = (N30 + N12)² + (N21 + N03)²
    T5 = (N30 - 3N12)(N30 + N12)[(N30 + N12)² - 3(N21 + N03)²] + (3N21 - N03)(N21 + N03)[3(N30 + N12)² - (N21 + N03)²]
    T6 = (N20 - N02)[(N30 + N12)² - (N21 + N03)²] + 4N11(N30 + N12)(N21 + N03)
    T7 = (3N21 - N03)(N30 + N12)[(N30 + N12)² - 3(N21 + N03)²] - (N30 - 3N12)(N21 + N03)[3(N30 + N12)² - (N21 + N03)²]
    where Npq is the normalized central moment of order (p+q); the seven invariant moments T1~T7 are extracted and normalized, and the normalized feature values serve as the input vector of the BP neural network in step 7);
    5) the host computer builds a neural network; it first judges whether the BP neural network has been trained to the required recognition accuracy; if not, go to step 6); if a trained, mature BP classifier already exists, go to step 8);
    6) the parameters of the BP neural network are set: the number of input nodes is determined by the number of feature components, the number of output nodes by the output required by the client, and the number of hidden-layer nodes m by the empirical formula m = √(l + n) + α, where m is the number of hidden nodes, l and n are the numbers of input and output nodes respectively, and α is a constant between 1 and 10; step 1) is repeated to collect a number of sample images of the target objects;
    7) steps 2)~4) are applied to the sample images to extract the core features; the shape invariant moment feature vectors are fed into the BP neural network for training, and the connection weights of the network are adjusted according to the sample invariant moment vectors until the output error of the network falls below a set value;
    8) the trained BP classifier performs shape recognition on the captured images, judging by feature matching whether a target object appears inside the fence;
    9) if the BP classifier recognizes a target object inside the electronic fence, the decision output device reports the recognition result and displays the image of the target object.
  4. The electronic fence intelligent in-place recognition method according to claim 3, characterized in that the BP neural network in step 5) is a BP network improved with the additional-momentum method, to reduce oscillation during training and speed up learning. The weight adjustment formulas with the additional momentum factor are:
    Δwij(k+1) = (1 - mc)·η·δi·Pj + mc·Δwij(k)
    Δbi(k+1) = (1 - mc)·η·δi + mc·Δbi(k)
    where Δwij is the change of the connection weight between hidden-layer node i and input-layer node j; k is the training iteration; η is the learning rate; δi measures how closely the weights approach a local minimum of the error surface; Pj is the gradient; Δbi is the change of the threshold of neuron i; and mc is the momentum factor, taken as 0.95.
CN201711275338.8A 2017-12-06 2017-12-06 Electronic fence intelligent in-place recognition system and recognition method based on image matching Pending CN107978110A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711275338.8A CN107978110A (en) 2017-12-06 2017-12-06 Fence intelligence identifying system in place and recognition methods based on images match


Publications (1)

Publication Number Publication Date
CN107978110A true CN107978110A (en) 2018-05-01

Family

ID=62009254


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109040669A (en) * 2018-06-28 2018-12-18 国网山东省电力公司菏泽供电公司 Intelligent substation video fence method and system
CN109086686A (en) * 2018-07-12 2018-12-25 西安电子科技大学 Blind source separation method under time varying channel based on self-adapted momentum factor
CN110674834A (en) * 2018-07-03 2020-01-10 百度在线网络技术(北京)有限公司 Geo-fence identification method, device, equipment and computer-readable storage medium
CN111063142A (en) * 2018-10-17 2020-04-24 杭州海康威视数字技术股份有限公司 Monitoring alarm processing method, device and equipment and readable medium
CN111161558A (en) * 2019-12-16 2020-05-15 华东师范大学 Method for judging forklift driving position in real time based on deep learning
WO2020124950A1 (en) * 2018-12-17 2020-06-25 江苏云巅电子科技有限公司 Parking lot traffic accident tracing system and method based on high precision indoor positioning technology
CN112883842A (en) * 2021-02-02 2021-06-01 四川省机械研究设计院(集团)有限公司 Motorcycle engine assembling method and system based on mutual matching of parts and light source
CN114266490A (en) * 2021-12-24 2022-04-01 安徽省道路运输管理服务中心 Efficient and accurate comprehensive transportation network security risk point identification method
CN114757916A (en) * 2022-04-15 2022-07-15 西安交通大学 Defect classification method of industrial CT image based on feature extraction and BP network
CN116501913A (en) * 2023-04-28 2023-07-28 海科(平潭)信息技术有限公司 Construction method, device and equipment of electronic fence
CN116652988A (en) * 2023-07-28 2023-08-29 江苏泽宇智能电力股份有限公司 Intelligent optical fiber wiring robot and control method thereof
CN110659340B (en) * 2018-06-28 2024-04-05 北京京东尚科信息技术有限公司 Electronic fence generation method and device, medium and electronic equipment
CN118609035A (en) * 2024-08-08 2024-09-06 宁波星巡智能科技有限公司 Method, device, equipment and medium for generating electronic fence based on scene adaptation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5091780A (en) * 1990-05-09 1992-02-25 Carnegie-Mellon University A trainable security system method for the same
CN103280041A (en) * 2013-05-08 2013-09-04 广东电网公司珠海供电局 Monitoring method and monitoring system for automatic deploying virtual electronic fence
CN105931437A (en) * 2016-06-29 2016-09-07 上海电力学院 Smart electronic fence system based on image processing and control method thereof
CN106874581A (en) * 2016-12-30 2017-06-20 浙江大学 A kind of energy consumption of air conditioning system in buildings Forecasting Methodology based on BP neural network model
CN107024480A (en) * 2017-04-12 2017-08-08 浙江硕和机器人科技有限公司 A kind of stereoscopic image acquisition device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhao Wencang et al., "Visual invariant feature extraction and species recognition of weed seeds", Transactions of the Chinese Society of Agricultural Engineering *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20180501)