CN114655655B - Conveyor belt deviation detection method based on UNet network - Google Patents
Conveyor belt deviation detection method based on UNet network
- Publication number: CN114655655B (application CN202210223340.5A)
- Authority
- CN
- China
- Prior art keywords
- conveyor belt
- training
- network
- unet network
- deviation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G43/00—Control devices, e.g. for safety, warning or fault-correcting
- B65G43/02—Control devices, e.g. for safety, warning or fault-correcting detecting dangerous physical condition of load carriers, e.g. for interrupting the drive in the event of overheating
- B65G2203/00—Indexing code relating to control or detection of the articles or the load carriers during conveying
- B65G2203/02—Control or detection
- B65G2203/0266—Control or detection relating to the load carrier(s)
- B65G2203/0283—Position of the load carrier
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/08—Learning methods
Abstract
The invention discloses a conveyor belt deviation detection method based on a UNet network, comprising the following steps: recording conveyor belt operation videos in different scenes and extracting frames to obtain sample images; cleaning and preprocessing the sample data; marking the conveyor belt area with labeling software; randomly dividing a training set, a verification set and a test set in proportion; constructing a UNet network and training it on the training and verification sets, taking the weights with the minimum loss value after multiple iterations as the optimal model; using the model to detect real-time video, segmenting the conveyor belt area and post-processing it to fit the conveyor belt boundary lines; comparing the boundary lines with a preset safety area to judge whether deviation occurs; and triggering alarm equipment according to the detection result and saving the deviation video. The method detects quickly, is robust and simple to operate, reduces labor and time costs, and can accurately identify the conveyor belt region in various complex environments.
Description
Technical Field
The invention belongs to the technical field of conveyor belt detection, and particularly relates to a conveyor belt deviation detection method based on a UNet network.
Background
Belt conveyors are important transportation equipment in coal mining, and the conveyor belt is the core component of the conveyor, occupying a vital position in the whole coal production chain. During operation, the belt is prone to deviation; if this is not discovered and corrected in time, coal can spill over and reduce mining efficiency, and in severe cases a fire can break out and endanger personal safety. Strengthening deviation detection and early warning is therefore of great significance to safe coal production.
With the recent rapid development of industrial cameras, deviation detection has gradually shifted from manual inspection and light, acoustic or magnetic sensors to machine vision. A camera paired with a dedicated light source captures images of the conveyor belt, and traditional algorithms such as threshold segmentation and edge extraction are used to analyze whether deviation occurs. These methods require parameter tuning for each scene and place high demands on the user.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a conveyor belt deviation detection method based on a UNet network, which directly segments the conveyor belt area and compares the fitted boundary lines with a safety area, so that deviation can be detected quickly and with high accuracy.
In order to solve the problems, the invention adopts the following technical scheme:
a conveyor belt deviation detection method based on a UNet network comprises the following steps:
S1: a camera collects conveyor belt operation videos of different scenes, and frames are extracted to obtain a sample set;
S2: data cleaning is performed on the sample set: the structural similarity between each sample image and a reference image is calculated, and similar images are eliminated;
S3: the sample set is preprocessed and labeled, and a training set, a verification set and a test set are generated in proportion;
S4: a plurality of residual network modules are added to the UNet network to construct a new UNet network; the training set and verification set are input into the network for training, and the optimal weights and detection model are obtained through multiple iterations;
S5: a safety area is set; the model detects the field video, segments the conveyor belt region and fits the region boundary lines; whether the conveyor belt deviates is judged by comparing the boundary lines with the safety area, and the corresponding result is drawn in the code stream;
S6: when the conveyor belt deviates, the detection system starts timing; if the deviation lasts longer than the set value t1, the deviation video is saved and the alarm module is triggered.
Further, the specific steps of S2 are as follows: one image in the sample set is selected as the reference image, the other images in the sample set are traversed, and the structural similarity SSIM between the conveyor belt region y of each image and the conveyor belt region x of the reference image is calculated as:

SSIM(x, y) = ((2μxμy + c1)(2δxy + c2)) / ((μx² + μy² + c1)(δx² + δy² + c2))

where μx and μy are the average gray levels of x and y respectively, δx and δy are the contrasts (standard deviations) of x and y, δxy is their covariance, c1 = (k1L)² and c2 = (k2L)² are constants used to maintain stability with k1 = 0.01 and k2 = 0.03, and L is the dynamic range of the pixel values.

If the SSIM is greater than the set threshold, y is removed from the sample set; otherwise y is kept. After the traversal finishes, the traversal is repeated with y as the new reference to judge whether further similar images exist, and so on.
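The SSIM-based cleaning step can be sketched as follows. This is a minimal illustration, not the patent's implementation: it computes a single global-window SSIM rather than the sliding-window version used by common libraries, and the `deduplicate` helper and the 0.9 threshold are hypothetical choices for demonstration.

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, L: float = 255.0,
                k1: float = 0.01, k2: float = 0.03) -> float:
    """Global (single-window) SSIM between two grayscale images x and y."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()                 # contrast terms δx², δy²
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()       # covariance δxy
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return float(num / den)

def deduplicate(images, threshold=0.9):
    """Keep an image only if it is not too similar to any image kept so far."""
    kept = []
    for img in images:
        if all(ssim_global(img, ref) <= threshold for ref in kept):
            kept.append(img)
    return kept
```

The patent traverses with one reference at a time and re-traverses after each pass; the `deduplicate` loop above collapses that procedure into a single pass, which yields the same retained set when similarity is roughly transitive.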
Further, the specific steps of S3 are as follows:
Adjusting the size of the sample image to 512 x 512 pixels;
enhancing the image contrast using a contrast limited histogram equalization algorithm;
the conveyor belt area is marked, and the training set, the verification set and the test set are divided according to the proportion of 7:2:1.
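The random 7:2:1 split can be sketched as below; the `split_dataset` helper and the fixed seed are illustrative names, not from the patent.

```python
import random

def split_dataset(samples, ratios=(0.7, 0.2, 0.1), seed=42):
    """Randomly split a sample list into train/val/test by the given ratios."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    shuffled = samples[:]                       # leave the input untouched
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]           # remainder goes to the test set
    return train, val, test
```

With the embodiment's 2851 samples this gives roughly the 2000/550/300 split the description mentions.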
Further, the specific steps of S4 are as follows:
at the fourth down-sampling layer of the UNet network, a plurality of residual network modules are added to construct a new UNet network;
the training set and verification set are input into the new UNet network for training, with the Tversky loss adopted as the network evaluation index, calculated as:

T(A, B) = 1 − |A ∩ B| / (|A ∩ B| + α|A − B| + β|B − A|)

where A is the predicted result, B is the real result, and α and β are hyperparameters;
the total number of training epochs and the initial learning rate are set; after each epoch the loss on the verification set is calculated, and the learning rate is halved whenever the model performance stops improving;
the weight file with the lowest loss T(A, B) is saved as the optimal weight parameters obtained by training, together with the corresponding optimal model.
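The Tversky loss can be sketched on flattened binary masks, here in NumPy rather than a deep-learning framework; the α = 0.3, β = 0.7 defaults and the ε smoothing term are illustrative assumptions, since the patent leaves the hyperparameter values open.

```python
import numpy as np

def tversky_loss(pred: np.ndarray, target: np.ndarray,
                 alpha: float = 0.3, beta: float = 0.7,
                 eps: float = 1e-7) -> float:
    """Tversky loss T(A, B) for prediction A (pred) and ground truth B (target)."""
    tp = (pred * target).sum()          # |A ∩ B|
    fp = (pred * (1 - target)).sum()    # |A − B|, false positives weighted by alpha
    fn = ((1 - pred) * target).sum()    # |B − A|, false negatives weighted by beta
    return float(1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps))
```

Setting β > α penalizes missed belt pixels more than spurious ones, a common choice when the foreground region is thin relative to the image.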
Further, the specific steps of S5 are as follows:
A safety area is drawn in the front-end configuration interface, model detection is started, and the conveyor belt mask is segmented;
the mask contour is extracted with the Canny algorithm and fitted into straight lines with a polygon approximation algorithm, the lines along the length direction of the conveyor belt being taken as boundary lines;
the boundary line positions are recorded and compared with the safety area: if the left boundary line is outside the safety area the conveyor belt deviates to the left, if the right boundary line is outside the safety area it deviates to the right, and if both boundary lines are inside the safety area operation is normal;
the detection result is drawn in the code stream, pushed to the front end and displayed in real time.
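The boundary-line vs. safety-area comparison reduces to a small decision function. The sketch below assumes each fitted boundary line is summarized by a representative x coordinate and that the safety area is an x interval; these names and the coordinate convention are assumptions, since the patent does not fix them.

```python
def judge_deviation(left_x: float, right_x: float,
                    safe_left: float, safe_right: float) -> str:
    """Classify belt state from fitted boundary-line positions.

    left_x / right_x: representative x coordinates of the fitted left and
    right belt boundary lines; safe_left / safe_right: x limits of the
    safety area drawn in the front-end interface.
    """
    if left_x < safe_left:
        return "left deviation"    # left boundary has crossed the safety area
    if right_x > safe_right:
        return "right deviation"   # right boundary has crossed the safety area
    return "normal"                # both boundary lines inside the safety area
```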
Further, S6 also includes the following step: if the deviated conveyor belt returns to normal operation at some moment, the detection system starts timing again, and once the normal-running duration exceeds the set value t2, the alarm is released.
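The t1/t2 timing in S6 amounts to a debounced alarm state machine; the sketch below is one way to realize it (the class name and the timestamp-driven `update` API are illustrative, not from the patent).

```python
class DeviationAlarm:
    """Raise the alarm after t1 seconds of continuous deviation,
    release it after t2 seconds of continuous normal running."""

    def __init__(self, t1: float, t2: float):
        self.t1, self.t2 = t1, t2
        self.alarm = False
        self._since = None      # timestamp when the current state began
        self._deviating = None  # last observed state (None = no observation yet)

    def update(self, deviating: bool, now: float) -> bool:
        if deviating != self._deviating:    # state changed: restart the timer
            self._deviating = deviating
            self._since = now
        elapsed = now - self._since
        if deviating and not self.alarm and elapsed >= self.t1:
            self.alarm = True               # sustained deviation: trigger alarm
        elif not deviating and self.alarm and elapsed >= self.t2:
            self.alarm = False              # sustained normal running: release
        return self.alarm
```

A per-frame caller would feed `update` the result of the boundary comparison and the frame timestamp; saving the deviation video would hook into the False→True transition of the returned flag.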
According to the invention, the conveyor belt area is segmented directly by the trained UNet model, and the fitted boundary lines are compared with the defined safety area to judge whether deviation occurs. Specifically, UNet extracts high-level features of the conveyor belt region through a neural network and, through learning, outputs a classification result for every pixel of the feature map; during detection, all pixels classified as conveyor belt are combined to obtain the segmented region, so the filtering, binarization and edge-segmentation operations of traditional algorithms are not needed. A single set of traditional-algorithm parameters cannot be applied across different scenes (different illumination, shooting angles and distances) and is prone to false detections, whereas UNet avoids repeatedly tuning parameters such as the filter kernel and gray threshold, saving time and improving robustness. Compared with the prior art, the method has high detection speed and accuracy, adapts to various complex scenes without additional parameter tuning, is simple to operate, and has remarkable practical value for safe production.
It should be understood that all combinations of the foregoing concepts, as well as additional concepts described in more detail below, may be considered as part of the inventive subject matter so long as such concepts are not mutually inconsistent.
Drawings
The drawings are not intended to be drawn to scale unless specifically indicated. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing.
Fig. 1 is a flowchart of example 1.
Fig. 2 is an image of the conveyor belt of example 1.
Fig. 3 is a labeling image of the conveyor belt of example 1.
Fig. 4 is the conveyor belt mask image segmented by the UNet network.
Fig. 5 is an intermediate image from post-processing of the UNet segmentation.
Fig. 6 is the post-processed image of the UNet segmentation.
Fig. 7 shows the front-end display when the conveyor belt runs normally.
Fig. 8 shows the front-end display when the conveyor belt deviates.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without creative efforts, based on the described embodiments of the present invention fall within the protection scope of the present invention. Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs.
The terms "first", "second" and the like in the description and claims do not denote any order, quantity or importance, but merely distinguish different elements. Likewise, unless the context clearly indicates otherwise, singular forms such as "a", "an" or "the" do not limit quantity but denote the presence of at least one. Terms such as "comprises" or "comprising" mean that the recited features, integers, steps, operations, elements and/or components are present, without excluding the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. "Up", "down", "left", "right" and the like indicate only relative positional relationships, which may change accordingly when the absolute position of the described object changes.
Fig. 1 shows a method for detecting the deviation of a conveyor belt based on a UNet network, which comprises the following steps:
S1: a camera records conveyor belt operation videos of different scenes, and a sample set is obtained by frame extraction;
S2: data cleaning is performed on the sample set: the structural similarity between each sample image and a reference image is calculated with the structural similarity algorithm, and similar images are eliminated;
S3: the sample set is preprocessed and labeled, and a training set, verification set and test set are randomly divided in a certain proportion;
S4: an improved UNet neural network is constructed; the training and verification data from S3 are input into the network for training, and after d training epochs the weights with the minimum loss value are taken as the optimal weights and selected as the final detection model;
S5: detection and result evaluation: the field video is input, the conveyor belt area is identified and segmented, and the region boundary lines are fitted; a safety area is set in the front-end interface; the positional relationship between the boundary lines and the safety area is compared to judge whether the conveyor belt deviates, and the corresponding result is drawn in the code stream and returned to the front end for real-time observation;
S6: after deviation occurs, the detection system does not alarm immediately but starts timing; if the deviation lasts longer than the set value, the alarm module is triggered, the relevant personnel are notified, and the deviation video for that period is backed up; when operation returns to normal, the system restarts timing, and when the normal duration exceeds the set value, the alarm is released.
Example 1
The camera in S1 is installed above the conveyor belt at a viewing angle that captures the complete conveyor belt area and the idlers on both sides; it records the running video in real time, and one frame is extracted every N seconds as a sample image.
The data cleaning of the sample set in S2 specifically comprises the following steps: one image in the sample set is selected as the reference image, the other images in the sample set are traversed, and the structural similarity SSIM between the conveyor belt region y of each image and the conveyor belt region x of the reference image is calculated as:

SSIM(x, y) = ((2μxμy + c1)(2δxy + c2)) / ((μx² + μy² + c1)(δx² + δy² + c2))

where μx and μy are the average gray levels of x and y respectively, δx and δy are the contrasts (standard deviations) of x and y, δxy is their covariance, c1 = (k1L)² and c2 = (k2L)² are constants used to maintain stability with k1 = 0.01 and k2 = 0.03, and L is the dynamic range of the pixel values.

If the SSIM is greater than the set threshold, y is removed from the sample set; otherwise y is kept. After the traversal finishes, the traversal is repeated with y as the new reference according to the same method, to judge whether further similar images exist, and so on.
As shown in fig. 2 and 3, the S3 sample set conveyor belt image and label image specifically includes the following steps:
adjusting the sizes of all sample images to 512 x 512 pixels;
enhancing the image contrast using a contrast limited histogram equalization algorithm;
The conveyor belt area is marked with labeling software, and the training, verification and test sets are randomly divided in a 7:2:1 ratio; the sample set contains 2851 images, giving approximately 2000, 550 and 300 images respectively in this embodiment.
The specific steps of S4 are as follows:
At the fourth down-sampling layer of the UNet network, a plurality of residual network modules are added to enhance the feature-extraction capability for the conveyor belt region;
the training and verification sets are input into the improved UNet network for training, with the Tversky loss adopted as the network evaluation index, calculated as:

T(A, B) = 1 − |A ∩ B| / (|A ∩ B| + α|A − B| + β|B − A|)

where A is the predicted result, B is the real result, and α and β are hyperparameters;
the total number of training epochs is k and the initial learning rate is set to l; after each epoch the loss on the verification set is calculated, and the learning rate is halved whenever the model performance stops improving;
the weight file with the lowest loss T(A, B) is saved as the optimal weight parameters obtained by training, together with the corresponding optimal model.
Figs. 4-6 show the belt segmentation and boundary lines of the embodiment, obtained as follows: model detection is started and the conveyor belt mask is segmented, as shown in Fig. 5; the mask contour is extracted with the Canny algorithm and fitted into straight lines with a polygon approximation algorithm, the lines along the length direction of the conveyor belt being the boundary lines, as shown in Fig. 6.
Figs. 7 and 8 show detection results for different conveyor belt states in the embodiment, obtained as follows: the safety area is drawn with the mouse in the front-end configuration interface, and the fitted boundary lines are compared with it. If both boundary lines are inside the safety area, operation is normal; if the left boundary line is outside the safety area the belt deviates to the left, and if the right one is outside it deviates to the right. The detection result is drawn in the code stream and pushed to the front end for real-time display: if no deviation occurs, the boundary lines are drawn as green straight lines and "normal" is shown in the lower left corner; otherwise the boundary lines are drawn in red and "left deviation" or "right deviation" is shown.
The front end of the embodiment sets the alarm duration t1 and the alarm-release duration t2. After deviation occurs, the detection system starts timing; if the deviation lasts longer than t1, the alarm device is triggered to prompt the staff, and the deviation video is saved for later review. If the conveyor belt returns to normal operation at some moment, the detection system times again, and if the normal duration exceeds t2, the alarm is released.
The invention has a high detection speed. UNet extracts high-level features of the conveyor belt region through a neural network and, through learning, outputs a classification result for every pixel of the feature map; during detection, all pixels classified as conveyor belt are combined to obtain the segmented region, so the filtering, binarization and edge-segmentation operations of traditional algorithms are not needed. A single set of traditional-algorithm parameters cannot be applied across different scenes (different illumination, shooting angles and distances) and is prone to false detections, whereas UNet avoids repeatedly tuning parameters such as the filter kernel and gray threshold. This saves time, improves robustness, allows the conveyor belt region to be identified accurately in various complex environments, is simple to operate, and reduces labor and time costs.
While the invention has been described with reference to preferred embodiments, it is not intended to be limiting. Those skilled in the art will appreciate that various modifications and adaptations can be made without departing from the spirit and scope of the present invention. Accordingly, the scope of the invention is defined by the appended claims.
Claims (5)
1. The method for detecting the deviation of the conveyor belt based on the UNet network is characterized by comprising the following steps of:
S1: the camera collects conveyor belt operation videos of different scenes, and frames are extracted to obtain a sample set;
S2: carrying out data cleaning on the sample set, calculating the structural similarity of the sample image and the reference image, and eliminating similar images;
S3: preprocessing and labeling the sample set, and proportionally generating a training set, a verification set and a test set;
S4: adding a plurality of residual network modules into the UNet network, constructing a new UNet network, inputting a training set and a verification set into the network for training, and obtaining an optimal weight and a detection model through a plurality of iterations;
S5: setting a safety area; detecting a field video by using a model, dividing a conveyor belt region, and fitting a region boundary line; judging whether the conveyor belt is deviated or not according to the comparison of the boundary line and the position of the safety area, and drawing a corresponding result in the code stream, wherein the specific steps are as follows: a safety area is drawn by configuring an interface at the front end, model detection is started, and a conveyor belt mask is segmented; extracting the mask contour by the Canny algorithm, fitting the contour into a straight line by using a polygon approximation algorithm, and taking the straight line in the length direction of the conveyor belt as a boundary line; recording the boundary line position, comparing with the safety area, if the left boundary line is outside the safety area, indicating that the conveyor belt is left-deflected, and if the right boundary line is outside the safety area, indicating that the conveyor belt is right-deflected; the boundary lines are all in the safety zone and indicate normal operation; drawing the detection result in the code stream, pushing to the front end and displaying in real time;
S6: when the conveyor belt is deviated, the detection system starts timing; if the duration of the deviation exceeds the set value t1, the deviation video is saved and an alarm module is triggered.
2. The UNet network-based conveyor belt deviation detection method as in claim 1, wherein the specific steps of S2 are as follows: selecting one image in the sample set as a reference image, traversing the other images in the sample set, and calculating the structural similarity SSIM between the conveyor belt region y of each image and the conveyor belt region x of the reference image by the formula:

SSIM(x, y) = ((2μxμy + c1)(2δxy + c2)) / ((μx² + μy² + c1)(δx² + δy² + c2))

wherein μx and μy are the average gray levels of x and y respectively, δx and δy are the contrasts of x and y, δxy is their covariance, c1 = (k1L)² and c2 = (k2L)² are constants used to maintain stability, k1 = 0.01, k2 = 0.03, and L is the dynamic range of pixel values;

if the SSIM is greater than the set value, rejecting y from the sample set, otherwise retaining y; after the traversal is finished, re-traversing with y as the reference and judging whether similar images exist; and so on.
3. The UNet network-based conveyor belt deviation detection method as in claim 1 wherein the specific steps of S3 are as follows:
Adjusting the size of the sample image to 512 x 512 pixels;
enhancing the image contrast using a contrast limited histogram equalization algorithm;
the conveyor belt area is marked, and the training set, the verification set and the test set are divided according to the proportion of 7:2:1.
4. The UNet network-based conveyor belt deviation detection method as in claim 1 wherein the specific steps of S4 are as follows:
at the fourth down-sampling layer of the UNet network, adding a plurality of residual network modules, and constructing a new UNet network;

inputting the training set and the verification set into the new UNet network for training, adopting the Tversky loss as the network evaluation index, calculated by the formula:

T(A, B) = 1 − |A ∩ B| / (|A ∩ B| + α|A − B| + β|B − A|)

wherein A is the predicted result, B is the real result, and α and β are hyperparameters;

setting the total number of training epochs and the initial learning rate, calculating the loss value on the verification set after each training epoch, and halving the learning rate once the model performance no longer improves;

and saving the weight file with the lowest loss T(A, B) as the optimal weight parameters obtained by training and the corresponding optimal model.
5. The UNet network-based conveyor belt deviation detection method as in claim 1 wherein S6 further comprises the steps of: if the deviated conveyor belt returns to the normal running state at a certain moment, the detection system starts timing until the duration is longer than the set value t2, and the alarm is released.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210223340.5A CN114655655B (en) | 2022-03-09 | 2022-03-09 | Conveyor belt deviation detection method based on UNet network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114655655A CN114655655A (en) | 2022-06-24 |
CN114655655B true CN114655655B (en) | 2024-07-26 |
Family
ID=82029139
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210223340.5A Active CN114655655B (en) | 2022-03-09 | 2022-03-09 | Conveyor belt deviation detection method based on UNet network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114655655B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115331157B (en) * | 2022-09-28 | 2023-02-17 | 山东山矿机械有限公司 | Conveyor abnormity detection method and system based on image processing |
CN115527148A (en) * | 2022-09-30 | 2022-12-27 | 浪潮通信技术有限公司 | Smoke and fire detection method, device, equipment, storage medium and program product |
CN118529439B (en) * | 2024-06-04 | 2024-10-15 | 徐州华东机械有限公司 | Belt damage detection method and detection system for belt conveyor based on improvement CENTERNET |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113283344A (en) * | 2021-05-27 | 2021-08-20 | 中国矿业大学 | Mining conveying belt deviation detection method based on semantic segmentation network |
CN113344905A (en) * | 2021-06-28 | 2021-09-03 | 燕山大学 | Strip deviation amount detection method and system |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN208843186U (en) * | 2018-06-15 | 2019-05-10 | 湖北凯瑞知行智能装备有限公司 | Rubber belt from deviating diagnostic system based on machine vision |
CN108584352B (en) * | 2018-06-15 | 2024-02-13 | 湖北凯瑞知行智能装备有限公司 | Machine vision-based adhesive tape deviation diagnosis system and method |
CN110163852B (en) * | 2019-05-13 | 2021-10-15 | 北京科技大学 | Real-time deviation detection method of conveyor belt based on lightweight convolutional neural network |
JP7319162B2 (en) * | 2019-10-02 | 2023-08-01 | 株式会社荏原製作所 | Transport abnormality prediction system |
CN110902313A (en) * | 2019-12-05 | 2020-03-24 | 天津成科传动机电技术股份有限公司 | Belt pulley contour detection method, device, equipment and storage medium, and conveyor belt flow detection method, device and equipment |
CN113781506B (en) * | 2021-08-06 | 2023-12-15 | 东北大学 | Strip steel offset detection method and system |
CN114155494B (en) * | 2022-02-10 | 2022-05-17 | 力博重工科技股份有限公司 | A deep learning-based monitoring method for conveyor belt deviation of belt conveyor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||