
CN119005857A - Spare part management method and system based on target identification and change detection - Google Patents


Info

Publication number
CN119005857A
CN119005857A
Authority
CN
China
Prior art keywords
warehouse
image
spare parts
information
detection
Prior art date
Legal status
Pending
Application number
CN202411025078.9A
Other languages
Chinese (zh)
Inventor
张绳武
林华宝
叶瀚
谭阿峰
许建明
叶晓椿
姚泽玮
林俊杰
Current Assignee
Fujian Wangneng Technology Development Co ltd
State Grid Information and Telecommunication Co Ltd
Original Assignee
Fujian Wangneng Technology Development Co ltd
State Grid Information and Telecommunication Co Ltd
Priority date
Filing date
Publication date
Application filed by Fujian Wangneng Technology Development Co ltd, State Grid Information and Telecommunication Co Ltd filed Critical Fujian Wangneng Technology Development Co ltd
Priority to CN202411025078.9A priority Critical patent/CN119005857A/en
Publication of CN119005857A publication Critical patent/CN119005857A/en
Pending legal-status Critical Current

Classifications

    • G06Q10/087 Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/764 Image or video recognition using classification, e.g. of video objects
    • G06V10/806 Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V2201/07 Target detection


Abstract


The present application relates to a spare part management method and system based on target identification and change detection, comprising the following steps: collecting images or video of spare parts entering and leaving the warehouse through a camera installed in the warehouse, and transmitting the image information to an image processing component; preprocessing the input image or video, including denoising, normalization and enhancement operations; confirming the warehouse-in and warehouse-out information of the spare parts through an image processing algorithm and a reinforcement learning model according to the preprocessed image or video; processing the output warehouse-in and warehouse-out information into a format recognizable by the log management module and outputting it to the log management module; recording, by the log management module, a detailed log of system operations, including user operations, system events and processing results, providing operation-record and problem-tracking functions that assist system maintenance and troubleshooting; and querying and correcting the warehouse-in and warehouse-out history, real-time images and result information of the spare parts through a user interaction module.

Description

Spare part management method and system based on target identification and change detection
Technical Field
The application relates to the technical field of information processing, in particular to a spare part management method and system based on target identification and change detection.
Background
With the development of society and the rapid growth of the national economy, the amount and variety of materials managed by State Grid companies keep increasing. Accordingly, material management becomes more difficult, and links such as purchasing, warehousing, delivery and inventory require more scientific and efficient management methods. The mainstream intelligent management scheme for State Grid spare part warehouses introduces RFID technology to identify and manage materials, thereby automating material management and improving its accuracy. This technique requires an RFID tag to be attached to each item, after which assets are managed through system configuration. Warehouse-in and warehouse-out management of items is accomplished through RFID code readers.
The present application realizes high-accuracy, real-time intelligent image recognition for target identification and change detection based on the YOLO algorithm. YOLO offers excellent real-time performance and accuracy, and fine-tuning and optimization of YOLO can significantly improve recognition across multiple target areas. Applied to the State Grid spare part material management system, the invention realizes contactless image-based management of spare parts, automatically identifies asset problems and the basic information of circulating spare parts, and provides intelligent, efficient retrieval as well as high-precision, high-quality inspection functions.
The prior art, such as Chinese patent CN117151595A, discloses a method, apparatus and storage medium for commodity inventory management. The method comprises the following steps: acquiring an in-warehouse commodity inventory image shot by a camera; inputting the commodity inventory image into a commodity identification model to obtain the monitored inventory of the target commodity; determining the current inventory of the target commodity according to its registered inventory and monitored inventory; in response to a confirmation operation on the current stock quantity, invoking a sales prediction model; determining the predicted sales of the target commodity within a preset future time through the sales prediction model; and determining the planned warehouse-out or warehouse-in quantity of the target commodity within the preset future time according to the current stock quantity and the predicted sales. That invention can obtain a more accurate and reasonable current stock quantity, accurately forecast future commodity sales, obtain planned warehouse-out or warehouse-in quantities, and ensure the inventory can meet matched commodity sales in time.
The problem with this prior art is that, when identifying inventory images, one device can only monitor the quantity of a single type of goods, and it therefore cannot cope with complex inventory environments.
Disclosure of Invention
The invention provides a solution to the above technical problems.
The technical scheme of the invention is as follows:
In one aspect, the present invention provides a spare part management system based on target identification and change detection, including:
Spare part detection module contains image acquisition unit, image preprocessing unit, target detection unit, post-processing unit, wherein:
Image acquisition unit: acquires images or videos of spare parts entering and leaving the warehouse through an image acquisition device installed in the warehouse, and transmits the image or video information to the image preprocessing unit through a communication interface;
an image preprocessing unit: preprocessing input image or video information, including denoising, normalization and enhancement operations;
Target detection unit: confirming warehouse-in and warehouse-out information of spare parts through an image processing algorithm and a reinforcement learning model according to the input preprocessed image or video information;
Post-processing unit: processing the output warehouse-in and warehouse-out information of the spare parts into a format which can be identified by the log management module, and outputting the format to the log management module;
And the log management module is used for: recording a detailed log of system operation, including user operation, system events and processing results;
And a user interaction module: the system is used for inquiring the history of the warehouse-in and warehouse-out of spare parts, the real-time images and the result information; selecting different image acquisition devices; and displaying the current system state, receiving user input, and configuring system parameters.
As a preferred embodiment, the image preprocessing unit: preprocessing the input image or video information, specifically comprising the following steps:
the specific denoising process comprises the following steps:
Mean filtering: for each pixel in the image, the new pixel value is the average of its neighborhood pixel values; for a neighborhood of size m×n, the new value I_{new}(x, y) of pixel (x, y) is:

I_{new}(x, y) = \frac{1}{m \times n} \sum_{(i, j) \in N(x, y)} I(i, j)

where I(i, j) denotes the original pixel value at pixel (i, j) and N(x, y) is the m×n neighborhood centered on (x, y);
Median filtering: the pixel values in the neighborhood are sorted and the median is taken as the new pixel value;
The normalization is given by:

I_{norm}(x, y) = \frac{I(x, y) - I_{min}}{I_{max} - I_{min}}

where I(x, y) is the original pixel value, and I_{max} and I_{min} are the maximum and minimum pixel values in the image, respectively;
The enhancement operations are as follows:
Brightness adjustment: I_{brightened}(x, y) = I(x, y) + b, where b is the brightness adjustment value;
Contrast adjustment: I_{contrasted}(x, y) = a \times I(x, y) + b, where a controls contrast and b controls brightness.
As a preferred embodiment, the target detection unit confirms the in-out information of the spare parts through an image processing algorithm and a reinforcement learning model, and includes the following steps:
Network forward propagation: the preprocessed image is processed through a multi-layer convolutional neural network to extract multi-level features; a Backbone deep neural network extracts the basic features of the image, a pyramid pooling module (PPM) captures multi-scale information, a feature pyramid network (FPN) is integrated to fuse feature maps of different levels, and adaptive spatial feature fusion (ASFF) is combined to improve the feature fusion effect; finally, multi-scale detection heads (Multi-scale Detection Head) perform target detection at multiple scales to adapt to targets of different sizes; during training, the Mosaic data enhancement strategy is optimized and improved to enhance the generalization capability of the model, and the learning rate is dynamically adjusted to improve training efficiency and model performance;
prediction output: the output of the network is a fixed-size grid, each grid cell predicting the bounding box, confidence score and class probability of the target;
decoding output: decoding the prediction result output by the network into an actual target boundary box and category; converting the relative coordinates into actual coordinates of the image, filtering out boundary frames with low confidence coefficient according to a confidence coefficient threshold value, removing repeatedly overlapped boundary frames, and reserving boundary frames with highest confidence coefficient so as to avoid multiple detection of the same target;
Post-treatment: carrying out subsequent processing on the final target detection result; drawing the detected target boundary box and class label on the original image, and outputting the detection result to a log management module for storage.
As a preferred embodiment, the network forward propagation flow specifically comprises the following steps:
The convolution operation is:

I_{out}(x, y) = \sum_{i=-k}^{k} \sum_{j=-k}^{k} I_{in}(x + i, y + j) \cdot K(i, j)

where I_{in} and I_{out} are the input and output images respectively, K is the convolution kernel, and k depends on the convolution kernel size;
The pyramid pooling module PPM computes:

F_{pooled}^{(d)} = \mathrm{Pool}_{s \times s}(F)

where F_{pooled}^{(d)} is the feature map obtained at pooling scale d, and s is the pooling window size;
The feature map F after feature pyramid network FPN fusion is expressed as:
F=H+L
Wherein: h is a high-level feature map, L is an up-sampled bottom-level feature map;
The adaptive spatial feature fusion ASFF is given by:

F_{ASFF} = w_1 F_1 + w_2 F_2 + \cdots + w_c F_c

where F_c is the c-th fused feature map and w_c is the adaptive weight learned by the corresponding network;
The multi-scale detection heads decode anchor boxes as:

x = x_a + \Delta x, \quad y = y_a + \Delta y, \quad w = w_a + \Delta w, \quad h = h_a + \Delta h

where x_a, y_a, w_a, h_a are the original anchor box coordinates and \Delta x, \Delta y, \Delta w, \Delta h are the predicted bounding box offsets.
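The additive anchor decoding above can be sketched in a few lines; the anchor layout (center x, center y, width, height) and the concrete values are illustrative assumptions:

```python
def decode_box(anchor, offsets):
    """Additive anchor decoding: each coordinate = anchor coordinate + predicted offset."""
    xa, ya, wa, ha = anchor
    dx, dy, dw, dh = offsets
    return (xa + dx, ya + dy, wa + dw, ha + dh)

# hypothetical anchor (center x, center y, width, height) and predicted offsets
box = decode_box((10.0, 20.0, 5.0, 8.0), (1.0, -2.0, 0.5, 1.5))
```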
As a preferred embodiment, the prediction output process specifically includes the following steps:
Coordinate conversion:

x = x_{grid} \times stride + x_{offset}
y = y_{grid} \times stride + y_{offset}

where x_{grid}, y_{grid} are the indices of the grid cell, stride is the step size of the cell, and x_{offset}, y_{offset} are the predicted offsets;
The confidence Softmax is calculated as:

P_o = \frac{e^{S_o}}{\sum_{u=1}^{b} e^{S_u}}

where P_o is the class probability, S_o is the confidence score of category o, and o = 1, 2, 3, …, b.
In a preferred embodiment, in the decoding output flow, bounding boxes with low confidence are filtered out according to the confidence threshold; this is realized by a non-maximum suppression algorithm, whose formula is:

S_o = \begin{cases} S_o, & \mathrm{IoU}(M, b_v) < N_t \\ 0, & \mathrm{IoU}(M, b_v) \ge N_t \end{cases}

where S_o is the confidence score, M is the bounding box with the highest score, b_v is the bounding box to be processed, IoU(M, b_v) is the intersection-over-union of bounding boxes M and b_v, and N_t is the set IoU threshold;

with

\mathrm{IoU}(M, b_v) = \frac{\mathrm{Area}(M \cap b_v)}{\mathrm{Area}(M \cup b_v)}

where Area(M ∩ b_v) is the intersection area of the two bounding boxes and Area(M ∪ b_v) is their union area.
As a preferred implementation, the user interaction module can query the warehouse-in and warehouse-out history of spare parts together with real-time images and result information; by manually comparing the output results with the images, erroneous information can be found in time and fed back to the reinforcement learning model through the user interaction module.
On the other hand, the invention also provides a spare part management method based on target identification and change detection, which comprises the following steps:
step S1: collecting images or videos of the spare parts entering and exiting the warehouse through a camera arranged in the warehouse, and transmitting image information to an image processing assembly through a communication interface;
Step S2: preprocessing an input image or video, including denoising, normalization and enhancement operations;
Step S3: confirming the warehouse-in and warehouse-out information of the spare parts through an image processing algorithm and a reinforcement learning model according to the input preprocessed image or video;
step S4: processing the output warehouse-in and warehouse-out information of the spare parts into a format which can be identified by the log management module, and outputting the format to the log management module;
Step S5: the log management module records a detailed log of system operation, including user operation, system events and processing results; providing operation record and problem tracking functions, and helping system maintenance and fault investigation;
Step S6: the user interaction module can inquire the warehouse-in and warehouse-out history of spare parts, real-time images and result information; different image devices can be selected; the current state can be displayed, user input is accepted, and system parameters are configured.
In still another aspect, the present invention further provides an electronic device storing a computer program which, when executed by a processor, implements a spare part management method based on target identification and change detection according to any embodiment of the present invention.
In yet another aspect, the present invention also provides a computer-readable medium storing one or more programs which, when executed by one or more processors, cause the one or more processors to implement a spare part management method based on target identification and change detection according to any embodiment of the present invention.
The invention has the following beneficial effects:
Automation and efficiency:
the camera is utilized to automatically collect images or videos of spare parts entering and exiting the warehouse, so that the tedious work of manual recording and checking is reduced, and the management efficiency is greatly improved.
The whole process, from image acquisition through preprocessing, detection, post-processing and logging, is fully automated, reducing the possibility of human error.
Accurate target detection:
Through the image processing algorithm and reinforcement learning model, the warehouse-in and warehouse-out information of spare parts can be confirmed more accurately; compared with traditional methods that rely on manual judgment or simple identification, the accuracy and reliability of detection are improved.
Perfect log management:
Logging system operations, including user operations, system events, and processing results, provides powerful support for system maintenance and troubleshooting. The prior art may be deficient in the integrity and detail of the log records.
Good user interactivity:
The user interaction module allows a user to inquire the history and real-time information of the spare parts entering and exiting, and can also select different image devices and configure system parameters. This enables a user to use the system more flexibly and conveniently, meeting the personalized needs, while some existing systems may not be sufficiently friendly and flexible in terms of user interaction.
Real-time and dynamic monitoring:
the method can acquire and process the images in real time, acquire the in-out conditions of spare parts in time, realize dynamic monitoring and management, and reflect inventory changes more timely compared with the traditional periodic inventory mode.
Traceability of data:
Complete log records and clear warehouse-in and warehouse-out information processing ensure the traceability of data, facilitating the examination and analysis of the inventory management process and results, which may not be prominent in the prior art.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and other related drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of the system of the present invention;
Fig. 2 is a schematic flow chart of a spare part detection module in the first embodiment.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the step numbers used herein are for convenience of description only and are not limiting as to the order in which the steps are performed.
It is to be understood that the terminology used in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The terms "comprises" and "comprising" indicate the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The term "and/or" refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Embodiment one:
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solution of the present application will be clearly and completely described below with reference to fig. 1 in conjunction with a specific embodiment of the present application.
In order to solve the problems in the prior art, the invention provides a spare part management system based on target identification and change detection, which comprises:
Spare part detection module contains image acquisition unit, image preprocessing unit, target detection unit, post-processing unit, wherein:
Image acquisition unit: acquires images or videos of spare parts entering and leaving the warehouse through an image acquisition device installed in the warehouse, and transmits the image or video information to the image preprocessing unit through a communication interface;
The collected spare parts are stored in intelligent cabinets and laid flat layer by layer, which avoids recognition blind areas caused by stacking and unclear recognition caused by dense arrangement.
An image preprocessing unit: preprocessing input image or video information, including denoising, normalization and enhancement operations; the specific operation flow is as follows:
Denoising:
Mean filtering: for each pixel in the image, the new pixel value is the average of its neighborhood pixel values; for a neighborhood of size m×n, the new value I_{new}(x, y) of pixel (x, y) is:

I_{new}(x, y) = \frac{1}{m \times n} \sum_{(i, j) \in N(x, y)} I(i, j)

where I(i, j) denotes the original pixel value at pixel (i, j) and N(x, y) is the m×n neighborhood centered on (x, y);
Median filtering: the pixel values in the neighborhood are sorted and the median is taken as the new pixel value;
The normalization is given by:

I_{norm}(x, y) = \frac{I(x, y) - I_{min}}{I_{max} - I_{min}}

where I(x, y) is the original pixel value, and I_{max} and I_{min} are the maximum and minimum pixel values in the image, respectively;
The enhancement operations are as follows:
Brightness adjustment: I_{brightened}(x, y) = I(x, y) + b, where b is the brightness adjustment value;
Contrast adjustment: I_{contrasted}(x, y) = a \times I(x, y) + b, where a controls contrast and b controls brightness.
Target detection unit: according to the input preprocessed image or video, confirming the warehouse-in and warehouse-out information of spare parts through an image processing algorithm and a reinforcement learning model; the specific flow is as follows:
Network forward propagation: the preprocessed image is processed through a multi-layer convolutional neural network to extract multi-level features; a Backbone deep neural network extracts the basic features of the image, a pyramid pooling module (PPM) captures multi-scale information, a feature pyramid network (FPN) is integrated to fuse feature maps of different levels, and adaptive spatial feature fusion (ASFF) is combined to improve the feature fusion effect; finally, multi-scale detection heads (Multi-scale Detection Head) perform target detection at multiple scales to adapt to targets of different sizes; during training, the Mosaic data enhancement strategy is optimized and improved to enhance the generalization capability of the model, and the learning rate is dynamically adjusted to improve training efficiency and model performance. The method specifically comprises the following steps:
The convolution operation is:

I_{out}(x, y) = \sum_{i=-k}^{k} \sum_{j=-k}^{k} I_{in}(x + i, y + j) \cdot K(i, j)

where I_{in} and I_{out} are the input and output images respectively, K is the convolution kernel, and k depends on the convolution kernel size;
The pyramid pooling module PPM computes:

F_{pooled}^{(d)} = \mathrm{Pool}_{s \times s}(F)

where F_{pooled}^{(d)} is the feature map obtained at pooling scale d, and s is the pooling window size;
The feature map F after feature pyramid network FPN fusion is expressed as:
F=H+L
Wherein: h is a high-level feature map, L is an up-sampled bottom-level feature map;
The adaptive spatial feature fusion ASFF is given by:

F_{ASFF} = w_1 F_1 + w_2 F_2 + \cdots + w_c F_c

where F_c is the c-th fused feature map and w_c is the adaptive weight learned by the corresponding network;
The multi-scale detection heads decode anchor boxes as:

x = x_a + \Delta x, \quad y = y_a + \Delta y, \quad w = w_a + \Delta w, \quad h = h_a + \Delta h

where x_a, y_a, w_a, h_a are the original anchor box coordinates and \Delta x, \Delta y, \Delta w, \Delta h are the predicted bounding box offsets.
Optimizing and improving a Mosaic data enhancement strategy, enhancing the generalization capability of a model, and specifically adopting the following method:
The Mosaic data enhancement algorithm combines several pictures into one picture at a certain ratio, enabling the model to identify targets within a smaller range. First, four pictures are randomly selected and, after resizing and scaling, placed at the upper-left, upper-right, lower-left and lower-right of a specified large picture according to reference point coordinates. Second, the picture labels are remapped according to each picture's size transformation. Finally, the large image is stitched according to the specified abscissa and ordinate, and detection-box coordinates that exceed the boundary are processed. The Mosaic data enhancement algorithm can effectively enhance data diversity and model robustness, and helps improve small-target detection performance.
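The four-picture composition described above can be sketched as follows; the canvas size, the crop/repeat resizing, and the omission of bounding-box label remapping are simplifications for illustration only:

```python
import random
import numpy as np

def mosaic(images, out_size=8, seed=0):
    """Place four images in the four quadrants around a random split point.
    Resizing is done with np.resize (crop/repeat) for brevity; a real pipeline
    rescales the images and remaps the bounding-box labels accordingly."""
    rng = random.Random(seed)
    cx = rng.randint(out_size // 4, 3 * out_size // 4)  # random reference point
    cy = rng.randint(out_size // 4, 3 * out_size // 4)
    canvas = np.zeros((out_size, out_size))
    regions = [  # top-left, top-right, bottom-left, bottom-right
        (slice(0, cy), slice(0, cx)),
        (slice(0, cy), slice(cx, out_size)),
        (slice(cy, out_size), slice(0, cx)),
        (slice(cy, out_size), slice(cx, out_size)),
    ]
    for img, (ry, rx) in zip(images, regions):
        h, w = ry.stop - ry.start, rx.stop - rx.start
        canvas[ry, rx] = np.resize(img, (h, w))  # fit each image to its quadrant
    return canvas

imgs = [np.full((4, 4), v) for v in (1.0, 2.0, 3.0, 4.0)]
m = mosaic(imgs)  # 8x8 canvas containing a patch of each source image
```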
During training, the Mosaic data enhancement strategy is continuously improved by optimizing and adjusting the hyper-parameters. First, setting img-size to 980×980 helps improve training results under ideal training-time and hardware conditions. Second, batch-size is set to the hardware maximum to avoid the statistical offset caused by a batch-size that is too small. Then, the number of epochs is adjusted by examining the loss curve after preliminary training, avoiding under-fitting caused by too few iterations and over-fitting caused by too many. Next, caching is enabled to speed up training and shorten training time. Finally, setting a random seed makes the training process repeatable and controllable.
Prediction output: the output of the network is a fixed-size grid, each grid cell predicting the bounding box, confidence score and class probability of the target; the method specifically comprises the following steps:
Coordinate conversion:
x = x_grid × stride + x_offset
y = y_grid × stride + y_offset
Wherein: x_grid, y_grid are the indices of the grid cell; stride is the step size of the cell; x_offset, y_offset are the predicted offsets;
The confidence Softmax is calculated as follows:
P_o = exp(S_o) / Σ_(u=1..b) exp(S_u)
Wherein: P_o is the probability of the o-th category; S_o is the confidence score of the o-th category; o = 1, 2, 3, …, b.
Filtering out the bounding boxes with low confidence according to the confidence threshold; this is realized by a non-maximum suppression algorithm, with the specific formula:
S_o = S_o, if IoU(M, b_v) < N_t;  S_o = 0, if IoU(M, b_v) ≥ N_t
Wherein: S_o is the confidence score; M is the bounding box with the highest score; b_v is the bounding box to be processed; IoU(M, b_v) is the intersection-over-union ratio of bounding boxes M and b_v; N_t is the set intersection-over-union threshold;
Then there is:
IoU(M, b_v) = area(M ∩ b_v) / area(M ∪ b_v)
Wherein: area(M ∩ b_v) is the intersection area of the two bounding boxes; area(M ∪ b_v) is the union area of the two bounding boxes.
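A minimal sketch of the greedy non-maximum suppression just described, assuming `[x1, y1, x2, y2]` boxes; the function names `iou` and `nms` are illustrative:

```python
import numpy as np

def iou(m, b):
    """Intersection-over-union of two [x1, y1, x2, y2] bounding boxes."""
    x1, y1 = max(m[0], b[0]), max(m[1], b[1])
    x2, y2 = min(m[2], b[2]), min(m[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_m = (m[2] - m[0]) * (m[3] - m[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_m + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, n_t=0.5):
    """Greedy NMS: repeatedly keep the highest-score box M and suppress any
    remaining box b_v with IoU(M, b_v) >= N_t (its score is zeroed out)."""
    order = np.argsort(scores)[::-1]   # indices sorted by descending score
    keep, suppressed = [], set()
    for i in order:
        if i in suppressed:
            continue
        keep.append(int(i))
        for j in order:
            if j != i and j not in suppressed and iou(boxes[i], boxes[j]) >= n_t:
                suppressed.add(int(j))
    return keep
```

This keeps exactly one box per cluster of overlapping detections, avoiding multiple detections of the same spare part.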
Post-processing unit: processing the output warehouse-in and warehouse-out information of the spare parts into a format which can be identified by the log management module, and outputting the format to the log management module;
And the log management module is used for: recording a detailed log of system operation, including user operation, system events and processing results;
And a user interaction module: used for querying the warehouse-in and warehouse-out history of spare parts, real-time images and result information; selecting among different image acquisition devices; and displaying the current system state, receiving user input, and configuring system parameters.
The user interaction module is the front-end part of the system, responsible for interacting with the user and providing an operation interface. Its main functions include displaying the current state of the system, accepting user input, configuring system parameters, etc. The specific functions are as follows:
1. Display simulation switching: allows the user to switch between different simulated views, helping the user view, understand and use the various functional modules of the system and their status.
2. Building a webpage: the web page interface for creating and managing the system comprises layout design, content display and the like, and provides an intuitive and easy-to-operate interface for the user, so that the user can conveniently access and use the system functions.
3. A configuration panel: providing the function of user to adjust system parameters and settings. The function helps a user to configure the working mode and parameters of the system according to actual requirements, and is suitable for different application scenes and operation requirements.
4. Input source selection: allowing the user to select and specify the source of the data input, e.g. to select different image acquisition devices or data streams. The function supports the management and switching of multiple input sources, ensures that the system can acquire information from different data sources and perform corresponding processing and analysis.
5. Historical or real-time data feedback and export: the user can compare historical and real-time output results with the corresponding images through the user interaction module, discover error information in time, and feed corrections back to the reinforcement learning model through the user interaction module. The log management module data can also be exported according to user requirements.
Embodiment two:
The embodiment provides a spare part management method based on target identification and change detection, which comprises the following steps:
Step S1: collecting images or videos of the spare parts entering and exiting the warehouse through a camera arranged in the warehouse, and transmitting the image information to the image processing component through a communication interface;
Step S2: preprocessing an input image or video, including denoising, normalization and enhancement operations;
Step S3: according to the input preprocessed image or video, confirming the warehouse-in and warehouse-out information of spare parts through an image processing algorithm and a reinforcement learning model;
Step S4: processing the output warehouse-in and warehouse-out information of the spare parts into a format which can be identified by the log management module, and outputting it to the log management module;
Step S5: the log management module records a detailed log of system operation, including user operation, system events and processing results; providing operation record and problem tracking functions, and helping system maintenance and fault investigation;
Step S6: the user interaction module can query the warehouse-in and warehouse-out history of spare parts, real-time images and result information; different image acquisition devices can be selected; the current state can be displayed, user input accepted, and system parameters configured.
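As an illustrative sketch only, steps S1-S6 can be organized as a simple pipeline. The class and function names (`SparePartPipeline`, `SparePartLog`, `detect`) are hypothetical, and `detect` stands in for the image processing algorithm and reinforcement learning model of step S3:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SparePartLog:
    """Log entry in the format the log management module ingests (step S4)."""
    timestamp: str
    part_id: str
    direction: str          # "in" (warehouse-in) or "out" (warehouse-out)
    confidence: float

class SparePartPipeline:
    """End-to-end flow of steps S1-S6 under the assumptions stated above."""
    def __init__(self, detect):
        self.detect = detect            # step S3: detection model callback
        self.logs = []                  # step S5: detailed operation log

    def preprocess(self, frame):        # step S2: denoise/normalize (stub)
        return [p / 255.0 for p in frame]

    def process_frame(self, frame):     # steps S2-S5 for one captured frame
        result = self.detect(self.preprocess(frame))
        entry = SparePartLog(datetime.now().isoformat(),
                             result["part_id"], result["direction"],
                             result["confidence"])
        self.logs.append(entry)
        return entry

    def query_history(self, part_id):   # step S6: user interaction query
        return [e for e in self.logs if e.part_id == part_id]
```

A real system would replace the preprocessing stub with the denoising, normalization and enhancement operations of step S2 and persist the logs.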
Embodiment III:
The present embodiment provides an electronic device on which a computer program is stored, and the computer program, when executed by a processor, implements the spare part management method based on target identification and change detection according to any one of the embodiments of the present invention.
Embodiment four:
The present embodiment provides a computer-readable medium storing one or more programs which, when executed by one or more processors, cause the one or more processors to implement the spare part management method based on target identification and change detection according to any one of the embodiments of the present invention.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relation between associated objects and indicates that three kinds of relations may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, wherein A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of the following" and similar expressions mean any combination of these items, including any combination of single or plural items. For example, at least one of a, b and c may represent: a; b; c; a and b; a and c; b and c; or a, b and c, wherein a, b and c may each be single or multiple.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, any of the functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium that can store program codes.
The foregoing description is only illustrative of the present invention and is not intended to limit the scope of the invention, and all equivalent structures or equivalent processes or direct or indirect application in other related technical fields are included in the scope of the present invention.

Claims (10)

1. A spare part management system based on target identification and change detection, comprising:
Spare part detection module contains image acquisition unit, image preprocessing unit, target detection unit, post-processing unit, wherein:
An image acquisition unit: acquiring images or videos of spare parts entering and exiting the warehouse through an image acquisition device installed in the warehouse, and transmitting the image or video information to the image preprocessing unit through a communication interface;
an image preprocessing unit: preprocessing input image or video information, including denoising, normalization and enhancement operations;
Target detection unit: confirming warehouse-in and warehouse-out information of spare parts through an image processing algorithm and a reinforcement learning model according to the input preprocessed image or video information;
Post-processing unit: processing the output warehouse-in and warehouse-out information of the spare parts into a format which can be identified by the log management module, and outputting the format to the log management module;
And the log management module is used for: recording a detailed log of system operation, including user operation, system events and processing results;
And a user interaction module: used for querying the warehouse-in and warehouse-out history of spare parts, real-time images and result information; selecting among different image acquisition devices; and displaying the current system state, receiving user input, and configuring system parameters.
2. The spare part management system based on target identification and change detection as set forth in claim 1, wherein the image preprocessing unit preprocesses the input image or video information, specifically comprising:
the specific denoising process comprises the following steps:
Mean filtering: for each pixel point in the image, the new pixel value is the average of the neighborhood pixel values; for a neighborhood of size m×n, the new value I_new(x, y) of pixel (x, y) is:
I_new(x, y) = (1/(m×n)) · Σ_(i,j) I(i, j)
Wherein the sum runs over the m×n neighborhood of (x, y), and I(i, j) represents the original pixel value at pixel point (i, j);
Median filtering: sorting the pixel values in the neighborhood and taking the median as the new pixel value;
The specific formula of the normalization flow is as follows:
I_normalized(x, y) = (I(x, y) − I_min) / (I_max − I_min)
Wherein I(x, y) is the original pixel value, and I_min and I_max are respectively the minimum and maximum pixel values in the image;
the specific formula of the enhancement flow is as follows:
Brightness adjustment: I_brightened(x, y) = I(x, y) + b; wherein b is the brightness adjustment value;
Contrast adjustment: I_contrasted(x, y) = a × I(x, y) + b; wherein a controls the contrast and b controls the brightness.
3. The spare part management system based on target identification and change detection as set forth in claim 1, wherein the target detection unit confirms the warehouse-in and warehouse-out information of spare parts through an image processing algorithm and a reinforcement learning model, comprising the following steps:
Network forward propagation: processing the preprocessed image through a multi-layer convolutional neural network to extract multi-layer features; extracting basic features of the image by using a Backbone deep neural network, capturing multi-scale information by using a pyramid pooling module PPM, integrating a feature pyramid network FPN to fuse feature maps of different layers, and combining an adaptive spatial feature fusion network ASFF to improve the feature fusion effect; finally, performing target detection at multiple scales by using a Multi-scale Detection Head to adapt to targets of different sizes; in the training process, optimizing and improving the Mosaic data enhancement strategy to enhance the generalization capability of the model, and dynamically adjusting the learning rate to improve training efficiency and model performance;
prediction output: the output of the network is a fixed-size grid, each grid cell predicting the bounding box, confidence score and class probability of the target;
decoding output: decoding the prediction result output by the network into an actual target boundary box and category; converting the relative coordinates into actual coordinates of the image, filtering out boundary frames with low confidence coefficient according to a confidence coefficient threshold value, removing repeatedly overlapped boundary frames, and reserving boundary frames with highest confidence coefficient so as to avoid multiple detection of the same target;
Post-treatment: carrying out subsequent processing on the final target detection result; drawing the detected target boundary box and class label on the original image, and outputting the detection result to a log management module for storage.
4. The spare part management system based on target identification and change detection as set forth in claim 3, wherein the network forward propagation flow specifically comprises:
The specific formula of the convolution operation is as follows:
I_out(x, y) = Σ_(i=−k..k) Σ_(j=−k..k) I_in(x + i, y + j) · K(i, j)
Wherein: I_in and I_out are respectively the input and output images, K(i, j) is the convolution kernel, and k depends on the convolution kernel size;
The specific formula of the pyramid pooling module PPM is as follows:
F_pooled(d) = Pool_(s×s)(F)
Wherein: F_pooled(d) is the feature map obtained under pooling scale d, and s is the pooling window size;
The feature map F after feature pyramid network FPN fusion is expressed as:
F=H+L
Wherein: h is a high-level feature map, L is an up-sampled bottom-level feature map;
The specific formula of the adaptive spatial feature fusion network ASFF is as follows:
F_ASFF = w_1·F_1 + w_2·F_2 + … + w_c·F_c
Wherein F_c is the c-th fused feature map, and w_c is the adaptive weight obtained by the corresponding network learning;
The specific formula of the Multi-scale Detection Head is as follows:
x = x_a + Δx, y = y_a + Δy, w = w_a + Δw, h = h_a + Δh
Wherein: x_a, y_a, w_a, h_a are the original anchor frame coordinates, and Δx, Δy, Δw, Δh are respectively the predicted bounding box offsets.
5. The spare part management system based on target identification and change detection as set forth in claim 3, wherein the prediction output flow specifically comprises:
Coordinate conversion:
x = x_grid × stride + x_offset
y = y_grid × stride + y_offset
Wherein: x_grid, y_grid are the indices of the grid cell; stride is the step size of the cell; x_offset, y_offset are the predicted offsets;
The confidence Softmax is calculated as follows:
P_o = exp(S_o) / Σ_(u=1..b) exp(S_u)
Wherein: P_o is the probability of the o-th category; S_o is the confidence score of the o-th category; o = 1, 2, 3, …, b.
6. The spare part management system based on target identification and change detection as set forth in claim 3, wherein in the decoding output flow, the bounding boxes with low confidence are filtered out according to the confidence threshold; this is realized by a non-maximum suppression algorithm, with the specific formula:
S_o = S_o, if IoU(M, b_v) < N_t;  S_o = 0, if IoU(M, b_v) ≥ N_t
Wherein: S_o is the confidence score; M is the bounding box with the highest score; b_v is the bounding box to be processed; IoU(M, b_v) is the intersection-over-union ratio of bounding boxes M and b_v; N_t is the set intersection-over-union threshold;
Then there is:
IoU(M, b_v) = area(M ∩ b_v) / area(M ∪ b_v)
Wherein: area(M ∩ b_v) is the intersection area of the two bounding boxes; area(M ∪ b_v) is the union area of the two bounding boxes.
7. The spare part management system based on target identification and change detection as set forth in claim 1, wherein the user interaction module can query the warehouse-in and warehouse-out history of spare parts together with real-time images and result information; error information can be discovered in time by manually comparing the output results with the images, and fed back to the reinforcement learning model through the user interaction module.
8. The spare part management method based on target identification and change detection is characterized by comprising the following steps of:
Step S1: collecting images or videos of the spare parts entering and exiting the warehouse through a camera arranged in the warehouse, and transmitting the image information to the image processing component through a communication interface;
Step S2: preprocessing an input image or video, including denoising, normalization and enhancement operations;
Step S3: according to the input preprocessed image or video, confirming the warehouse-in and warehouse-out information of spare parts through an image processing algorithm and a reinforcement learning model;
Step S4: processing the output warehouse-in and warehouse-out information of the spare parts into a format which can be identified by the log management module, and outputting it to the log management module;
Step S5: the log management module records a detailed log of system operation, including user operation, system events and processing results; providing operation record and problem tracking functions, and helping system maintenance and fault investigation;
Step S6: the user interaction module can query the warehouse-in and warehouse-out history of spare parts, real-time images and result information; different image acquisition devices can be selected; the current state can be displayed, user input accepted, and system parameters configured.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the spare part management method based on target identification and change detection as recited in claim 8.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the spare part management method based on target identification and change detection as claimed in claim 8.
CN202411025078.9A 2024-07-29 2024-07-29 Spare part management method and system based on target identification and change detection Pending CN119005857A (en)

Publications (1)

Publication Number: CN119005857A, Publication Date: 2024-11-22
Family ID: 93468307



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination