
CN118053218B - Method, device and system for detecting computer board card - Google Patents


Info

Publication number
CN118053218B
Authority
CN
China
Prior art keywords
data
feature
computer board
analysis
physical structure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311513544.3A
Other languages
Chinese (zh)
Other versions
CN118053218A (en)
Inventor
唐筱毓
陈小旺
李亚平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yichenwei Technology Co ltd
Original Assignee
Shenzhen Yichenwei Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yichenwei Technology Co ltd
Priority to CN202311513544.3A
Publication of CN118053218A
Application granted
Publication of CN118053218B
Legal status: Active
Anticipated expiration

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/95Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/22Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/2257Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing using expert systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/22Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/2263Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/22Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/2268Logging of test results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/22Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/26Functional testing
    • G06F11/261Functional testing by simulating additional hardware, e.g. fault simulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/22Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/26Functional testing
    • G06F11/263Generation of test inputs, e.g. test vectors, patterns or sequences ; with adaptation of the tested hardware for testability with external testers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2431Multiple classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • G06N5/045Explanation of inference; Explainable artificial intelligence [XAI]; Interpretable artificial intelligence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/02Computing arrangements based on specific mathematical models using fuzzy logic
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C3/00Registering or indicating the condition or the working of machines or other apparatus, other than vehicles
    • G07C3/14Quality control systems
    • G07C3/143Finished product quality control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2123/00Data types
    • G06F2123/02Data types in the time domain, e.g. time-series data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Automation & Control Theory (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Algebra (AREA)
  • Mathematical Optimization (AREA)
  • Fuzzy Systems (AREA)
  • Pure & Applied Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)

Abstract


The present invention discloses a method, device and system for detecting computer boards. The method can simulate and test the working state of computer boards in different operating environments by integrating software testing with micro-nano three-dimensional scanning technology, and obtain functional behavior data and physical structure data of the boards. This detection method includes using an integrated algorithm to perform a fusion analysis of functional behavior data and physical structure data to identify defects that cannot be detected by a single test method. When defects are detected, the method can not only identify and remove defective products, but also adaptively adjust test cases and scanning parameters, further ensuring that the detection process can cover previously undiscovered defect types, and adapt to changes in the production line and updates to board designs.

Description

Method, device and system for detecting computer board card
Technical Field
The present application relates to the field of computer board card detection technologies, and in particular, to a method, an apparatus, and a system for detecting a computer board card.
Background
In modern electronics manufacturing, ensuring the quality and reliability of computer boards is critical. As technology advances, the complexity of computer boards keeps increasing, which creates a continuing need for innovation in detection methods. Conventional inspection methods, such as visual inspection or function-specific testing, often fail to capture all potential quality problems, especially microscopic defects that are not readily found under normal operating conditions. Furthermore, a single test method may lack the flexibility to accommodate changes in the manufacturing process or to respond to newly introduced board designs.
Therefore, there is a strong need for a test method that can fully evaluate the performance of a computer board under a variety of operating conditions.
Disclosure of Invention
The application provides a method, a device and a system for detecting a computer board card, which are used for improving the accuracy of identifying defects of the computer board card.
The application provides a method for detecting a computer board card, which comprises the following steps:
running a software test case, and simulating the working state of the computer board card under various operating environments including temperature, current and data transmission to acquire functional behavior data of the computer board card;
Scanning the surface and the internal structure of the computer board card by adopting a micro-nano three-dimensional scanner while running a software test to collect physical structure data of the computer board card, wherein the physical structure data is used for identifying physical defects of the computer board card on a microscopic scale;
Performing fusion analysis processing on the functional behavior data and the physical structure data by using an integrated algorithm, wherein the fusion analysis processing is based on the fact that the final performance of the computer board card is affected by the functional state and the physical state together, so that defects which cannot be detected by a single method are identified;
judging whether the computer board card has defects according to the processing result of the fusion analysis processing;
If the computer board card has defects, the board is judged to be a defective product and removed from the production line, and the software test cases and the scanning parameters of the micro-nano three-dimensional scanner are adaptively adjusted; the adaptive adjustment is based on a detailed analysis of the defective products, so that previously under-covered defect types of the computer board are diagnosed and incorporated into the detection flow, and so that the flow adapts to changes in production line conditions or to newly introduced board designs.
Still further, the method for detecting a computer board card further includes:
And storing detection data in the detection process, wherein the detection data comprises functional behavior data, physical structure data and processing results of fusion analysis processing, and continuously optimizing software test cases and three-dimensional scanning parameters by utilizing the detection data so as to realize self calibration and self optimization of a detection system, and enhance the adaptability of the system to various quality differences of a computer board card and the responsiveness to changes in the production process.
Still further, the performing fusion analysis processing on the functional behavior data and the physical structure data using an integration algorithm includes:
converting the functional behavior data and the physical structure data into standardized feature vectors by utilizing multidimensional scale analysis and principal component analysis;
Using the standardized feature vectors, identifying and extracting potential associations between functional and physical features with a deep-learning-based feature learning model that learns the internal structure of the data, so as to generate a comprehensive feature representation for each computer board card;
Inputting the integrated feature representation into a fuzzy logic-based integrated learning framework, the integrated learning framework intelligently matching the integrated feature representation with a set of predefined defect patterns, thereby identifying potential defects that cannot be detected by a single test;
Analyzing the cause of the potential defect with an interpretation algorithm based on causal reasoning, and converting complex data and patterns into visual reports and charts, so that technicians have a clear visual and theoretical basis for understanding the detection result and for taking further decisions and actions.
Still further, the converting the functional behavior data and the physical structure data into normalized feature vectors using multidimensional scaling and principal component analysis includes:
Respectively carrying out data normalization processing on the functional behavior data and the physical structure data, calculating the deviation of each data point in each data set relative to the data set mean value of the data points, and carrying out standardization processing on the original data points by utilizing the deviation, thereby forming two independent and standardized functional behavior data sets and physical structure data sets;
Respectively applying multidimensional scale analysis to the normalized functional behavior data set and the physical structure data set, and generating corresponding low-dimensional space coordinates for each data set by constructing a distance matrix among data points and executing a nonlinear dimension reduction technology;
Integrating low-dimensional space coordinates corresponding to the functional behavior data set and the physical structure data set respectively to generate a joint data matrix, and calculating a covariance matrix on the joint data matrix;
performing principal component analysis by using the covariance matrix to determine a main variation direction of the combined data set, and extracting a feature vector corresponding to the maximum feature value to form a comprehensive principal component feature vector set;
and carrying out scale adjustment processing on the comprehensive principal component feature vector set to enable all dimension features to have the same magnitude so as to obtain a standardized feature vector.
Still further, the feature learning model includes:
An input layer for receiving a normalized feature vector;
The two-channel feature processing layer comprises two parallel sub-networks, wherein the first sub-network is configured with a depth network structure optimized for time sequence data features, the second sub-network is configured with a depth network structure optimized for space data features, and each sub-network respectively extracts the time sequence features and the space features in the fusion feature vector and outputs two independent feature representations;
The association learning layer is configured with a self-defined attention mechanism and is used for integrating the output of the two-channel feature processing layer and generating a feature representation by learning potential association between a time sequence feature representation and a space feature representation;
the depth feature fusion layer is used for receiving the output of the association learning layer, and further integrating and refining the comprehensive feature representation through a series of full-connection layers to form a comprehensive feature representation for representing the state of the computer board card;
And the output layer is configured with a full connection layer and a Softmax activation function and is used for converting the comprehensive characteristic representation of the depth characteristic fusion layer into probability distribution or classification labels for describing the possible states of the computer board card.
Further, the loss function $L$ of the feature learning model combines a cross-entropy term $L_{CE}$ with a weighted term $L_{WHD}$ for hard-to-detect defects, where $C$ is the total number of categories of the output layer, $y_m$ is the one-hot encoding vector of the real label, $\hat{y}_m$ is the corresponding model output, and $m$ is the ordinal number of the sample;
where $L_{CE}$ is the cross entropy loss:
$$L_{CE} = -\sum_{m}\sum_{n=1}^{C} y_{mn}\,\log \hat{y}_{mn}$$
where $y_{mn}$ represents the value of the $n$-th element in the true label vector $y_m$ and $\hat{y}_{mn}$ represents the value of the $n$-th element of the model output vector $\hat{y}_m$;
$L_{WHD}$ is a weighted loss term for hard-to-detect defects, where $y_{mC}$ is the element of the true label vector corresponding to the hard-to-detect defect class, $\hat{y}_{mC}$ is the corresponding element of the prediction vector, and $\alpha$ and $\beta$ are hyperparameters that balance the two parts of the loss.
Still further, the custom attention mechanism includes:
Evaluating the correlation between the time-series features and the spatial features by using a variational autoencoder, and determining the joint contribution of the time-series features and the spatial features to defect detection;
assigning a weight to each feature based on the evaluation result of the variational autoencoder;
performing feature-weighted fusion, merging the time-series features and the spatial features according to the assigned weights to generate a feature representation for defect detection.
Furthermore, the variational autoencoder comprises a regularization term that penalizes, based on Kullback-Leibler divergence, the deviation of the latent-space distribution of the variational autoencoder, so that the generated feature representation not only accurately reflects the characteristics of the original data but also has good generalization capability.
The application provides a detection device of a computer board card, which comprises:
The test unit is used for running a software test case and simulating the working state of the computer board card under various operating environments including temperature, current and data transmission so as to acquire the functional behavior data of the computer board card;
The scanning unit is used for scanning the surface and the internal structure of the computer board card by adopting a micro-nano three-dimensional scanner while running the software test so as to collect physical structure data of the computer board card, wherein the physical structure data is used for identifying physical defects of the computer board card on a microscopic scale;
The analysis unit is used for carrying out fusion analysis processing on the functional behavior data and the physical structure data by using an integrated algorithm, wherein the fusion analysis processing is based on the fact that the final performance of the computer board card is affected by the functional state and the physical state together, so that defects which cannot be detected by a single method are identified;
the judging unit is used for judging whether the computer board card has defects according to the processing result of the fusion analysis processing;
The adjusting unit is used for judging the computer board card as a defective product if it has defects, removing the defective board from the production line, and adaptively adjusting the software test cases and the scanning parameters of the micro-nano three-dimensional scanner; the adaptive adjustment is based on a detailed analysis of the defective products, so that previously under-covered defect types are diagnosed and incorporated into the detection flow, and so that the flow adapts to changes in production line conditions or to newly introduced board designs.
The application provides a detection system of a computer board card, which comprises:
The data acquisition module comprises a central processing unit and a storage unit, wherein the storage unit is preloaded with operation instructions and is used for simulating the working states of the computer board card under various environmental conditions and generating corresponding functional behavior data;
the micro-nano three-dimensional imaging equipment is synchronously operated with the data acquisition module and is used for scanning the surface and internal structure of the computer board card and collecting physical structure data, wherein the data is used for identifying micro-scale physical defects of the computer board card;
The analysis processing unit is provided with data processing hardware and software resources, receives data from the data acquisition module and the three-dimensional imaging equipment, and performs fusion analysis by utilizing an integrated algorithm so as to identify potential defects;
The defect detection module is connected with the analysis processing unit and is used for evaluating the fusion analysis result and determining whether the computer board card meets the quality standard or not;
The user interface terminal provides an interactive interface for technical operators, and is used for displaying detection results, fusing analysis feedback and real-time system states and receiving adjustment commands input by users;
The remote monitoring unit includes at least one network interface for remotely transmitting the detection data, the system status and the alarm information to the central monitoring station or the mobile terminal.
The technical scheme provided by the application has the following beneficial technical effects:
(1) By integrating software testing with micro-nano three-dimensional scanning technology, comprehensive detection of both the functional behavior and the physical structure of the computer board card is realized. This dual detection mechanism greatly improves the accuracy and reliability of identifying various defects, in particular microscopic-scale defects that are not easily found by traditional methods.
(2) The integrated algorithm is used for fusion analysis processing, and the common influence of the functionality and the physical data can be comprehensively considered, so that the potential defects which cannot be detected by a single testing method are revealed. The omnibearing detection mode ensures that the defect identification is more accurate, reduces the risks of missed detection and false detection, and improves the quality control level of the whole production line.
(3) Through detailed analysis of defective products, the method can adaptively adjust software test cases and scanner parameters, and the flexibility and adaptability of the detection system are enhanced. The dynamic adjustment mechanism enables the detection system to be quickly adapted to the change of the production line condition and the introduction of a new board card design, and ensures the continuous optimization and improvement of the detection flow.
(4) The production efficiency and the economic benefit are improved, reworking and waste generation caused by defects are reduced, and meanwhile, clear diagnosis and decision basis are provided for technicians, so that the maintenance efficiency is improved, and the maintenance cost is reduced.
Drawings
Fig. 1 is a flowchart of a method for detecting a computer board card according to a first embodiment of the present application.
Fig. 2 is a schematic diagram of a detecting device for a computer board card according to a second embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The present application may be embodied in many other forms than those herein described, and those skilled in the art will readily appreciate that the present application may be similarly embodied without departing from the spirit or essential characteristics thereof, and therefore the present application is not limited to the specific embodiments disclosed below.
The first embodiment of the application provides a method for detecting a computer board card. Referring to fig. 1, a schematic diagram of a first embodiment of the present application is shown. The following describes a method for detecting a computer board card according to a first embodiment of the present application in detail with reference to fig. 1.
Step S101: and running a software test case, and simulating the working states of the computer board card under various operating environments including temperature, current and data transmission to acquire the functional behavior data of the computer board card.
When running software test cases to simulate the working state of a computer board card under different operating environments, critical test cases can be designed to cover multiple aspects to ensure the functionality and performance of the overall test board card. The following are some exemplary software test cases:
Temperature limit test: and regulating temperature control equipment connected to the computer board card by using software, setting different temperature points, and simulating a working environment from low temperature to high temperature. For each temperature point, the software test case records the response and performance of the board card and detects whether the board card can maintain a normal running state.
Voltage and current ripple test: the power supply voltage and current of the computer board card are changed by using software, and the working capacity of the computer board card under different power supply conditions and the robustness of a power supply circuit are tested.
Calculating a pressure test: high-intensity computing tasks, such as complex mathematical operations or graphics processing, are run to evaluate the performance limits of the processor and GPU.
Memory access and bandwidth testing: the speed and efficiency of the memory and its performance under high load are tested by a large number of data read and write operations.
Data transmission and interface testing: the data throughput and the transmission stability of the large amount of data are evaluated by simulating the transmission of the large amount of data through various interfaces (such as USB, SATA, PCIe) of the board card.
Long-term running stability test: the reliability and error rate of the computer board card are tested under the condition of continuous long-time operation.
Power management and energy saving performance test: and evaluating the performance and conversion efficiency of the power management system of the computer board card under different energy consumption modes.
Start and restart testing: the computer board is repeatedly started and restarted to test the reliability of its start-up logic and hardware initialization process.
Peripheral compatibility test: various peripherals such as a storage device, an input device, and the like are connected, and compatibility of the board card and peripheral management capability are evaluated.
Software and drive compatibility test: different operating systems and drivers are installed and run, and software compatibility and system integration capability are tested.
Fault injection test: faults, such as interrupting the data flow or generating erroneous electrical signals, are artificially injected on the software or hardware to test the error handling and recovery capabilities of the board.
And (3) environmental factor test: taking actual use environment into consideration, the computer board card is tested for environmental factors such as vibration, humidity, static electricity and the like.
These test cases can cover some important aspects that computer boards may encounter in their intended use, ensuring that they function properly under a variety of conditions. Each test case should yield detailed log and result data that can be used to analyze the functionality and performance of the board. When the test case is designed, the specific application scene of the board card and the specific requirements of the user are also required to be considered so as to ensure the practicability and the relevance of the test result.
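To make the organization of such test cases concrete, the following is a minimal Python sketch of a hypothetical test-case registry and runner; the case names, environment fields, metrics, and the idea of driving lab equipment from the `run` callable are illustrative assumptions rather than part of the claimed method.

```python
import time
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class TestCase:
    name: str
    environment: Dict[str, float]          # e.g. {"temperature_c": 85.0, "voltage_v": 12.6}
    run: Callable[[], Dict[str, float]]    # returns measured functional metrics

@dataclass
class TestReport:
    name: str
    environment: Dict[str, float]
    metrics: Dict[str, float]
    timestamp: float = field(default_factory=time.time)

def run_suite(cases: List[TestCase]) -> List[TestReport]:
    """Run every registered case and collect functional behavior data."""
    reports = []
    for case in cases:
        # On a real rig this would command temperature chambers, programmable
        # power supplies, traffic generators, etc. before running the case.
        metrics = case.run()
        reports.append(TestReport(case.name, case.environment, metrics))
    return reports

# Example: a trivial stand-in for a temperature limit test.
cases = [
    TestCase(
        name="temperature_limit_high",
        environment={"temperature_c": 85.0},
        run=lambda: {"cpu_utilization": 0.93, "core_temp_c": 88.2, "errors": 0},
    )
]
print(run_suite(cases))
```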
Functional behavior data refers to data collected during execution of a software test case, which reflects the performance and response of a computer board under a particular simulated operating environment. Functional behavior data typically includes, but is not limited to, the following:
processor performance data: such as CPU utilization, core temperature, number of instructions executed, processing speed, etc.
Memory usage data: including memory occupancy, read-write speed, access latency, etc.
Power usage data: such as power consumption per unit time of the power supply, stability of current and voltage, performance variation under load, etc.
Data transmission and network performance metrics: including data throughput, transmission rate, error rate, stability of the connection, etc.
Temperature response data: the working condition of the board card under different temperature conditions and the influence of temperature change on performance.
The hardware interface responds: the performance of the various interfaces (e.g., USB, HDMI, PCIe, etc.) under high load or extreme conditions.
Software compatibility record: compatibility between the board card and the running operating system, drivers and other software.
Error log and system stability: any errors, system crashes, unexpected restarts, or other abnormal behavior that occurred during the test are recorded.
The purpose of the functional behavior data is to evaluate the performance and stability of the computer board from a software level. By analysis of this data, possible functional defects of the computer board card can be identified, problems that may be encountered in actual use can be predicted, and places where improvements in the design and manufacturing process may be desired can be determined.
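As one possible way to structure a single functional-behavior record of the kinds listed above for later fusion analysis, here is a small Python sketch; the field names and units are illustrative assumptions, not a schema defined by the embodiment.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class FunctionalBehaviorSample:
    """One time-stamped functional measurement taken while a test case runs."""
    timestamp: float
    test_case: str
    cpu_utilization: float        # 0.0 - 1.0
    core_temp_c: float
    memory_occupancy: float       # 0.0 - 1.0
    read_write_mb_s: float
    supply_voltage_v: float
    supply_current_a: float
    data_throughput_mb_s: float
    error_count: int
    note: Optional[str] = None    # e.g. "unexpected restart"

sample = FunctionalBehaviorSample(
    timestamp=0.0, test_case="temperature_limit_high",
    cpu_utilization=0.93, core_temp_c=88.2, memory_occupancy=0.71,
    read_write_mb_s=850.0, supply_voltage_v=12.1, supply_current_a=3.4,
    data_throughput_mb_s=480.0, error_count=0,
)
print(asdict(sample))
```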
Step S102: and scanning the surface and the internal structure of the computer board card by adopting a micro-nano three-dimensional scanner while running the software test so as to collect physical structure data of the computer board card, wherein the physical structure data is used for identifying physical defects of the computer board card on a microscopic scale.
Step S102 includes using a high-resolution micro-nano three-dimensional scanner to comprehensively scan the surface and internal structures of the computer board card. This step includes:
Suitable micro-nano three-dimensional scanning techniques, such as laser scanning, optical interference imaging or X-ray computed tomography, are selected to obtain high resolution images of the computer board microstructure. These techniques are capable of revealing surface and internal details with nanometer-scale resolution, capturing minute defects that may be missed by conventional inspection.
The three-dimensional scanning needs to be run synchronously with the software test case in step S101, so that the physical state of the computer board card can be captured immediately when the computer board card runs various operations. This synchronization is to capture physical defects that may only manifest themselves under certain operating conditions.
Physical structure data is collected about multiple levels of the board card by high precision scanning. The physical structure data of the computer board refers to the data about the detailed size, shape, material and mutual position relationship of each component of the board obtained by the micro-nano three-dimensional scanner. Such data reflects the physical characteristics of the board on a microscopic scale, including but not limited to the following:
Appearance data: detailed images and measurement data of the physical dimensions, edge flatness, surface roughness and appearance defects (such as scratches, depressions or protrusions) of the board.
Interlayer structure data: the exact distance between the layers of the multilayer PCB (Printed Circuit Board), the integrity of the interlayer connection, any case of an interlayer short or open.
Solder joint and connector data: the shape, size, distribution, and possible soldering defects (e.g., cold solder joints, excess solder, or solder balls) of the solder joints; alignment of connectors and integrity of contact surfaces.
Material defect data: uniformity of material, defects (e.g., microcracks, pores, impurities) and composition distribution.
Component mounting data: positional accuracy of various electronic components (e.g., chips, resistors, capacitors) mounted on the board, the degree of fit of the components, and the quality of soldering.
Wire and trace data: wire width, wire pitch, wire integrity, and any etch defects or wire breaks.
Mechanical stress data: the board may be subjected to mechanical stresses or deformation, particularly in the vicinity of the connector or the fastening point.
Thermal stress data: structural changes due to thermal expansion or thermal stress concentration areas that may lead to failure.
Physical structure data is obtained by a three-dimensional scanner with micro-nano precision, which may employ a variety of different imaging techniques, such as laser confocal scanning, electron microscopy imaging, X-ray tomography imaging, etc., to provide high resolution three-dimensional images. These images can then be converted to digital models and used for detailed analysis and computation, providing a basis for detecting and assessing the quality of computer boards. The physical structure data provides critical information for the discovery of physical defects on the microscopic scale that may occur during the board design and manufacturing process.
Using the three-dimensional scan data, various types of physical defects, such as cracks, delamination, solder joint failure, wire breakage, lamination failure, chip package defects, microscopic foreign objects or bubbles, and the like, can be identified. This identification is based on a detailed analysis of the collected three-dimensional images.
Raw data collected by a three-dimensional scanner typically requires post-processing by specialized software to generate an image or model that is easy to analyze. Post-processing steps may include data filtering, denoising, image reconstruction, edge detection, and defect labeling.
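A minimal sketch of such a post-processing chain is shown below, assuming the scan has already been converted into a two-dimensional height map stored as a NumPy array; the filter choices, the robust-deviation threshold, and the synthetic example are illustrative assumptions, not the specific pipeline of the embodiment.

```python
import numpy as np
from scipy import ndimage

def label_surface_defects(height_map: np.ndarray,
                          sigma: float = 2.0,
                          deviation_threshold: float = 3.0):
    """Denoise a scanned height map and label regions that deviate strongly
    from the local surface, as candidate defects for later analysis."""
    # Denoising: suppress high-frequency scanner noise.
    smooth = ndimage.gaussian_filter(height_map, sigma=sigma)
    # Deviation of each point from the smoothed surface (residual).
    residual = height_map - smooth
    # Flag points whose residual exceeds a multiple of the robust spread.
    spread = np.median(np.abs(residual - np.median(residual))) + 1e-9
    mask = np.abs(residual) > deviation_threshold * spread
    # Defect labeling: group flagged points into connected regions.
    labels, n_regions = ndimage.label(mask)
    return labels, n_regions

# Example on synthetic data: a flat surface with one small pit plus noise.
surface = np.zeros((128, 128))
surface[60:64, 60:64] = -0.5                     # simulated defect
surface += 0.01 * np.random.randn(128, 128)      # scanner noise
labels, n = label_surface_defects(surface)
print("candidate defect regions:", n)
```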
The whole scanning process can be automatic, so that manual intervention is reduced, and operation errors are reduced. At the same time, the operator should have an intuitive user interface to monitor the scanning process, view the real-time scanned image, or receive a preliminary analysis of the scan results.
Identifying physical defects is critical to understanding the possible performance degradation or risk of failure of a computer board in actual use, especially when the board is subjected to extreme operating conditions. In some cases, functional problems not found in software testing may originate from minor imperfections at the physical level, and three-dimensional scanning can provide an explanation for such problems.
All scanning processes and results should be recorded and consistent with industry standards to ensure data comparability and reliability.
Scanning may need to be performed under specific environmental conditions, such as temperature, humidity and vibration control, to ensure that the data quality is not affected by external factors.
Throughout step S102, the synchronicity, fineness of the scan, and synergy between the scan data and the software test data are emphasized. This step is critical to determining the overall quality and reliability of the board and provides the underlying data for the next fusion analysis.
Step S103: and performing fusion analysis processing on the functional behavior data and the physical structure data by using an integrated algorithm, wherein the fusion analysis processing is based on the fact that the final performance of the computer board card is affected by the functional state and the physical state together, so that the defect which cannot be detected by a single method is identified.
Still further, the performing fusion analysis processing on the functional behavior data and the physical structure data using an integration algorithm includes:
converting the functional behavior data and the physical structure data into standardized feature vectors by utilizing multidimensional scale analysis and principal component analysis;
Using the standardized feature vectors, identifying and extracting potential associations between functional and physical features with a deep-learning-based feature learning model that learns the internal structure of the data, so as to generate a comprehensive feature representation for each computer board card;
Inputting the integrated feature representation into a fuzzy logic-based integrated learning framework, the integrated learning framework intelligently matching the integrated feature representation with a set of predefined defect patterns, thereby identifying potential defects that cannot be detected by a single test;
Analyzing the cause of the potential defect with an interpretation algorithm based on causal reasoning, and converting complex data and patterns into visual reports and charts, so that technicians have a clear visual and theoretical basis for understanding the detection result and for taking further decisions and actions.
The converting the functional behavior data and the physical structure data into normalized feature vectors using multidimensional scaling and principal component analysis comprises:
Respectively carrying out data normalization processing on the functional behavior data and the physical structure data, calculating the deviation of each data point in each data set relative to the data set mean value of the data points, and carrying out standardization processing on the original data points by utilizing the deviation, thereby forming two independent and standardized functional behavior data sets and physical structure data sets;
Respectively applying multidimensional scale analysis to the normalized functional behavior data set and the physical structure data set, and generating corresponding low-dimensional space coordinates for each data set by constructing a distance matrix among data points and executing a nonlinear dimension reduction technology;
Integrating low-dimensional space coordinates corresponding to the functional behavior data set and the physical structure data set respectively to generate a joint data matrix, and calculating a covariance matrix on the joint data matrix;
performing principal component analysis by using the covariance matrix to determine a main variation direction of the combined data set, and extracting a feature vector corresponding to the maximum feature value to form a comprehensive principal component feature vector set;
and carrying out scale adjustment processing on the comprehensive principal component feature vector set to enable all dimension features to have the same magnitude so as to obtain a standardized feature vector.
First, data normalization processing is performed on the collected functional behavior data and physical structure data. The functional behavior data and the physical structure data respectively constitute respective data sets. In this step, for each dataset, the deviation of the data point from its dataset mean is calculated. Such bias calculations aim to reduce the scale differences between the feature metrics in the dataset so that each feature can be treated fairly in subsequent analysis. In this way, two independent and normalized data sets are obtained, which lays a foundation for further data analysis.
Next, a multidimensional scaling analysis (MDS) is performed on each of the normalized data sets. Multidimensional scaling is a technique for reducing the dimension of data by constructing a matrix of distances between data points and using nonlinear dimension reduction techniques to reveal the structure of the data in a low-dimensional space. The purpose of this step is to provide a low dimensional spatial coordinate representation for each dataset that preserves the inherent structure of the data while simplifying the complexity of the data.
And then, integrating the low-dimensional space coordinates corresponding to the functional behavior data set and the low-dimensional space coordinates corresponding to the physical structure data set to form a joint data matrix. The purpose of integrating these two data sets is to obtain a comprehensive data representation that combines the functionality and physical attributes of the board. On this joint data matrix, a covariance matrix is further calculated. The covariance matrix reflects the correlation between the features in the dataset and is a key step in performing principal component analysis.
Using the resulting covariance matrix, principal Component Analysis (PCA) is performed. Principal component analysis is a powerful statistical tool for revealing the dominant direction of variation in a dataset. The feature vector corresponding to the maximum feature value is extracted by feature value decomposition. These feature vectors form a comprehensive set of principal component feature vectors that reflect the principal variability of the joint dataset. This set is critical to identifying the status of the board because it integrates the most important information from the different tests.
Finally, scale adjustment processing is performed on the comprehensive principal component feature vector set. The scaling may be standardization (Z-score normalization) or normalization (Min-Max scaling). This step ensures that the contributions of features in different dimensions are balanced in the model, so that the features of each dimension are of the same order of magnitude. The standardized set of feature vectors provides a reliable input for subsequent machine learning models or other statistical analysis methods, facilitating accurate classification of the board card's state, such as detection and identification of potential defects.
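As an illustration of this chain of steps (normalization, per-set multidimensional scaling, joint PCA, and rescaling), the following is a minimal Python sketch using scikit-learn; the dimensionalities and the specific scikit-learn estimators are assumptions made for illustration rather than the exact implementation of the embodiment.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def standardized_feature_vectors(functional: np.ndarray,
                                 physical: np.ndarray,
                                 n_low_dim: int = 5,
                                 n_components: int = 8) -> np.ndarray:
    """functional, physical: (n_boards, n_features) raw measurement matrices."""
    # 1) Normalize each data set by its deviation from the data-set mean.
    functional_std = StandardScaler().fit_transform(functional)
    physical_std = StandardScaler().fit_transform(physical)
    # 2) Non-linear dimensionality reduction (MDS) applied to each set separately;
    #    MDS builds the inter-point distance matrix internally.
    functional_low = MDS(n_components=n_low_dim, random_state=0).fit_transform(functional_std)
    physical_low = MDS(n_components=n_low_dim, random_state=0).fit_transform(physical_std)
    # 3) Integrate the two low-dimensional coordinate sets into a joint matrix.
    joint = np.hstack([functional_low, physical_low])
    # 4) Principal component analysis on the joint matrix keeps the directions
    #    with the largest eigenvalues of its covariance structure.
    principal = PCA(n_components=n_components).fit_transform(joint)
    # 5) Rescale so every retained dimension has the same magnitude.
    return StandardScaler().fit_transform(principal)

# Example with random stand-in data for 40 boards.
rng = np.random.default_rng(0)
vectors = standardized_feature_vectors(rng.normal(size=(40, 20)),
                                       rng.normal(size=(40, 30)))
print(vectors.shape)   # (40, 8)
```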
With normalized feature vectors, a deep learning based feature learning model is used to identify and extract potential associations between functional and physical features through the inherent structure of the learning data, thereby generating a composite feature representation for each computer board card.
The structure of the feature learning model provided in this embodiment includes:
(1) Input layer:
Input: the fused standardized feature vector, containing the preprocessed functional behavior and physical structure data.
Implementation: the input layer serves as the starting point of the model; data is passed directly to the next layer without any complicated processing.
(2) Dual-channel feature processing layer:
Input: the fused feature vector from the input layer.
Implementation: two parallel sub-networks (channels) are designed, each a deep network structure optimized for a different type of feature. One sub-network focuses on features related to time-series data, while the other is directed at spatial data features. This design allows the network to process and learn the inherent characteristics of both types of data in parallel.
Output: one feature representation from each sub-network, capturing respectively the time-series and the spatial information in the input data.
(3) Association learning layer:
Input: the output of the dual-channel feature processing layer.
Implementation: the purpose of this layer is to integrate the outputs of the two sub-networks and learn the potential associations between them. A custom attention mechanism or joint embedding strategy may be employed to emphasize the interactions and correlations between the two kinds of features.
Output: a feature representation that integrates the associations between functional and physical features.
(4) Depth feature fusion layer:
Input: the output of the association learning layer.
Implementation: the feature representation is further integrated and refined using fully connected layers and deep neural network structures, such as a multi-layer perceptron (MLP), to better characterize the state of the computer board card.
Output: a comprehensive feature representation for defect identification.
(5) Output layer:
Input: the output of the depth feature fusion layer.
Implementation: the board states (e.g., normal, abnormal, specific defect types) are classified, for example with a fully connected layer followed by a Softmax activation function.
Output: a probability distribution over the possible states of the board, or a direct class label.
Custom loss function and optimizer:
The loss function is chosen or designed, which can particularly emphasize the recognition accuracy of key features and penalize defects that may be missed by a single test.
Let $C$ be the total number of categories, $y_m$ the one-hot encoding vector of the real label, $\hat{y}_m$ the corresponding model output, and $m$ the ordinal number of the sample; the loss function $L$ combines a cross-entropy term $L_{CE}$ with a weighted term $L_{WHD}$ for hard-to-detect defects;
where $L_{CE}$ is the cross entropy loss:
$$L_{CE} = -\sum_{m}\sum_{n=1}^{C} y_{mn}\,\log \hat{y}_{mn}$$
where $y_{mn}$ represents the value of the $n$-th element in the true label vector $y_m$ and $\hat{y}_{mn}$ represents the value of the $n$-th element of the model output vector $\hat{y}_m$;
$L_{WHD}$ is a weighted loss term for hard-to-detect defects. Here, $y_{mC}$ is the element of the true label vector corresponding to the hard-to-detect defect class, $\hat{y}_{mC}$ is the corresponding element of the prediction vector, and $\alpha$ and $\beta$ are hyperparameters that balance the two parts of the loss. The weighting factor $\alpha$ emphasizes the prediction accuracy of hard-to-detect defects, while $\beta$ amplifies the effect of prediction errors on this class. Defined in this way, the loss function not only penalizes classification errors but also emphasizes the identification of hard-to-detect defects, making the model focus on accurate predictions of these key classes during training.
Training the feature learning model typically involves the steps of:
Data preparation:
a set of labeled training data is prepared, wherein the labeled training data comprises the fused standardized feature vectors of a plurality of examples and corresponding labels (such as the state of a board card).
Model initialization:
the various layers and parameters of the model are initialized, which may include random initialization of weights and setting of bias terms.
Selection of a loss function and an optimizer:
a loss function is defined that will be used to evaluate the difference between the model output and the actual label. The loss function should be able to reflect the accuracy of the model in identifying key features and give a greater penalty for defects that are difficult to detect.
An optimizer, such as Adam, is selected that will be used to update the model parameters to minimize the loss function.
Forward propagation:
for each training instance, the model will perform forward propagation, computing the various layers of activity from the input layer to the output layer.
Calculating loss:
the error between the predicted result and the real tag is calculated using the loss function.
Back propagation and parameter update:
The gradient of the loss function is propagated back to the layers of the model by a back propagation algorithm, and the model parameters are updated by an optimizer.
Iterative training:
The process of forward propagation, loss calculation, backward propagation and parameter updating is repeatedly performed until the model reaches a certain accuracy on the training data or until the number of iterations reaches a preset threshold.
Validation and adjustment:
the validation dataset is used to test the performance of the model and adjust the parameters or structure of the model as needed.
Evaluation and optimization:
finally, the performance of the model on the independent test set is evaluated, and necessary optimization is carried out according to the evaluation result.
Strategies such as batch processing, regularization techniques (e.g., dropout), early stop (Early Stopping), etc., may be used during training to improve training efficiency, prevent overfitting, and ensure generalization of the model. Through the steps, the feature learning model can effectively learn how to predict the state of the computer board card from training data.
The original reference code listing for the feature learning model is not reproduced in this text.
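As a substitute illustration, here is a minimal PyTorch sketch of the architecture described above: two parallel sub-networks, an attention-based association layer, a fully connected fusion stage, and a softmax-style output. The layer sizes, the use of nn.MultiheadAttention, and the example dimensions are assumptions, not the patented implementation.

```python
import torch
import torch.nn as nn

class FeatureLearningModel(nn.Module):
    """Dual-channel feature learning model sketch for board-state classification."""

    def __init__(self, temporal_dim: int, spatial_dim: int,
                 hidden: int = 64, num_classes: int = 4):
        super().__init__()
        # (2) Dual-channel feature processing layer: one sub-network tuned for
        #     time-series-like features, one for spatial/structural features.
        self.temporal_net = nn.Sequential(
            nn.Linear(temporal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.spatial_net = nn.Sequential(
            nn.Linear(spatial_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        # (3) Association learning layer: attention over the two channel outputs.
        self.attention = nn.MultiheadAttention(embed_dim=hidden, num_heads=4,
                                               batch_first=True)
        # (4) Depth feature fusion layer: fully connected refinement.
        self.fusion = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        # (5) Output layer: class scores; softmax is applied at inference time,
        #     while cross-entropy-style losses consume the raw logits in training.
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, temporal: torch.Tensor, spatial: torch.Tensor) -> torch.Tensor:
        t = self.temporal_net(temporal)                   # (batch, hidden)
        s = self.spatial_net(spatial)                     # (batch, hidden)
        pair = torch.stack([t, s], dim=1)                 # (batch, 2, hidden)
        attended, _ = self.attention(pair, pair, pair)    # learn cross-channel association
        fused = self.fusion(attended.mean(dim=1))         # (batch, hidden)
        return self.classifier(fused)                     # logits over board states

model = FeatureLearningModel(temporal_dim=12, spatial_dim=20)
logits = model(torch.randn(8, 12), torch.randn(8, 20))
print(torch.softmax(logits, dim=1).shape)   # (8, 4) class probabilities
```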
still further, the custom attention mechanism includes:
Evaluating the correlation between the time-series features and the spatial features by using a variational autoencoder, and determining the joint contribution of the time-series features and the spatial features to defect detection;
assigning a weight to each feature based on the evaluation result of the variational autoencoder, emphasizing the features that are more important to defect detection;
performing feature-weighted fusion, merging the time-series features and the spatial features according to the assigned weights to generate a feature representation for defect detection.
Application of the variational autoencoder (VAE):
Purpose: the VAE is used to analyze and evaluate the correlation between the two types of data features extracted from computer boards: time-series features and spatial features. These represent, respectively, the functional parameters of the board (e.g., current or temperature) over time and the physical properties of the board (e.g., component layout).
Process: the VAE learns a deep representation of these features through its encoding and decoding processes, revealing the complex structure and association patterns hidden in the original data. During encoding, the VAE compresses the high-dimensional input features into a low-dimensional latent representation; during decoding, it attempts to reconstruct the input data from these latent representations, learning the inherent relevance of the data in the process.
Based on the analysis results of the VAE, weights are dynamically assigned to each feature to highlight those features that are critical to detecting computer board defects.
Feature weighted fusion:
Purpose: the time-series and spatial features are combined by weighted fusion according to their weights, producing a feature representation optimized specifically for detecting defects in computer boards.
Process: weighted fusion multiplies each feature by its corresponding weight, which reflects the relative importance of that feature. Through this weighting, the composite feature representation describes the health of the board more accurately and indicates potential defects more effectively.
The original Python reference implementation for this custom attention mechanism is not reproduced in this text.
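The sketch below illustrates the idea under stated assumptions: a small variational autoencoder scores how much coherent structure each feature group carries (via its reconstruction error), the scores are converted into per-group weights, and the weighted groups are concatenated into defect_detection_features. The class and function names are illustrative, not the original implementation.

```python
import torch
import torch.nn as nn

class WeightingVAE(nn.Module):
    """Tiny VAE used only to assess the two feature groups and derive fusion weights."""

    def __init__(self, feat_dim: int, latent_dim: int = 8):
        super().__init__()
        self.enc = nn.Linear(feat_dim, 32)
        self.mu = nn.Linear(32, latent_dim)
        self.logvar = nn.Linear(32, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                 nn.Linear(32, feat_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return self.dec(z), mu, logvar

def attention_fuse(temporal_feat: torch.Tensor,
                   spatial_feat: torch.Tensor,
                   vae: WeightingVAE) -> torch.Tensor:
    """Weight each feature group by how well the VAE reconstructs it (a proxy for
    how much coherent structure it carries), then concatenate the weighted groups."""
    joint = torch.cat([temporal_feat, spatial_feat], dim=1)
    recon, _, _ = vae(joint)
    t_dim = temporal_feat.shape[1]
    # Per-group reconstruction error; a smaller error yields a larger weight.
    err_t = (recon[:, :t_dim] - temporal_feat).pow(2).mean(dim=1, keepdim=True)
    err_s = (recon[:, t_dim:] - spatial_feat).pow(2).mean(dim=1, keepdim=True)
    weights = torch.softmax(-torch.cat([err_t, err_s], dim=1), dim=1)
    return torch.cat([weights[:, :1] * temporal_feat,
                      weights[:, 1:] * spatial_feat], dim=1)

temporal_feat, spatial_feat = torch.randn(8, 16), torch.randn(8, 16)
vae = WeightingVAE(feat_dim=32)
defect_detection_features = attention_fuse(temporal_feat, spatial_feat, vae)
print(defect_detection_features.shape)   # (8, 32)
```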
In the sketch above, defect_detection_features is the feature representation optimized for defect detection and can be used for further training of the detection model.
Furthermore, the variational autoencoder further comprises a regularization term which, based on the Kullback-Leibler divergence, penalizes deviations of the distribution of the latent space of the variational autoencoder, so that the generated feature representation not only accurately reflects the characteristics of the original data but also has good generalization capability.
In the design of a variational autoencoder, one key component is the regularization term, whose function is to guide the latent-space representation of the variational autoencoder, during optimization, as close as possible to a preset probability distribution (e.g., a multidimensional Gaussian distribution). The regularization term uses the Kullback-Leibler divergence (KL divergence) to measure how far the latent-space distribution of the variational autoencoder model deviates from the preset distribution, and adjusts the model parameters by penalizing this deviation.
Specifically, the encoder portion of the variational autoencoder maps the input features to a latent space whose distribution parameters (e.g., mean and variance) are learned by the neural network, while the decoder portion attempts to reconstruct the input features from the latent space. In this process, the KL-divergence regularization term computes the difference between the actual distribution of the latent space and the target distribution (typically assumed to be a standard normal distribution). This difference is added to the loss function of the VAE, so that while learning data representations the model must also ensure that the distribution of those representations matches the expected distribution.
This approach has two main benefits: first, it avoids any part of the latent space being ignored (i.e., it ensures that every direction of the space is meaningful); second, it prevents overfitting, since it forces the learned representation to conform to a given distribution and thus generalize better. In practice, the trade-off between reconstruction accuracy and matching of the latent-space distribution can be balanced by adjusting the weight of the regularization term.
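As a concrete illustration, a minimal sketch of such a KL-regularized VAE loss is shown below, assuming a diagonal Gaussian posterior and a standard normal prior; the beta argument plays the role of the adjustable regularization weight mentioned above.

import torch
import torch.nn.functional as F

def vae_loss(recon, x, mu, logvar, beta=1.0):
    # Reconstruction term: how well the decoder rebuilds the input features.
    recon_term = F.mse_loss(recon, x, reduction='mean')
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior; penalizes latent
    # distributions that drift away from the preset standard normal.
    kl_term = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # beta trades off reconstruction accuracy against distribution matching.
    return recon_term + beta * kl_term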
By this method, the generated feature representation can more comprehensively capture the key characteristics of the time-series data and the spatial data, which is of great significance for detecting complex defects on a computer board card. For example, it can help distinguish small or latent defects that are not easily found by a single test, making defect detection more accurate and reliable.
In summary, the variational autoencoder model provided in this embodiment introduces a regularization term based on the KL divergence, so that the generalization and reliability of the deep feature representations are ensured while those representations are learned, greatly improving the performance of computer board card defect detection.
In general, this custom attention mechanism uses advanced machine learning techniques to analyze and optimize the feature-analysis process in depth so as to achieve more accurate defect detection. The variational autoencoder provides a deep understanding of feature associations, the adaptive learning algorithm ensures that the feature weights reflect the true importance of each feature to defect detection, and feature-weighted fusion combines this information into a powerful tool supporting defect identification in the decision process.
Further, the comprehensive feature representation output by the deep feature fusion layer of the feature learning model is input into an ensemble learning framework based on fuzzy logic, which intelligently matches the comprehensive feature representation with a set of predefined defect patterns to identify potential defects that cannot be detected by a single test.
The embodiment provides a method for detecting defects of a computer board card through an integrated learning framework based on fuzzy logic. The core of the framework is to intelligently match the composite feature representation generated by the deep feature learning model with a set of predefined defect patterns. Such a matching process can reveal complex or hidden defects that may not be detected by a single test.
First, an ensemble learning framework is initialized, which contains a plurality of fuzzy logic systems, each of which specializes in identifying a particular type of defect pattern. The fuzzy logic systems use the concepts of fuzzy sets and fuzzy rules to interpret the uncertainties and ambiguities in the composite feature representation.
In this framework, a set of defect patterns is defined, each pattern corresponding to a particular type of defect that may occur on the board. These patterns are derived from previously collected data and expert knowledge, and cover failure phenomena ranging from micro-cracks to short circuits.
A set of fuzzy logic rules is established for each defect pattern. These rules relate aspects of the feature representation to the membership of the corresponding defect pattern. For example, a feature value falling within a predetermined range may indicate the presence of a specific defect.
The input composite feature representation is fuzzified, i.e., converted into fuzzy sets according to the importance of the individual features and their contribution to the defect. This process involves assigning a membership function to each feature to determine its membership in each defect pattern.
The fuzzified feature representation is then intelligently matched by the fuzzy logic systems. The matching process computes the membership of each defect pattern based on the previously established fuzzy logic rules, and from these memberships generates a fuzzy inference output reflecting the possible defect types of the board card.
The membership of each defect pattern obtained from the intelligent matching is converted, through fuzzy-logic reasoning, into a definite defect diagnosis. This may include a defuzzification step that selects the defect pattern with the highest membership, or combines multiple defect patterns with high membership, to determine the final defect status of the board.
The following is a reference implementation using Python and the scikit-fuzzy library, covering fuzzy logic system initialization, defect pattern definition, fuzzy rule establishment, and fuzzy reasoning and result interpretation.
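A minimal sketch along these lines, using the scikit-fuzzy control API, is shown below. The single input variable, the membership function shapes and the three rules are illustrative assumptions; a full framework would instantiate one such system per defect pattern.

# Hedged sketch with scikit-fuzzy: one fuzzy system matching a composite
# feature score against a single defect pattern.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Input: a normalized composite feature value; output: a defect level.
feature = ctrl.Antecedent(np.arange(0.0, 1.01, 0.01), 'feature')
defect = ctrl.Consequent(np.arange(0.0, 1.01, 0.01), 'defect_level')

# Membership functions for the input feature and the output defect level.
feature['low'] = fuzz.trimf(feature.universe, [0.0, 0.0, 0.4])
feature['medium'] = fuzz.trimf(feature.universe, [0.2, 0.5, 0.8])
feature['high'] = fuzz.trimf(feature.universe, [0.6, 1.0, 1.0])
defect['none'] = fuzz.trimf(defect.universe, [0.0, 0.0, 0.3])
defect['possible'] = fuzz.trimf(defect.universe, [0.2, 0.5, 0.8])
defect['likely'] = fuzz.trimf(defect.universe, [0.7, 1.0, 1.0])

# Fuzzy rules relating feature membership to the defect pattern.
rules = [
    ctrl.Rule(feature['low'], defect['none']),
    ctrl.Rule(feature['medium'], defect['possible']),
    ctrl.Rule(feature['high'], defect['likely']),
]

system = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
system.input['feature'] = 0.83          # composite score from the fusion layer
system.compute()                        # fuzzy inference + centroid defuzzification
print(system.output['defect_level'])    # crisp defect-level score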
In the above implementation, a fuzzy logic variable is defined for the feature input and a defect-level variable for the output. Membership functions are defined for each variable, and rules are applied to evaluate the possible defect level based on the fuzzy values of the input features.
An interpretation algorithm based on causal reasoning is then used to analyze the causes of potential defects, and complex data and patterns are converted into intuitive reports and charts, providing technicians with a clear visual and theoretical basis for understanding the detection results and facilitating further decisions and actions.
Causal reasoning generally involves determining causal relationships between variables. In the context of defect detection, this may mean finding out which features indicate a particular defect type. The following are the steps for performing this analysis:
(1) Data collection and preprocessing
A large amount of historical data is collected, including the characteristic values of the computer board card and corresponding defect records.
The data is pre-processed, e.g., cleaned, normalized and denoised.
(2) Mapping of defect features and possible causes
Statistical analysis methods or machine learning algorithms are used to analyze the relationship between features and defects. For example, a decision tree algorithm may reveal a feature decision path, which may be used as a preliminary causal relationship map.
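As an illustration, a small scikit-learn sketch of this preliminary mapping might look as follows; the file name board_history.csv and the column names are hypothetical.

# Hedged sketch: a decision tree over historical feature/defect records used
# as a preliminary, purely associational feature-to-defect map.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

history = pd.read_csv("board_history.csv")          # assumed historical archive
X = history[["current_ripple", "max_temperature", "crack_length_um"]]
y = history["defect_type"]                          # e.g. 'none', 'micro_crack', 'short'

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
# The printed decision paths serve as a first map between feature thresholds
# and defect types, to be refined later by the causal reasoning model.
print(export_text(tree, feature_names=list(X.columns)))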
(3) Establishment of causal reasoning model
A model that can represent potential causal relationships between variables is constructed using, for example, a Structural Equation Model (SEM) or a bayesian network.
Based on historical data and expert knowledge, structure and conditional probabilities in the model are defined.
3.1 Selecting an appropriate model type
Structural Equation Model (SEM): SEM is a statistical modeling approach that allows simultaneous estimation of multiple equation systems. SEM is commonly used for causal reasoning because it can reveal both direct and indirect effects between variables.
Bayesian networks: a Bayesian network is a probabilistic graphical model that uses probabilistic reasoning to predict the relationships between different variables. Bayesian networks are particularly suited to handling causal relationships with high uncertainty and complexity.
3.2 Definition model Structure
Variable selection: it is determined which variables are to be included in the model. These variables should include all relevant characteristic data and defect records.
Structure definition: based on theoretical knowledge and expert opinion, potential causal structures between variables are defined. For example, an expert may know that there is a causal relationship between certain features and a particular type of defect.
3.3 Establishing model parameters
Conditional probability table: in a Bayesian network, the probability of each variable depends on its parent variables. A Conditional Probability Table (CPT) is created for each variable to express this relationship.
Regression coefficient: in SEM, parameters in the equation, such as regression coefficients, are estimated, which quantify the effect of one variable on another.
3.4 Model coding and implementation
Model structures and parameters are encoded using a programming language (such as Python) and suitable libraries (e.g., pgmpy for Bayesian networks, semopy for structural equation models).
Code is written to estimate the model parameters, for example using maximum likelihood estimation or Bayesian inference.
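A minimal pgmpy sketch along these lines is shown below; the network structure, variable names and data file are illustrative assumptions standing in for the expert-defined causal hypotheses described above.

# Hedged sketch with pgmpy: a small Bayesian network over discretized features
# and a defect variable, with CPTs estimated from historical records.
# Note: depending on the installed pgmpy version the model class may be named
# BayesianModel instead of BayesianNetwork.
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

data = pd.read_csv("board_history_discretized.csv")  # assumed 0/1-coded records

# Expert-defined structure: soldering quality and thermal stress influence
# micro-cracks, which in turn influence intermittent faults.
model = BayesianNetwork([
    ("solder_quality", "micro_crack"),
    ("thermal_stress", "micro_crack"),
    ("micro_crack", "intermittent_fault"),
])
model.fit(data, estimator=MaximumLikelihoodEstimator)  # CPTs from history

# Query: probability of a micro-crack given an observed intermittent fault.
infer = VariableElimination(model)
print(infer.query(variables=["micro_crack"], evidence={"intermittent_fault": 1}))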
3.5 Initial model estimation and adjustment
The historical data is used to estimate initial parameters of the model.
The model structure and parameters are adjusted based on the results of the parameter estimation, which may involve adding or deleting variables, or altering the relationships between variables.
3.6 Model optimization and iteration
The iterative process may need to be performed multiple times, each time optimizing the model based on the statistical fit index and expert opinion.
Metrics such as the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC) are used to evaluate the goodness of fit of the model.
(4) Verification of causal inference model
The model is trained and validated using the dataset, ensuring that it reflects true causal relationships.
Cross-validation is performed or a separate test set is used to evaluate the accuracy of the model.
(5) Interpretation and visualization of causal relationships
Technicians are provided with a clear visual and theoretical basis for understanding the detection results; for example, graphical representations such as causal graphs are used to show the causal relationships between variables.
The associations between feature values and defect types are visualized using tools such as heat maps, scatter plots, and bar charts.
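As an illustration, a small sketch of such a heat map, assuming the same hypothetical historical archive with feature columns and a defect-type label, might be:

# Hedged sketch: heat map of feature vs. defect-type association strengths
# for inclusion in the interpretive report. Column names are illustrative.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

history = pd.read_csv("board_history.csv")
# One-hot encode defect types and correlate them with the feature columns.
assoc = pd.get_dummies(history["defect_type"]).astype(float)
features = history[["current_ripple", "max_temperature", "crack_length_um"]]
corr = pd.concat([features, assoc], axis=1).corr().loc[features.columns, assoc.columns]

sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Feature vs. defect-type association")
plt.tight_layout()
plt.savefig("feature_defect_heatmap.png")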
(6) Report generation
An algorithm is developed to automatically generate an interpretive report based on the model output and visualization.
The report should detail which features are relevant to a particular defect and the strength of those associations.
(7) Decision support
The interpreted report is integrated into a decision support system to assist the technician in making data-driven maintenance or quality control decisions.
Operational advice is provided, such as which parts need further inspection or replacement.
To facilitate understanding of this step, an example will be described below.
Consider an example in which one of the computer boards has a minute crack that results in intermittent circuit failure. This defect may not always be active, so a single functional test (such as a current test) or a single physical inspection (such as appearance inspection) may not reliably identify it. The following is an operational flow showing how the integration algorithm may identify such a defect:
Data collection: functional testing and physical structure detection are performed simultaneously.
Functional testing may include measurement of current and voltage, as well as signal integrity testing.
Physical structure detection may capture fine structural images of the board with a high-resolution micro-nano three-dimensional scanner.
Data preprocessing: the functional test data and the physical structure data are normalized to eliminate scale differences and prepare them for in-depth analysis.
Feature extraction: extracting features from the physical structure image by using a deep learning method; meanwhile, a change mode and an abnormality index are extracted from the functional test data.
Data fusion: in combination with the extracted functional behavioral features and physical structural features, an integrated algorithm is used to create a comprehensive characterization representation that captures the functionality and physical condition of the board.
Defect identification: the integrated feature vectors are input into a fuzzy logic based ensemble learning framework that is capable of handling uncertainties and complexities in the feature representation, intelligently matching features to predefined defect patterns, including intermittent circuit problems caused by micro-cracking that may be missed by a single method.
Analysis and interpretation of results: if the model detects a potential defect, causal inference tools are used to track the possible causes of the defect and to assist the technician in interpreting the results by generating easily understood reports and visualizations.
Self-optimizing: based on these findings, test cases and scan parameters are adjusted to improve the detection capability of similar defects, and a closed loop feedback mechanism is implemented to continuously optimize the detection process.
In this example, the integration algorithm takes advantage of the complementary nature of the functional and physical structure data. While a single test may not find the problem, combining the two data sources can identify the defect by revealing an association between the intermittent fault and the tiny physical flaw. For example, current fluctuations in the functional test may be associated with the micro-crack observed in the physical scan. This integrated approach provides more comprehensive detection and increases the chance of identifying subtle defects under complex real-world conditions.
Step S104: judging whether the computer board card has defects according to the processing result of the fusion analysis processing.
This step involves a detailed analysis of the fusion analysis process results generated by the integration algorithm.
By analyzing the fused data, a technician or an automated system can detect and identify inconsistencies and abnormal patterns between the functional behavior data and the physical structure data, which may indicate the presence of potential defects. For example, in step S104, if the ensemble learning framework identifies a potential defect that cannot be detected by a single test, the computer board card is judged to be defective.
Step S105: if the computer board card is identified as having a defect, judging it to be a defective product, removing the defective computer board card from the production line, and adaptively adjusting the software test cases and three-dimensional scanning parameters; the adaptive adjustment is based on the results of a detailed analysis of defective products, so as to diagnose and incorporate into the detection flow computer board card defect types that were not previously fully covered, and to ensure that adjustments are made for changes in production line conditions or newly introduced board designs.
The adaptive adjustment process is a comprehensive quality-control method that aims to improve the sensitivity and adaptability of the detection method for defect types that newly appear, or were not fully identified before, in the production process. The following detailed steps of the adaptation procedure are intended to enable a technician to implement it.
1. Analyzing defective products
Detailed defect recording: using the high precision scan and test results, the exact location, size and morphology of each defect on the board is recorded.
Cause analysis: the potential causes of the occurrence of defects may include material defects, environmental factors during production, or operational errors.
Severity assessment: the severity of defects is assessed according to their extent of impact on board functionality.
History comparison: comparing with the historical defect data, searching for patterns and correlations.
2. Data collection
Comprehensive data file: the detailed test logs, scan images and production process parameters are collected, creating a comprehensive data archive for in-depth analysis.
Environmental parameter recording: and recording the production environment parameters of the defective board card, such as temperature and humidity, machine setting and the like.
Operation log examination: the operation log during production is reviewed for non-standard operations that may be related to defect generation.
3. Improved test case
Test case design: based on the defect analysis results, new test cases are designed or existing test scripts are adjusted to more accurately capture similar defects.
Automated test update: the integration of new test cases into the automated test framework ensures that they are automatically executed in future production.
4. Adjusting scan parameters
Scanning precision adjustment: the scanning resolution is increased so as to capture minute defects more clearly.
Parameter optimization: the scanning speed and focal length are optimized to obtain optimal image quality and defect detection rate.
Automated adjustment implementation: an automation script is written or a control system is adjusted so that the scan parameters are automatically adjusted according to the type of board and the expected defects.
Further, the method for detecting the computer board card further comprises the following steps:
Storing detection data from the detection process, where the detection data includes functional behavior data, physical structure data and the processing results of the fusion analysis processing, and continuously optimizing the software test cases and three-dimensional scanning parameters using the detection data, so as to realize self-calibration and self-optimization of the detection system and enhance its adaptability to the various quality differences of computer board cards and its responsiveness to changes in the production process.
This is a self-calibrating and self-optimizing detection method that uses the data collected during detection to optimize the test cases and three-dimensional scanning parameters for the computer board card. This process ensures that the inspection system can accommodate variations in board quality and in the production process, and comprises the following steps:
(1) Data storage
A database or data warehouse is created for storing all of the test data collected in each test cycle.
The detection data should include functional behavior data, physical structure data, and results from fusion analysis processes.
The security, integrity and retrievability of the data store are ensured.
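A minimal sketch of such an archive using SQLite is shown below; the schema and the example record are illustrative assumptions.

# Hedged sketch: a minimal SQLite archive for per-board detection records.
import json
import sqlite3

conn = sqlite3.connect("detection_archive.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS detections (
        board_id TEXT,
        timestamp TEXT,
        functional_data TEXT,   -- JSON blob of functional behavior data
        structure_scan TEXT,    -- path or reference to the 3D scan data
        fusion_result TEXT,     -- JSON blob of the fusion analysis output
        verdict TEXT            -- 'normal' / 'defective' / 'reject'
    )""")
conn.execute(
    "INSERT INTO detections VALUES (?, ?, ?, ?, ?, ?)",
    ("BRD-0001", "2024-05-17T10:00:00",
     json.dumps({"current_ripple": 0.12}), "scans/BRD-0001.ply",
     json.dumps({"defect_level": 0.83}), "defective"))
conn.commit()
conn.close()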
(2) Data classification and tagging
The collected data is classified and marked as normal, defective or reject.
The detection result corresponding to each board card is recorded and marked, including the test cases and scanning parameters used in the detection process.
(3) Data analysis
The stored inspection data is analyzed to identify patterns of inspection failures and common defect types.
Statistical analysis or machine learning algorithms are used to identify factors that may lead to detection vulnerabilities.
(4) Test case optimization
And according to the result of the data analysis, adjusting and optimizing the software test case so as to improve the detection capability of specific problems.
Newly adding test cases or modifying existing cases to cover vulnerabilities revealed by data analysis.
(5) Scanning parameter adjustment
Scanning parameters of the three-dimensional scanner, such as resolution, scanning speed, and scanning path, are adjusted to better capture defects.
For new or complex defect types detected, the scan parameters are refined to increase the accuracy of the detection.
(6) Self-calibration procedure
A closed loop control system is established which automatically adjusts the detection parameters based on past detection results and optimization measures.
A continuous learning mechanism is implemented whereby the system constantly adjusts its own behavior by analyzing newly acquired detection data.
(7) Performance evaluation and iteration
The performance of the detection system is periodically evaluated to ensure the effectiveness of the optimization measures.
Based on the results of the performance evaluation, the optimization process is iterated, continuously improving the test cases and the scan parameters.
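A highly simplified sketch of this closed-loop adjustment is shown below; the thresholds, parameter names and adjustment rules are illustrative assumptions rather than the actual control logic.

# Hedged sketch of the closed-loop self-optimization idea: after each
# evaluation cycle, scan resolution and test-case repetition are nudged
# according to the observed escape (missed-defect) and false-reject rates.
def recalibrate(params, cycle_stats):
    boards = max(cycle_stats["boards"], 1)
    escape_rate = cycle_stats["missed_defects"] / boards
    false_reject_rate = cycle_stats["false_rejects"] / boards
    if escape_rate > 0.01:
        # Missed defects: scan finer and stress the failing test cases more.
        params["scan_resolution_um"] = max(params["scan_resolution_um"] * 0.8, 0.5)
        params["extra_test_repeats"] += 1
    elif false_reject_rate > 0.02:
        # Too many false rejects: relax slightly to recover throughput.
        params["scan_resolution_um"] *= 1.1
    return params

params = {"scan_resolution_um": 5.0, "extra_test_repeats": 0}
params = recalibrate(params, {"boards": 500, "missed_defects": 9, "false_rejects": 4})
print(params)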
In the above embodiment, a method for detecting a computer board card is provided, and correspondingly, the application also provides a device for detecting a computer board card. Fig. 2 is a schematic diagram of a detecting device for a computer board card according to an embodiment of the application. Since this embodiment, i.e. the second embodiment, is substantially similar to the method embodiment, the description is relatively simple, and reference should be made to the description of the method embodiment for relevant points. The system embodiments described below are merely illustrative.
A second embodiment of the present application provides a device for detecting a computer board card, including:
The test unit 201 is used for running a software test case, simulating the working state of the computer board card under various operating environments including temperature, current and data transmission, so as to obtain the functional behavior data of the computer board card;
The scanning unit 202 is configured to scan the surface and the internal structure of the computer board card by using a micro-nano three-dimensional scanner while running the software test, so as to collect physical structure data of the computer board card, where the physical structure data is used to identify physical defects of the computer board card on a microscopic scale;
An analysis unit 203, configured to perform fusion analysis processing on the functional behavior data and the physical structure data using an integration algorithm, where the fusion analysis processing is based on the fact that the final performance of the computer board card is jointly affected by its functional state and physical state, so as to identify defects that cannot be detected by a single method;
a judging unit 204, configured to judge whether a computer board card has a defect according to a processing result of the fusion analysis processing;
The adjusting unit 205 is configured to judge the computer board card to be a defective product if it has a defect, remove the defective computer board card from the production line, and adaptively adjust the software test cases and the scanning parameters of the micro-nano three-dimensional scanner; the adaptive adjustment is based on the results of a detailed analysis of defective products, so as to diagnose and incorporate into the detection flow computer board card defect types that were not previously fully covered, and to ensure that adjustments are made for changes in production line conditions or newly introduced board designs.
A third embodiment of the present application provides a system for detecting a computer board card, including:
The data acquisition module comprises a central processing unit and a storage unit, wherein the storage unit is preloaded with operation instructions and is used for simulating the working states of the computer board card under various environmental conditions and generating corresponding functional behavior data;
the micro-nano three-dimensional imaging equipment is synchronously operated with the data acquisition module and is used for scanning the surface and internal structure of the computer board card and collecting physical structure data, wherein the data is used for identifying micro-scale physical defects of the computer board card;
The analysis processing unit is provided with data processing hardware and software resources, receives data from the data acquisition module and the three-dimensional imaging equipment, and performs fusion analysis by utilizing an integrated algorithm so as to identify potential defects;
The defect detection module is connected with the analysis processing unit and is used for evaluating the fusion analysis result and determining whether the computer board card meets the quality standard or not;
The user interface terminal provides an interactive interface for technical operators, and is used for displaying detection results, fusing analysis feedback and real-time system states and receiving adjustment commands input by users;
The remote monitoring unit includes at least one network interface for remotely transmitting the detection data, the system status and the alarm information to the central monitoring station or the mobile terminal.
A fourth embodiment of the present application provides an electronic apparatus including:
A processor;
And a memory for storing a program which, when read and executed by the processor, performs the method for detecting a computer board card provided in the first embodiment of the present application.
A fifth embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of detecting a computer board card provided in the first embodiment of the present application.
While the application has been described in terms of preferred embodiments, it is not intended to be limiting, but rather, it will be apparent to those skilled in the art that various changes and modifications can be made herein without departing from the spirit and scope of the application as defined by the appended claims.

Claims (7)

1. A method for detecting a computer board card, characterized in that it comprises:
running software test cases to simulate the working state of the computer board card under a plurality of operating environments including temperature, current and data transmission, so as to obtain functional behavior data of the computer board card;
while running the software tests, scanning the surface and internal structure of the computer board card with a micro-nano three-dimensional scanner to collect physical structure data of the computer board card, wherein the physical structure data is used to identify physical defects of the computer board card at a microscopic scale;
performing fusion analysis processing on the functional behavior data and the physical structure data using an integration algorithm, wherein the fusion analysis processing is based on the fact that the final performance of the computer board card is jointly affected by its functional state and physical state, thereby identifying defects that cannot be detected by a single method;
judging, according to the processing result of the fusion analysis processing, whether the computer board card has a defect;
if the computer board card has a defect, judging it to be a defective product, removing the defective computer board card from the production line, and adaptively adjusting the software test cases and the scanning parameters of the micro-nano three-dimensional scanner; wherein the adaptive adjustment is based on the results of a detailed analysis of defective products, so as to diagnose and incorporate into the detection flow computer board card defect types that were not previously fully covered, and to ensure that adjustments are made for changes in production line conditions or newly introduced board designs;
wherein performing fusion analysis processing on the functional behavior data and the physical structure data using an integration algorithm comprises:
converting the functional behavior data and the physical structure data into standardized feature vectors using multidimensional scaling analysis and principal component analysis;
using the standardized feature vectors, identifying and extracting, with a deep-learning-based feature learning model, the potential associations between functional and physical features by learning the intrinsic structure of the data, thereby generating a comprehensive feature representation for each computer board card;
inputting the comprehensive feature representation into a fuzzy-logic-based ensemble learning framework, wherein the ensemble learning framework intelligently matches the comprehensive feature representation with a set of predefined defect patterns to identify potential defects that cannot be detected by a single test;
analyzing the causes of potential defects using an interpretation algorithm based on causal reasoning, and converting complex data and patterns into intuitive reports and charts, thereby providing technicians with a clear visual and theoretical basis for understanding the detection results for further decision-making and action;
wherein converting the functional behavior data and the physical structure data into standardized feature vectors using multidimensional scaling analysis and principal component analysis comprises:
performing data normalization on the functional behavior data and the physical structure data respectively, calculating the deviation of each data point in each data set from the mean of that data set, and using the deviation to standardize the original data points, thereby forming two independent, normalized functional behavior and physical structure data sets;
applying multidimensional scaling analysis to the normalized functional behavior data set and the normalized physical structure data set respectively, constructing a distance matrix between data points and performing a nonlinear dimensionality reduction technique, to generate corresponding low-dimensional space coordinates for each data set;
integrating the low-dimensional space coordinates corresponding to the functional behavior data set and the physical structure data set to generate a joint data matrix, and calculating a covariance matrix on the joint data matrix;
performing principal component analysis using the covariance matrix to determine the main directions of variation of the joint data set, and extracting the eigenvectors corresponding to the largest eigenvalues to form a comprehensive set of principal component eigenvectors;
performing scale adjustment on the comprehensive set of principal component eigenvectors so that the features of each dimension have the same magnitude, to obtain the standardized feature vectors;
wherein the feature learning model comprises:
an input layer for receiving the standardized feature vectors;
a dual-channel feature processing layer comprising two parallel sub-networks, the first sub-network being configured with a deep network structure optimized for time-series data features and the second sub-network being configured with a deep network structure optimized for spatial data features, each sub-network respectively extracting the time-series features and the spatial features from the fused feature vector and outputting two independent feature representations;
an association learning layer configured with a custom attention mechanism for integrating the outputs of the dual-channel feature processing layer and generating a feature representation by learning the potential association between the time-series feature representation and the spatial feature representation;
a deep feature fusion layer for receiving the output of the association learning layer and further integrating and refining the combined feature representation through a series of fully connected layers to form a comprehensive feature representation characterizing the state of the computer board card;
an output layer configured with a fully connected layer and a Softmax activation function for converting the comprehensive feature representation of the deep feature fusion layer into a probability distribution or classification label describing the possible states of the computer board card.
2. The method for detecting a computer board card according to claim 1, further comprising:
storing detection data from the detection process, the detection data including functional behavior data, physical structure data and the processing results of the fusion analysis processing, and continuously optimizing the software test cases and three-dimensional scanning parameters using the detection data, so as to realize self-calibration and self-optimization of the detection system and to enhance its adaptability to quality differences among computer board cards and its responsiveness to changes in the production process.
3. The method for detecting a computer board card according to claim 1, wherein the loss function L of the feature learning model is defined as
L = Σ_m ( L_CE^(m) + λ · L_hard^(m) ),
where C is the total number of output categories of the output layer, y^(m) is the one-hot encoded vector of the true label, ŷ^(m) is the corresponding model output, and m is the index of the sample;
wherein L_CE is the cross-entropy loss, defined as
L_CE = − Σ_{n=1}^{C} y_n log(ŷ_n),
where y_n denotes the value of the n-th element of the true label vector and ŷ_n denotes the value of the n-th element of the model output vector;
and L_hard is the weighted loss for hard-to-detect defects, defined as
L_hard = − Σ_{k∈D} y_k log(ŷ_k),
where y_k is an element of the true label vector belonging to a hard-to-detect defect category, ŷ_k is the corresponding element of the prediction vector, D is the set of hard-to-detect defect categories, and λ is a hyperparameter used to balance the two parts of the loss.
4. The method for detecting a computer board card according to claim 1, wherein the custom attention mechanism comprises:
evaluating the correlation between the time-series features and the spatial features using a variational autoencoder, and determining the joint contribution of the time-series features and the spatial features to defect detection;
assigning a weight to each feature based on the evaluation result of the variational autoencoder;
performing feature-weighted fusion, and merging the time-series features and the spatial features according to the assigned weights to generate a feature representation for defect detection.
5. The method for detecting a computer board card according to claim 4, wherein the variational autoencoder further comprises a regularization term which penalizes, based on the Kullback-Leibler divergence, the distribution deviation of the latent space of the variational autoencoder, ensuring that the generated feature representation not only accurately reflects the characteristics of the original data but also has good generalization capability.
6. A device for detecting a computer board card, characterized in that it comprises:
a test unit for running software test cases to simulate the working state of the computer board card under a plurality of operating environments including temperature, current and data transmission, so as to obtain functional behavior data of the computer board card;
a scanning unit for scanning, while the software tests are running, the surface and internal structure of the computer board card with a micro-nano three-dimensional scanner to collect physical structure data of the computer board card, wherein the physical structure data is used to identify physical defects of the computer board card at a microscopic scale;
an analysis unit for performing fusion analysis processing on the functional behavior data and the physical structure data using an integration algorithm, wherein the fusion analysis processing is based on the fact that the final performance of the computer board card is jointly affected by its functional state and physical state, thereby identifying defects that cannot be detected by a single method;
a judging unit for judging, according to the processing result of the fusion analysis processing, whether the computer board card has a defect;
an adjusting unit for judging the computer board card to be a defective product if it has a defect, removing the defective computer board card from the production line, and adaptively adjusting the software test cases and the scanning parameters of the micro-nano three-dimensional scanner; wherein the adaptive adjustment is based on the results of a detailed analysis of defective products, so as to diagnose and incorporate into the detection flow computer board card defect types that were not previously fully covered, and to ensure that adjustments are made for changes in production line conditions or newly introduced board designs;
wherein the analysis unit is specifically configured to:
convert the functional behavior data and the physical structure data into standardized feature vectors using multidimensional scaling analysis and principal component analysis;
using the standardized feature vectors, identify and extract, with a deep-learning-based feature learning model, the potential associations between functional and physical features by learning the intrinsic structure of the data, thereby generating a comprehensive feature representation for each computer board card;
input the comprehensive feature representation into a fuzzy-logic-based ensemble learning framework, wherein the ensemble learning framework intelligently matches the comprehensive feature representation with a set of predefined defect patterns to identify potential defects that cannot be detected by a single test;
analyze the causes of potential defects using an interpretation algorithm based on causal reasoning, and convert complex data and patterns into intuitive reports and charts, thereby providing technicians with a clear visual and theoretical basis for understanding the detection results for further decision-making and action;
wherein converting the functional behavior data and the physical structure data into standardized feature vectors using multidimensional scaling analysis and principal component analysis comprises:
performing data normalization on the functional behavior data and the physical structure data respectively, calculating the deviation of each data point in each data set from the mean of that data set, and using the deviation to standardize the original data points, thereby forming two independent, normalized functional behavior and physical structure data sets;
applying multidimensional scaling analysis to the normalized functional behavior data set and the normalized physical structure data set respectively, constructing a distance matrix between data points and performing a nonlinear dimensionality reduction technique, to generate corresponding low-dimensional space coordinates for each data set;
integrating the low-dimensional space coordinates corresponding to the functional behavior data set and the physical structure data set to generate a joint data matrix, and calculating a covariance matrix on the joint data matrix;
performing principal component analysis using the covariance matrix to determine the main directions of variation of the joint data set, and extracting the eigenvectors corresponding to the largest eigenvalues to form a comprehensive set of principal component eigenvectors;
performing scale adjustment on the comprehensive set of principal component eigenvectors so that the features of each dimension have the same magnitude, to obtain the standardized feature vectors;
wherein the feature learning model comprises:
an input layer for receiving the standardized feature vectors;
a dual-channel feature processing layer comprising two parallel sub-networks, the first sub-network being configured with a deep network structure optimized for time-series data features and the second sub-network being configured with a deep network structure optimized for spatial data features, each sub-network respectively extracting the time-series features and the spatial features from the fused feature vector and outputting two independent feature representations;
an association learning layer configured with a custom attention mechanism for integrating the outputs of the dual-channel feature processing layer and generating a feature representation by learning the potential association between the time-series feature representation and the spatial feature representation;
a deep feature fusion layer for receiving the output of the association learning layer and further integrating and refining the combined feature representation through a series of fully connected layers to form a comprehensive feature representation characterizing the state of the computer board card;
an output layer configured with a fully connected layer and a Softmax activation function for converting the comprehensive feature representation of the deep feature fusion layer into a probability distribution or classification label describing the possible states of the computer board card.
7. A system for detecting a computer board card, characterized in that it comprises:
a data acquisition module comprising a central processing unit and a storage unit, wherein the storage unit is preloaded with operation instructions for simulating the working states of the computer board card under a plurality of environmental conditions and generating corresponding functional behavior data;
a micro-nano three-dimensional imaging device, operating synchronously with the data acquisition module, for scanning the surface and internal structure of the computer board card and collecting physical structure data, the data being used to identify microscopic-scale physical defects of the computer board card;
an analysis processing unit equipped with data processing hardware and software resources, which receives data from the data acquisition module and the three-dimensional imaging device and performs fusion analysis using an integration algorithm to identify potential defects;
a defect detection module connected to the analysis processing unit for evaluating the fusion analysis results and determining whether the computer board card meets the quality standards;
a user interface terminal providing an interactive interface for technical operators, for displaying detection results, fusion analysis feedback and real-time system status, and for receiving adjustment commands input by users;
a remote monitoring unit comprising at least one network interface for remotely transmitting detection data, system status and alarm information to a central monitoring station or a mobile terminal;
wherein the analysis processing unit is specifically configured to:
convert the functional behavior data and the physical structure data into standardized feature vectors using multidimensional scaling analysis and principal component analysis;
using the standardized feature vectors, identify and extract, with a deep-learning-based feature learning model, the potential associations between functional and physical features by learning the intrinsic structure of the data, thereby generating a comprehensive feature representation for each computer board card;
input the comprehensive feature representation into a fuzzy-logic-based ensemble learning framework, wherein the ensemble learning framework intelligently matches the comprehensive feature representation with a set of predefined defect patterns to identify potential defects that cannot be detected by a single test;
analyze the causes of potential defects using an interpretation algorithm based on causal reasoning, and convert complex data and patterns into intuitive reports and charts, thereby providing technicians with a clear visual and theoretical basis for understanding the detection results for further decision-making and action;
wherein converting the functional behavior data and the physical structure data into standardized feature vectors using multidimensional scaling analysis and principal component analysis comprises:
performing data normalization on the functional behavior data and the physical structure data respectively, calculating the deviation of each data point in each data set from the mean of that data set, and using the deviation to standardize the original data points, thereby forming two independent, normalized functional behavior and physical structure data sets;
applying multidimensional scaling analysis to the normalized functional behavior data set and the normalized physical structure data set respectively, constructing a distance matrix between data points and performing a nonlinear dimensionality reduction technique, to generate corresponding low-dimensional space coordinates for each data set;
integrating the low-dimensional space coordinates corresponding to the functional behavior data set and the physical structure data set to generate a joint data matrix, and calculating a covariance matrix on the joint data matrix;
performing principal component analysis using the covariance matrix to determine the main directions of variation of the joint data set, and extracting the eigenvectors corresponding to the largest eigenvalues to form a comprehensive set of principal component eigenvectors;
performing scale adjustment on the comprehensive set of principal component eigenvectors so that the features of each dimension have the same magnitude, to obtain the standardized feature vectors;
wherein the feature learning model comprises:
an input layer for receiving the standardized feature vectors;
a dual-channel feature processing layer comprising two parallel sub-networks, the first sub-network being configured with a deep network structure optimized for time-series data features and the second sub-network being configured with a deep network structure optimized for spatial data features, each sub-network respectively extracting the time-series features and the spatial features from the fused feature vector and outputting two independent feature representations;
an association learning layer configured with a custom attention mechanism for integrating the outputs of the dual-channel feature processing layer and generating a feature representation by learning the potential association between the time-series feature representation and the spatial feature representation;
a deep feature fusion layer for receiving the output of the association learning layer and further integrating and refining the combined feature representation through a series of fully connected layers to form a comprehensive feature representation characterizing the state of the computer board card;
an output layer configured with a fully connected layer and a Softmax activation function for converting the comprehensive feature representation of the deep feature fusion layer into a probability distribution or classification label describing the possible states of the computer board card.
CN202311513544.3A 2023-11-13 2023-11-13 Method, device and system for detecting computer board card Active CN118053218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311513544.3A CN118053218B (en) 2023-11-13 2023-11-13 Method, device and system for detecting computer board card


Publications (2)

Publication Number Publication Date
CN118053218A CN118053218A (en) 2024-05-17
CN118053218B true CN118053218B (en) 2024-11-15

Family

ID=91043774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311513544.3A Active CN118053218B (en) 2023-11-13 2023-11-13 Method, device and system for detecting computer board card

Country Status (1)

Country Link
CN (1) CN118053218B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112685957A (en) * 2020-12-30 2021-04-20 中国电力科学研究院有限公司 Method for predicting relay protection defects
CN115408287A (en) * 2022-09-02 2022-11-29 龙芯中科(西安)科技有限公司 Method, device and equipment for detecting basic software in board card and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6272437B1 (en) * 1998-04-17 2001-08-07 Cae Inc. Method and apparatus for improved inspection and classification of attributes of a workpiece
GB2567850B (en) * 2017-10-26 2020-11-04 Gb Gas Holdings Ltd Determining operating state from complex sensor data
US20220292239A1 (en) * 2021-03-15 2022-09-15 KuantSol Inc. Smart time series and machine learning end-to-end (e2e) model development enhancement and analytic software


Also Published As

Publication number Publication date
CN118053218A (en) 2024-05-17

Similar Documents

Publication Publication Date Title
JP7657743B2 (en) Anomaly detection system, anomaly detection method, anomaly detection program, and trained model generation method
KR20180094111A (en) Image-based sample process control
CN118378196B (en) Industrial control host abnormal behavior identification method based on multi-mode data fusion
CN117933846B (en) E-commerce intelligent logistics distribution system and method based on big data technology
CN116415127A (en) Method, system and medium for paper quality assessment
CN117951646A (en) A data fusion method and system based on edge cloud
CN117251789A (en) Power equipment health state assessment method and system
CN117932394A (en) Electronic component fault management method and system
CN118859037B (en) A method for power equipment fault analysis based on multi-source data fusion
CN118939988B (en) A mechanical equipment fault diagnosis method and system using correlation analysis
CN119293664A (en) A device operation evaluation method based on multi-source data fusion
CN117984024B (en) Welding data management method and system based on automatic production of ship lock lambdoidal doors
CN118053218B (en) Method, device and system for detecting computer board card
CN116956089A (en) Training method and detection method for temperature anomaly detection model of electrical equipment
CN114863178A (en) Image data input detection method and system for neural network vision system
Forest et al. Interpretable prognostics with concept bottleneck models
CN119439876B (en) State monitoring method and system for multi-axis linkage numerical control machining
CN116858509B (en) Processing system and method for automobile parts
CN118167426B (en) Intelligent monitoring equipment and method for mine safety management
KR102667862B1 (en) heavy electrical equipment monitoring system using information visualization and method therefor
CN118690713B (en) A method and system for evaluating integrated circuits
TWI854128B (en) Failure detection apparatus and failure detection method
CN119742016A (en) A plastic part strength detection method and system
CN119314635A (en) Training method, device and medium for medical accelerator fault prediction model
Dwivedi Optimizing Fault Detection for Big Data Analytics Through Evolutionary Computation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant