
CN116626051A - A shooting point generation method, device, storage medium and electronic equipment for appearance detection - Google Patents


Info

Publication number
CN116626051A
CN116626051A
Authority
CN
China
Prior art keywords
patch
geometric
shooting
cluster
patch set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310653061.7A
Other languages
Chinese (zh)
Inventor
马元巍
黄昳彬
潘正颐
侯大为
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Weiyi Intelligent Manufacturing Technology Co ltd
Original Assignee
Shanghai Weiyi Intelligent Manufacturing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Weiyi Intelligent Manufacturing Technology Co ltd
Priority to CN202310653061.7A
Publication of CN116626051A
Legal status: Pending


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/96 Management of image or video recognition tasks
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8883 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges involving the calculation of gauges, generating models
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/06 Recognition of objects for industrial automation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Analytical Chemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Chemical & Material Sciences (AREA)
  • Signal Processing (AREA)
  • Biochemistry (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract


The application discloses a shooting point generation method, device, storage medium and electronic equipment for appearance detection. The method includes: determining a number of geometric patches based on a three-dimensional model of the product object to be detected; clustering the geometric patches to obtain a number of patch sets; calculating the shooting point corresponding to each patch set based on the normal vectors of the geometric patches in the same patch set, the vertex coordinates of the geometric patches, and a predetermined shooting distance; and generating the target shooting points based on the per-set shooting points. In this application, by obtaining the geometric patches corresponding to the object to be detected, the shooting points of the object can subsequently be generated reasonably and accurately based on the geometric patches, so that the object can be photographed directly at those shooting points. This reduces the number of line scan cameras to be set up, reduces the number of pictures taken, and improves detection efficiency.

Description

Shooting point position generation method and device for appearance detection, storage medium and electronic equipment
Technical Field
The application relates to the technical field of industrial quality detection, in particular to a shooting point position generation method, a device, a medium and equipment for appearance detection.
Background
In the production process of products, appearance detection is an important link. By detecting the appearance of the product, the surface defect of the product can be found in time, and the quality and performance of the product are ensured.
In existing line scan quality inspection systems for appearance inspection, a plurality of line scan cameras and a plurality of shooting strategies with different shooting angles are adopted in order to photograph each surface of the product object to be inspected as completely as possible. This causes redundancy in the line scan cameras and in the photographed pictures, which in turn makes the inspection process complex and the inspection efficiency low.
Disclosure of Invention
In view of the above, the application provides a shooting point generation method, device, storage medium and electronic equipment for appearance detection, aiming to solve the problems of a complex detection process and low detection efficiency caused by redundant line scan camera setups and redundant pictures in the existing appearance detection process.
In order to solve the above problems, the present application provides a shooting point position generation method for appearance detection, including:
determining a plurality of geometric patches based on a three-dimensional model of a product object to be detected;
clustering the geometric patches to obtain a plurality of patch sets;
calculating to obtain shooting points corresponding to each patch set based on the normal vector of each geometric patch in the same patch set, the vertex coordinates of each geometric patch and a preset shooting distance;
and generating a target shooting point position based on each shooting point position.
Optionally, the determining a plurality of geometric patches based on the three-dimensional model of the product object to be detected includes:
based on a three-dimensional model of a product object to be detected, simplifying each patch in the three-dimensional model by using a quadric error metric (QEM) algorithm to obtain a plurality of geometric patches.
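Although the application does not include source code, the core of the quadric error metric (QEM) simplification named above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation, and all function names are hypothetical. Each triangle contributes a plane quadric K = p·pᵀ for its supporting plane ax + by + cz + d = 0; a vertex accumulates the quadrics of its incident faces, and the cost of placing that vertex at position v is v′ᵀQv′ with v′ in homogeneous coordinates. QEM simplification repeatedly collapses the edges with the lowest such cost.

```python
# Illustrative sketch of the quadric error metric (QEM), not the patent's code.
import math

def plane_quadric(v0, v1, v2):
    """4x4 plane quadric K = p p^T for the triangle (v0, v1, v2)."""
    ux, uy, uz = (v1[i] - v0[i] for i in range(3))
    wx, wy, wz = (v2[i] - v0[i] for i in range(3))
    # unit normal of the triangle via the cross product of two edges
    nx, ny, nz = uy*wz - uz*wy, uz*wx - ux*wz, ux*wy - uy*wx
    norm = math.sqrt(nx*nx + ny*ny + nz*nz)
    a, b, c = nx/norm, ny/norm, nz/norm
    d = -(a*v0[0] + b*v0[1] + c*v0[2])   # plane offset so that a,b,c,d fits v0
    p = (a, b, c, d)
    return [[p[i]*p[j] for j in range(4)] for i in range(4)]

def add_quadrics(q1, q2):
    """Accumulate the quadrics of a vertex's incident faces."""
    return [[q1[i][j] + q2[i][j] for j in range(4)] for i in range(4)]

def vertex_error(q, v):
    """Cost v'^T Q v' of placing a vertex at position v = (x, y, z)."""
    vh = (v[0], v[1], v[2], 1.0)
    return sum(vh[i]*q[i][j]*vh[j] for i in range(4) for j in range(4))
```

For two coplanar triangles, any point on the shared plane has near-zero error while points off the plane are penalised, which is why QEM preserves flat regions while merging their patches aggressively.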
Optionally, the clustering each geometric patch to obtain a plurality of patch sets includes:
determining a plurality of initial cluster centers;
based on the initial cluster centers and the geometric patches, updating the initial cluster centers in an iterative updating manner to obtain a plurality of target cluster centers;
based on each target cluster center, acquiring a plurality of geometric patches corresponding to each target cluster center and obtaining a patch set corresponding to each target cluster center, so as to obtain each patch set.
Optionally, the updating each initial cluster center in an iterative updating manner based on each initial cluster center and each geometric patch to obtain a plurality of target cluster centers includes:
respectively calculating the distance between each geometric patch and each initial cluster center;
distributing each geometric patch to a corresponding initial cluster center based on each distance and a preset distribution mode;
performing cluster center calculation based on the geometric patches corresponding to the same initial cluster center to obtain the current cluster center corresponding to each initial cluster center;
judging whether a predetermined cluster center updating condition is satisfied based on each current cluster center; under the condition that the updating condition is not satisfied, obtaining each target cluster center based on each current cluster center; and under the condition that the updating condition is satisfied, taking each current cluster center as an initial cluster center for the next round of cluster center updating.
Optionally, the calculating to obtain the shooting point location corresponding to each patch set based on the normal vector of each geometric patch in the same patch set, the vertex coordinates of each geometric patch and the predetermined shooting distance includes:
calculating to obtain the camera shooting position corresponding to each patch set based on the unit normal vector of each geometric patch in the same patch set, the vertex coordinates of each geometric patch and a predetermined shooting distance;
based on the unit normal vector of each geometric patch in the same patch set, calculating to obtain the shooting pose of the camera corresponding to each patch set;
based on the camera shooting position and the camera shooting pose corresponding to the same patch set, shooting points corresponding to each patch set are determined.
Optionally, the calculating to obtain the camera shooting position corresponding to each patch set based on the unit normal vector of each geometric patch in the same patch set, each vertex coordinate of each geometric patch, and a predetermined shooting distance includes:
calculating to obtain average unit normal vectors corresponding to the patch sets based on the unit normal vectors of the geometric patches in the same patch set;
calculating to obtain the centroid coordinates corresponding to each patch set based on the vertex coordinates of each geometric patch in the same patch set;
and calculating and obtaining camera shooting positions corresponding to the patch sets based on the preset shooting distance, the average unit normal vector corresponding to the same patch set and the centroid point coordinates corresponding to the same patch set.
Optionally, the calculating to obtain the camera shooting pose corresponding to each patch set based on the unit normal vector of each geometric patch in the same patch set includes:
calculating to obtain average unit normal vectors corresponding to the patch sets based on the unit normal vectors of the geometric patches in the same patch set;
and respectively calculating and obtaining the camera shooting pose corresponding to each patch set based on the average unit normal vector corresponding to each patch set and a preset calculation formula.
In order to solve the above problems, the present application provides a shooting point position generation apparatus for appearance detection, comprising:
the determining module is used for determining a plurality of geometric patches based on the three-dimensional model of the product object to be detected;
the clustering module is used for clustering the geometric patches to obtain a plurality of patch sets;
the calculation module is used for calculating and obtaining shooting points corresponding to each patch set based on the normal vector of each geometric patch in the same patch set, the vertex coordinates of each geometric patch and a preset shooting distance;
and the generation module is used for generating a target shooting point position based on each shooting point position.
In order to solve the above problems, the present application provides a storage medium storing a computer program which, when executed by a processor, implements the steps of the shooting point generation method for appearance detection described in any one of the above.
In order to solve the above problems, the present application provides an electronic device, at least including a memory, and a processor, where the memory stores a computer program, and the processor implements the steps of the shooting point generation method for appearance detection described in any one of the above when executing the computer program on the memory.
According to the shooting point generation method, device, storage medium and electronic equipment for appearance detection, the geometric patches corresponding to the object to be detected are obtained and clustered to obtain the patch sets; the shooting point corresponding to each patch set can then be generated reasonably and accurately based on the geometric patches in the same patch set, and the object to be detected can be photographed directly at these shooting points. This reduces the number of line scan cameras to be set up and the number of pictures to be taken, facilitates subsequent detection, and improves the efficiency of appearance detection.
The foregoing description is only an overview of the technical solution of the present application. In order that the technical means of the present application may be understood more clearly and implemented in accordance with this specification, and to make the above and other objects, features and advantages of the present application more readily apparent, specific embodiments of the present application are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
fig. 1 is a flowchart of a shooting point location generation method for appearance detection according to an embodiment of the present application;
fig. 2 is a block diagram of a shooting point generating apparatus for appearance detection according to another embodiment of the present application;
fig. 3 is a block diagram of an electronic device according to another embodiment of the present application.
Detailed Description
Various aspects and features of the present application are described herein with reference to the accompanying drawings.
It should be understood that various modifications may be made to the embodiments of the application herein. Therefore, the above description should not be taken as limiting, but merely as exemplification of the embodiments. Other modifications within the scope and spirit of the application will occur to persons of ordinary skill in the art.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and, together with a general description of the application given above, and the detailed description of the embodiments given below, serve to explain the principles of the application.
These and other characteristics of the application will become apparent from the following description of a preferred form of embodiment, given as a non-limiting example, with reference to the accompanying drawings.
It is also to be understood that, although the application has been described with reference to some specific examples, those skilled in the art can certainly realize many other equivalent forms of the application.
The above and other aspects, features and advantages of the present application will become more apparent in light of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present application will be described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the application, which can be embodied in various forms. Well-known and/or repeated functions and constructions are not described in detail to avoid obscuring the application with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not intended to be limiting, but merely serve as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present application in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment", "in another embodiment", "in yet another embodiment" or "in other embodiments", each of which may refer to one or more of the same or different embodiments in accordance with the application.
The embodiment of the application provides a shooting point generation method for appearance detection. As shown in fig. 1, the method in this embodiment comprises the following steps:
step S101, determining a plurality of geometric patches based on a three-dimensional model of a product object to be detected;
Since modern factories produce actual workpieces from three-dimensional (3D) models, an original 3D model file is generally already available for the product object to be detected (the workpiece to be inspected). Of course, a corresponding 3D model may also be created in advance for the product object to be detected.
In this step, after the three-dimensional model of the product object to be detected is obtained, a simplification process may specifically be performed on each patch in the three-dimensional model by using a quadric error metric (QEM) algorithm, so as to obtain a plurality of geometric patches.
Wherein a geometric patch refers to a local surface, having a predetermined geometric shape, that constitutes the three-dimensional model. The geometric patch in this step may specifically be a triangular patch or the like. Taking a triangular patch as an example, for the three-dimensional model of any workpiece/product object to be detected, the surface of the three-dimensional model can be divided into m triangular patches, where the ith triangular patch consists of 3 vertices and a unit normal vector.
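The triangular-patch representation described above can be illustrated concretely as follows (names are illustrative, not from the patent): each patch stores its three vertices together with a unit normal vector computed from them, assuming counter-clockwise vertex winding.

```python
# Minimal illustrative representation of a triangular patch: three vertices
# plus a unit normal derived from them (assumes counter-clockwise winding).
import math

def unit_normal(v0, v1, v2):
    """Unit normal of the triangle (v0, v1, v2) via the edge cross product."""
    ux, uy, uz = (v1[i] - v0[i] for i in range(3))
    wx, wy, wz = (v2[i] - v0[i] for i in range(3))
    nx, ny, nz = uy*wz - uz*wy, uz*wx - ux*wz, ux*wy - uy*wx
    n = math.sqrt(nx*nx + ny*ny + nz*nz)
    return (nx/n, ny/n, nz/n)

class TriPatch:
    """The ith triangular patch: 3 vertices and a unit normal vector."""
    def __init__(self, v0, v1, v2):
        self.vertices = (v0, v1, v2)
        self.normal = unit_normal(v0, v1, v2)
```

A patch lying in the xy plane, for example, yields the normal (0, 0, 1), which is the quantity the clustering in step S102 operates on.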
Step S102, clustering the geometric patches to obtain a plurality of patch sets;
in the implementation process, corresponding unit normal vectors can be calculated in advance for each geometric patch, and then each geometric patch is clustered based on the corresponding unit normal vector of each geometric patch, so that the geometric patches with similar unit normal vectors are divided into one class, and a plurality of patch sets are obtained.
Step S103, calculating and obtaining shooting points corresponding to each patch set based on the normal vector of each geometric patch in the same patch set, the vertex coordinates of each geometric patch and a preset shooting distance;
In the specific implementation process, the camera shooting position (x, y, z) corresponding to each patch set can be calculated based on the unit normal vector of each geometric patch in the same patch set, the vertex coordinates of each geometric patch and a predetermined shooting distance; then, the camera shooting pose (rx, ry, rz) corresponding to each patch set is calculated based on the unit normal vector of each geometric patch in the same patch set. Thus, the shooting point (x, y, z, rx, ry, rz) corresponding to each patch set can be determined from the camera shooting position and camera shooting pose of that patch set.
Step S104, generating a target shooting point position based on each shooting point position.
In this step, after the shooting points corresponding to each patch set are obtained, the shooting points of all patch sets can be combined to generate the target shooting points of the object to be detected.
According to the shooting point generation method for appearance detection provided by this embodiment, the geometric patches corresponding to the object to be detected are obtained and clustered to obtain the patch sets; the shooting point corresponding to each patch set can then be generated reasonably and accurately based on the geometric patches in the same patch set, and the object to be detected can be photographed directly at these shooting points. This reduces the number of line scan cameras to be set up and the number of pictures to be taken, facilitates subsequent detection, and improves the efficiency of appearance detection.
In order to make the clustering result more reasonable and accurate, the embodiment of the application can specifically adopt the following clustering mode when executing step S102 to cluster each geometric patch:
step S1021, determining a plurality of initial cluster centers;
in this step, a plurality of initial geometric patches may be randomly determined from each of the geometric patches, and then a plurality of initial cluster centers may be determined based on each of the initial geometric patches. Specifically, the unit normal vector based on the initial geometric patch can be used as the initial cluster center. Whereby a corresponding initial cluster center can be obtained for each initial geometric patch.
Step S1022, based on each initial cluster center and each geometric patch, updating each initial cluster center in an iterative updating manner to obtain a plurality of target cluster centers;
in the specific implementation of the step, the cluster center updating process is as follows:
step one, respectively calculating the distance between each geometric surface piece and each initial cluster center.
In this step, the distance calculation may be performed for the other geometric patches except for the initial geometric patch, that is, the distances between the other geometric patches and each initial geometric patch may be calculated based on the unit normal vector of the other geometric patches and the unit normal vector of each initial geometric patch, respectively.
And step two, distributing each geometric patch to a corresponding initial cluster center based on each distance and a preset distribution mode.
That is, for the geometric patches other than the initial geometric patches, the distance from each geometric patch to each initial cluster center may be calculated, and each patch may then be allocated to the corresponding initial cluster center, i.e. to the cluster of the corresponding initial cluster center, according to a predetermined allocation manner. Wherein the predetermined allocation manner may be nearest allocation, i.e. allocating each geometric patch to its closest initial cluster center.
And thirdly, performing cluster center calculation based on a plurality of geometric patches corresponding to the same initial cluster center to obtain current cluster centers corresponding to the initial cluster centers.
In this step, for the geometric patches corresponding to the same initial cluster center, the cluster center may be recalculated by arithmetic averaging, that is, the unit normal vectors of the geometric patches corresponding to the same initial cluster center are arithmetically averaged to obtain the current cluster center.
Step four, judging whether a predetermined cluster center updating condition is satisfied based on each current cluster center; executing step five under the condition that the updating condition is satisfied; and executing step six under the condition that the updating condition is not satisfied.
The cluster center updating condition in this step may be a predetermined number of update rounds or a predetermined cluster center change threshold. That is, when the number of iterative updating rounds reaches the predetermined number of rounds, it is determined that the updating condition is no longer satisfied and no further round of cluster center updating is needed; otherwise, when that number has not yet been reached, the updating condition is satisfied and another round is performed. Alternatively, when the difference between an updated current cluster center and its corresponding initial cluster center is smaller than the predetermined change threshold, it is determined that the updating condition is no longer satisfied and no further round is needed; otherwise, when the difference is not smaller than the predetermined change threshold, the updating condition is satisfied and another round is performed. The predetermined number of rounds and the predetermined change threshold can be set and adjusted according to actual needs; for example, the predetermined number of update rounds may be 300 rounds, 400 rounds, and so on.
Step five, taking each current cluster center as an initial cluster center for the next round of cluster center updating, and returning to step one.
Step six, obtaining each target cluster center based on each current cluster center.
Step S1023, based on each target cluster center, acquiring the geometric patches corresponding to each target cluster center and obtaining the patch set corresponding to each target cluster center, so as to obtain each patch set.
That is, after the target cluster center is determined, a corresponding set of patches may be obtained based on a number of geometric patches corresponding to the target cluster center.
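The iterative updating of steps S1021 to S1023 can be sketched as a small k-means-style loop over unit normal vectors. This is a hedged sketch under stated assumptions: the distance is plain Euclidean distance between normals, the allocation is nearest allocation, and both stopping conditions (a round limit and a cluster center change threshold) are included; parameter names such as `max_rounds`, `eps` and `init_centers` are illustrative, not from the patent.

```python
# Illustrative k-means-style clustering of patches by unit normal (S1021-S1023).
import math
import random

def dist(a, b):
    """Euclidean distance between two 3-D vectors."""
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))

def mean(vectors):
    """Arithmetic mean of a non-empty list of 3-D vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(3))

def cluster_normals(normals, k, max_rounds=300, eps=1e-6, init_centers=None, seed=0):
    # S1021: initial cluster centers (randomly chosen normals unless supplied)
    centers = list(init_centers) if init_centers else random.Random(seed).sample(normals, k)
    for _ in range(max_rounds):                   # round-limit stop condition
        # steps one and two: assign each normal to its nearest center
        buckets = [[] for _ in range(k)]
        for nv in normals:
            buckets[min(range(k), key=lambda j: dist(nv, centers[j]))].append(nv)
        # step three: recompute each center as the arithmetic mean of its bucket
        new_centers = [mean(b) if b else centers[j] for j, b in enumerate(buckets)]
        # step four: stop once no center moved by more than eps
        moved = max(dist(c, nc) for c, nc in zip(centers, new_centers))
        centers = new_centers                     # step five: next round's centers
        if moved < eps:
            break
    return centers, buckets                       # step six / S1023: centers + patch sets
```

Given normals concentrated around two directions, the loop separates them into two patch sets whose centers approximate the two dominant orientations.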
In this embodiment, by determining the initial cluster center and then updating each initial cluster center in a cyclic updating manner, the finally determined target cluster center can be more reasonable and accurate, so that the subsequent patch sets obtained based on the target cluster center are more reasonable, and a foundation is laid for the subsequent reasonable calculation of corresponding shooting points based on each patch set.
In this embodiment, when performing step S103 to calculate and obtain the shooting points corresponding to each patch set, the following calculation method may be specifically adopted:
step S1031, calculating and obtaining camera shooting positions corresponding to the surface patch sets based on unit normal vectors of the geometrical surface patches in the same surface patch set, the vertex coordinates of the geometrical surface patches and a preset shooting distance;
in this embodiment, for each patch set, the camera shooting position corresponding to the obtained patch set may be specifically calculated as follows:
step one, calculating and obtaining average unit normal vectors corresponding to each patch set based on unit normal vectors of each geometric patch in the same patch set;
that is, the unit normal vector contained in each geometric patch in the set of geometric patches is arithmetically averaged to obtain the average unit normal vector of the set of patches.
Step two, calculating and obtaining centroid coordinates corresponding to each patch set based on the vertex coordinates of each geometric patch in the same patch set;
that is, the coordinates of each geometric patch in the patch set are arithmetically averaged to obtain the centroid coordinates of the patch set.
And thirdly, calculating and obtaining camera shooting positions corresponding to the patch sets based on a preset shooting distance, an average unit normal vector corresponding to the same patch set and a centroid point coordinate corresponding to the same patch set.
In this step, after the average unit normal vector and the centroid coordinates are obtained, the centroid point is moved along the normal direction by the length of the working distance (shooting distance), whereby the position of the line scan camera can be obtained.
In this step, the shooting distance refers to the working distance of the line scan camera, that is, the distance from the lower end of the lens to the surface of the object at which the object surface is imaged most sharply; this may be called the optimal field of view.
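Steps one to three of S1031 can be sketched as follows. Names are illustrative, and one assumption is flagged explicitly: the arithmetic mean of the unit normals is renormalised to unit length so that the camera ends up exactly the working distance d from the centroid, which the patent text does not state.

```python
# Illustrative sketch of S1031: camera position = centroid + d * average normal.
import math

def average_unit_normal(normals):
    """Arithmetic mean of the unit normals, renormalised to unit length.
    (Renormalisation is an assumption: it keeps the offset exactly d.)"""
    n = len(normals)
    ax, ay, az = (sum(v[i] for v in normals) / n for i in range(3))
    norm = math.sqrt(ax*ax + ay*ay + az*az)
    return (ax/norm, ay/norm, az/norm)

def centroid(patches):
    """Arithmetic mean of all vertex coordinates in the patch set;
    each patch is a tuple of three vertices."""
    verts = [v for p in patches for v in p]
    n = len(verts)
    return tuple(sum(v[i] for v in verts) / n for i in range(3))

def camera_position(patches, normals, d):
    """Move the centroid along the average unit normal by working distance d."""
    cx, cy, cz = centroid(patches)
    nx, ny, nz = average_unit_normal(normals)
    return (cx + d*nx, cy + d*ny, cz + d*nz)
```

For a single patch lying in the xy plane with working distance 100, the camera is placed 100 units directly above the patch's centroid, as the text describes.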
Step S1032, calculating and obtaining the shooting pose of the camera corresponding to each patch set based on the unit normal vector of each geometric patch in the same patch set;
in this step, the average unit normal vector normal_cluster corresponding to each patch set can be calculated based on the unit normal vectors of the geometric patches in the same patch set; the specific calculation of normal_cluster follows the same principle as the calculation of the average unit normal vector in step S1031 and is not repeated here.
After the average unit normal vector of each patch set is obtained by calculation, the camera shooting pose (rx, ry, rz) corresponding to each patch set can be calculated based on the average unit normal vector corresponding to that patch set and a predetermined calculation formula. Wherein, taking the rotation order consistent with the derivation in the later embodiment, the predetermined calculation formula is:
rx = arctan2(cluster_ny, −cluster_nz), ry = sin⁻¹(−cluster_nx), rz = 0
wherein cluster_nx, cluster_ny and cluster_nz are the components of the average unit normal vector normal_cluster = (cluster_nx, cluster_ny, cluster_nz).
In step S1033, based on the camera shooting position and the camera shooting pose corresponding to the same patch set, shooting points (x, y, z, rx, ry, rz) corresponding to each patch set are determined.
According to the above shooting point generation method for appearance detection, based on the unit normal vectors of the geometric patches in a patch set and the vertex coordinates of those patches, the shooting position and camera pose corresponding to the patch set can be calculated reasonably and accurately, so that the shooting point corresponding to each patch set, and hence the target shooting points obtained by combining them, can be determined reasonably and accurately. The object to be detected can then be shot directly at the target shooting points, which reduces the number of line-scan cameras required and the number of pictures taken, provides convenience for subsequent detection, and improves appearance detection efficiency.
On the basis of the above embodiment, another embodiment of the present application provides a shooting point position generating method for appearance detection, where in the embodiment, a geometric patch is illustrated as a triangular patch, and the method in the embodiment includes the following steps:
step S201, based on a three-dimensional model of the product object to be detected, simplifying the patches in the three-dimensional model by adopting a quadric error metrics (QEM) simplification algorithm to obtain a plurality of triangular patches;
step S202, determining a plurality of initial cluster centers;
a number of initial triangular patches may be randomly selected from the triangular patches, with the unit normal vector of each initial triangular patch serving as an initial cluster center. Thereby, a corresponding initial cluster center is obtained for each initial triangular patch, each initial cluster center being denoted as μ_j = (nx_j, ny_j, nz_j).
step S203, respectively calculating the distance between each triangular patch and each initial cluster center;
in this step, the distance calculation may be performed for the triangular patches other than the initial triangular patches, that is, the distance between the unit normal vector of each vertex of a triangular patch and the unit normal vector of each initial triangular patch is calculated, thereby obtaining the distance between each geometric patch and each initial cluster center.
In this step, for the i-th vertex, its normal-vector distance to the initial cluster center μ_j is:
distance(p_i, μ_j) = cos⁻¹(nx_i × nx_j + ny_i × ny_j + nz_i × nz_j)
wherein p_i denotes the unit normal vector of the i-th vertex, p_i = (nx_i, ny_i, nz_i); μ_j denotes the unit normal vector corresponding to the j-th initial cluster center, μ_j = (nx_j, ny_j, nz_j). In this embodiment, since the 3D model is composed of a plurality of vertices and a plurality of triangular patches (triangles), the distance between each vertex and each initial cluster center can be calculated based on the unit normal vector of that vertex, that is, a first distance corresponding to each vertex is obtained; the distance between a triangular patch and an initial cluster center can then be obtained from the first distances corresponding to the vertices of that patch.
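The angular distance above can be sketched in a few lines of Python. This is an illustration, not code from the patent: the function name `normal_distance` and the clamping of the dot product (a guard against floating-point drift before arccosine) are additions.

```python
import math

def normal_distance(p, mu):
    """Angular distance between two unit normal vectors, as in
    distance(p_i, mu_j) = arccos(nx_i*nx_j + ny_i*ny_j + nz_i*nz_j).
    The dot product is clamped to [-1, 1] to guard against floating-point
    drift before taking the arccosine (an illustrative addition)."""
    dot = sum(a * b for a, b in zip(p, mu))
    return math.acos(max(-1.0, min(1.0, dot)))
```

Identical normals give distance 0, orthogonal normals give π/2, and opposite normals give π, so the metric groups patches that face the same way.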
Step S205, assigning each triangular patch to a corresponding initial cluster center based on each distance and a predetermined assignment manner;
in this step, for the geometric patches other than the initial geometric patches, the distance from each geometric patch to each initial cluster center may be calculated, and then each patch is allocated to the corresponding initial cluster center, that is, to the cluster corresponding to that initial cluster center, according to a predetermined allocation manner. The predetermined allocation manner may be nearest-neighbor allocation, i.e. each geometric patch is allocated to the initial cluster center closest to it.
Step S206, performing cluster center calculation based on a plurality of triangular patches corresponding to the same initial cluster center to obtain current cluster centers corresponding to the initial cluster centers;
in this step, the cluster center may be recalculated as the arithmetic average over the geometric patches assigned to the same initial cluster center. For example, for the j-th initial cluster center μ_j, the updated current cluster center μ′_j is calculated as:
μ′_j = (1/m) Σ_{i=1}^{m} (nx_i, ny_i, nz_i)
wherein m denotes the number of vertices assigned to the j-th cluster center after clustering, and (nx_i, ny_i, nz_i) denotes the unit normal vector of the i-th of these m vertices.
Step S207, judging, based on each current cluster center, whether a preset cluster center updating condition is met, and executing step S208 under the condition that the cluster center updating condition is not met; under the condition that the cluster center updating condition is met, taking each current cluster center as an initial cluster center for the next round of updating, and re-executing step S203.
The cluster center updating condition in this step may be a predetermined number of update rounds or a predetermined cluster-center variation threshold. That is, when the number of iterative update rounds reaches the predetermined number, it may be determined that the updating condition is not met, i.e. no further round of cluster center updating is required; otherwise, when the number of iterative update rounds has not reached the predetermined number, it is determined that the updating condition is met and the next round of cluster center updating is needed. Alternatively, when the difference between each updated current cluster center and its corresponding initial cluster center before updating is smaller than the preset cluster-center variation threshold, it may be determined that the updating condition is not met, i.e. no further round of updating is required; otherwise, when the difference between a current cluster center after a round of updating and its corresponding initial cluster center before updating is not smaller than the preset threshold, it may be determined that the updating condition is met and the next round of cluster center updating is needed.
Step S208, obtaining each target cluster center based on each current cluster center;
step S209, based on each target cluster center, obtaining the plurality of triangular patches corresponding to each target cluster center, and obtaining the patch set corresponding to each target cluster center, so as to obtain the patch sets;
in this step, after the target cluster centers are obtained, the triangular patches may be clustered once more, thereby obtaining the patch set corresponding to each target cluster center.
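The clustering procedure of steps S202-S209 can be sketched as a k-means-style loop. This is a hedged, simplified illustration rather than the patent's exact procedure: it clusters one unit normal per patch instead of per-vertex normals, it re-normalises the averaged centers (which the text leaves implicit), and the names `cluster_patches`, `init`, `max_rounds` and `tol` are invented for the example.

```python
import math
import random

def cluster_patches(normals, k, init=None, max_rounds=50, tol=1e-4):
    """K-means-style clustering of per-patch unit normals (steps S202-S209).

    `normals` is a list of (nx, ny, nz) unit vectors, one per triangular
    patch -- a simplification of the text, which works with per-vertex
    normals.  `init` optionally fixes the initial cluster centers; by
    default they are sampled at random (step S202).  Returns (centers,
    labels), where labels[i] is the cluster index of patch i.
    """
    centers = list(init) if init is not None else random.sample(normals, k)
    labels = [0] * len(normals)
    for _ in range(max_rounds):               # predetermined number of update rounds
        # Steps S203/S205: assign each patch to the nearest center,
        # using the angular distance arccos of the (clamped) dot product.
        for i, n in enumerate(normals):
            labels[i] = min(range(k), key=lambda j: math.acos(
                max(-1.0, min(1.0, sum(a * b for a, b in zip(n, centers[j]))))))
        # Step S206: recompute each center as the arithmetic mean of its
        # members (re-normalised to unit length; not explicit in the text).
        new_centers = []
        for j in range(k):
            members = [n for i, n in enumerate(normals) if labels[i] == j]
            if not members:
                new_centers.append(centers[j])   # keep an empty cluster's center
                continue
            mean = tuple(sum(c) / len(members) for c in zip(*members))
            length = math.sqrt(sum(c * c for c in mean)) or 1.0
            new_centers.append(tuple(c / length for c in mean))
        # Step S207: stop once every center moved less than the preset variation.
        shift = max(math.dist(a, b) for a, b in zip(centers, new_centers))
        centers = new_centers
        if shift < tol:
            break
    return centers, labels
```

Patches whose normals point the same way end up in the same cluster, which is exactly what makes a single camera placement usable for the whole patch set.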
Step S210, calculating and obtaining average unit normal vectors corresponding to the triangular patches in the same patch set based on the unit normal vectors of the triangular patches;
that is, the unit normal vectors contained in the triangular patches of the patch set are arithmetically averaged to obtain the average unit normal vector normal_cluster of the patch set. The calculation formula is:
normal_cluster = (1/M) Σ_{i=1}^{M} (nx_i, ny_i, nz_i)
wherein r denotes the number of triangular patches in the patch set; the r triangular patches have M vertices in total, and (nx_i, ny_i, nz_i) denotes the unit normal vector of the i-th of the M vertices.
Step S211, calculating and obtaining the centroid coordinates corresponding to each patch set based on the vertex coordinates of the triangular patches in the same patch set;
that is, the vertex coordinates of the triangular patches in the patch set are arithmetically averaged to obtain the centroid coordinates center of the patch set. The calculation formula is:
center = (x_center, y_center, z_center) = (1/M) Σ_{i=1}^{M} (x_i, y_i, z_i)
wherein r denotes the number of triangular patches in the patch set, and the r triangular patches have M vertices in total; (x_i, y_i, z_i) denotes the coordinates of the i-th of the M vertices.
Step S212, calculating and obtaining camera shooting positions corresponding to the patch sets based on a preset shooting distance, an average unit normal vector corresponding to the same patch set and a centroid point coordinate corresponding to the same patch set;
in this step, after the average unit normal vector and the centroid coordinates are obtained, the camera is moved from the centroid point along the normal direction by the length of the working distance (shooting distance), whereby the position (x, y, z) of the line-scan camera can be obtained, namely:
x = x_center + working_distance × cluster_nx
y = y_center + working_distance × cluster_ny
z = z_center + working_distance × cluster_nz
wherein working_distance denotes the shooting distance; (x_center, y_center, z_center) denotes the centroid coordinates corresponding to the patch set; (cluster_nx, cluster_ny, cluster_nz) denotes the average unit normal vector corresponding to the patch set.
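Steps S210-S212 can be combined into one short sketch. The data layout (a list of triangles, each a list of (vertex_xyz, vertex_normal) pairs) and the function name `camera_position` are assumptions made for illustration.

```python
def camera_position(patches, working_distance):
    """Camera shooting position for one patch set (steps S210-S212).
    The average unit normal and the centroid are arithmetic means over
    all M vertices of the set, and the camera is placed working_distance
    along the average normal from the centroid:
        x = x_center + working_distance * cluster_nx  (likewise y, z)."""
    verts = [v for tri in patches for v in tri]          # all M vertices
    M = len(verts)
    centroid = tuple(sum(p[i] for p, _ in verts) / M for i in range(3))
    avg_n = tuple(sum(n[i] for _, n in verts) / M for i in range(3))
    pos = tuple(c + working_distance * n for c, n in zip(centroid, avg_n))
    return pos, centroid, avg_n
```

For a patch set lying in the z = 0 plane with normals (0, 0, 1) and a working distance of 30, the camera lands 30 units above the centroid.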
Step S213, based on the average unit normal vector corresponding to each patch set and a preset calculation formula, respectively calculating and obtaining the camera shooting pose corresponding to each patch set;
in this step, the average unit normal vector normal_cluster corresponding to each patch set can be calculated based on the unit normal vectors of the geometric patches in the same patch set; the specific calculation of normal_cluster is the same as that of the average unit normal vector in step S210 and is not repeated here.
After the average unit normal vector of each patch set is obtained by calculation, the camera shooting pose (rx, ry, rz) corresponding to each patch set can be calculated based on the average unit normal vector corresponding to that patch set and a predetermined calculation formula. Wherein, following the derivation below, the predetermined calculation formula is:
rx = arctan2(cluster_ny, −cluster_nz), ry = sin⁻¹(−cluster_nx), rz = 0
wherein cluster_nx, cluster_ny and cluster_nz are obtained from the average unit normal vector normal_cluster = (cluster_nx, cluster_ny, cluster_nz).
In this embodiment, when calculating the camera pose (rx, ry, rz), it can be assumed that the Z axis of the camera pose points outward perpendicular to the lens, so the normal vector normal_camera of the camera pose is opposite to the normal vector normal_cluster of the clustered patch set; thereby, the normal vector normal_camera of the camera pose is:
normal_camera = (−cluster_nx, −cluster_ny, −cluster_nz)
Assuming the rotation matrix of the camera pose is R_camera, the following equation can be obtained:
R_camera · (0, 0, 1)ᵀ = normal_cameraᵀ
According to the general formula of the Euler-angle rotation matrix (taking the X-Y-Z rotation order R_camera = Rx(rx)·Ry(ry)·Rz(rz), whose third column is (sin ry, −sin rx·cos ry, cos rx·cos ry)ᵀ), solving gives:
ry = sin⁻¹(−cluster_nx), rx = arctan2(cluster_ny, −cluster_nz)
According to the above solution, any rz value satisfies the requirement that the camera lens faces the clustered triangular patch set, so rz = 0 may be taken; it should be noted that in actual assembly, rz can be adjusted according to the structure of the quality-inspection equipment without changing its imaging effect. Thus, the line-scan camera pose (rx, ry, rz) is finally obtained, namely:
rx = arctan2(cluster_ny, −cluster_nz),
ry = sin⁻¹(−cluster_nx),
rz = 0
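The pose computation can be sketched and self-checked as follows. Note the hedging: the X-Y-Z Euler order and the `atan2` form of rx are assumptions chosen to be consistent with ry = sin⁻¹(−cluster_nx) and rz = 0 in the text, not formulas quoted from the patent.

```python
import math

def camera_pose(avg_normal):
    """Shooting pose (rx, ry, rz) from a patch set's average unit normal.
    The camera's Z axis must point opposite the cluster normal:
        normal_camera = (-cluster_nx, -cluster_ny, -cluster_nz).
    Assuming the X-Y-Z Euler convention R = Rx(rx) @ Ry(ry) @ Rz(rz), whose
    rotated Z axis is (sin ry, -sin rx*cos ry, cos rx*cos ry), the text's
    ry = asin(-cluster_nx) and rz = 0 follow; the rx formula below is a
    reconstruction consistent with those equations."""
    nx, ny, nz = avg_normal
    ry = math.asin(-nx)
    rx = math.atan2(ny, -nz)   # solves -sin(rx)cos(ry) = -ny, cos(rx)cos(ry) = -nz
    rz = 0.0
    return rx, ry, rz

def rotated_z_axis(rx, ry, rz):
    """Third column of Rx @ Ry @ Rz: the direction the lens points.
    rz does not affect this column, which is why any rz is admissible."""
    return (math.sin(ry),
            -math.sin(rx) * math.cos(ry),
            math.cos(rx) * math.cos(ry))
```

A quick sanity check is to verify that the rotated Z axis equals the negated cluster normal for a few unit normals, i.e. the lens indeed faces the patch set.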
step S214, determining shooting points corresponding to each patch set based on the shooting positions and shooting poses of the cameras corresponding to the same patch set;
that is, the shooting point corresponding to each patch set is (x, y, z, rx, ry, rz).
step S215, generating a target shooting point based on each shooting point.
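Steps S214-S215 amount to concatenating position and pose and collecting the per-patch-set results. A minimal sketch follows; the function names are illustrative, and the duplicate removal in `target_shooting_points` is an assumption about how the shooting points are combined, which the text does not spell out.

```python
def shooting_point(position, pose):
    """Combine a camera position (x, y, z) and pose (rx, ry, rz) into one
    shooting point (x, y, z, rx, ry, rz), as in step S214."""
    return tuple(position) + tuple(pose)

def target_shooting_points(points):
    """Step S215: collect the per-patch-set shooting points into the target
    shooting points (here, duplicates removed and order preserved -- an
    illustrative assumption)."""
    seen, out = set(), []
    for p in points:
        if p not in seen:
            seen.add(p)
            out.append(p)
    return out
```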
According to the shooting point position generation method for appearance detection, the geometric patches corresponding to the object to be detected are obtained, the geometric patches are clustered to obtain the patch set, the shooting point positions corresponding to the patch sets can be reasonably and accurately generated based on the geometric patches in the same patch set, further the object to be detected can be shot directly based on the shooting point positions, the setting number of line-scan cameras is reduced, the shooting number of pictures is reduced, convenience is provided for subsequent detection, and the appearance detection efficiency is improved.
An embodiment of the present application provides a shooting point position generating device for appearance detection, as shown in fig. 2, including:
a determining module 11, configured to determine a plurality of geometric patches based on a three-dimensional model of a product object to be detected;
a clustering module 12, configured to cluster each of the geometric patches to obtain a plurality of patch sets;
a calculation module 13, configured to calculate and obtain a shooting point location corresponding to each patch set based on a normal vector of each geometric patch in the same patch set, a vertex coordinate of each geometric patch, and a predetermined shooting distance;
the generating module 14 is configured to generate a target shooting point location based on each shooting point location.
In a specific implementation of this embodiment, the determining module is specifically configured to: based on a three-dimensional model of the product object to be detected, simplify the patches in the three-dimensional model by adopting a quadric error metrics (QEM) simplification algorithm to obtain a plurality of geometric patches.
In a specific implementation of this embodiment, the clustering module specifically includes a determining unit, an updating unit, and an obtaining unit; the determining unit is used for determining a plurality of initial cluster centers; the updating unit is used for updating each initial cluster center in an iterative manner based on each initial cluster center and each geometric patch to obtain a plurality of target cluster centers; the obtaining unit is used for obtaining, based on each target cluster center, the plurality of geometric patches corresponding to each target cluster center, and obtaining the patch set corresponding to each target cluster center, so as to obtain the patch sets.
In a specific implementation of this embodiment, the updating unit is specifically configured to: respectively calculate the distance between each geometric patch and each initial cluster center; allocate each geometric patch to the corresponding initial cluster center based on each distance and a predetermined allocation manner; perform cluster center calculation based on the plurality of geometric patches corresponding to the same initial cluster center to obtain the current cluster center corresponding to each initial cluster center; judge, based on each current cluster center, whether a preset cluster center updating condition is met, and obtain each target cluster center based on each current cluster center under the condition that the updating condition is not met; and, under the condition that the updating condition is met, take each current cluster center as an initial cluster center for the next round of updating.
In a specific implementation process of this embodiment, the computing module specifically includes: a first calculation unit, a second calculation unit, and a determination unit; the first calculation unit is used for calculating and obtaining camera shooting positions corresponding to the surface patch sets based on unit normal vectors of the geometrical surface patches in the same surface patch set, the vertex coordinates of the geometrical surface patches and a preset shooting distance; the second calculation unit is used for calculating and obtaining camera shooting pose corresponding to each patch set based on unit normal vectors of each geometric patch in the same patch set; the determining unit is used for determining shooting points corresponding to each patch set based on the shooting positions and the shooting postures of the cameras corresponding to the same patch set.
In a specific implementation process of this embodiment, the first computing unit is specifically configured to: calculating to obtain average unit normal vectors corresponding to the patch sets based on the unit normal vectors of the geometric patches in the same patch set; calculating to obtain the centroid coordinates corresponding to each patch set based on the vertex coordinates of each geometric patch in the same patch set; and calculating and obtaining camera shooting positions corresponding to the patch sets based on the preset shooting distance, the average unit normal vector corresponding to the same patch set and the centroid point coordinates corresponding to the same patch set.
In this embodiment, in a specific implementation process, the second computing unit is specifically configured to: calculating to obtain average unit normal vectors corresponding to the patch sets based on the unit normal vectors of the geometric patches in the same patch set; and respectively calculating and obtaining the camera shooting pose corresponding to each patch set based on the average unit normal vector corresponding to each patch set and a preset calculation formula.
According to the shooting point position generation device for appearance detection, the plurality of geometric patches corresponding to the object to be detected are obtained, the geometric patches are clustered to obtain the patch set, the shooting point positions corresponding to the patch sets can be reasonably and accurately generated based on the geometric patches in the same patch set, further the object to be detected can be shot directly based on the shooting point positions, the setting number of line-scan cameras is reduced, the shooting number of pictures is reduced, convenience is provided for subsequent detection, and the appearance detection efficiency is improved.
Another embodiment of the present application provides a storage medium storing a computer program which, when executed by a processor, performs the method steps of:
step one, determining a plurality of geometric patches based on a three-dimensional model of a product object to be detected;
clustering the geometric patches to obtain a plurality of patch sets;
thirdly, calculating to obtain shooting points corresponding to each patch set based on the normal vector of each geometric patch in the same patch set, the vertex coordinates of each geometric patch and a preset shooting distance;
and step four, generating a target shooting point position based on each shooting point position.
The specific implementation process of the above method steps may refer to any embodiment of the above method for generating a shooting point for appearance detection, and this embodiment is not repeated here.
According to the storage medium, the plurality of geometric patches corresponding to the object to be detected are obtained, the geometric patches are clustered to obtain the patch set, then the shooting points corresponding to the patch sets can be reasonably and accurately generated based on the geometric patches in the same patch set, further the object to be detected can be shot directly based on the shooting points, the setting number of the linear array cameras is reduced, the shooting number of pictures is reduced, convenience is provided for subsequent detection, and the appearance detection efficiency is improved.
Another embodiment of the present application provides an electronic device, as shown in fig. 3, at least including a memory 1 and a processor 2, where the memory 1 stores a computer program, and the processor 2 implements the following method steps when executing the computer program on the memory:
step one, determining a plurality of geometric patches based on a three-dimensional model of a product object to be detected;
clustering the geometric patches to obtain a plurality of patch sets;
thirdly, calculating to obtain shooting points corresponding to each patch set based on the normal vector of each geometric patch in the same patch set, the vertex coordinates of each geometric patch and a preset shooting distance;
and step four, generating a target shooting point position based on each shooting point position.
The specific implementation process of the above method steps may refer to any embodiment of the above method for generating a shooting point for appearance detection, and this embodiment is not repeated here.
According to the electronic equipment, the plurality of geometric patches corresponding to the object to be detected are obtained, the geometric patches are clustered to obtain the patch set, then the shooting points corresponding to the patch sets can be reasonably and accurately generated based on the geometric patches in the same patch set, further the object to be detected can be shot directly based on the shooting points, the setting number of the linear array cameras is reduced, the shooting number of pictures is reduced, convenience is provided for subsequent detection, and the appearance detection efficiency is improved.
The above embodiments are only exemplary embodiments of the present application and are not intended to limit the present application, the scope of which is defined by the claims. Various modifications and equivalent arrangements of this application will occur to those skilled in the art, and are intended to be within the spirit and scope of the application.

Claims (10)

1. A shooting point position generation method for appearance detection, characterized by comprising:
determining a plurality of geometric patches based on a three-dimensional model of a product object to be detected;
clustering the geometric patches to obtain a plurality of patch sets;
calculating to obtain shooting points corresponding to each patch set based on the normal vector of each geometric patch in the same patch set, the vertex coordinates of each geometric patch and a preset shooting distance;
and generating a target shooting point position based on each shooting point position.
2. The method of claim 1, wherein the determining a number of geometric patches based on the three-dimensional model of the product object to be inspected comprises:
based on a three-dimensional model of the product object to be detected, simplifying the patches in the three-dimensional model by adopting a quadric error metrics simplification algorithm to obtain a plurality of geometric patches.
3. The method of claim 1, wherein clustering each of the geometric patches to obtain a plurality of patch sets comprises:
determining a plurality of initial cluster centers;
based on each initial cluster center and each geometric patch, updating each initial cluster center in an iterative updating manner to obtain a plurality of target cluster centers;
based on each target cluster center, obtaining a plurality of geometric patches corresponding to each target cluster center, and obtaining a patch set corresponding to each target cluster center, so as to obtain each patch set.
4. The method of claim 3, wherein updating each initial cluster center in an iterative updating manner based on each initial cluster center and each geometric patch to obtain a plurality of target cluster centers comprises:
respectively calculating the distance between each geometric patch and each initial cluster center;
distributing each geometric patch to a corresponding initial cluster center based on each distance and a preset distribution mode;
performing cluster center calculation based on a plurality of geometric patches corresponding to the same initial cluster center to obtain current cluster centers corresponding to the initial cluster centers;
judging, based on each current cluster center, whether a preset cluster center updating condition is met, and acquiring each target cluster center based on each current cluster center under the condition that the updating condition is not met; and under the condition that the updating condition is met, taking each current cluster center as an initial cluster center for the next round of updating.
5. The method of claim 1, wherein calculating to obtain the shot points corresponding to each patch set based on the normal vector of each geometric patch in the same patch set, the vertex coordinates of each geometric patch, and a predetermined shot distance, comprises:
calculating to obtain camera shooting positions corresponding to the surface patch sets based on unit normal vectors of the geometrical surface patches in the same surface patch set, the vertex coordinates of the geometrical surface patches and a preset shooting distance;
based on the unit normal vector of each geometric patch in the same patch set, calculating to obtain the shooting pose of the camera corresponding to each patch set;
based on the camera shooting position and the camera shooting pose corresponding to the same patch set, shooting points corresponding to each patch set are determined.
6. The method of claim 5, wherein calculating a camera shooting position corresponding to each patch set based on a unit normal vector of each geometric patch in the same patch set, each vertex coordinate of each geometric patch, and a predetermined shooting distance, comprises:
calculating to obtain average unit normal vectors corresponding to the patch sets based on the unit normal vectors of the geometric patches in the same patch set;
calculating to obtain the centroid coordinates corresponding to each patch set based on the vertex coordinates of each geometric patch in the same patch set;
and calculating and obtaining camera shooting positions corresponding to the patch sets based on the preset shooting distance, the average unit normal vector corresponding to the same patch set and the centroid point coordinates corresponding to the same patch set.
7. The method of claim 5, wherein calculating a camera shooting pose corresponding to each patch set based on unit normal vectors of each geometric patch in the same patch set comprises:
calculating to obtain average unit normal vectors corresponding to the patch sets based on the unit normal vectors of the geometric patches in the same patch set;
and respectively calculating and obtaining the camera shooting pose corresponding to each patch set based on the average unit normal vector corresponding to each patch set and a preset calculation formula.
8. A shooting point position generation device for appearance detection, characterized by comprising:
the determining module is used for determining a plurality of geometric patches based on the three-dimensional model of the product object to be detected;
the clustering module is used for clustering the geometric patches to obtain a plurality of patch sets;
the calculation module is used for calculating and obtaining shooting points corresponding to each patch set based on the normal vector of each geometric patch in the same patch set, the vertex coordinates of each geometric patch and a preset shooting distance;
and the generation module is used for generating a target shooting point position based on each shooting point position.
9. A storage medium storing a computer program which, when executed by a processor, implements the steps of the shooting point generation method for appearance detection of any one of the preceding claims 1 to 7.
10. An electronic device comprising at least a memory, a processor, the memory having stored thereon a computer program, the processor, when executing the computer program on the memory, implementing the steps of the shot point generation method for appearance detection of any of the preceding claims 1-7.
CN202310653061.7A 2023-06-05 2023-06-05 A shooting point generation method, device, storage medium and electronic equipment for appearance detection Pending CN116626051A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310653061.7A CN116626051A (en) 2023-06-05 2023-06-05 A shooting point generation method, device, storage medium and electronic equipment for appearance detection

Publications (1)

Publication Number Publication Date
CN116626051A true CN116626051A (en) 2023-08-22

Family

ID=87613235


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118602950A (en) * 2024-07-23 2024-09-06 东莞市兆丰精密仪器有限公司 3D model programming method, device, image measuring instrument and storage medium based on image measuring instrument
CN119879727A (en) * 2024-12-24 2025-04-25 思看科技(杭州)股份有限公司 Optimization method and device for automatic scanning path

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109410333A (en) * 2018-09-19 2019-03-01 北京大学 A kind of super dough sheet cluster generation method of high quality
CN110147815A (en) * 2019-04-10 2019-08-20 深圳市易尚展示股份有限公司 Multi-frame point cloud fusion method and device based on K-means clustering
WO2021042844A1 (en) * 2019-09-06 2021-03-11 平安科技(深圳)有限公司 Large-scale data clustering method and apparatus, computer device and computer-readable storage medium
CN112489000A (en) * 2020-11-20 2021-03-12 天津朗硕机器人科技有限公司 Autonomous reconfigurable part surface quality detection system
CN112801977A (en) * 2021-01-28 2021-05-14 青岛理工大学 Deep learning-based relative pose estimation and monitoring method for assembly parts
CN113096094A (en) * 2021-04-12 2021-07-09 成都市览图科技有限公司 Three-dimensional object surface defect detection method
WO2022142948A1 (en) * 2020-12-29 2022-07-07 深圳市普渡科技有限公司 Dynamic target tracking and positioning method and apparatus, and device and storage medium
CN115325962A (en) * 2022-08-26 2022-11-11 中国科学院长春光学精密机械与物理研究所 Automatic laser three-dimensional scanning track planning method
CN115753781A (en) * 2021-09-03 2023-03-07 株式会社东芝 Processing device, inspection system, processing method, and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118602950A (en) * 2024-07-23 2024-09-06 Dongguan Zhaofeng Precision Instrument Co., Ltd. 3D model programming method and device based on image measuring instrument, image measuring instrument and storage medium
CN118602950B (en) * 2024-07-23 2025-08-26 Dongguan Zhaofeng Precision Instrument Co., Ltd. 3D model programming method and device based on image measuring instrument, image measuring instrument and storage medium
CN119879727A (en) * 2024-12-24 2025-04-25 Scantech (Hangzhou) Co., Ltd. Optimization method and device for automatic scanning path

Similar Documents

Publication Publication Date Title
CN109977466B (en) Three-dimensional scanning viewpoint planning method and device and computer readable storage medium
CN116626051A (en) A shooting point generation method, device, storage medium and electronic equipment for appearance detection
CN113781622B (en) Three-dimensional model texture mapping conversion method, device, equipment and medium
CN115457202B (en) Method, device and storage medium for updating three-dimensional model
CN115131433B (en) A method, device and electronic device for processing non-cooperative target posture
CN113658166A (en) Point cloud defect detection method and device based on grid model
CN114359204A (en) Method, device and electronic device for detecting void in point cloud
CN112529891A (en) Hollow hole identification and contour detection method and device based on point cloud and storage medium
CN118587268A (en) A fitting method, system, medium and device for incomplete cylindrical point cloud
CN114943144A (en) Satellite layout optimization design method for distance control by utilizing Phi function
CN111932628B (en) A method and device for determining posture, electronic device, and storage medium
CN117687543A (en) Three-dimensional model regular curved surface extraction method and device, electronic equipment and storage medium
WO2021114027A1 (en) 3d shape matching method and device for describing 3d local features on the basis of sgh
CN112446952B (en) Three-dimensional point cloud normal vector generation method and device, electronic equipment and storage medium
CN116342704B (en) Scanning point category determining method and device, computer equipment and storage medium
CN118037601B (en) Point cloud filling method and electronic equipment
CN114359413B (en) Method and system for calculating position parameters of rotating platform for three-dimensional scanning
CN103839241B (en) Interpolation triangle forming method and device
CN116703855A (en) Method, device, medium and equipment for generating photographing points for quality inspection
CN112802201B (en) Method and device for obtaining parallel nearest distance between entity models
CN114241153A (en) Finite element node area weight solution method, electronic device and storage medium
CN114353285B (en) Sound source positioning method and device, computer equipment, air conditioner and storage medium
CN117218143B (en) Point cloud segmentation method and device
CN117830361B (en) Point cloud registration method, device, computer equipment and storage medium
CN119417835B (en) Intelligent detection method and system for parts based on three-dimensional machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination