[go: up one dir, main page]

CN116863220A - Method and device for generating countermeasure point cloud based on geometric density perception - Google Patents

Method and device for generating countermeasure point cloud based on geometric density perception Download PDF

Info

Publication number
CN116863220A
CN116863220A CN202310818209.8A CN202310818209A CN116863220A CN 116863220 A CN116863220 A CN 116863220A CN 202310818209 A CN202310818209 A CN 202310818209A CN 116863220 A CN116863220 A CN 116863220A
Authority
CN
China
Prior art keywords
point cloud
geometric
countermeasure
loss function
deformation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310818209.8A
Other languages
Chinese (zh)
Inventor
向镜宇
林渲翔
陈轲
贾奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202310818209.8A priority Critical patent/CN116863220A/en
Publication of CN116863220A publication Critical patent/CN116863220A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0475Generative networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/094Adversarial learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

本发明公开了一种基于几何密度感知的对抗点云生成方法及装置,属于人工智能技术领域。其中方法包括:S1、获取原始点云数据;S2、采用几何密度感知的方法对所述原始点云数据中的点做非线性几何形变,得到输入样本;S3、将所述输入样本输入到点云分类网络中,根据设计的优化模型,计算出损失函数,使用损失函数规约对抗点云的生成;S4、测试所述对抗点云在物理攻击的设置下的攻击成功率。本发明基于几何密度感知的方法对点云进行非线性形变,不需要预设姿态,生成的对抗点云表面更加光滑,对抗点云的形变更加不容易被肉眼察觉,其对抗性能够得到较好的保持,所以在对抗防御下拥有较好的攻击成功率并且能够在现实世界中得到运用。

The invention discloses an adversarial point cloud generation method and device based on geometric density perception, which belongs to the field of artificial intelligence technology. The method includes: S1, obtain the original point cloud data; S2, use the geometric density sensing method to perform nonlinear geometric deformation on the points in the original point cloud data to obtain the input sample; S3, input the input sample to the point In the cloud classification network, the loss function is calculated based on the designed optimization model, and the loss function is used to stipulate the generation of the adversarial point cloud; S4. Test the attack success rate of the adversarial point cloud under the setting of a physical attack. This invention performs nonlinear deformation of point clouds based on the method of geometric density perception, without the need for preset postures. The surface of the generated confrontation point cloud is smoother, and the deformation of the confrontation point cloud is less likely to be detected by the naked eye, and its confrontation performance can be better. maintained, so it has a better attack success rate under adversarial defense and can be used in the real world.

Description

Method and device for generating countermeasure point cloud based on geometric density perception
Technical Field
The application relates to the technical field of artificial intelligence, in particular to an countermeasure point cloud generation method and device based on geometric density perception.
Background
Initially, the countermeasure learning was mainly directed to the two-dimensional image field, and in recent years, as three-dimensional data acquisition became easier, three-dimensional deep neural networks developed rapidly. Thus, the application of the countermeasure learning in the three-dimensional field is attracting more and more interest, especially the attack on the three-dimensional point cloud classifier. The generation of the resistant point cloud is usually to add/delete certain specific points on the point cloud or to perform disturbance of the point level, and the point-based attacks are weak, so that people can easily identify the point cloud and make correct judgment; the weak attack can cause the three-dimensional point cloud classifier to misjudge, thereby giving out a high-confidence erroneous output. The deep learning method based on the point cloud is attracting more and more attention in various applications such as automatic driving, robotics and monitoring, and therefore, research on the generation of the countermeasure point cloud and the robustness of the 3D deep learning model is very necessary.
At present, an attack method aiming at point cloud has been greatly developed, but the existing resistant point cloud generation mainly depends on the change of point distribution, and the attack mode often causes uneven and uneven distribution of a large number of outliers and surface points. Thus, the success rate of these attacks against defenses tends to be greatly reduced. In addition, the challenge learning in the three-dimensional field is mainly focused on the attacks of the traditional three-dimensional point cloud classifier. The traditional three-dimensional point cloud network has no denaturation such as rotation, and the input model data has a preset gesture. Research shows that the object posture can also have a great influence on the classification result of the classifier. However, in the real world, the pose of the object is often not fixed when the object is subjected to point cloud sampling, and the assumption of point cloud pre-alignment is impractical. Finally, these attacks are only data attacks at the experimental level, and these methods of combating attacks cannot be reproduced and used in the real world because of the limitation of not preserving their resistance in the physical world. Therefore, a good technical solution for resisting point cloud generation is still lacking in the existing technology, so as to solve the above problems.
Disclosure of Invention
In order to solve at least one of the technical problems existing in the prior art to a certain extent, the application aims to provide an antagonism point cloud generation method and device based on geometric density perception.
The technical scheme adopted by the application is as follows:
a method for generating an countermeasure point cloud based on geometric density perception comprises the following steps:
s1, acquiring original point cloud data;
s2, performing nonlinear geometric deformation on points in the original point cloud data by adopting a geometric density sensing method to obtain an input sample;
s3, inputting the input sample into a point cloud classification network, calculating a loss function according to a designed optimization model, and using a loss function protocol to resist generation of point cloud;
s4, testing the attack success rate of the countermeasure point cloud under the physical attack setting.
Further, the step S1 includes:
sampling a plurality of points of point cloud data in an original data set by using a furthest point sampling method to obtain a data set;
training the point cloud classification network by using a training set in the data set; the point cloud classification network can select a rotation sensitive network or a rotation constant-change network;
and after the training of the point cloud classifier is completed, selecting a preset number of point clouds from the test set of the data set as original point cloud data.
Further, the step S2 includes:
sampling a preset number of anchor points in the original point cloud data according to a furthest sampling method;
taking the preset number of anchor points as the center, performing nonlinear geometric deformation operation on points in the original point cloud data by adopting a geometric density sensing method, and performing geometric superposition on the deformation taking each anchor point as the center to obtain an input sample;
wherein the nonlinear geometrical deformation operation includes at least one of a zoom operation, a rotation operation, or a translation operation.
Further, performing nonlinear geometric deformation operation on points in the original point cloud data by adopting a geometric density sensing method, and performing geometric superposition on the deformation with each anchor point as a center to obtain an input sample, wherein the method comprises the following steps:
representing any one of the original point clouds to be attacked, < >>Includes 1024 points->Is->A vector representing the i-th point, n being equal to 1024;Represents->A corresponding input sample; at->M points are selected as anchor points of the geometric density sensing deformation according to the furthest point sampling method>
In order to ensure the space continuity and smoothness of the countermeasure point cloud, converting a group of local anchor points in a three-dimensional space into an anchor point density map on a two-dimensional object surface manifold, and calculating the anchor point density map by adopting Gaussian kernels with the m anchor points as centers;
for point cloudsAny one of the points->Calculate its +.>The expression of the gaussian kernel model is as follows:
in the method, in the process of the application,representing the gaussian kernel function, sigma is a superparameter in the gaussian kernel, ii·ii 2 Representing euclidean norms;
the density maps of all points for the predetermined number m of anchor points are superimposed by means of a Nadaraya-Watson kernel regression, the expression of which is as follows:
in the method, in the process of the application,representing the input sample->I-th point in (a) ->Representing->The i-th point in the (a) is overlapped by taking m anchor points as the center to do the nonlinear deformation operation,/a->Representing->The ith point in (2) is marked by anchor point->And performing the nonlinear deformation operation for the center.
Further, for point cloudsThe nonlinear deformation operation includes scaling, rotation and translation; for point cloudsEvery point inside->The expression of the nonlinear deformation is as follows:
in the method, in the process of the application,and->Respectively represent heel anchor points->The associated scaling matrix, rotation matrix and translation vector.
Further, the step S3 includes:
inputting the input sample into a point cloud classification network, and calculating a loss function according to an optimization model;
calculating the gradient of the parameters of the geometric deformation based on the loss function;
taking the product of the gradient and the step length as the update of the parameters of the geometric deformation, and making the geometric deformation of the original point cloud data according to the updated parameters of the geometric deformation to obtain an countermeasure sample; meanwhile, detecting whether the point cloud classification network can successfully identify the countermeasure sample;
inputting the countermeasure sample as an input sample into the point cloud classification network, updating the parameter preset iteration times of the geometric deformation, and if the input sample can cause the point cloud classification network to be wrongly classified before the preset iteration times are reached, ending the iteration in advance to obtain the countermeasure point cloud; if the point cloud classification network can still successfully identify the challenge sample until the iteration times are over, the attack fails, and the step S2 is re-executed.
Further, the loss function comprises a point cloud classification loss function and a point cloud geometric similarity loss function;
the expression of the optimization model is as follows:
min L loss
where y represents the input sampleBelonging class labels, phi θ For a point cloud classification network, θ is the point cloud classification networkModel parameters;
loss function L loss The expression of (2) is:
in the method, in the process of the application,representing input samples +.>For the point cloud classification loss function of the point cloud classification network,representing a point cloud->Input sample->Is a point cloud geometric similarity loss function; λ represents the trade-off parameter of the two loss functions.
Further, the point cloud classification loss function is related to a success rate of combating attacks;
the method for using the white box attack and the attacker can acquire the point cloud classification network model structure and parameters;
adopting a non-target attack mode, namely only ensuring that the point cloud is classified by the point cloud classification network to be wrong when the attack is resisted, and having no requirement on classification of the classification errors;
using a C & W loss function as the point cloud classification loss function, wherein the expression of the point cloud classification loss function is as follows:
where kappa.gtoreq.0 is an edge threshold number and t is the category label with the highest external confidence except for the correct category label.
Further, the point cloud geometric similarity loss function comprises two parts, namely an implicit regularization term for geometric deformation perception and a display constraint term for geometric deformation perception;
the specific expression of the implicit regularization term of geometric deformation perception is as follows:
in the formula II F Representing the Frobenius norm,expressed as anchor->A rotation matrix R as a center j The calculated rotation angles along three axes (i.e. x, y, z); s is S j Representing heel anchor->Related scaling matrix, < >>Representing a heel anchor pointRelated translation vector, |II 2 Table euclidean norms; m is the preset number of anchor points;
the geometrical deformation perceived display constraint item comprises two parts, namely a Chamfer distance and a local curvature similarity;
the expression for the Chamfer distance is:
in the method, in the process of the application,representing the input sample +.>Point in->Representing the original point cloud->N is the number of points;
the expression of the local curvature similarity is:
in the method, in the process of the application,described as at the point->Field of places->The specific expression of the geometric features is as follows:
wherein k represents the fieldThe number of points in>Representation dot->Normal vector of (2);Representation area->Is a dot in (2);
the point cloud geometric similarity loss function can be expressed as:
where α and β represent trade-off parameters for the loss term.
Further, the updated expression of the parameters of the geometric deformation is:
in the method, in the process of the application,representing the gradient of the loss function to the parameter to be updated, eta represents the updated step length, and t represents the iteration times;Representing heel anchor->Related scaling matrix, < >>Expressed as anchor->A rotation matrix R as a center j Calculated rotation angle along three axes, < >>Representing heel anchor->And an associated translation vector.
Further, the physical attacks include simulated physical attacks and real physical attacks;
the simulation physical attack is realized: reconstructing the countermeasure point cloud to obtain a triangular mesh surface, and sampling the triangular mesh surface by a furthest point sampling method to obtain the countermeasure point cloud simulating physical attack;
realizing real physical attack: reconstructing the countermeasure point cloud to obtain a triangular mesh surface, printing the triangular mesh surface by using a 3D printing mode to obtain a real object, and finally scanning the real object by using a laser scanner to obtain the countermeasure point cloud of physical attack.
The application adopts another technical scheme that:
an opposing point cloud generation device based on geometric density perception, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method as described above.
The application adopts another technical scheme that:
a computer readable storage medium, in which a processor executable program is stored, which when executed by a processor is adapted to carry out the method as described above.
The beneficial effects of the application are as follows: according to the application, the point cloud is subjected to nonlinear deformation based on the geometric density sensing method, unlike the traditional countermeasure point cloud generation method, the preset gesture is not needed, the generated countermeasure point cloud is smoother in surface, deformation of the countermeasure point cloud is not easy to be perceived by naked eyes, and the countermeasure performance can be well maintained, so that the method has a good attack success rate under countermeasure defense and can be applied in the real world.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the following description is made with reference to the accompanying drawings of the embodiments of the present application or the related technical solutions in the prior art, and it should be understood that the drawings in the following description are only for convenience and clarity of describing some embodiments in the technical solutions of the present application, and other drawings may be obtained according to these drawings without the need of inventive labor for those skilled in the art.
FIG. 1 is a flow chart of steps of a method for generating an countermeasure point cloud based on geometric density perception according to an embodiment of the present application;
FIG. 2 is a flowchart for updating geometric deformation parameters according to an embodiment of the present application;
fig. 3 is a schematic diagram comparing one sample of the challenge point cloud generated by the method of simulating a physical attack according to an embodiment of the present application with three conventional challenge point cloud generation methods.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
In the description of the present application, it should be understood that references to orientation descriptions such as upper, lower, front, rear, left, right, etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of description of the present application and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the present application.
In the description of the present application, a number means one or more, a number means two or more, and greater than, less than, exceeding, etc. are understood to not include the present number, and above, below, within, etc. are understood to include the present number. The description of the first and second is for the purpose of distinguishing between technical features only and should not be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated or implicitly indicating the precedence of the technical features indicated.
Furthermore, in the description of the present application, unless otherwise indicated, "a plurality" means two or more. "and/or", describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship.
In the description of the present application, unless explicitly defined otherwise, terms such as arrangement, installation, connection, etc. should be construed broadly and the specific meaning of the terms in the present application can be reasonably determined by a person skilled in the art in combination with the specific contents of the technical scheme.
As shown in fig. 1, the present embodiment provides a method for generating an countermeasure point cloud based on geometric density sensing, which generates the countermeasure point cloud by performing nonlinear transformation based on density sensing on the point cloud at different positions and superposing the point clouds together, and compared with other point cloud attack methods, the countermeasure point cloud generated by the method has smoother surface, more robust countermeasure, can maintain better attack success rate under the condition of countermeasure defense, and is physically realizable. The application can be widely applied to the field of artificial intelligence, and the method and the device for generating the countermeasure point cloud based on geometric density sensing, which are provided by the embodiment of the application, are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, an embodiment of the present application provides a method for generating an countermeasure point cloud based on geometric density sensing, which mainly includes the following steps:
s1, acquiring original point cloud data.
In the embodiment of the application, 1024 points are sampled to the point cloud data in the original data set by using the furthest point sampling method to obtain a data set, wherein the data set can be ModelNet40 or other point cloud data sets; training the point cloud classification network by using a training set in the data set, wherein the point cloud classification network can be a rotation sensitive network or a rotation constant change network, the rotation sensitive network comprises a PointNet and a DGCNN, and the rotation constant change network is selected from Vector Neurons comprising a VN-PointNet and a VN-DGCNN; after training the point cloud classifier is completed, selecting a preset number of point clouds of preset categories which can be correctly classified by the point cloud classification network from the test set of the data set as original point cloud data.
S2, performing nonlinear geometric deformation on points in the original point cloud data by adopting a geometric density sensing method to obtain an input sample.
In the embodiment of the application, a preset number of anchor points are sampled in the original point cloud data according to a furthest distance sampling method; taking the preset number of anchor points as the center, adopting a geometric density sensing method to perform nonlinear geometric deformation, namely scaling, rotation and translation operations on the points in the original point cloud data, and performing geometric superposition on the deformation taking each preset number of anchor points as the center to obtain an input sample;
geometrically deforming the point cloud in the original point cloud data to the input sample by using the geometrical density sensing method, wherein the point cloud is at an original pointCloudM points are selected as anchor points of the geometric density sensing deformation according to the furthest point sampling method>Converting a group of local anchor points in a three-dimensional space into an anchor point density map on a two-dimensional object surface manifold, calculating the anchor point density map by adopting Gaussian kernel with the m anchor points as centers, wherein the expression of the Gaussian kernel model is as follows:
in the method, in the process of the application,representing the gaussian kernel function, sigma is a superparameter in the gaussian kernel, ii·ii 2 Representing the euclidean norm.
And superposing density maps of all points on the preset number m of anchor points in a mode of Nadaraya-Watson kernel regression, and carrying out nonlinear deformation according to the density maps to obtain the input sample, wherein the expression of the Nadaraya-Watson kernel regression is as follows:
in the method, in the process of the application,representing the input sample->I-th point in (a) ->Representing->The i-th point in the (a) is overlapped by taking m anchor points as the center to do the nonlinear deformation operation,/a->Representing->The ith point in (2) is marked by anchor point->And performing the nonlinear deformation operation for the center.
For point cloudsThe nonlinear deformation operation includes scaling, rotation and translation; for point cloud->Every point inside->The expression of the nonlinear deformation is as follows:
in the method, in the process of the application,and->Respectively represent heel anchor points->The associated scaling matrix, rotation matrix and translation vector.
S3, inputting the input sample into a point cloud classification network, calculating a loss function according to a designed optimization model, and using a loss protocol to resist the generation of the point cloud.
Referring to fig. 2, which shows a flowchart for updating geometric deformation parameters provided in the embodiment of the present application, the input samples are input into a point cloud classification network, and a loss function is calculated according to a designed optimization model, including a point cloud classification loss function and a point cloud geometric similarity loss function, where the specific expression is:
in the method, in the process of the application,representing input samples +.>For the point cloud classification loss function of the point cloud classification network,representing a point cloud->Input sample->Is a point cloud geometric similarity loss function; λ represents the trade-off parameter of the two loss functions.
Using a C & W loss function as the point cloud classification loss function, wherein the specific expression is as follows:
wherein, kappa is greater than or equal to 0 and is an edge threshold number phi θ X-y represents the point cloud classification network, namely a mapping function, capable of classifying any pointCloudMapping to space->Is (i.e.)>) θ is the point cloud classification network Φ θ And t is the category label with the highest external reliability except the correct category label.
The point cloud geometric similarity loss function comprises two parts, namely an implicit regularization term for geometric deformation perception and a display constraint term for geometric deformation perception.
The specific expression of the implicit regularization term of geometric deformation perception is as follows:
in the formula II F Representing the Frobenius norm,expressed as anchor->A rotation matrix R as a center j The calculated rotation angles along three axes (i.e. x, y, z); s is S j Representing heel anchor->Related scaling matrix, < >>Representing a heel anchor pointRelated translation vector, |II 2 Surface EuclideanNorms.
The geometrical deformation perceived display constraint item comprises two parts, namely a Chamfer distance and a local curvature similarity;
the expression for the Chamfer distance is:
the expression of the local curvature similarity is:
in the method, in the process of the application,described as at the point->Field of places->The specific expression of the geometric features is as follows:
wherein k represents the fieldThe number of points in>Representation dot->Is defined in the specification.
The point cloud geometric similarity loss function can be expressed as:
where α and β represent trade-off parameters for the loss term.
Based on the loss function, updating the parameters of the geometric deformation by using a random gradient descent mode, calculating the gradient of the parameters of the geometric deformation, taking the product of the gradient and the step length as the update of the parameters of the geometric deformation, wherein the specific expression is as follows:
in the method, in the process of the application,representing the gradient of the loss function to the parameter to be updated, eta represents the updated step length, and t represents the iteration times;Representing heel anchor->Related scaling matrix, < >>Expressed as anchor->A rotation matrix R as a center j Calculated rotation angle along three axes, < >>Representing heel anchor->And an associated translation vector.
Making geometric deformation of the original point cloud data according to updated parameters of the geometric deformation to obtain an countermeasure sample; meanwhile, detecting whether the point cloud classification network can successfully identify the countermeasure sample; inputting the countermeasure sample as the input sample into the point cloud classification network, updating the parameter preset iteration times of the geometric deformation, and if the input sample can cause the point cloud classification network to be wrongly classified before the preset iteration times are reached, ending the iteration in advance to obtain the countermeasure point cloud; if the point cloud classification network can still successfully identify the challenge sample until the iteration times are finished, the attack fails, different preset numbers of anchor points are resampled, and the following steps are repeated.
S4, testing the attack success rate of the countermeasure point cloud under the physical attack setting.
In the embodiment of the application, 10000 points are sampled again uniformly from the data set to obtain initial point cloud data, and the steps S1-S3 are executed to obtain the countermeasure point cloud; reconstructing the countermeasure point cloud by using a method for reconstructing the surface of the screen Poisso as to obtain a triangular mesh surface, and sampling the triangular mesh surface by using a furthest point sampling method so as to obtain the countermeasure point cloud simulating the physical attack.
Referring to FIG. 3, there is provided one sample of challenge point cloud generated by the method of simulating physical attacks (AdvGT) of the embodiment of the present application and three conventional challenge point cloud generation methods (KNN, 3d-Adv, geoA) 3 ) Is a comparison of (c).
Re-uniformly sampling 10000 points from the data set to obtain initial point cloud data, and executing the steps S1-S3 to obtain the countermeasure point cloud; reconstructing the countermeasure point cloud by using a method for reconstructing the surface of the screen Poisso to obtain a triangular grid surface, printing the triangular grid surface by using a 3D printing mode to obtain a real object, and finally scanning the real object by using a laser scanner to obtain the countermeasure point cloud of physical attack.
The embodiment also provides an antagonism point cloud generating device based on geometric density perception, which comprises:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method as shown in fig. 1.
The geometrical density perception-based countermeasure point cloud generating device provided by the embodiment of the application can be used for executing any combination implementation steps of the geometrical density perception-based countermeasure point cloud generating method provided by the embodiment of the method, and has corresponding functions and beneficial effects.
Embodiments of the present application also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the method shown in fig. 1.
The embodiment also provides a storage medium which stores instructions or programs for executing the method for generating the countermeasure point cloud based on the geometric density perception, and when the instructions or programs are run, the instructions or programs can execute the steps in any combination of the method embodiments, and the method has the corresponding functions and beneficial effects.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present application are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the application is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the described functions and/or features may be integrated in a single physical device and/or software module or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present application. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the application as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the application, which is to be defined in the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
In the foregoing description of the present specification, reference has been made to the terms "one embodiment/example", "another embodiment/example", "certain embodiments/examples", and the like, means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the application, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present application has been described in detail, the present application is not limited to the above embodiments, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present application, and these equivalent modifications and substitutions are intended to be included in the scope of the present application as defined in the appended claims.

Claims (10)

1. The method for generating the countermeasure point cloud based on geometric density perception is characterized by comprising the following steps of:
s1, acquiring original point cloud data;
s2, performing nonlinear geometric deformation on points in the original point cloud data by adopting a geometric density sensing method to obtain an input sample;
s3, inputting the input sample into a point cloud classification network, calculating a loss function according to a designed optimization model, and using a loss function protocol to resist generation of point cloud;
s4, testing the attack success rate of the countermeasure point cloud under the physical attack setting.
2. The method for generating an countermeasure point cloud based on geometric density sensing according to claim 1, wherein the step S1 includes:
sampling a plurality of points of point cloud data in an original data set by using a furthest point sampling method to obtain a data set;
training the point cloud classification network by using a training set in the data set;
and after the training of the point cloud classifier is completed, selecting a preset number of point clouds from the test set of the data set as original point cloud data.
3. The method for generating an countermeasure point cloud based on geometric density sensing according to claim 1, wherein the step S2 includes:
sampling a preset number of anchor points in the original point cloud data according to a furthest sampling method;
taking the preset number of anchor points as the center, performing nonlinear geometric deformation operation on points in the original point cloud data by adopting a geometric density sensing method, and performing geometric superposition on the deformation taking each anchor point as the center to obtain an input sample; wherein the nonlinear geometrical deformation operation includes at least one of a zoom operation, a rotation operation, or a translation operation.
4. The method for generating an countermeasure point cloud based on geometric density sensing according to claim 1, wherein the step S3 includes:
inputting the input sample into a point cloud classification network, and calculating a loss function according to an optimization model;
calculating the gradient of the parameters of the geometric deformation based on the loss function;
taking the product of the gradient and the step length as the update of the parameters of the geometric deformation, and making the geometric deformation of the original point cloud data according to the updated parameters of the geometric deformation to obtain an countermeasure sample; meanwhile, detecting whether the point cloud classification network can successfully identify the countermeasure sample;
inputting the countermeasure sample as an input sample into the point cloud classification network, updating the parameter preset iteration times of the geometric deformation, and if the input sample can cause the point cloud classification network to be wrongly classified before the preset iteration times are reached, ending the iteration in advance to obtain the countermeasure point cloud; if the point cloud classification network can still successfully identify the challenge sample until the iteration times are over, the attack fails, and the step S2 is re-executed.
5. The method for generating an opposing point cloud based on geometric density sensing as claimed in claim 1, wherein said loss function comprises a point cloud classification loss function and a point cloud geometric similarity loss function;
the expression of the optimization model is as follows:
min L loss
where y represents the input sampleBelonging class labels, phi θ The method is characterized in that the method is a point cloud classification network, and theta is a model parameter of the point cloud classification network;
loss function L loss The expression of (2) is:
in the method, in the process of the application,representing input samples +.>Point cloud class loss function for point cloud class network>Representing the original point cloud->Input sample->Is a point cloud geometric similarity loss function; λ represents the trade-off parameter of the two loss functions.
6. The method for generating a challenge point cloud based on geometric density sensing according to claim 5, wherein the point cloud classification loss function is related to a success rate of challenge, and an expression of the point cloud classification loss function is:
where kappa.gtoreq.0 is an edge threshold number and t is the category label with the highest external confidence except for the correct category label.
7. The method for generating an opposing point cloud based on geometric density sensing as claimed in claim 5, wherein said point cloud geometric similarity loss function comprises two parts, namely an implicit regularization term of geometric deformation sensing and a display constraint term of geometric deformation sensing;
the specific expression of the implicit regularization term of geometric deformation perception is as follows:
in the formula II F Representing the Frobenius norm,expressed as anchor->A rotation matrix R as a center j The calculated rotation angles along the three axes; s is S j Representing heel anchor->Related scaling matrix, < >>Representing heel anchor->Related translation vector, |II 2 Table euclidean norms; m is the preset number of anchor points;
the geometrical deformation perceived display constraint item comprises two parts, namely a Chamfer distance and a local curvature similarity;
the expression for the Chamfer distance is:
in the method, in the process of the application,representing input samples +.>Point in->Representing the original point cloud->N is the number of points;
the expression of the local curvature similarity is:
in the method, in the process of the application,described as at the point->Field of places->The specific expression of the geometric features is as follows:
wherein k represents the fieldThe number of points in>Representation dot->Normal vector of (2);Representation area->Is a dot in (2);
the point cloud geometric similarity loss function can be expressed as:
where α and β represent trade-off parameters for the loss term.
8. The method for generating a countermeasure point cloud based on geometric density sensing according to claim 4, wherein the updated expression of the parameters of the geometric deformation is:
in the method, in the process of the application,representing the gradient of the loss function to the parameter to be updated, eta represents the updated step length, and t represents the iteration times;
representing heel anchor->Related scaling matrix, < >>Expressed as anchor->A rotation matrix R as a center j Calculated rotation angle along three axes, < >>Representing heel anchor->And an associated translation vector.
9. The method for generating a countermeasure point cloud based on geometric density sensing according to claim 1, wherein the physical attacks include simulated physical attacks and real physical attacks;
the simulation physical attack is realized: reconstructing the countermeasure point cloud to obtain a triangular mesh surface, and sampling the triangular mesh surface by a furthest point sampling method to obtain the countermeasure point cloud simulating physical attack;
realizing real physical attack: reconstructing the countermeasure point cloud to obtain a triangular mesh surface, printing the triangular mesh surface by using a 3D printing mode to obtain a real object, and finally scanning the real object by using a laser scanner to obtain the countermeasure point cloud of physical attack.
10. An opposing point cloud generating device based on geometric density sensing, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method of any one of claims 1-9.
CN202310818209.8A 2023-07-04 2023-07-04 Method and device for generating countermeasure point cloud based on geometric density perception Pending CN116863220A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310818209.8A CN116863220A (en) 2023-07-04 2023-07-04 Method and device for generating countermeasure point cloud based on geometric density perception

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310818209.8A CN116863220A (en) 2023-07-04 2023-07-04 Method and device for generating countermeasure point cloud based on geometric density perception

Publications (1)

Publication Number Publication Date
CN116863220A true CN116863220A (en) 2023-10-10

Family

ID=88227920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310818209.8A Pending CN116863220A (en) 2023-07-04 2023-07-04 Method and device for generating countermeasure point cloud based on geometric density perception

Country Status (1)

Country Link
CN (1) CN116863220A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119312871A (en) * 2024-09-26 2025-01-14 北京航空航天大学 A hybrid mode counterattack method and system based on lidar point cloud

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200302223A1 (en) * 2019-03-21 2020-09-24 Illumina, Inc. Artificial Intelligence-Based Generation of Sequencing Metadata
CN112949675A (en) * 2020-12-22 2021-06-11 上海有个机器人有限公司 Method and device for interfering image recognition system, readable storage medium and terminal
CN114639050A (en) * 2022-03-23 2022-06-17 北京科能腾达信息技术股份有限公司 Sequence image target tracking method based on scale equal-variation convolution twin network
CN114973235A (en) * 2022-05-06 2022-08-30 华中科技大学 Method for generating countermeasure point cloud based on disturbance added in geometric feature field
CN116228825A (en) * 2023-01-29 2023-06-06 重庆邮电大学 A Point Cloud Registration Method Based on Salient Anchor Geometric Embedding

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200302223A1 (en) * 2019-03-21 2020-09-24 Illumina, Inc. Artificial Intelligence-Based Generation of Sequencing Metadata
CN112949675A (en) * 2020-12-22 2021-06-11 上海有个机器人有限公司 Method and device for interfering image recognition system, readable storage medium and terminal
CN114639050A (en) * 2022-03-23 2022-06-17 北京科能腾达信息技术股份有限公司 Sequence image target tracking method based on scale equal-variation convolution twin network
CN114973235A (en) * 2022-05-06 2022-08-30 华中科技大学 Method for generating countermeasure point cloud based on disturbance added in geometric feature field
CN116228825A (en) * 2023-01-29 2023-06-06 重庆邮电大学 A Point Cloud Registration Method Based on Salient Anchor Geometric Embedding

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119312871A (en) * 2024-09-26 2025-01-14 北京航空航天大学 A hybrid mode counterattack method and system based on lidar point cloud
CN119312871B (en) * 2024-09-26 2025-10-24 北京航空航天大学 A hybrid mode counterattack method and system based on lidar point cloud

Similar Documents

Publication Publication Date Title
CN111709409B (en) Face living body detection method, device, equipment and medium
Wang et al. Detect globally, refine locally: A novel approach to saliency detection
CN112381879B (en) Object posture estimation method, system and medium based on image and three-dimensional model
US11455495B2 (en) System and method for visual recognition using synthetic training data
Zhang et al. 3d adversarial attacks beyond point cloud
CN108734210B (en) An object detection method based on cross-modal multi-scale feature fusion
CN108764195B (en) Handwriting model training method, handwritten character recognition method, device, equipment and medium
CN109359539B (en) Attention evaluation method, apparatus, terminal device and computer-readable storage medium
CN112257665B (en) Image content recognition methods, image recognition model training methods and media
US20190114824A1 (en) Fast and precise object alignment and 3d shape reconstruction from a single 2d image
US20150325046A1 (en) Evaluation of Three-Dimensional Scenes Using Two-Dimensional Representations
CN111767760A (en) Living body detection method and device, electronic device and storage medium
US11568212B2 (en) Techniques for understanding how trained neural networks operate
CN118982458A (en) Method and device for generating rotated face image
CN106529609B (en) A kind of image-recognizing method and device based on neural network structure
CN110348475A (en) It is a kind of based on spatial alternation to resisting sample Enhancement Method and model
CN112991449B (en) An AGV positioning and mapping method, system, device and medium
CN114187506A (en) Remote sensing image scene classification method of viewpoint-aware dynamic routing capsule network
CN115829877A (en) Physical attack-oriented adaptive anti-patch generation method and device
CN114399803A (en) Face key point detection method and device
CN117854155B (en) A human skeleton motion recognition method and system
Salem et al. Semantic image inpainting using self-learning encoder-decoder and adversarial loss
KR20250107418A (en) Elctronic device and method to restore scene image of target view
CN116863220A (en) Method and device for generating countermeasure point cloud based on geometric density perception
US11157765B2 (en) Method and system for determining physical characteristics of objects

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination