EP3671574B1 - Device and method to improve the robustness against adversarial examples - Google Patents
- Publication number: EP3671574B1 (application EP18213925.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- input image
- classifier
- image
- class
- Prior art date
- Legal status: Active
Classifications
- G06N3/084: Backpropagation, e.g. using gradient descent
- G06N3/08: Learning methods
- G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/217: Validation; Performance evaluation; Active pattern learning techniques
- G06F18/24133: Distances to prototypes
- G06N3/0464: Convolutional networks [CNN, ConvNet]
- G06N3/09: Supervised learning
- G06N3/094: Adversarial learning
- G06V10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/776: Validation; Performance evaluation
- G06V10/82: Image or video recognition or understanding using neural networks
- G06F18/24143: Distances to neighbourhood prototypes, e.g. restricted Coulomb energy networks [RCEN]
Definitions
- The trained classifier 60 may then (912) be used for providing a first output signal y1 by receiving sensor signal S comprising data from sensor 30, determining the input signal x depending on said sensor signal S, and inputting said input signal x into classifier 60 to obtain first output signal y1 that characterizes a classification of input signal x.
- This adversarial input signal xadv is then (914) inputted into classifier 60 to obtain a second output signal y2 that characterizes a classification of adversarial input signal xadv.
- An actuator control signal (A) may then (916) be determined in accordance with said parameter vu, and actuator (10) may be controlled in accordance with said actuator control signal (A). For example, if said parameter vu indicates non-vulnerability, said actuator control signal (A) may be determined to correspond to a normal operation mode, whereas, if said parameter vu indicates a vulnerability, said actuator control signal (A) may be determined to correspond to a fail-safe operation mode, e.g. by reducing the dynamics of a motion of said actuator (10).
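The mapping of step (916) from the vulnerability parameter vu to an operation mode can be sketched as follows. This is a hedged illustration only; the function name, the dictionary fields, and the factor by which the dynamics is reduced are our own assumptions, not from the patent.

```python
# Hypothetical sketch of step (916): choosing actuator control signal A
# from the vulnerability parameter vu. All names and values are assumed.
def control_signal(vu: bool, max_speed: float) -> dict:
    if vu:
        # Classifier found vulnerable: fail-safe mode with reduced dynamics.
        return {"mode": "fail_safe", "speed_limit": 0.25 * max_speed}
    # Classifier found non-vulnerable: normal operation mode.
    return {"mode": "normal", "speed_limit": max_speed}

print(control_signal(True, 2.0))   # fail-safe mode, speed limit 0.5
print(control_signal(False, 2.0))  # normal mode, speed limit 2.0
```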
- FIG 11 illustrates schematically a structure of one embodiment of classifier 60.
- Input signal x is inputted into processing unit 61, which may, for example, be given by all but the last layer of a neural network.
- Processing unit 61 is configured to output a vector f comprising preferably at least one entry zl for each of the possible classes l for classification.
- Said vector f is inputted into a selector 62, which may be given, e.g., by an implementation of an argmax function.
- Selector 62 is configured to output signal y, which corresponds to the class corresponding to that one of the entries fl of vector f that has the highest value.
- Vector f will also be denoted f(x).
- Target classification l may be determined as the classification that corresponds to the second-largest entry fl of vector f. The attack may be targeted or untargeted.
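The selector and the target-class choice above can be sketched in a few lines. This is a toy illustration; the score values are our own and not from the patent.

```python
import numpy as np

# Hypothetical class-score vector f, as output by processing unit 61.
f = np.array([0.1, 2.3, 0.7, 1.9])

y = int(np.argmax(f))            # selector 62: class with the highest entry
target = int(np.argsort(f)[-2])  # target class l: second-largest entry of f

print(y, target)  # 1 3
```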
Description
- The invention concerns a method for assessing robustness of an image classifier according to claim 1, a method for training an image classifier according to claim 8, a computer program according to claim 11, a machine-readable storage medium according to claim 12, and a system according to claim 13.
- US10007866 BB discloses a method comprising: accessing, from a memory, a neural network image classifier, the neural network image classifier having been trained using a plurality of training images from an input space, the training images being labeled for a plurality of classes; computing a plurality of adversarial images by, for each adversarial image, searching a region in the input space around one of the training images, the region being one in which the neural network is linear, to find an image which is incorrectly classified into the plurality of classes by the neural network; applying the training image to the neural network and observing a response of the neural network;
- computing a constraint system which represents the input space using the observed response; and
- further training the neural network image classifier to have improved accuracy using at least the adversarial images.
- "Universal Adversarial Perturbations Against Semantic Image Segmentation", arXiv preprint arXiv:1704.05712v3, Jan Hendrik Metzen, Mummadi Chaithanya Kumar, Thomas Brox, Volker Fischer, disclose a method for generating adversarial perturbations.
- A. C. Serban, E. Poll, "Adversarial examples - a complete characterisation of the phenomenon", arXiv.org, provides an overview of how to generate and use adversarial examples.
- Classifiers, such as neural network classification systems, can easily be fooled. It is well known that classifiers based on deep learning may be sensitive to small perturbations. In order to deploy such systems in the physical world, it is important to provide a proof of the system's robustness.
- It is possible to compute classifiers that are robust with respect to adversarial noise lying within a small Lp ball. Nonetheless, adversarial examples with respect to more natural perturbations are not necessarily covered by these robustness statements. More natural perturbations include partial translations, rotations and motion blur. Moving, for example, a dark object by one pixel leads to a very large L∞ distance if the background is very bright, but will usually be considered a small change in the physical world. As a consequence, these small physical changes are not covered by L∞-robustness.
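The one-pixel translation example can be checked numerically. The following minimal numpy sketch (the ten-pixel one-dimensional "image" and its values are our own illustration, not from the patent) shows that the shift saturates the L∞ distance even though the scene barely changed:

```python
import numpy as np

# A dark object (value 0.0) on a bright background (value 1.0) ...
original = np.ones(10)
original[3:5] = 0.0   # object occupies pixels 3-4

# ... and the same object shifted by one pixel.
shifted = np.ones(10)
shifted[4:6] = 0.0    # object now occupies pixels 4-5

# L-infinity distance: the largest per-pixel change.
linf = np.max(np.abs(original - shifted))
print(linf)  # 1.0 -- the maximum possible change for images in [0, 1]
```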
- The methods of the claim set have the advantage of improving robustness with respect to such perturbations.
- A first embodiment according to the invention is defined by claim 1.
- The at least approximate Wasserstein distance is a Sinkhorn distance, which differs from a Wasserstein distance by an entropic term. For any pair of a first distribution (P) and a second distribution (Q), said entropic term characterizes an entropy of a distribution Π that satisfies Π 1_n = P, Π^T 1_n = Q. If P and Q are distributions defined over the same domain Ω, then Π is a distribution over the domain Ω×Ω with P and Q as its marginals.
- It has been discovered that the inclusion of said entropic term enables an approximate solution to the projection on a Wasserstein ball that is a lot faster to compute.
- It should be noted that, as shown in "Sinkhorn Distances: Lightspeed Computation of Optimal Transportation Distances", arXiv preprint arXiv:1306.0895v1, Marco Cuturi (2013), a Sinkhorn distance is in fact not a metric in the mathematical sense, since it is possible to have a zero distance between two distributions that are not the same. Instead, in a mathematical sense, it is a pseudo-metric.
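The marginal constraints Π 1_n = P and Π^T 1_n = Q can be made concrete with the Sinkhorn iterations from the cited Cuturi (2013) paper. The following is a minimal numpy sketch under our own toy cost matrix and distributions; the variable names are ours, not the patent's:

```python
import numpy as np

def sinkhorn_plan(p, q, C, eps=1.0, n_iter=500):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    Returns a coupling Pi with (approximately) Pi @ 1 = p and Pi.T @ 1 = q,
    i.e. the two input distributions as its marginals, as in the text.
    """
    K = np.exp(-C / eps)              # Gibbs kernel of the cost matrix
    u = np.ones_like(p)
    for _ in range(n_iter):           # alternating marginal corrections
        v = q / (K.T @ u)
        u = p / (K @ v)
    return u[:, None] * K * v[None, :]   # Pi = diag(u) K diag(v)

# Toy example: two distributions over three points on a line.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
x = np.arange(3.0)
C = np.abs(x[:, None] - x[None, :])   # ground cost |i - j|

Pi = sinkhorn_plan(p, q, C)
cost = np.sum(Pi * C)                 # entropy-regularized transport cost
print(np.allclose(Pi.sum(axis=1), p, atol=1e-6))  # marginal P recovered
print(np.allclose(Pi.sum(axis=0), q, atol=1e-6))  # marginal Q recovered
```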
- In fact, it has been discovered that a good way of determining the projected input signal (xproj) is to solve a convex optimization problem corresponding to said minimization under said constraints. This is described in detail in the description corresponding to figure 14.
- In one further aspect of the invention, the adversarial input signal (xadv) may be provided by a targeted attack, i.e. provided to cause the classifier to classify it as belonging to a predefined second class. An efficient way of doing so can be provided if said classifier, when provided with said input signal (x), is configured to output a first classification value (f_l0) corresponding to said first class (l0) and a second classification value (f_l) corresponding to said predefined second class (l). In this sense, it may be said that said input signal (x) causes said first and/or second classification value. A robust way to generate said targeted misclassification is to determine said modified input signal (xmod) such as to cause a difference (g) between said first classification value (f_l0) and said second classification value (f_l) to be smaller than the difference (g) caused by said original input signal (xorg). Conveniently, this may be achieved by determining said modified input signal (xmod) depending on a gradient (∇g) of said difference (g).
- In an alternative embodiment to said targeted attack, said adversarial input signal (xadv) may be provided by an untargeted attack, i.e. provided to cause said classifier to classify it as belonging to any different second class. In this case, conveniently, said modified input signal (xmod) is provided such as to cause said first classification value (f_l0) to be smaller than said first classification value (f_l0) caused by said original input signal (xorg). Conveniently, this may be achieved by determining said modified input signal (xmod) depending on a gradient (∇f_l0) of said first classification value (f_l0).
- In a further aspect, the steps of modifying said original input signal (xorg) to yield said modified input signal (xmod) and projecting said modified input signal (xmod) onto said predefined subset to yield said projected input signal (xproj) are carried out iteratively, using said projected input signal (xproj) of a preceding iteration as original input signal (xorg) of a subsequent iteration, wherein said step of projecting said modified input signal (xmod) onto said predefined subset is carried out after each step of modifying said original input signal (xorg). Such an iterative method is preferable, because it ensures that intermediate modified input signals (xmod) remain close to a boundary of the at least approximate Wasserstein ball, thus enhancing convergence of the method.
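The modify-then-project loop for the targeted attack can be sketched as follows. This is a hedged illustration only: the classifier is a toy linear model, and the projection is onto an L2 ball as a stand-in for the (at least approximate) Wasserstein ball of the invention; all names are our own.

```python
import numpy as np

def project_l2(x, x0, radius):
    # Stand-in projection onto an L2 ball around x0. The patent projects
    # onto a Wasserstein ball instead; the surrounding loop is the same.
    d = x - x0
    n = np.linalg.norm(d)
    return x0 + d * (radius / n) if n > radius else x

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))        # toy linear classifier: f(x) = W @ x
x_org = rng.normal(size=5)

scores = W @ x_org
l0 = int(np.argmax(scores))        # first class l0 (current prediction)
l = int(np.argsort(scores)[-2])    # target class l: second-largest entry

x = x_org.copy()
for _ in range(50):
    grad_g = W[l0] - W[l]          # gradient of g = f_l0 - f_l (linear model)
    x = x - 0.1 * grad_g           # modify: step against the difference g
    x = project_l2(x, x_org, radius=1.0)   # project after each modification

g_before = scores[l0] - scores[l]
g_after = (W @ x)[l0] - (W @ x)[l]
print(g_after < g_before)          # the difference g has been reduced
```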
- Embodiments of the invention will be discussed with reference to the following figures in more detail. The figures show:
- Figure 1: a control system having a classifier controlling an actuator in its environment;
- Figure 2: the control system controlling an at least partially autonomous robot;
- Figure 3: the control system controlling a manufacturing machine;
- Figure 4: the control system controlling an automated personal assistant;
- Figure 5: the control system controlling an access control system;
- Figure 6: the control system controlling a surveillance system;
- Figure 7: the control system controlling an imaging system;
- Figure 8: a training system for training the classifier;
- Figure 9: a flow-chart diagram of a training method carried out by said training system;
- Figure 10: a flow-chart diagram illustrating a method for operating said control system;
- Figure 11: an embodiment of a structure of said classifier;
- Figure 12: a flow-chart diagram illustrating a method for determining said adversarial input signal xadv;
- Figure 13: a flow-chart diagram illustrating a method for projecting a modified input signal xmod onto a Wasserstein ball;
- Figure 14: a flow-chart diagram illustrating a method for projecting a modified input signal xmod onto a Sinkhorn ball;
- Figure 15: a flow-chart diagram illustrating a method for computing Π as defined in equation (2) from the maximizing values Φ*, Ψ*, ρ* that solve equation (5).
- Shown in figure 1 is one embodiment of an actuator 10 in its environment 20. Actuator 10 interacts with a control system 40. Actuator 10 and its environment 20 will be jointly called actuator system. At preferably evenly spaced distances, a sensor 30 senses a condition of the actuator system. The sensor 30 may comprise several sensors. Preferably, sensor 30 is an optical sensor that takes images of the environment 20. An output signal S of sensor 30 (or, in case the sensor 30 comprises a plurality of sensors, an output signal S for each of the sensors) which encodes the sensed condition is transmitted to the control system 40.
- Thereby, control system 40 receives a stream of sensor signals S. It then computes a series of actuator control commands A depending on the stream of sensor signals S, which are then transmitted to actuator 10.
- Control system 40 receives the stream of sensor signals S of sensor 30 in an optional receiving unit 50. Receiving unit 50 transforms the sensor signals S into input signals x. Alternatively, in case of no receiving unit 50, each sensor signal S may directly be taken as an input signal x. Input signal x may, for example, be given as an excerpt from sensor signal S. Alternatively, sensor signal S may be processed to yield input signal x. Input signal x comprises image data corresponding to an image recorded by sensor 30. In other words, input signal x is provided in accordance with sensor signal S.
- Input signal x is then passed on to an image classifier 60, which may, for example, be given by an artificial neural network.
- Classifier 60 is parametrized by parameters φ, which are stored in and provided by parameter storage St 1.
- Classifier 60 determines output signals y from input signals x. The output signal y comprises information that assigns one or more labels to the input signal x. Output signals y are transmitted to an optional conversion unit 80, which converts the output signals y into the control commands A. Actuator control commands A are then transmitted to actuator 10 for controlling actuator 10 accordingly. Alternatively, output signals y may directly be taken as control commands A.
- Actuator 10 receives actuator control commands A, is controlled accordingly, and carries out an action corresponding to actuator control commands A. Actuator 10 may comprise a control logic which transforms actuator control command A into a further control command, which is then used to control actuator 10.
- In further embodiments, control system 40 may comprise sensor 30. In even further embodiments, control system 40 alternatively or additionally may comprise actuator 10.
- In still further embodiments, it may be envisioned that control system 40 controls a display 10a instead of an actuator 10.
- Furthermore, control system 40 may comprise a processor 45 (or a plurality of processors) and at least one machine-readable storage medium 46 on which instructions are stored which, if carried out, cause control system 40 to carry out a method according to one aspect of the invention.
- Figure 2 shows an embodiment in which control system 40 is used to control an at least partially autonomous robot, e.g. an at least partially autonomous vehicle 100.
- Sensor 30 may comprise one or more video sensors and/or one or more radar sensors and/or one or more ultrasonic sensors and/or one or more LiDAR sensors and/or one or more position sensors (like e.g. GPS). Some or all of these sensors are preferably but not necessarily integrated in vehicle 100. Alternatively or additionally, sensor 30 may comprise an information system for determining a state of the actuator system. One example of such an information system is a weather information system which determines a present or future state of the weather in environment 20.
- For example, using input signal x, the classifier 60 may detect objects in the vicinity of the at least partially autonomous robot. Output signal y may comprise information which characterizes where objects are located in the vicinity of the at least partially autonomous robot. Control command A may then be determined in accordance with this information, for example to avoid collisions with said detected objects.
- Actuator 10, which is preferably integrated in vehicle 100, may be given by a brake, a propulsion system, an engine, a drivetrain, or a steering of vehicle 100.
- Actuator control commands A may be determined such that actuator (or actuators) 10 is/are controlled such that vehicle 100 avoids collisions with said detected objects. Detected objects may also be classified according to what the classifier 60 deems them most likely to be, e.g. pedestrians or trees, and actuator control commands A may be determined depending on the classification.
- In a further embodiment, the at least partially autonomous robot may be given by a gardening robot (not shown), which uses
sensor 30, preferably an optical sensor, to determine a state of plants in theenvironment 20.Actuator 10 may be a nozzle for spraying chemicals. Depending on an identified species and/or an identified state of the plants, an actuator control command A may be determined to causeactuator 10 to spray the plants with a suitable quantity of suitable chemicals. - In even further embodiments, the at least partially autonomous robot may be given by a domestic appliance (not shown), like e.g. a washing machine, a stove, an oven, a microwave, or a dishwasher.
Sensor 30, e.g. an optical sensor, may detect a state of an object which is to undergo processing by the household appliance. For example, in the case of the domestic appliance being a washing machine,sensor 30 may detect a state of the laundry inside the washing machine. Actuator control signal A may then be determined depending on a detected material of the laundry. - Shown in
figure 3 is an embodiment in whichcontrol system 40 is used to control amanufacturing machine 11, e.g. a punch cutter, a cutter or a gun drill) of amanufacturing system 200, e.g. as part of a production line. Thecontrol system 40 controls anactuator 10 which in turn control themanufacturing machine 11. -
Sensor 30 may be given by an optical sensor which captures properties of e.g. a manufacturedproduct 12.Classifier 60 may determine a state of the manufacturedproduct 12 from these captured properties.Actuator 10 which controlsmanufacturing machine 11 may then be controlled depending on the determined state of the manufacturedproduct 12 for a subsequent manufacturing step of manufacturedproduct 12. Or, it may be envisioned thatactuator 10 is controlled during manufacturing of a subsequent manufacturedproduct 12 depending on the determined state of the manufacturedproduct 12. - Shown in
figure 4 is an embodiment in whichcontrol system 40 is used for controlling an automatedpersonal assistant 250.Sensor 30 may be an optic sensor, e.g. for receiving video images of a gestures ofuser 249. Alternatively,sensor 30 may also be an audio sensor e.g. for receiving a voice command ofuser 249. -
Control system 40 then determines actuator control commands A for controlling the automatedpersonal assistant 250. The actuator control commands A are determined in accordance with sensor signal S ofsensor 30. Sensor signal S is transmitted to thecontrol system 40. For example,classifier 60 may be configured to e.g. carry out a gesture recognition algorithm to identify a gesture made byuser 249.Control system 40 may then determine an actuator control command A for transmission to the automatedpersonal assistant 250. It then transmits said actuator control command A to the automatedpersonal assistant 250. - For example, actuator control command A may be determined in accordance with the identified user gesture recognized by
classifier 60. It may then comprise information that causes the automatedpersonal assistant 250 to retrieve information from a database and output this retrieved information in a form suitable for reception byuser 249. - In further embodiments, it may be envisioned that instead of the automated
personal assistant 250,control system 40 controls a domestic appliance (not shown) controlled in accordance with the identified user gesture. The domestic appliance may be a washing machine, a stove, an oven, a microwave or a dishwasher. - Shown in
figure 5 is an embodiment in which control system controls anaccess control system 300. Access control system may be designed to physically control access. It may, for example, comprise adoor 401.Sensor 30 is configured to detect a scene that is relevant for deciding whether access is to be granted or not. It may for example be an optical sensor for providing image or video data, for detecting a person's face.Classifier 60 may be configured to interpret this image or video data e.g. by matching identities with known people stored in a database, thereby determining an identity of the person. Actuator control signal A may then be determined depending on the interpretation ofclassifier 60, e.g. in accordance with the determined identity.Actuator 10 may be a lock which grants access or not depending on actuator control signal A. A non-physical, logical access control is also possible. - Shown in
figure 6 is an embodiment in whichcontrol system 40 controls asurveillance system 400. This embodiment is largely identical to the embodiment shown infigure 5 . Therefore, only the differing aspects will be described in detail.Sensor 30 is configured to detect a scene that is under surveillance. Control system does not necessarily control anactuator 10, but adisplay 10a. For example, themachine learning system 60 may determine a classification of a scene, e.g. whether the scene detected byoptical sensor 30 is suspicious. Actuator control signal A which is transmitted todisplay 10a may then e.g. be configured to causedisplay 10a to adjust the displayed content dependent on the determined classification, e.g. to highlight an object that is deemed suspicious bymachine learning system 60. - Shown in
figure 7 is an embodiment of a control system 40 for controlling an imaging system 500, for example an MRI apparatus, x-ray imaging apparatus or ultrasonic imaging apparatus. Sensor 30 may, for example, be an imaging sensor. Machine learning system 60 may then determine a classification of all or part of the sensed image. Actuator control signal A may then be chosen in accordance with this classification, thereby controlling display 10a. For example, machine learning system 60 may interpret a region of the sensed image to be potentially anomalous. In this case, actuator control signal A may be determined to cause display 10a to display the image and to highlight the potentially anomalous region. - Shown in
figure 8 is an embodiment of a training system 140 for training classifier 60. A training data unit 150 determines input signals x, which are passed on to classifier 60. For example, training data unit 150 may access a computer-implemented database St 2 in which a set T of training data is stored. Set T comprises pairs of input signal x and corresponding desired output signal ys. Training data unit 150 selects samples from set T, e.g. randomly. Input signal x of a selected sample is passed on to classifier 60. Desired output signal ys is passed on to assessment unit 180. -
Classifier 60 is configured to compute output signals y from input signals x. These output signals y are also passed on to assessment unit 180. - A
modification unit 160 determines updated parameters φ' depending on input from assessment unit 180. Updated parameters φ' are transmitted to parameter storage St 1 to replace present parameters φ. -
- Furthermore,
modification unit 160 may compute an adversarial dataset T' comprising modified input signals xadv based on original input signals x taken, for example, from training set T and their respective desired output signals ys. - Furthermore,
training system 140 may comprise a processor 145 (or a plurality of processors) and at least one machine-readable storage medium 146 on which instructions are stored which, if carried out, cause training system 140 to carry out a method according to one aspect of the invention. - Shown in
figure 9 is a flow-chart diagram of an embodiment of the method for training classifier 60, which may be implemented by training system 140. - First (901),
classifier 60 is trained with training data of set T in a conventional manner, as discussed above. - Then (902), one or more adversarial input signals xadv and corresponding desired output signals ys are generated with the method illustrated in
figure 12 by modifying input signals from data set T and leaving the corresponding desired output signal ys unchanged. These one or more pairs of adversarial input signal xadv and corresponding desired output signals ys are added to adversarial dataset T'. - Now (903),
classifier 60 is trained with the training data of adversarial dataset T'. The trained classifier 60 may then (904) be used for providing an actuator control signal A by receiving sensor signal S comprising data from sensor 30, determining the input signal x depending on said sensor signal S, and feeding said input signal x into classifier 60 to obtain output signal y that characterizes a classification of input signal x. Actuator 10 or 10a may then be controlled in accordance with the provided actuator control signal A. This concludes the method. - Shown in
figure 10 is a flow-chart diagram of an embodiment of the method for operating classifier 60, which may be implemented by control system 40. - First (911), parameters φ that characterize the operation of
classifier 60 are provided. Conventionally, they are obtained by a training method for training classifier 60, e.g. by supervised training as outlined above. - The trained
classifier 60 may then (912) be used for providing a first output signal y1 by receiving sensor signal S comprising data from sensor 30, determining the input signal x depending on said sensor signal S, and inputting said input signal x into classifier 60 to obtain first output signal y1 that characterizes a classification of input signal x. - Then (913), an adversarial input signal xadv is generated with the method illustrated in
figure 12 by modifying input signal x. - This adversarial input signal xadv is then (914) inputted into
classifier 60 to obtain a second output signal y2 that characterizes a classification of adversarial input signal xadv. - Next (915), a parameter vu indicating a vulnerability of
classifier 60 is computed based on said first output signal y1 and said second output signal y2. For example, it is possible to set said parameter vu to a first value (for example "1") indicating a vulnerability, if said first output signal y1 is not equal to said second output signal y2, and to a second value (for example "0") indicating a non-vulnerability, if said first output signal y1 is equal to said second output signal y2. - An actuator control signal (A) may then (916) be determined in accordance with said parameter vu, and actuator (10) may be controlled in accordance with said actuator control signal (A). For example, if said parameter vu indicates a non-vulnerability, said actuator control signal (A) may then be determined to correspond to a normal operation mode, whereas, if said parameter vu indicates a vulnerability, said actuator control signal (A) may then be determined to correspond to a fail-safe operation mode, e.g. by reducing the dynamics of a motion of said actuator (10).
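The determination of parameter vu and the subsequent choice of operation mode (steps (915) and (916)) can be sketched as follows. This is a minimal illustration; the function names are placeholders, not part of the patent:

```python
def vulnerability(y1, y2):
    # Step (915): vu = 1 (vulnerable) if the adversarial input changed the
    # classification, vu = 0 (non-vulnerable) otherwise.
    return 1 if y1 != y2 else 0

def operation_mode(vu):
    # Step (916): switch to a fail-safe mode (e.g. reduced dynamics of the
    # actuator's motion) when a vulnerability is indicated.
    return "fail-safe" if vu == 1 else "normal"
```

For example, `operation_mode(vulnerability(3, 5))` yields `"fail-safe"`, while identical classifications keep the normal operation mode.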
-
Figure 11 illustrates schematically a structure of one embodiment of classifier 60. Input signal x is inputted into processing unit 61, which may, for example, be given by all but the last layer of a neural network. Processing unit 61 is configured to output a vector f comprising preferably at least one entry fℓ for each of the possible classes ℓ for classification. Said vector f is inputted into a selector 62, which may be given, e.g., by an implementation of an argmax function. Selector 62 is configured to output signal y, which corresponds to the class corresponding to that one of the entries fℓ of vector f that has the highest value. To highlight the dependence on input signal x, vector f will also be denoted f(x). -
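A minimal sketch of selector 62, assuming the score vector f(x) is available as a list or array (an argmax implementation, as the text suggests); `top_two_classes` additionally returns the class with the second-largest entry, one of the target-class choices mentioned for figure 12:

```python
import numpy as np

def selector(f):
    # Output signal y: the class whose entry f_l of vector f has the
    # highest value.
    return int(np.argmax(f))

def top_two_classes(f):
    # Classes with the largest and second-largest entries of f; the second
    # one is a possible choice of target classification.
    order = np.argsort(f)[::-1]
    return int(order[0]), int(order[1])
```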
Figure 12 illustrates a method for determining an adversarial input signal xadv based on a given input signal x, which will also be called original input signal xorg. This method may be implemented by control system 40 or by training system 140. In an initialization step (1000), a counter variable may be initialized as counter = 0, a step size τ may be initialized as e.g. τ = 20 and a modified input signal xmod may be initialized as xmod = x. Original input signal xorg is inputted into classifier 60 and resulting output signal y is determined. A correct classification ℓ0 is initialized as ℓ0 = y. A target classification ℓ ≠ ℓ0 may be selected either randomly or e.g. by setting it to a predefined value, or by selecting it as that classification ℓ ≠ ℓ0 that is closest to correct classification ℓ0. For example, target classification ℓ may be determined as that classification that corresponds to the second largest entry fℓ of vector f. The attack may be targeted or untargeted. -
- Next (1200), a projected input signal xproj is determined by projecting modified input signal xmod onto a Wasserstein ball with a predefined radius ε centered around original input signal xorg. This projection may be carried out with one of the methods illustrated in
figures 13 and 14. - Then (1300), the counter is incremented, counter ← counter + 1, and it is checked (1400) whether the counter is a multiple of a predefined number, e.g. 20. If that is the case (1500), the counter is reset to counter = 0 and step size τ is increased by a predefined factor, e.g. τ ← τ · 1.1.
- Both of steps (1400) and (1500) are followed by checking (1600) whether the counter is less than a predefined maximum counter countermax, i.e. if counter < countermax. Furthermore, modified input signal xmod is set equal to projected input signal xproj, and scalar g(xmod) is evaluated. If counter < countermax and if g(xmod) ≤ ub with an upper bound ub which may be set to any non-negative number, e.g. ub = 0 (i.e. the classification has not changed from correct classification ℓ0 to target classification ℓ), the method iterates back to step (1200). If not (1600), adversarial input signal xadv is provided as equal to modified input signal xmod. Optionally, if g(xadv) ≤ ub, an error message may be provided indicating that no adversarial example has been found with the desired confidence. This concludes the method.
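The iteration of figure 12 can be sketched as follows. Here `modify`, `project` and `g` are placeholders for the modification step, the Wasserstein-ball projection of step (1200) and the scalar objective g; a single iteration budget stands in for the counter/countermax bookkeeping, which is simplified in this sketch:

```python
def find_adversarial(x_org, modify, project, g,
                     tau=20.0, period=20, growth=1.1,
                     max_iters=400, ub=0.0):
    # Repeatedly modify the input, project it back onto the ball around
    # x_org (step 1200), enlarge the step size tau every `period`
    # iterations (steps 1400/1500), and stop once the objective exceeds
    # the upper bound ub, i.e. the classification has flipped to the
    # target class (step 1600).
    x_mod = x_org
    for it in range(1, max_iters + 1):
        x_mod = project(modify(x_mod, tau))
        if it % period == 0:
            tau *= growth
        if g(x_mod) > ub:
            return x_mod  # adversarial input x_adv
    raise RuntimeError("no adversarial example found with the desired confidence")
```

A toy one-dimensional run, with the "ball" realized as a simple clipping operation:

```python
x_adv = find_adversarial(
    0.0,
    modify=lambda x, tau: x + 0.01 * tau,
    project=lambda x: min(x, 0.5),
    g=lambda x: x - 0.4,
)
# x_adv == 0.5
```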
-
Figure 13 illustrates a method for determining projected input signal xproj from modified input signal xmod. This projection involves the computation of a Wasserstein distance WD(P, Q) between two n-dimensional vectors P and Q which are given as P = (P1, ..., Pn) and Q = (Q1, ..., Qn). Distances between indices i and j are stored in a matrix Dij ∈ Rn×n (where Dij = ∥i − j∥p for some pre-defined value of p) and the Wasserstein distance WD(P, Q) can be computed as the value of the linear program WD(P, Q) = min Σi,j Πij · Dij, minimized over all Π ≥ 0 satisfying Π1n = P, ΠT1n = Q (equation (2)).
- (Of course, the L2-metric may be replaced by any other metric).
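For p = 1 and one-dimensional indices, the linear program defining WD(P, Q) has a well-known closed form via cumulative sums. This is a generic textbook fact useful for checking small examples, not a step of the claimed method:

```python
import numpy as np

def wasserstein_1d(P, Q):
    # W1 between discrete distributions P, Q on the grid {1, ..., n} with
    # ground cost D_ij = |i - j|: the optimal transport cost equals the sum
    # of absolute differences of the cumulative distributions.
    diff = np.cumsum(np.asarray(P, float) - np.asarray(Q, float))
    return float(np.abs(diff).sum())
```

Moving all mass by one index costs exactly 1: `wasserstein_1d([1, 0], [0, 1])` returns `1.0`.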
- First (1310), it is checked whether the Wasserstein distance WD(xmod, xorg) between modified input signal xmod and original input signal xorg is not larger than the predefined radius ε.
- If that is the case (1320), projected input signal xproj is set equal to modified input signal xmod, and the method ends.
- If that is not the case (1330), maximizing values Φ*, Ψ*, ρ* are determined by maximizing the dual problem corresponding to the projection, given by equation (5).
- Next (1340), Π as defined in equation (2) is determined from the maximizing values Φ*, Ψ*, ρ* using e.g. the method illustrated in
figure 15. - Then (1350), projected input signal xproj is set equal to xproj = ΠT1. This concludes the method.
-
Figure 14 illustrates another, approximate but more efficient method, wherein instead of using a Wasserstein distance WD(P, Q) as defined in equation (2) for defining the ball on which to project, one uses a Sinkhorn distance WDλ(P, Q), obtained by subtracting an entropic term ET, i.e. WDλ(P, Q) = min Σi,j Πij · Dij − (1/λ) · ET(Π), minimized over all Π ≥ 0 satisfying Π1n = P, ΠT1n = Q, with a predefined variable λ ≠ 0, e.g. λ = 1. - First, it is checked whether this Sinkhorn distance between modified input signal xmod and original input signal xorg is not larger than the predefined radius ε.
- If that is the case (1321), projected input signal xproj is set equal to modified input signal xmod, and the method ends.
- If that is not the case (1331), and denoting P = xmod and Q = xorg, a variable ρ is initialized as ρ = 1 and two n-dimensional vectors R, S are initialized by setting each of their components equal to e.g. Ri = Si = 1/n.
- Then, starting with step (1341), the variables R, S and ρ are iteratively updated according to Sinkhorn-type update equations.
- Next (1391), it is checked whether the method has converged, e.g. if changes to R and/or S over the last iteration are sufficiently small (e.g. less than a predefined threshold). If that is not the case, the method iterates back to step (1341). If the method has converged, however, step (1392) follows.
- In step (1392), projected input signal xproj is determined from the converged variables R, S and ρ.
- This concludes the method.
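The entropic regularisation underlying figure 14 can be illustrated with the standard Sinkhorn scaling iteration, a textbook stand-in for the R/S updates (whose exact equations are not reproduced in this text); variable names are ours:

```python
import numpy as np

def sinkhorn_plan(P, Q, D, lam=1.0, iters=500):
    # Alternately rescale rows and columns of the kernel K = exp(-lam * D)
    # until the transport plan Pi = diag(R) K diag(S) (approximately)
    # satisfies the marginal constraints Pi @ 1 = P and Pi.T @ 1 = Q.
    K = np.exp(-lam * np.asarray(D, float))
    P = np.asarray(P, float)
    Q = np.asarray(Q, float)
    R = np.ones(len(P))
    S = np.ones(len(Q))
    for _ in range(iters):
        S = Q / (K.T @ R)   # match the column marginal Q
        R = P / (K @ S)     # match the row marginal P
    return R[:, None] * K * S[None, :]
```

After convergence, the returned plan's row and column sums reproduce P and Q, the property the text states for the distribution Π.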
-
Figure 15 illustrates an embodiment to compute Π as defined in equation (2) from the maximizing values Φ*, Ψ*, ρ* as obtained from the solution of equation (5) in step (1330). First (2000), variable i is initialized as i = 1. Then (2010), all values of j ∈ {1, ..., n} are identified for which the corresponding condition holds; for each such j, the corresponding component Πij is set to Πij = 0. Then (2020), all remaining values of j ∈ {1, ..., n} are identified and stored in a set J. Next (2030), the number of elements in J is counted and denoted sz(J). Then (2040), for all j ∈ J, the corresponding component Πij is set to Πij = Pi/sz(J). It is then checked (2060) whether i < n holds. If this is the case (2070), i is incremented, i ← i + 1, and the method iterates back to step (2010). If not, the method concludes and continues in step (1350). - The term "computer" covers any device for the processing of pre-defined calculation instructions. These calculation instructions can be in the form of software, or in the form of hardware, or also in a mixed form of software and hardware.
- It is further understood that the procedures can not only be completely implemented in software as described. They can also be implemented in hardware, or in a mixed form of software and hardware.
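Returning to the training method of figure 9, steps (901) to (903) can be sketched as follows; `classifier.fit` and `make_adversarial` are placeholder interfaces, not part of the claims:

```python
def adversarial_training(classifier, train_set, make_adversarial):
    # (901) conventional training on the original training set T
    classifier.fit(train_set)
    # (902) adversarial inputs keep the desired output signals ys unchanged
    t_adv = [(make_adversarial(x), ys) for x, ys in train_set]
    # (903) further training on the adversarial dataset T'
    classifier.fit(t_adv)
    return classifier
```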
Claims (13)
- A computer-implemented method for assessing a robustness of an image classifier (60), wherein the image classifier (60) detects where objects are located in a vicinity of an at least partially autonomous robot from data received from one or more video sensors or the classifier (60) determines a state of a manufactured product (12) from captured properties by an optical sensor (30) which captures properties of the manufactured product (12), wherein the method comprises the steps of:- receiving a sensor signal (S) comprising data from the one or more video sensors or from the optical sensor (30),- determining an original input image (xorg) which depends on said sensor signal (S),- determining, by the image classifier (60), a first class (y1) that characterizes a classification of said original input image (xorg);- determining an adversarial input image (xadv) with a method of obtaining an adversarial input image (xadv) to the image classifier (60),- determining, by the image classifier (60), a second class (y2) that characterizes said adversarial input image (xadv) and- determining a robustness value (vu) indicating a vulnerability of the classifier (60) depending on said first class (y1) and on said second class (y2), wherein the robustness value (vu) is set to a first value, for example "1", indicating a vulnerability, if said first class (y1) is not equal to said second class (y2), and the robustness value (vu) is set to a second value, for example "0", indicating a non-vulnerability, if said first class (y1) is equal to said second class (y2); wherein the method for obtaining the adversarial input image (xadv) to the image classifier is obtained from an original input image (xorg), and wherein said adversarial input image and said original input image cause the image classifier (60) to classify said original input image (xorg) as belonging to a first class (ℓ0) and said adversarial input image (xadv) as belonging to a second class (ℓ), comprising the steps of:-
modifying said original input image (xorg) to yield a modified input image (xmod);- projecting said modified input image (xmod) onto a metric ball around said original input image (xorg) to yield a projected input image (xproj); and- obtaining said adversarial input image (xadv) depending on said projected input image (xproj), characterized in that said metric is an at least approximate Wasserstein distance (WD,
) and that said at least approximate Wasserstein distance is a Sinkhorn distance (WDλ) which differs from said Wasserstein distance (WD) by an entropic term (ET), and for any pair of first distribution (P) and second distribution (Q), said entropic term (ET) characterizes an entropy of a distribution Π that satisfies Π1n = P, ΠT1n = Q. - The method according to claim 1, wherein said projected input image (xproj) is determined by minimizing a distance to said modified input image (xmod) under a constraint that a distance according to said at least approximate Wasserstein distance (WD,
) is not larger than a predefined radius (ε) of said metric ball. - The method according to claim 2, wherein said minimization is obtained by maximizing a dual problem corresponding to a primal problem that is given by said minimization under said constraints.
- The method according to claim 1 or 2, wherein said projected input image (xproj) is determined by solving a convex optimization corresponding to said minimization.
- The method according to any one of the above claims, wherein said image classifier (60), when provided with an input image (x), is configured to output a first classification value (f l
0) corresponding to said first class (ℓ0) and a second classification value (fℓ) corresponding to said predefined second class (ℓ), and wherein said modified input image (xmod) causes a difference (g) between said first classification value (fℓ0) and said second classification value (fℓ) to be smaller than the difference (g) caused by said original input image (xorg). - The method according to any one of claims 1 to 5, wherein said image classifier (60), when provided with an input image (x), is configured to output a first classification value (f l
0) corresponding to said first class (ℓ0), and wherein said modified input image (xmod) causes said first classification value (fℓ0) to be smaller than said first classification value (fℓ0) caused by said original input image (xorg). - The method according to any one of the above claims, wherein the steps of modifying said original input image (xorg) to yield said modified input image (xmod) and projecting said modified input image (xmod) onto said predefined subset to yield said projected input image (xproj) are carried out iteratively by using said projected input image (xproj) of a preceding iteration as original input image (xorg) of a subsequent iteration, wherein said step of projecting said modified input image (xmod) onto said predefined subset is carried out after each step of modifying said original input image (xorg).
- A computer-implemented method for training an image classifier (60), wherein the classifier (60) detects where objects are located in a vicinity of an at least partially autonomous robot from data received from one or more video sensors or the classifier (60) determines a state of a manufactured product (12) from captured properties by an optical sensor which captures properties of the manufactured product (12), comprising the steps of:- accessing, from a memory (146), the image classifier (60), the image classifier (60) having been trained using a plurality of training images, the training images being labeled for a plurality of classes;- receiving a sensor signal (S) comprising data from the one or more video sensors or from the optical sensor (30),- determining an original input image (Xorg) which depends on said sensor signal (S),- determining an adversarial input image (Xadv) with a method of obtaining an adversarial input image (Xadv) to the image classifier (60), wherein the method for obtaining the adversarial input image (Xadv) to the image classifier is obtained from an original input image (Xorg), and wherein said adversarial input image (Xadv) and said original input image (Xorg) cause the image classifier (60) to classify said original input image (Xorg) as belonging to a first class (ℓo) and said adversarial input image (Xadv) as belonging to a second class (ℓ) different from said first class (ℓo), comprising the steps of:- modifying said original input image (Xorg) to yield a modified input image (Xmod);- projecting said modified input image (Xmod) onto a metric ball around said original input image (Xorg) to yield a projected input image (Xproj); and- obtaining said adversarial input image (Xadv) depending on said projected input image (Xproj), characterized in that said metric is an at least approximate Wasserstein distance (WD
λ, WDλ) and that said at least approximate Wasserstein distance is a Sinkhorn distance (WDλ) which differs from said Wasserstein distance (WD) by an entropic term (ET), and for any pair of first distribution (P) and second distribution (Q), said entropic term (ET) characterizes an entropy of a distribution Π that satisfies Π1n = P, ΠT1n = Q;- further training the image classifier (60) to have improved accuracy using at least the adversarial input image (xadv). - A computer-implemented method for providing an actuator control signal (A) for controlling an actuator (10) depending on an output signal (y) of an image classifier (60), comprising the steps of:- assessing whether said image classifier (60) is robust or not using the method according to claims 1 to 7, and determining said actuator control signal (A) in accordance with a result of said assessment, in particular by determining said actuator control signal (A) to cause said actuator (10) to operate in a safe mode if said image classifier (60) is deemed not robust as a result of said assessment.
- The method according to claim 9, in which said actuator (10) controls an at least partially autonomous robot (100) and/or a manufacturing machine (200).
- Computer program that is configured to cause a computer to carry out the method according to any one of claims 1 to 10 with all of its steps if the computer program is carried out by a processor (45, 145).
- Machine-readable storage medium (46, 146) on which the computer program according to claim 11 is stored.
- System (140) that is configured to carry out the method according to any one of claims 1 to 10.
Priority Applications (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP18213925.3A EP3671574B1 (en) | 2018-12-19 | 2018-12-19 | Device and method to improve the robustness against adversarial examples |
| CN201980084684.9A CN113302630A (en) | 2018-12-19 | 2019-11-27 | Apparatus and method for improving robustness against "confrontation examples |
| JP2021535830A JP7264410B2 (en) | 2018-12-19 | 2019-11-27 | Apparatus and method for improving robustness against "adversarial samples" |
| US17/267,638 US20210326647A1 (en) | 2018-12-19 | 2019-11-27 | Device and method to improve the robustness against 'adversarial examples' |
| PCT/EP2019/082757 WO2020126372A1 (en) | 2018-12-19 | 2019-11-27 | Device and method to improve the robustness against 'adversarial examples' |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP18213925.3A EP3671574B1 (en) | 2018-12-19 | 2018-12-19 | Device and method to improve the robustness against adversarial examples |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| EP3671574A1 EP3671574A1 (en) | 2020-06-24 |
| EP3671574B1 true EP3671574B1 (en) | 2024-07-10 |
Family
ID=64746095
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP18213925.3A Active EP3671574B1 (en) | 2018-12-19 | 2018-12-19 | Device and method to improve the robustness against adversarial examples |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US20210326647A1 (en) |
| EP (1) | EP3671574B1 (en) |
| JP (1) | JP7264410B2 (en) |
| CN (1) | CN113302630A (en) |
| WO (1) | WO2020126372A1 (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112215292B (en) * | 2020-10-19 | 2022-03-29 | 电子科技大学 | Image countermeasure sample generation device and method based on mobility |
| US20220414531A1 (en) * | 2021-06-25 | 2022-12-29 | International Business Machines Corporation | Mitigating adversarial attacks for simultaneous prediction and optimization of models |
Family Cites Families (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10013477B2 (en) * | 2012-11-19 | 2018-07-03 | The Penn State Research Foundation | Accelerated discrete distribution clustering under wasserstein distance |
| US10007866B2 (en) | 2016-04-28 | 2018-06-26 | Microsoft Technology Licensing, Llc | Neural network image classifier |
| DE102018200724A1 (en) * | 2017-04-19 | 2018-10-25 | Robert Bosch Gmbh | Method and device for improving the robustness against "Adversarial Examples" |
| EP3396603B1 (en) * | 2017-04-27 | 2019-12-25 | Dassault Systèmes | Learning an autoencoder |
| US10624558B2 (en) * | 2017-08-10 | 2020-04-21 | Siemens Healthcare Gmbh | Protocol independent image processing with adversarial networks |
| US20200224172A1 (en) * | 2017-09-19 | 2020-07-16 | The Broad Institute, Inc. | Methods and systems for reconstruction of developmental landscapes by optimal transport analysis |
| US10971142B2 (en) * | 2017-10-27 | 2021-04-06 | Baidu Usa Llc | Systems and methods for robust speech recognition using generative adversarial networks |
| EP3673419B8 (en) * | 2017-11-22 | 2024-09-18 | DeepMind Technologies Limited | Population based training of neural networks |
| US20200380364A1 (en) * | 2018-02-23 | 2020-12-03 | Robert Bosch Gmbh | Adversarial Probabilistic Regularization |
| US10825219B2 (en) * | 2018-03-22 | 2020-11-03 | Northeastern University | Segmentation guided image generation with adversarial networks |
| WO2019241155A1 (en) * | 2018-06-11 | 2019-12-19 | Arterys Inc. | Simulating abnormalities in medical images with generative adversarial networks |
| CN108830334B (en) * | 2018-06-25 | 2020-08-28 | 江西师范大学 | Fine-grained target discrimination method based on antagonistic transfer learning |
| CN113168567A (en) * | 2018-10-29 | 2021-07-23 | Hrl实验室有限责任公司 | System and method for small sample transfer learning |
-
2018
- 2018-12-19 EP EP18213925.3A patent/EP3671574B1/en active Active
-
2019
- 2019-11-27 CN CN201980084684.9A patent/CN113302630A/en active Pending
- 2019-11-27 JP JP2021535830A patent/JP7264410B2/en active Active
- 2019-11-27 US US17/267,638 patent/US20210326647A1/en not_active Abandoned
- 2019-11-27 WO PCT/EP2019/082757 patent/WO2020126372A1/en not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| CN113302630A (en) | 2021-08-24 |
| JP7264410B2 (en) | 2023-04-25 |
| WO2020126372A1 (en) | 2020-06-25 |
| US20210326647A1 (en) | 2021-10-21 |
| JP2022515756A (en) | 2022-02-22 |
| EP3671574A1 (en) | 2020-06-24 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
|
| AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
| AX | Request for extension of the european patent |
Extension state: BA ME |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
| 17P | Request for examination filed |
Effective date: 20210111 |
|
| RBV | Designated contracting states (corrected) |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
| 17Q | First examination report despatched |
Effective date: 20220426 |
|
| REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Free format text: PREVIOUS MAIN CLASS: G06N0003080000 Ipc: G06N0003094000 Ref document number: 602018071517 Country of ref document: DE |
|
| GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
| RIC1 | Information provided on ipc code assigned before grant |
Ipc: G06N 3/084 20230101ALI20240415BHEP Ipc: G06N 3/09 20230101ALI20240415BHEP Ipc: G06N 3/094 20230101AFI20240415BHEP |
|
| INTG | Intention to grant announced |
Effective date: 20240502 |
|
| RAP3 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: CARNEGIE MELLON UNIVERSITY Owner name: ROBERT BOSCH GMBH |
|
| GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
| GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
| AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
| REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
| REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602018071517 Country of ref document: DE |
|
| REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG9D |
|
| REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20240710 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20241111 |
|
| REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1702694 Country of ref document: AT Kind code of ref document: T Effective date: 20240710 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20241010 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20241011 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20241110
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710
Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20241010 |
|
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20250224 Year of fee payment: 7 |
|
| REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602018071517 Country of ref document: DE |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710
Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710 |
|
| PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
| 26N | No opposition filed |
Effective date: 20250411 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710 |
|
| REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20241219 |
|
| GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20241219 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240710 |
|
| REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20241231 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20241219
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20241231 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20241231 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20241231 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20241219 |