Yuan et al., 2024 - Google Patents
Semi-fragile neural network watermarking for content authentication and tampering localization
- Document ID
- 11401465883500266783
- Authors
- Yuan Z
- Zhang X
- Wang Z
- Yin Z
- Publication year
- 2024
- Publication venue
- Expert Systems with Applications
Snippet
As an emerging digital product, artificial intelligence models face the risk of being modified. Malicious tampering will severely damage model functions, which is different from normal modifications. In addition, tampering localization for targeted repair can effectively reduce …
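The snippet's core idea — a watermark that survives normal modifications but breaks under malicious tampering, with per-region localization — can be illustrated with a simple sketch. This is not the paper's actual scheme; it is a hypothetical block-wise signature over coarsely quantized weights: perturbations below the quantization step (e.g. minor fine-tuning noise) leave the signatures intact, while larger edits flip only the hashes of the affected blocks, pointing to where tampering occurred.

```python
import hashlib
import numpy as np

def block_signatures(weights, block_size=1024, step=1e-2):
    """Hash coarsely quantized weight blocks.

    Quantizing with `step` before hashing makes the signatures tolerant
    to small ("normal") perturbations; edits large enough to move a
    weight across a quantization boundary change that block's hash.
    """
    q = np.round(weights / step).astype(np.int64)
    return [
        hashlib.sha256(q[i:i + block_size].tobytes()).hexdigest()
        for i in range(0, len(q), block_size)
    ]

def locate_tampering(weights, reference_sigs, block_size=1024, step=1e-2):
    """Return indices of blocks whose signature no longer matches."""
    current = block_signatures(weights, block_size, step)
    return [i for i, (a, b) in enumerate(zip(current, reference_sigs))
            if a != b]
```

In use, the model owner would flatten the parameters, record the reference signatures at release time, and later recompute them to flag only the tampered blocks for targeted repair — the localization benefit the snippet alludes to.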
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0021—Image watermarking
- G06T1/0028—Adaptive watermarking, e.g. Human Visual System [HVS]-based watermarking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2201/00—General purpose image data processing
- G06T2201/005—Image watermarking
- G06T2201/0063—Image watermarking in relation to collusion attacks, e.g. collusion attack resistant
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2201/00—General purpose image data processing
- G06T2201/005—Image watermarking
- G06T2201/0083—Image watermarking whereby only watermarked image required at decoder, e.g. source-based, blind, oblivious
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/10—Protecting distributed programs or content, e.g. vending or licensing of copyrighted material
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/36—Image preprocessing, i.e. processing the image information without deciding about the identity of the image
- G06K9/46—Extraction of features or characteristics of the image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/30—Information retrieval; Database structures therefor; File system structures therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F2221/00—Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/07—Indexing scheme relating to G06F21/10, protecting distributed programs or content
- G06F2221/0722—Content
- G06F2221/0737—Traceability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L2209/00—Additional information or applications relating to cryptographic mechanisms or cryptographic arrangements for secret or secure communication H04L9/00
- H04L2209/60—Digital content management, e.g. content distribution
- H04L2209/608—Watermarking
Similar Documents
Publication | Title
---|---
Chen et al. | Deepmarks: A secure fingerprinting framework for digital rights management of deep learning models
Wu et al. | Watermarking neural networks with watermarked images
Li et al. | Concealed attack for robust watermarking based on generative model and perceptual loss
Li et al. | How to prove your model belongs to you: A blind-watermark based framework to protect intellectual property of DNN
Yuan et al. | Semi-fragile neural network watermarking for content authentication and tampering localization
Liang et al. | Poisoned forgery face: Towards backdoor attacks on face forgery detection
Zhu et al. | Fragile neural network watermarking with trigger image set
Zhao et al. | DNN self-embedding watermarking: Towards tampering detection and parameter recovery for deep neural network
Zhang et al. | Dual defense: Adversarial, traceable, and invisible robust watermarking against face swapping
Hua et al. | Deep fidelity in DNN watermarking: A study of backdoor watermarking for classification models
Peng et al. | Intellectual property protection of DNN models
Collomosse et al. | To Authenticity, and Beyond! Building safe and fair generative AI upon the three pillars of provenance
Atli Tekgul et al. | On the effectiveness of dataset watermarking
Sun et al. | Invisible backdoor attack with dynamic triggers against person re-identification
Hou et al. | M-to-n backdoor paradigm: A multi-trigger and multi-target attack to deep learning models
Cheng et al. | DeepDIST: A black-box anti-collusion framework for secure distribution of deep models
Wang et al. | A spatiotemporal chaos based deep learning model watermarking scheme
CN115879072B | A copyright protection method, device and medium for a deep fake fingerprint detection model
CN118587568A | Active deep fake forensics method based on separable-aware hash enhancement
Zou et al. | Anti-neuron watermarking: Protecting personal data against unauthorized neural networks
Ito et al. | Access control using spatially invariant permutation of feature maps for semantic segmentation models
Chen et al. | A secure image watermarking framework with statistical guarantees via adversarial attacks on secret key networks
Li et al. | Not just change the labels, learn the features: Watermarking deep neural networks with multi-view data
Tang et al. | ImageShield: A responsibility-to-person blind watermarking mechanism for image datasets protection
Lv et al. | SVD Mark: A novel black-box watermarking for protecting intellectual property of deep neural network model