CN119784568B - Watermark extraction method and device of three-dimensional model - Google Patents
- Publication number
- CN119784568B (application number CN202510272854.3A)
- Authority
- CN
- China
- Prior art keywords
- frequency
- texture
- frequency domain
- watermark embedding
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Image Processing (AREA)
- Editing Of Facsimile Originals (AREA)
Abstract
The application discloses a watermark extraction method and device for a three-dimensional model. The method comprises: obtaining a plurality of images of the three-dimensional model shot from multiple angles; for each image, screening out potential watermark embedding regions according to the gray gradient variation characteristics of the image in the model texture coordinate space; performing multi-scale analysis on the potential watermark embedding regions to obtain their high-frequency texture characteristics; performing frequency-domain transformation on the potential watermark embedding regions, and analyzing their high-frequency-domain characteristics based on the frequency-domain data and the high-frequency texture characteristics; fusing the high-frequency texture characteristics and the high-frequency-domain characteristics of the potential watermark embedding regions of all images, and determining a final watermark embedding region by statistical analysis of the two fused characteristics; and mapping the fused frequency-domain characteristic of the final watermark embedding region back to the model texture coordinate space to obtain the watermark extraction result.
Description
Technical Field
The application relates to the technical field of watermark extraction, in particular to a watermark extraction method and device of a three-dimensional model.
Background
With the development of three-dimensional digital technology, three-dimensional models are widely applied in fields such as industrial design, cultural heritage protection and virtual reality. To protect the intellectual property of three-dimensional digital assets, digital watermarking technology has gradually become an effective means of protection. Digital watermarking embeds hidden information in the texture or geometric characteristics of a three-dimensional model, so that copyright information is stored without significantly affecting the model's appearance. When verification is later required, the watermark is extracted from the three-dimensional model and then verified.
Current watermark extraction methods for three-dimensional models mainly rely on two-dimensional image processing and frequency-domain analysis. In particular, the three-dimensional model surface is unfolded into a two-dimensional texture image by means of texture mapping. The data in the two-dimensional texture image are then subjected to frequency-domain transformation, high-frequency vectors are extracted from the result, and the watermark is determined from the distribution of these high-frequency vectors.
However, because the texture mapping of a three-dimensional model is complex and nonlinear, and shooting angles, texture occlusion, mapping distortion and the like come into play, texture information is easily lost, and a single mapped image cannot guarantee the accuracy of the model's global texture. Moreover, such methods simply analyze the frequency-domain characteristics without effective verification, so the accuracy of the extracted watermark cannot be effectively ensured by the existing methods.
Disclosure of Invention
Based on the defects of the prior art, the application provides a watermark extraction method and device for a three-dimensional model, which are used for solving the problem that the prior art cannot guarantee accurate watermark extraction of the three-dimensional model.
In order to achieve the above object, the present application provides the following technical solutions:
the first aspect of the present application provides a watermark extraction method for a three-dimensional model, including:
acquiring a plurality of target images of a target three-dimensional model shot by multiple visual angles;
For each target image respectively, screening out a potential watermark embedding region of the target image according to the gray gradient change characteristics of the pixel points of the target image in a model texture coordinate space;
Carrying out multi-scale high-frequency texture characteristic analysis on the potential watermark embedding region to obtain the high-frequency texture characteristic of the potential watermark embedding region;
performing frequency domain transformation on the potential watermark embedding region, and analyzing the high-frequency domain characteristic of the potential watermark embedding region based on the frequency domain data of the potential watermark embedding region and the high-frequency texture characteristic;
Respectively fusing the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding area of each target image to obtain fused texture characteristics and fused frequency domain characteristics;
Determining a final watermark embedding area by carrying out statistical analysis on the fusion texture characteristics and the fusion frequency domain characteristics;
and mapping the fused frequency domain characteristic of the final watermark embedding region back to the model texture coordinate space to obtain a watermark extraction result of the target three-dimensional model.
Optionally, in the method for extracting a watermark from a three-dimensional model, after obtaining a plurality of target images of a target three-dimensional model photographed by multiple perspectives, the method further includes:
Respectively carrying out contrast enhancement on each piece of target image data by using a histogram equalization technology to obtain each piece of enhanced target image;
Gamma correction processing is carried out on each enhanced target image so as to adjust the gray dynamic range of the target image;
Correcting the geometric distortion of each adjusted target image through homography transformation;
And carrying out noise filtering on each corrected target image by adopting an adaptive median filtering algorithm.
Optionally, in the method for extracting a watermark of a three-dimensional model, before screening out a potential watermark embedding area of the target image according to a gray gradient change characteristic of a pixel point of the target image in a model texture coordinate space, the method further includes:
Extracting characteristic points in the target images respectively aiming at each target image;
Calculating characteristic description information of each characteristic point, wherein the characteristic description information of the characteristic points is information representing local textures of the characteristic points;
Establishing a mapping model of each characteristic point and the model texture coordinate space based on texture mapping parameters of the target three-dimensional model and camera calibration parameters when shooting the target image;
optimizing the mapping model by minimizing a total error function;
Calculating the local consistency degree of each feature point in the optimized mapping model;
and carrying out optimization processing on each characteristic point of which the local consistency degree does not accord with a preset condition.
Optionally, in the method for extracting a watermark of a three-dimensional model, the step of screening, for each target image, a potential watermark embedding area of the target image according to a gray gradient change characteristic of a pixel point of the target image in a model texture coordinate space includes:
determining a target texture region of the target image in the model texture coordinate space for each target image respectively;
Extracting gray gradient change characteristics of each pixel point in a target texture area of the target image;
And screening out a potential watermark embedding region of the target image through a region growing criterion based on the gray gradient change characteristics of each pixel point in the target texture region of the target image.
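As a minimal illustration of this screening step, the sketch below grows candidate regions from pixels with strong gray-gradient variation. The gradient threshold, 4-connectivity and minimum region size are assumptions for illustration, not the patented region-growing criterion:

```python
import numpy as np
from collections import deque

def screen_watermark_regions(gray, grad_thresh=0.5, min_size=4):
    """Screen potential watermark embedding regions by region growing
    on the gray-gradient magnitude (illustrative thresholds)."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)                      # gray-gradient variation strength
    candidate = mag >= grad_thresh * mag.max()  # seed mask: strong-gradient pixels
    labels = np.zeros(gray.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(candidate)):
        if labels[seed]:
            continue
        current += 1
        region, q = [seed], deque([seed])
        labels[seed] = current
        while q:                                # 4-connected region growing
            r, c = q.popleft()
            for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                if (0 <= nr < gray.shape[0] and 0 <= nc < gray.shape[1]
                        and candidate[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    region.append((nr, nc))
                    q.append((nr, nc))
        if len(region) < min_size:              # discard tiny regions
            for p in region:
                labels[p] = 0
    return labels

# toy texture: a flat image with one high-variation patch
img = np.zeros((8, 8))
img[2:6, 2:6] = np.arange(16).reshape(4, 4) * 10
regions = screen_watermark_regions(img)
print(int(regions.max()))  # number of surviving candidate regions
```

Pixels are labelled per connected candidate region; a label of 0 means the pixel was screened out.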
Optionally, in the method for extracting a watermark of a three-dimensional model, the performing multi-scale high-frequency texture characteristic analysis on the potential watermark embedding region to obtain the high-frequency texture characteristic of the potential watermark embedding region includes:
performing multi-scale characteristic analysis on the potential watermark embedding region to obtain characteristics of multiple scales;
Calculating the difference value of the characteristics of each two adjacent layers to obtain the initial high-frequency texture characteristics of multiple layers;
and combining the gray gradient change characteristic of the potential watermark embedding region with the initial high-frequency texture characteristics of each layer to obtain an optimized high-frequency texture characteristic enhancement map.
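The multi-scale step above can be sketched as follows: smooth the region at several scales, take the difference of each pair of adjacent scales as an initial high-frequency layer, and weight the combined layers by the gray-gradient magnitude. Box blurs stand in for the unspecified scale-space filter, and the gradient weighting formula is an assumption:

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur of odd size k (a stand-in for Gaussian smoothing)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    h = np.stack([p[:, i:i + img.shape[1]] for i in range(k)]).mean(axis=0)
    return np.stack([h[i:i + img.shape[0], :] for i in range(k)]).mean(axis=0)

def high_freq_texture_map(gray, kernels=(1, 3, 5, 7)):
    """Multi-scale high-frequency texture sketch: blur at several scales,
    difference adjacent scales, and weight by the gray-gradient magnitude
    (illustrative combination, not the patented formula)."""
    gray = gray.astype(float)
    pyramid = [gray if k == 1 else box_blur(gray, k) for k in kernels]
    # initial high-frequency layers = differences of adjacent scales
    layers = [np.abs(a - b) for a, b in zip(pyramid, pyramid[1:])]
    gy, gx = np.gradient(gray)
    grad = np.hypot(gx, gy)
    # enhancement map: gradient-weighted sum of the high-frequency layers
    return sum(layers) * (1.0 + grad / (grad.max() + 1e-9))

rng = np.random.default_rng(0)
tex = rng.random((16, 16))
em = high_freq_texture_map(tex)
print(em.shape)
```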
Optionally, in the method for extracting a watermark of a three-dimensional model, the combining the gray gradient change characteristic of the potential watermark embedding region with the texture high-frequency information of each layer to obtain an optimized high-frequency texture characteristic enhancement map further includes:
And carrying out boundary extraction and confirmation on the high-frequency texture characteristic enhancement map, and carrying out region consistency verification.
Optionally, in the method for extracting a watermark of a three-dimensional model, the analyzing the high-frequency domain characteristic of the potential watermark embedding region based on the frequency domain data of the potential watermark embedding region and the high-frequency texture characteristic includes:
extracting high-frequency domain data from the frequency domain data of the potential watermark embedding region;
weighting the high-frequency domain data by using the high-frequency texture characteristic enhancement map to obtain high-frequency energy of the potential watermark embedding region;
performing discrete wavelet transform on the frequency domain data of the potential watermark embedding region, and decomposing the frequency domain data of the potential watermark embedding region into multi-level frequency domain components, wherein the frequency domain components comprise a low-frequency component, a horizontal high-frequency component, a vertical high-frequency component and a diagonal high-frequency component;
Calculating multi-level energy ratio by utilizing the high-frequency texture characteristic enhancement map and the frequency domain components of each level;
And combining the high-frequency texture characteristic enhancement map, the high-frequency energy and the energy ratio of each level to obtain a total characteristic component.
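The decomposition into a low-frequency component and horizontal/vertical/diagonal high-frequency components, together with the multi-level energy ratios, can be sketched with an unnormalized Haar-style wavelet. The patent does not name a specific wavelet, and the energy-ratio formula below is an assumption:

```python
import numpy as np

def haar_dwt2(x):
    """One level of an (unnormalized) 2-D Haar-style wavelet transform,
    returning low-frequency and horizontal/vertical/diagonal high-frequency
    components."""
    x = x.astype(float)
    # columns: average and difference of adjacent pixel pairs
    lo = (x[:, ::2] + x[:, 1::2]) / 2
    hi = (x[:, ::2] - x[:, 1::2]) / 2
    # rows: same on the row pairs
    LL = (lo[::2] + lo[1::2]) / 2
    LH = (lo[::2] - lo[1::2]) / 2   # vertical-direction high frequency
    HL = (hi[::2] + hi[1::2]) / 2   # horizontal-direction high frequency
    HH = (hi[::2] - hi[1::2]) / 2   # diagonal high frequency
    return LL, HL, LH, HH

def energy_ratios(region, levels=2):
    """Multi-level high-frequency energy ratios of a candidate region:
    at each level, the high-frequency energy divided by the total energy."""
    ratios = []
    cur = region
    for _ in range(levels):
        LL, HL, LH, HH = haar_dwt2(cur)
        high = (HL**2 + LH**2 + HH**2).sum()
        ratios.append(high / (high + (LL**2).sum()))
        cur = LL                     # recurse into the low-frequency band
    return ratios

rng = np.random.default_rng(1)
patch = rng.random((8, 8))
r = energy_ratios(patch)
print([round(x, 3) for x in r])
```

Each ratio lies in [0, 1]; a watermark-bearing region would be expected to show elevated high-frequency energy relative to its surroundings.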
Optionally, in the above method for extracting a watermark of a three-dimensional model, the fusing the high-frequency texture characteristic and the high-frequency domain characteristic of each potential watermark embedding area of each target image to obtain a fused texture characteristic and a fused frequency domain characteristic includes:
And respectively fusing the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding area of each target image according to the weight corresponding to the target attribute of each target image to obtain fused texture characteristics and fused frequency domain characteristics, wherein the target attribute at least comprises shooting angle, resolution and confidence coefficient parameters.
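A sketch of attribute-weighted fusion follows. The particular way shooting angle, resolution and confidence are combined into a weight here (cosine-squared angle falloff times resolution times confidence) is an assumption, not the claimed formula:

```python
import numpy as np

def fuse_features(feature_maps, angles, resolutions, confidences):
    """Fuse per-image feature maps with weights derived from shooting angle,
    resolution and confidence (the weighting scheme is an assumption)."""
    # frontal views (angle near 0 deg) and higher resolution/confidence weigh more
    w = (np.cos(np.radians(angles)) ** 2) * np.asarray(resolutions) * np.asarray(confidences)
    w = w / w.sum()                                  # normalise to sum to 1
    fused = sum(wi * f for wi, f in zip(w, feature_maps))
    return fused, w

maps = [np.full((4, 4), v, dtype=float) for v in (1.0, 2.0, 3.0)]
fused, w = fuse_features(maps, angles=[0, 30, 60],
                         resolutions=[1.0, 1.0, 0.5],
                         confidences=[0.9, 0.8, 0.7])
print(w.round(3), fused[0, 0])
```

The same weighting would be applied once to the texture characteristics and once to the frequency-domain characteristics to obtain the two fused maps.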
Optionally, in the above method for extracting a watermark of a three-dimensional model, mapping the fused frequency domain characteristic of the final watermark embedding region back into the model texture coordinate space to obtain a watermark extraction result of the target three-dimensional model, including:
mapping the fused frequency domain characteristic of the final watermark embedding region back to the model texture coordinate space to obtain a distribution diagram of the final watermark embedding region;
acquiring position information of each characteristic point in a distribution diagram of the final watermark embedding area, wherein the position information comprises texture coordinates and physical coordinates;
Calculating the confidence coefficient of each characteristic point according to the total characteristic component and the maximum characteristic component of each characteristic point in the distribution diagram of the final watermark embedding area;
and generating a watermark extraction report and outputting the watermark extraction report by using the position information and the confidence coefficient of each characteristic point and the distribution diagram of the final watermark embedding area.
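The confidence computation and report generation might be sketched as follows. Normalising each point's total feature component by the maximum component over the distribution map is an assumed reading of the claim, and the report fields are illustrative:

```python
import numpy as np

def point_confidence(total_components):
    """Confidence of each feature point: its total feature component
    normalised by the maximum component over the map (assumed formula)."""
    t = np.asarray(total_components, dtype=float)
    return t / t.max()

def extraction_report(points, total_components):
    """Assemble a simple watermark extraction report from feature-point
    positions and their confidences (field names are hypothetical)."""
    conf = point_confidence(total_components)
    return [{"texture_uv": p, "confidence": round(float(c), 3)}
            for p, c in zip(points, conf)]

conf = point_confidence([2.0, 5.0, 3.5])
rep = extraction_report([(0.1, 0.2), (0.5, 0.5), (0.9, 0.1)], [2.0, 5.0, 3.5])
print(rep)
```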
A second aspect of the present application provides a watermark extraction apparatus for a three-dimensional model, including:
The image acquisition unit is used for acquiring a plurality of target images of the target three-dimensional model shot by multiple visual angles;
The primary screening unit is used for screening out, for each target image respectively, a potential watermark embedding area of the target image according to gray gradient change characteristics of pixel points of the target image in a model texture coordinate space;
The texture characteristic analysis unit is used for carrying out multi-scale high-frequency texture characteristic analysis on the potential watermark embedding region to obtain the high-frequency texture characteristic of the potential watermark embedding region;
the frequency domain transformation unit is used for carrying out frequency domain transformation on the potential watermark embedding area;
A frequency domain characteristic analysis unit configured to analyze a high frequency domain characteristic of the potential watermark embedding region based on the frequency domain data of the potential watermark embedding region and the high frequency texture characteristic;
The image characteristic fusion unit is used for respectively fusing the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding area of each target image to obtain fusion texture characteristics and fusion frequency domain characteristics;
The characteristic verification unit is used for determining a final watermark embedding area through carrying out statistical analysis on the fusion texture characteristic and the fusion frequency domain characteristic;
And the result generating unit is used for mapping the fused frequency domain characteristic back to the model texture coordinate space to obtain a watermark extraction result of the target three-dimensional model.
Optionally, the watermark extraction apparatus of a three-dimensional model further includes:
the enhancement unit is used for carrying out contrast enhancement on each piece of target image data by utilizing a histogram equalization technology respectively to obtain each piece of enhanced target image;
the dynamic adjustment unit is used for carrying out gamma correction processing on each enhanced target image so as to adjust the gray dynamic range of the target image;
The correction unit is used for correcting the geometric distortion of each adjusted target image through homography transformation;
and the filtering unit is used for filtering noise of each corrected target image by adopting an adaptive median filtering algorithm.
Optionally, the watermark extraction apparatus of a three-dimensional model further includes:
the characteristic point extraction unit is used for extracting characteristic points in the target images for each target image respectively;
A description information calculation unit for calculating the characteristic description information of each feature point, wherein the characteristic description information of the feature point is information representing the local texture of the feature point;
the mapping model establishing unit is used for establishing a mapping model of each characteristic point and the model texture coordinate space based on texture mapping parameters of the target three-dimensional model and camera calibration parameters when shooting the target image;
An optimizing unit, configured to optimize the mapping model by minimizing a total error function;
the consistency degree calculation unit is used for calculating the local consistency degree of each feature point in the optimized mapping model;
and the optimizing unit is used for optimizing the characteristic points of which the local consistency degree does not meet the preset conditions.
Optionally, in the above three-dimensional model watermark extraction apparatus, the preliminary screening unit includes:
A region determining unit configured to determine, for each of the target images, a target texture region of the target image in the model texture coordinate space;
A gradient characteristic extraction unit for extracting gray gradient change characteristics of each pixel point in a target texture region of the target image;
and the characteristic screening unit is used for screening out the potential watermark embedding region of the target image through a region growing criterion based on the gray gradient change characteristic of each pixel point in the target texture region of the target image.
Optionally, in the above-mentioned watermark extraction apparatus of a three-dimensional model, the texture characteristic analysis unit includes:
the multi-scale analysis unit is used for carrying out multi-scale characteristic analysis on the potential watermark embedding region to obtain characteristics of multiple scales;
The texture characteristic calculation unit is used for calculating the difference value of the characteristics of the adjacent two layers of scales to obtain the initial high-frequency texture characteristics of multiple layers;
and the texture characteristic combination unit is used for combining the gray gradient change characteristic of the potential watermark embedding region with the initial texture high-frequency information of each layer to obtain an optimized high-frequency texture characteristic enhancement map.
Optionally, the watermark extraction apparatus of a three-dimensional model further includes:
and the region verification unit is used for carrying out boundary extraction and confirmation on the high-frequency texture characteristic enhancement map and carrying out region consistency verification.
Optionally, in the above-mentioned watermark extraction apparatus of a three-dimensional model, the frequency domain characteristic analysis unit includes:
the frequency domain data extraction unit is used for extracting high-frequency domain data from the frequency domain data of the potential watermark embedding area;
The frequency domain energy calculating unit is used for weighting the frequency domain data of the high frequency by utilizing the high frequency texture characteristic enhancement graph to obtain the high frequency energy of the potential watermark embedding region;
The decomposition unit is used for performing discrete wavelet transformation on the frequency domain data of the potential watermark embedding region and decomposing the frequency domain data of the potential watermark embedding region into multi-level frequency domain components, wherein the frequency domain components comprise a low-frequency component, a horizontal high-frequency component, a vertical high-frequency component and a diagonal high-frequency component;
the energy ratio calculating unit is used for calculating multi-level energy ratios by utilizing the high-frequency texture characteristic enhancement map and the frequency domain components of each level;
and the total characteristic component calculation unit is used for combining the high-frequency texture characteristic enhancement graph, the high-frequency energy and the energy ratio of each layer to obtain a total characteristic component.
Optionally, in the above three-dimensional model watermark extraction apparatus, the image characteristic fusion unit includes:
The image characteristic fusion subunit is used for respectively fusing the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding area of each target image according to the weight corresponding to the target attribute of each target image to obtain a fused texture characteristic and a fused frequency domain characteristic, wherein the target attribute at least comprises shooting angle, resolution and confidence coefficient parameters.
Optionally, in the above-mentioned watermark extraction apparatus of a three-dimensional model, the result generation unit includes:
the inverse mapping unit is used for mapping the fused frequency domain characteristic of the final watermark embedding region back to the model texture coordinate space to obtain a distribution diagram of the final watermark embedding region;
the coordinate acquisition unit is used for acquiring the position information of each characteristic point in the distribution diagram of the final watermark embedding area, wherein the position information comprises texture coordinates and physical coordinates;
The confidence coefficient calculating unit is used for calculating the confidence coefficient of each characteristic point according to the total characteristic component and the maximum characteristic component of each characteristic point in the distribution diagram of the final watermark embedding area;
and the report generating unit is used for generating and outputting a watermark extraction report by utilizing the position information, the confidence coefficient and the distribution diagram of the final watermark embedding area of each characteristic point.
According to the watermark extraction method for a three-dimensional model provided by the application, a plurality of target images of the target three-dimensional model captured from multiple viewing angles are obtained, so that the textures of the three-dimensional model can be accurately reflected through images from multiple viewing angles, providing more comprehensive and accurate characteristics. The characteristics of the images can also be cross-verified, ensuring their accuracy and thereby improving the accuracy of watermark extraction. Then, for each target image, a potential watermark embedding region of the target image is screened out according to the gray gradient change characteristics of its pixel points in the model texture coordinate space, and multi-scale high-frequency texture characteristic analysis is performed on the potential watermark embedding region to obtain its high-frequency texture characteristic, so that accurate texture characteristics are extracted through multi-scale analysis. Next, frequency-domain transformation is performed on the potential watermark embedding region, and its high-frequency-domain characteristic is analyzed based on the frequency-domain data and the high-frequency texture characteristic, so that texture characteristic information and frequency-domain characteristic information are analyzed in combination and the watermark can be located more accurately. The high-frequency texture characteristics and high-frequency-domain characteristics of the potential watermark embedding regions of all target images are then fused to obtain a fused texture characteristic and a fused frequency-domain characteristic, so that the characteristics of every image are comprehensively considered.
And then, determining a final watermark embedding area by carrying out statistical analysis on the fusion texture characteristics and the fusion frequency domain characteristics, so that the accuracy of the analyzed characteristics can be verified, and an accurate final watermark embedding area is obtained. Finally, the fusion frequency domain characteristic is mapped back to the model texture coordinate space to obtain the watermark extraction result of the target three-dimensional model, thereby realizing a method capable of accurately extracting the watermark of the three-dimensional model.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a watermark extraction method of a three-dimensional model according to an embodiment of the present application;
FIG. 2 is a flowchart of a target image preprocessing method according to an embodiment of the present application;
FIG. 3 is a flow chart of a method for mapping a target image into a model texture coordinate space according to an embodiment of the present application;
fig. 4 is a flowchart of a method for screening out a potential watermark embedding area of a target image according to an embodiment of the present application;
FIG. 5 is a flow chart of a method for analyzing texture characteristics according to an embodiment of the present application;
FIG. 6 is a flow chart of a method for analyzing frequency domain characteristics according to an embodiment of the present application;
Fig. 7 is a flowchart of a method for generating watermark extraction results of a target three-dimensional model according to an embodiment of the present application;
Fig. 8 is a schematic architecture diagram of a watermark extraction apparatus for a three-dimensional model according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the present application, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The embodiment of the application provides a watermark extraction method of a three-dimensional model, as shown in fig. 1, comprising the following steps:
S101, acquiring a plurality of target images of a target three-dimensional model shot by multiple visual angles.
The target three-dimensional model refers to a three-dimensional model which is required to be subjected to watermark extraction at present.
In order to accurately recover the texture of the three-dimensional model and obtain an accurate model, in the embodiment of the application, the target three-dimensional model is subjected to multi-view shooting to obtain images of the target three-dimensional model under a plurality of view angles, so that the full texture can be obtained through the images of the target three-dimensional model under the plurality of view angles, the texture of the three-dimensional model can be accurately recovered, and the accuracy of watermark extraction is improved. And the texture characteristics of each image can be mutually verified and supplemented, so that the accuracy of the finally extracted watermark is improved.
Optionally, the shooting angles are arranged so as to jointly cover the watermark embedding area. Specifically, the area can be photographed over a range of 0-180 degrees, so that key texture details are not lost due to occlusion or projection distortion.
Optionally, in order to improve the quality of the target image and thus the accuracy of watermark extraction, in another embodiment of the present application, the target image may be further preprocessed after performing step S101. As shown in fig. 2, a preprocessing method for a target image provided by an embodiment of the present application includes:
s201, contrast enhancement is carried out on each piece of target image data by utilizing a histogram equalization technology, and each piece of enhanced target image is obtained.
Specifically, contrast enhancement is performed on the target image data by using the histogram equalization technology, and the pixel value of the enhanced image data is obtained as:
I'(x, y) = (I(x, y) - I_min) / (I_max - I_min) × 255
Wherein, I(x, y) is the gray value of the original target image, and I_min and I_max are the minimum gray value and the maximum gray value of the target image.
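The contrast-enhancement step, consistent with the variables I(x, y), I_min and I_max described here, can be sketched as a min-max stretch of the gray range to [0, 255]:

```python
import numpy as np

def contrast_enhance(img):
    """Stretch the gray range of an image to [0, 255]: a sketch of the
    contrast-enhancement step using the image's min/max gray values."""
    img = img.astype(float)
    i_min, i_max = img.min(), img.max()
    return (img - i_min) / (i_max - i_min) * 255.0

a = np.array([[50, 100], [150, 200]], dtype=float)
out = contrast_enhance(a)
print(out)
```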
S202, gamma correction processing is carried out on each enhanced target image so as to adjust the gray scale dynamic range of the target image.
Specifically, the data of the target image is adjusted through the correction coefficient γ, and the pixel value of the adjusted image data is obtained as:
I''(x, y) = 255 × (I'(x, y) / 255)^γ
Wherein, γ is the correction coefficient, which can be adjusted according to the characteristics of the image capturing apparatus: γ < 1 enhances dark areas of the image, and γ > 1 enhances bright areas, so that details in both kinds of areas are corrected.
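A minimal sketch of standard gamma correction on an 8-bit gray scale, matching the behavior described (γ < 1 lifts dark tones, γ > 1 suppresses them):

```python
import numpy as np

def gamma_correct(img, gamma):
    """Gamma correction of 8-bit gray values: normalise to [0, 1],
    raise to the power gamma, rescale to [0, 255]."""
    img = np.asarray(img, dtype=float)
    return 255.0 * (img / 255.0) ** gamma

mid = gamma_correct(128, 0.5)   # gamma < 1: mid/dark tones are lifted
print(round(float(mid), 1))
```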
S203, correcting the geometric distortion of each adjusted target image through homography transformation.
A change in shooting angle may cause geometric distortion in the image; in particular, when the texture and shape of the target object are projected onto the image plane at different viewing angles, straight lines or planes in the image may appear curved or irregular. The images are therefore geometrically corrected based on homography transformation, adjusting the distorted images back to a uniform viewing angle so that the texture mapping between the images and the three-dimensional model is consistent.
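Applying a 3×3 homography to image points in homogeneous coordinates, as used to bring distorted views back to a uniform viewing plane, can be sketched as follows (the translation-only H is just a toy example):

```python
import numpy as np

def apply_homography(H, pts):
    """Warp 2-D points with a 3x3 homography H via homogeneous coordinates
    and a perspective divide."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # [x, y, 1]
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # perspective divide

# a pure translation homography: shift by (2, 3)
H = np.array([[1, 0, 2], [0, 1, 3], [0, 0, 1]], dtype=float)
res = apply_homography(H, [[0, 0], [1, 1]])
print(res)  # [[2. 3.] [3. 4.]]
```

In practice H would be estimated from point correspondences between the image and a reference view rather than written by hand.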
S204, adopting an adaptive median filtering algorithm to filter noise of each corrected target image.
Specifically, the pixel values of the target image may be filtered using a filter window of a certain size: if a pixel value lies within the gray scale range of the filter window, it is kept unchanged; if it falls outside that range, it is replaced with the median of the pixel values in the window.
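A simplified numpy sketch of the keep-or-replace rule in step S204 (the name `adaptive_median_like` is hypothetical, and "within the gray range" is interpreted strictly, so that pixels sitting at a window extreme are treated as impulse candidates):

```python
import numpy as np

def adaptive_median_like(img: np.ndarray, win: int = 3) -> np.ndarray:
    """Keep a pixel if it lies strictly inside the gray range of its window;
    otherwise replace it with the window median (simplified step S204 rule)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + win, x:x + win]
            lo, hi = window.min(), window.max()
            p = img[y, x]
            # A pixel equal to the window min or max is a likely impulse.
            if not (lo < p < hi):
                out[y, x] = np.median(window)
    return out
```

On a flat region this leaves pixels effectively unchanged (the median equals the flat value), while an isolated impulse is replaced by the local median.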
It should be noted that, in order to accurately analyze the texture characteristics of each target image and facilitate the subsequent fusion of the characteristics of each target image, after step S101 is performed, before step S102 is performed, each target image needs to be mapped into the model texture coordinate space.
Specifically, a mapping relation between the shot target images and the texture coordinates of the three-dimensional model is established according to the texture mapping parameters used when generating the target three-dimensional model, and each target image is uniformly mapped into the model texture coordinate space by means of a projective transformation matrix.
Optionally, in another embodiment of the present application, a specific implementation of mapping each target image into the model texture coordinate space, as shown in fig. 3, includes the following steps:
S301, extracting characteristic points in each target image.
Optionally, a plurality of feature points p_i = [x_i, y_i, 1]^T may be extracted from the target image for subsequent processing, where x_i and y_i are the abscissa and ordinate values of the feature point. Specifically, a feature point detection algorithm may be used to identify key feature points in the texture region of the target image.
S302, calculating characteristic description information of each characteristic point.
The characteristic description information of a feature point describes the local texture around that feature point and can be composed of information of multiple dimensions.
S303, establishing a mapping model of each feature point and a model texture coordinate space based on texture mapping parameters of the target three-dimensional model and camera calibration parameters when shooting a target image.
Specifically, based on the texture mapping parameters of the target three-dimensional model and the camera calibration parameters when the target image was shot, a projective transformation matrix can be obtained, and the data of the target image can then be mapped into the model texture coordinate space through this matrix; that is, a mapping model between each feature point and the model texture coordinate space is established, so that for any feature point the mapping relation can be expressed as:

u_i = H_enh · D(p_i)

Wherein, H_enh is the projective transformation matrix, u_i is the mapped texture coordinate, and D(p_i) is the coordinates of the feature point.
Alternatively, the mapping may be performed using the coordinates of the distortion-corrected feature points, that is, D(p_i) is the coordinates of the feature point after distortion correction. If the distortion coefficients are k_1, k_2, p_1 and p_2, the coordinates of the feature point after distortion correction can be expressed as:

x_corr = x_c + (x_i − x_c)(1 + k_1·r + k_2·r²) + 2p_1(x_i − x_c)(y_i − y_c) + p_2(r + 2(x_i − x_c)²)
y_corr = y_c + (y_i − y_c)(1 + k_1·r + k_2·r²) + p_1(r + 2(y_i − y_c)²) + 2p_2(x_i − x_c)(y_i − y_c)

Wherein, x_i and y_i are the coordinates of feature point i before correction, x_c and y_c are the camera principal point coordinates, and r = (x_i − x_c)² + (y_i − y_c)².
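The radial (k_1, k_2) and tangential (p_1, p_2) correction can be sketched in the Brown–Conrady style; the function name, the centering around the principal point, and the exact form are assumptions, since the patent's formula image is not reproduced, and note that r here denotes the squared distance as defined in the text:

```python
def undistort_point(xi, yi, xc, yc, k1, k2, p1, p2):
    """Correct radial (k1, k2) and tangential (p1, p2) lens distortion for one
    feature point; r is the squared distance to the principal point (xc, yc)."""
    dx, dy = xi - xc, yi - yc
    r = dx ** 2 + dy ** 2
    radial = 1.0 + k1 * r + k2 * r ** 2
    x_corr = xc + dx * radial + 2.0 * p1 * dx * dy + p2 * (r + 2.0 * dx ** 2)
    y_corr = yc + dy * radial + p1 * (r + 2.0 * dy ** 2) + 2.0 * p2 * dx * dy
    return x_corr, y_corr
```

With all coefficients zero the mapping reduces to the identity, which is a quick sanity check on the formula.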
S304, optimizing the mapping model by minimizing the total error function.
In order to ensure the quality after mapping, further optimization is needed in the embodiment of the application. The mapping relationship can be optimized by minimizing the total error function E:

E = Σ_{i=1..N} ‖u_i − H_enh·D(p_i)‖² + α · Σ_{i=1..N} ‖G(u_i) − G(H_enh·D(p_i))‖²

Wherein, N is the number of feature points, α is the weight coefficient balancing the reprojection error against geometric consistency, and G is the geometric feature function, which extracts geometric attributes such as curvature or edge direction at the corresponding texture coordinates; the optimization process is carried out over the parameters of the enhanced projective transformation matrix H_enh.
S305, calculating the local consistency degree of each feature point in the optimized mapping model.
In order to verify the geometric topological consistency of the texture of the target three-dimensional model, the local consistency degree of each feature point is calculated:

C(p_i) = Σ_{j∈N(i)} w_ij · ‖u_i − u_j‖²

Wherein, N(i) is the neighborhood set of feature point p_i, w_ij is the weight, and u_i and u_j represent the coordinates of the i-th and j-th feature points mapped into the model texture coordinate space.
S306, optimizing each characteristic point of which the local consistency degree does not meet the preset condition.
Specifically, each feature point whose local consistency degree does not meet the preset condition can be removed, so that noise interference of these outliers on the mapping accuracy is avoided, an accurate mapping result is obtained, and the texture of the target three-dimensional model is accurately reflected.
S102, respectively aiming at each target image, screening out a potential watermark embedding area of the target image according to the gray gradient change characteristics of the pixel points of the target image in the model texture coordinate space.
Because the gray value of the image changes relatively strongly at the positions where a watermark is embedded, in the embodiment of the application the gray gradient change characteristics are calculated from the gray value of each pixel point of the target image mapped into the model texture coordinate space, and the regions of the target image that may contain an embedded watermark can then be screened out by an image segmentation algorithm based on these gray gradient change characteristics.
Optionally, in another embodiment of the present application, a specific implementation of step S102, as shown in fig. 4, includes the following steps:
S401, determining a target texture region of the target image in the model texture coordinate space for each target image.
In order to increase the processing speed, in the embodiment of the present application the area most likely to contain the watermark is determined as the target texture area; alternatively, if the watermark area is fixed and consistent across images, that area may be determined as the target texture area directly.
S402, extracting gray gradient change characteristics of each pixel point in a target texture area of a target image.
Specifically, the gray gradient change is calculated based on the gray value of each point of the target image to obtain the gradient vector ∇I(u_i) = (∂I/∂u, ∂I/∂v); the gradient magnitude |∇I(u_i)| = sqrt(G_x² + G_y²) is then obtained through a Sobel operator approximation, thereby obtaining the gray gradient change characteristic of each point.
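A self-contained numpy sketch of the Sobel magnitude approximation described in step S402 (the loop-based convolution and edge padding are illustrative choices, not from the patent):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def sobel_gradient_magnitude(img: np.ndarray) -> np.ndarray:
    """Approximate |grad I| = sqrt(Gx^2 + Gy^2) with 3x3 Sobel kernels."""
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 3, x:x + 3]
            gx[y, x] = np.sum(window * SOBEL_X)
            gy[y, x] = np.sum(window * SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)
```

A flat patch yields zero magnitude everywhere, while a vertical step edge yields a strong response along the edge.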
S403, based on the gray gradient change characteristics of each pixel point in the target texture region of the target image, the potential watermark embedding region of the target image is screened out through a region growing criterion.
Specifically, the pixel points satisfying the region growing criterion are screened out, and the region formed by these points is the potential watermark embedding region. The region growing criterion is defined as:

|I(u_i) − I_seed| < T  and  |∇I(u_i)| > T_∇

Wherein, I_seed is the gray value of the selected seed point, I(u_i) is the gray value of the i-th pixel point, and T and T_∇ are the gray-difference threshold and the gradient-magnitude threshold, respectively.
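The criterion above can be sketched as a breadth-first region grow (the 4-neighborhood and the function name `region_grow` are assumptions; the patent only fixes the acceptance test):

```python
from collections import deque
import numpy as np

def region_grow(img, grad_mag, seed, t_gray, t_grad):
    """Grow a region from `seed`: accept 4-neighbors whose gray value is within
    t_gray of the seed value and whose gradient magnitude exceeds t_grad."""
    h, w = img.shape
    i_seed = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if (abs(float(img[ny, nx]) - i_seed) < t_gray
                        and grad_mag[ny, nx] > t_grad):
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask
```

Seeded inside a bright, high-gradient patch, the grow fills exactly that patch and rejects the surrounding background.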
S103, carrying out multi-scale high-frequency texture characteristic analysis on the potential watermark embedding region to obtain the high-frequency texture characteristic of the potential watermark embedding region.
In the implementation of the present application, the multi-scale high-frequency texture characteristics of the potential watermark embedding region are analyzed to separate the background variation from the embedded information in the texture of the target image, thereby obtaining the high-frequency texture characteristics of the potential watermark embedding region. The regions where these features are located are more precise watermark embedding regions; that is, the texture characteristic analysis narrows the potential watermark embedding region to a more accurate range.
Alternatively, in another embodiment of the present application, a specific implementation of step S103, as shown in fig. 5, includes the following steps:
S501, multi-scale characteristic analysis is carried out on the potential watermark embedding area, and characteristics of multiple scales are obtained.
Optionally, the multi-scale characteristic analysis of the potential watermark embedding region can be represented by a multi-scale Gaussian pyramid:

L_k(u_i) = I(u_i) * G(u_i, σ_k)

Wherein, * denotes the convolution operation, G(u_i, σ_k) is the Gaussian kernel, and σ_k is the scale parameter of the k-th pyramid layer.
S502, calculating the difference value of the characteristics of the dimensions of every two adjacent layers to obtain the initial high-frequency texture characteristics of the multiple layers.
Specifically, by calculating the high-frequency characteristics H_k(u_i) at different scales, the high-frequency texture characteristics related to watermark embedding are initially extracted:

H_k(u_i) = L_k(u_i) − L_{k+1}(u_i)
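Steps S501–S502 amount to a difference-of-Gaussians pyramid, which can be sketched with numpy alone (the separable blur, the 3σ kernel radius and the σ schedule are illustrative assumptions):

```python
import numpy as np

def gaussian_kernel1d(sigma: float, radius: int) -> np.ndarray:
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img: np.ndarray, sigma: float) -> np.ndarray:
    """Separable Gaussian blur: filter rows, then columns, with edge padding."""
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel1d(sigma, radius)
    pad = np.pad(img.astype(np.float64), radius, mode="edge")
    rows = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, pad)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, rows)
    return out[radius:-radius, radius:-radius]

def dog_high_frequency(img: np.ndarray, sigmas=(1.0, 2.0, 4.0)):
    """Difference of adjacent pyramid levels: H_k = L_k - L_{k+1}."""
    levels = [gaussian_blur(img, s) for s in sigmas]
    return [levels[k] - levels[k + 1] for k in range(len(levels) - 1)]
```

A constant patch produces all-zero high-frequency layers, which confirms that only texture variation survives the differencing.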
S503, combining the gray gradient change characteristic of the potential watermark embedding region with the initial high-frequency texture characteristics of each layer to obtain an optimized high-frequency texture characteristic enhancement map.
Specifically, after the high-frequency texture characteristics are obtained preliminarily, the gray gradient change is combined with the high-frequency texture characteristics, so that the screened potential watermark embedding region is further optimized and contracted. This can be expressed as:

E(u_i) = Σ_k w_k · H_k(u_i) · |∇I(u_i)|

Wherein, w_k is the weight of the high-frequency texture characteristic at each scale.
It should be noted that the optimization process only removes regions with low confidence, i.e. regions unlikely to contain an embedded watermark, thereby narrowing the potential watermark embedding region toward the final watermark embedding region.
Optionally, in another embodiment of the present application, after performing step S503, the method may further include:
and carrying out boundary extraction and confirmation on the high-frequency texture characteristic enhancement map, and carrying out region consistency verification.
Specifically, boundary extraction and confirmation are carried out on the high-frequency texture characteristic enhancement graph, boundary points are detected, and then region consistency verification is carried out.
S104, performing frequency domain transformation on the potential watermark embedding area.
In order to perform watermark analysis from frequency, in the embodiment of the present application, the frequency domain transform is performed on the potential watermark embedding region, so that frequency domain data of the potential watermark embedding region can be obtained.
Alternatively, the gray signals of the potential watermark embedding region in the texture coordinate space may be converted from the spatial domain to the frequency domain by a discrete cosine transform (DCT), thereby obtaining the frequency domain data of the potential watermark embedding region.
Specifically, for the gray signal of each potential watermark embedding region in the texture coordinate space, its two-dimensional frequency domain transform, i.e. the DCT transform, is:

C(u, v) = α(u)·α(v) · Σ_{x=0..M−1} Σ_{y=0..N−1} I(x, y) · cos[(2x+1)uπ / (2M)] · cos[(2y+1)vπ / (2N)]

Wherein, C(u, v) is the frequency coefficient matrix, i.e. the transformed frequency domain data; α(u) and α(v) are normalization coefficients, with α(0) = sqrt(1/M) and α(u) = sqrt(2/M) for u > 0 (and analogously for α(v)); and M and N respectively denote the number of rows and columns of the region's pixel points.
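A naive numpy implementation of the 2-D DCT-II named in step S104 can sanity-check the coefficient definitions; `dct2` is a hypothetical name and the O(M²N²) double loop is for illustration only (a real system would use a fast transform):

```python
import numpy as np

def dct2(block: np.ndarray) -> np.ndarray:
    """Naive 2-D DCT-II with orthonormal scaling alpha(u), alpha(v)."""
    M, N = block.shape
    x = np.arange(M)
    y = np.arange(N)
    C = np.zeros((M, N))
    for u in range(M):
        for v in range(N):
            # Separable cosine basis for frequency pair (u, v).
            basis = np.outer(np.cos((2 * x + 1) * u * np.pi / (2 * M)),
                             np.cos((2 * y + 1) * v * np.pi / (2 * N)))
            au = np.sqrt(1.0 / M) if u == 0 else np.sqrt(2.0 / M)
            av = np.sqrt(1.0 / N) if v == 0 else np.sqrt(2.0 / N)
            C[u, v] = au * av * np.sum(block * basis)
    return C
```

A constant block concentrates all energy in the DC coefficient C(0, 0), with every other coefficient zero, as expected for the DCT.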
S105, analyzing the high-frequency domain characteristics of the potential watermark embedding region based on the frequency domain data and the high-frequency texture characteristics of the potential watermark embedding region.
Since the region where the watermark is located also exhibits high-frequency behavior in the frequency domain, the high-frequency characteristics of the frequency domain need to be analyzed. The high-frequency texture characteristics reflect the high-frequency properties of the texture, so they can serve as a reference when analyzing the high-frequency domain characteristics of the potential watermark embedding region; combining the two makes the analyzed high-frequency domain characteristics more accurate.
Alternatively, in another embodiment of the present application, a specific implementation of step S105, as shown in fig. 6, includes the following steps:
S601, extracting high-frequency domain data from the frequency domain data of the potential watermark embedding area.
Specifically, the high-frequency components of the frequency domain data can be analyzed to screen out the high-frequency domain data, so that the area where the high-frequency domain data is located is obtained and the specific position of the watermark embedding can be further determined.
S602, weighting the high-frequency domain data by utilizing the high-frequency texture characteristic enhancement map to obtain the high-frequency energy of the potential watermark embedding region.
Specifically, in the embodiment of the present application, the energy in the frequency domain is used as the characteristic of the frequency domain, so that the embedding position of the watermark is reflected through the distribution condition of the energy. The high-frequency texture characteristic enhancement map also reflects the position condition of watermark embedding, so comprehensive consideration is needed, and therefore, the high-frequency energy of the potential watermark embedding region is calculated by combining the high-frequency texture characteristic enhancement map with the high-frequency domain data.
Specifically, the high-frequency energy is defined as:

E_high = Σ_{u>u_t} Σ_{v>v_t} E(u_i) · |C(u, v)|²

Wherein, u_t and v_t are frequency domain thresholds, and the high-frequency energy is weighted by the high-frequency texture characteristic enhancement map E(u_i).
S603, performing discrete wavelet transformation on the frequency domain data of the potential watermark embedding region, and decomposing the frequency domain data of the potential watermark embedding region into multi-level frequency domain components.
Wherein the frequency domain components include a low frequency component, a horizontal high frequency component, a vertical high frequency component, and a diagonal high frequency component.
Specifically, the frequency domain data of the watermark embedding region is decomposed into a low-frequency component LL_k, a horizontal high-frequency component LH_k, a vertical high-frequency component HL_k, and a diagonal high-frequency component HH_k, which can be expressed as:

DWT(C(u, v)) = {LL_k, LH_k, HL_k, HH_k}
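One level of a Haar wavelet decomposition illustrates the LL/LH/HL/HH split of step S603; the Haar basis, the averaging normalization, and the component naming convention are assumptions, since the patent does not fix the wavelet:

```python
import numpy as np

def haar_dwt2(img: np.ndarray):
    """One level of the 2-D Haar wavelet transform, splitting the data into
    low-frequency (LL), horizontal (LH), vertical (HL) and diagonal (HH)
    components. Image sides must be even."""
    a = img.astype(np.float64)
    # Pairwise averages/differences along rows, then along columns.
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2.0
    ll = (lo_r[0::2, :] + lo_r[1::2, :]) / 2.0
    lh = (lo_r[0::2, :] - lo_r[1::2, :]) / 2.0
    hl = (hi_r[0::2, :] + hi_r[1::2, :]) / 2.0
    hh = (hi_r[0::2, :] - hi_r[1::2, :]) / 2.0
    return ll, lh, hl, hh
```

A constant region has all its energy in LL and zero in every high-frequency band, matching the intuition that embedded high-frequency signals show up in LH, HL and HH.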
S604, calculating multi-level energy ratio by utilizing the high-frequency texture characteristic enhancement map and the frequency domain components of each level.
In order to represent the distribution of the embedded signal in the high-frequency components, a multi-level energy ratio is calculated based on the high-frequency texture characteristic enhancement map, specifically:

R_k = Σ E(u_i)·(LH_k² + HL_k² + HH_k²) / Σ (LL_k² + LH_k² + HL_k² + HH_k²)
s605, combining the high-frequency texture characteristic enhancement map, the high-frequency energy and the energy ratio of each layer to obtain a total characteristic component.
In order to comprehensively consider each analyzed characteristic and finally extract the watermark, the analyzed high-frequency texture characteristic enhancement map, the high-frequency energy and the energy ratio of each layer are combined to obtain a total characteristic component. Specifically, this can be expressed as:

T_c(u_i) = β_1·E(u_i) + β_2·E_high + β_3·Σ_k R_k

Wherein, β_1, β_2 and β_3 are weight coefficients, which may be optimized and adjusted based on the gray scale, frequency domain, and high-frequency gradient characteristics of the embedded signal.
S106, respectively fusing the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding area of each target image to obtain fused texture characteristics and fused frequency domain characteristics.
After the above analysis, the characteristics of each potential watermark embedding region in each target image are obtained. In order to comprehensively consider the characteristics of each potential watermark embedding region of every target image and accurately locate the final watermark position, the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding region of each target image are fused respectively to obtain the fused texture characteristics and the fused frequency domain characteristics.
Optionally, for the features belonging to the same region, weighting may be performed according to the confidence level, and features of different regions may be combined, so as to obtain a fused texture characteristic and a fused frequency domain characteristic, that is, a fused texture characteristic distribution diagram and a fused frequency domain characteristic distribution diagram.
Optionally, in another embodiment of the present application, a specific implementation of step S106 includes:
and respectively fusing the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding area of each target image according to the weight corresponding to the target attribute of each target image to obtain fused texture characteristics and fused frequency domain characteristics.
The target attribute at least comprises shooting angle, resolution and confidence coefficient parameters.
S107, determining a final watermark embedding area by carrying out statistical analysis on the fusion texture characteristics and the fusion frequency domain characteristics.
It should be noted that both the fused texture characteristic and the fused frequency domain characteristic can reflect the position of the watermark. If both indicate that a certain position is a watermark position, that is, the distributions of the fused texture characteristic and the fused frequency domain characteristic at that position are consistent, the position is the watermark location. Therefore, the fused texture characteristic is used to verify whether the distribution area of the fused frequency domain characteristic accurately reflects the watermark embedding area. Specifically, the fused texture characteristics and the fused frequency domain characteristics can be statistically analyzed and then verified against each other, so that the final watermark embedding area can be determined: the area whose characteristics pass verification is the area where the watermark is located, and the next step can then be performed.
Specifically, the frequency domain energy and texture change characteristics of the watermark embedding region can be checked according to the fused high-frequency texture characteristic enhancement map and the fused total characteristic component, and it is verified whether the two are consistent. The region where the two are consistent is the watermark embedding region.
S108, mapping the fused frequency domain characteristics back to a model texture coordinate space to obtain a watermark extraction result of the target three-dimensional model.
The fused frequency domain characteristic integrates the texture characteristic and the frequency domain characteristic, so it can accurately reflect the watermark position, and its distribution has been verified as the watermark position in the previous step. However, being a frequency domain characteristic, it cannot intuitively present the position of the watermark; it therefore needs to be mapped back into the model texture coordinate space to obtain a characteristic distribution diagram of the final watermark embedding area, which visibly presents the watermark in the target three-dimensional model. Optionally, different colors can be used for rendering according to the different characteristic values, so that the watermark in the target three-dimensional model is presented more intuitively and can be verified. A watermark extraction result of the target three-dimensional model containing detailed information can then be generated and output based on the distribution map of the final watermark embedding area.
Alternatively, in another embodiment of the present application, a specific implementation of step S108, as shown in fig. 7, includes:
S701, mapping the fused frequency domain characteristic of the final watermark embedding region back to a model texture coordinate space to obtain a distribution diagram of the final watermark embedding region.
S702, acquiring position information of each characteristic point in a distribution diagram of a final watermark embedding area.
Wherein the location information includes texture coordinates and physical coordinates.
S703, calculating the confidence coefficient of each feature point according to the total feature component and the maximum feature component of each feature point in the distribution diagram of the final watermark embedding area.
Wherein, the maximum characteristic component refers to the maximum value among the total characteristic components of all feature points. The confidence of each feature point can be expressed as:

Conf(u_i) = T_c(u_i) / max(T_c)

Wherein, T_c(u_i) is the total characteristic component of the feature point, and max(T_c) is the maximum characteristic component.
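The normalization in step S703 is a one-liner; the function name `feature_confidence` and the zero-maximum guard are illustrative assumptions:

```python
import numpy as np

def feature_confidence(total_components: np.ndarray) -> np.ndarray:
    """Normalize each point's total feature component by the maximum:
    Conf(u_i) = T_c(u_i) / max(T_c)."""
    m = total_components.max()
    if m == 0:
        # Degenerate case: no feature energy anywhere.
        return np.zeros_like(total_components, dtype=np.float64)
    return total_components / m
```

The point with the largest total characteristic component receives confidence 1.0, and all others are scaled proportionally.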
S704, generating a watermark extraction report and outputting the watermark extraction report by utilizing the position information, the confidence coefficient and the distribution map of the final watermark embedding area of each feature point.
The embodiment of the application provides a watermark extraction method of a three-dimensional model, which is used for acquiring a plurality of target images of a target three-dimensional model shot by multiple visual angles, so that the textures of the three-dimensional model can be accurately reflected through the images of the multiple visual angles, and more comprehensive and accurate characteristics can be provided. And the characteristics of each image can be verified, and the accuracy of the characteristics is ensured, so that the accuracy of watermark extraction is improved. And then, respectively aiming at each target image, screening out a potential watermark embedding region of the target image according to the gray gradient change characteristics of points of the target image in a model texture coordinate space, and carrying out multi-scale high-frequency texture characteristic analysis on the potential watermark embedding region to obtain the high-frequency texture characteristic of the potential watermark embedding region, so that the accurate texture characteristic is extracted through multi-scale analysis. Then, the frequency domain transformation is carried out on the potential watermark embedding region, and the high-frequency domain characteristic of the potential watermark embedding region is analyzed based on the frequency domain data and the high-frequency texture characteristic of the potential watermark embedding region, so that the texture characteristic and the frequency domain characteristic are combined and analyzed, and the watermark can be more accurately positioned. And respectively fusing the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding area of each target image to obtain fused texture characteristics and fused frequency domain characteristics, so that the characteristics of each image can be comprehensively considered. 
And then, determining a final watermark embedding area by carrying out statistical analysis on the fusion texture characteristics and the fusion frequency domain characteristics, so that the accuracy of the analyzed characteristics can be verified, and an accurate final watermark embedding area is obtained. Finally, the fusion frequency domain characteristic is mapped back to the model texture coordinate space to obtain the watermark extraction result of the target three-dimensional model, thereby realizing a method capable of accurately extracting the watermark of the three-dimensional model.
Another embodiment of the present application provides a watermark extraction apparatus for a three-dimensional model, as shown in fig. 8, including:
an image acquisition unit 801, configured to acquire a plurality of target images of a target three-dimensional model photographed at multiple angles.
The primary screening unit 802 is configured to screen, for each target image, a potential watermark embedding area of the target image according to a gray gradient change characteristic of a point of the target image in a texture coordinate space of the model.
The texture characteristic analysis unit 803 is configured to perform multi-scale high-frequency texture characteristic analysis on the potential watermark embedding region, so as to obtain the high-frequency texture characteristic of the potential watermark embedding region.
A frequency domain transforming unit 804, configured to perform frequency domain transformation on the latent watermark embedding region.
The frequency domain characteristic analysis unit 805 is configured to analyze the high frequency domain characteristic of the potential watermark embedding region based on the frequency domain data and the high frequency texture characteristic of the potential watermark embedding region.
The image characteristic fusion unit 806 is configured to fuse the high-frequency texture characteristic and the high-frequency domain characteristic of each potential watermark embedding region of each target image, so as to obtain a fused texture characteristic and a fused frequency domain characteristic.
And a characteristic verification unit 807 for determining a final watermark embedding region by performing statistical analysis on the fused texture characteristic and the fused frequency domain characteristic.
And the result generating unit 808 is used for mapping the fused frequency domain characteristic back to the model texture coordinate space to obtain the watermark extraction result of the target three-dimensional model.
Optionally, in the watermark extraction apparatus for a three-dimensional model provided in another embodiment of the present application, the watermark extraction apparatus further includes:
And the enhancement unit is used for carrying out contrast enhancement on each piece of target image data by utilizing a histogram equalization technology respectively to obtain each piece of enhanced target image.
And the dynamic adjustment unit is used for carrying out gamma correction processing on each enhanced target image so as to adjust the gray dynamic range of the target image.
And the correction unit is used for correcting the geometric distortion of each adjusted target image through homography transformation.
And the filtering unit is used for filtering noise of each corrected target image by adopting an adaptive median filtering algorithm.
Optionally, in the watermark extraction apparatus for a three-dimensional model provided in another embodiment of the present application, the watermark extraction apparatus further includes:
and the characteristic point extraction unit is used for extracting characteristic points in the target images for each target image respectively.
And a description information calculation unit for calculating the characteristic description information of each feature point. The characteristic description information of the feature points is information of local textures of the feature points.
The mapping model establishing unit is used for establishing a mapping model of each feature point and a model texture coordinate space based on texture mapping parameters of the target three-dimensional model and camera calibration parameters when shooting a target image.
And the optimizing unit is used for optimizing the mapping model by minimizing the total error function.
And the consistency degree calculation unit is used for calculating the local consistency degree of each characteristic point in the optimized mapping model.
And the culling unit is used for removing each characteristic point whose local consistency degree does not meet the preset condition.
Optionally, in the watermark extraction apparatus for a three-dimensional model provided in another embodiment of the present application, the preliminary screening unit includes:
And the region determining unit is used for determining the target texture region of the target image in the model texture coordinate space for each target image.
And the gradient characteristic extraction unit is used for extracting the gray gradient change characteristic of each pixel point in the target texture area of the target image.
And the characteristic screening unit is used for screening out the potential watermark embedding region of the target image through a region growing criterion based on the gray gradient change characteristics of each pixel point in the target texture region of the target image.
Optionally, in the watermark extraction apparatus for a three-dimensional model provided in another embodiment of the present application, the texture characteristic analysis unit includes:
and the multi-scale analysis unit is used for carrying out multi-scale characteristic analysis on the potential watermark embedding region to obtain characteristics of multiple scales.
And the texture characteristic calculation unit is used for calculating the difference value of the characteristics of the adjacent two layers to obtain the initial high-frequency texture characteristics of the multiple layers.
And the texture characteristic combining unit is used for combining the gray gradient change characteristic of the potential watermark embedding region with the initial texture high-frequency information of each layer to obtain an optimized high-frequency texture characteristic enhancement chart.
Optionally, in the watermark extraction apparatus for a three-dimensional model provided in another embodiment of the present application, the watermark extraction apparatus further includes:
And the region verification unit is used for carrying out boundary extraction and confirmation on the high-frequency texture characteristic enhancement map and carrying out region consistency verification.
Optionally, in the watermark extraction apparatus for a three-dimensional model provided in another embodiment of the present application, the frequency domain characteristic analysis unit includes:
And the frequency domain data extraction unit is used for extracting high-frequency domain data from the frequency domain data of the potential watermark embedding area.
And the frequency domain energy calculating unit is used for weighting the high-frequency domain data by utilizing the high-frequency texture characteristic enhancement graph to obtain the high-frequency energy of the potential watermark embedding region.
And the decomposition unit is used for performing discrete wavelet transformation on the frequency domain data of the potential watermark embedding region and decomposing the frequency domain data of the potential watermark embedding region into multi-level frequency domain components. Wherein the frequency domain components include a low frequency component, a horizontal high frequency component, a vertical high frequency component, and a diagonal high frequency component.
And the energy ratio calculating unit is used for calculating multi-level energy ratios by utilizing the high-frequency texture characteristic enhancement map and the frequency domain components of each level.
And the total characteristic component calculation unit is used for combining the high-frequency texture characteristic enhancement map, the high-frequency energy and the energy ratio of each level to obtain a total characteristic component.
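The ratio and combination steps can be sketched as follows. The enhancement map is assumed to be resampled to the sub-band size before weighting, and the combining formula in `total_feature_component` is one assumed form; the patent only states that the three quantities are "combined":

```python
import numpy as np

def energy_ratios(bands, weight):
    """Share of weighted energy carried by each DWT sub-band.
    `weight` is the enhancement map, assumed already resampled to band size."""
    energies = [float((weight * b ** 2).sum()) for b in bands]
    total = sum(energies)
    return [e / total if total else 0.0 for e in energies]

def total_feature_component(enh_map, high_energy, ratios):
    """Assumed combination: high-frequency energy scaled by the mean
    enhancement response and boosted by the concentration of the ratios."""
    return float(np.mean(enh_map)) * high_energy * (1.0 + sum(r * r for r in ratios))
```

The ratios sum to one by construction, so the concentration term rewards regions whose energy clusters in few bands, which is one plausible watermark signature.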
Optionally, in the watermark extraction apparatus for a three-dimensional model provided in another embodiment of the present application, the image characteristic fusion unit includes:
And the image characteristic fusion subunit is used for respectively fusing the high-frequency texture characteristics and the high-frequency domain characteristics of the potential watermark embedding areas of each target image according to the weights corresponding to the target attributes of each target image, to obtain fused texture characteristics and fused frequency domain characteristics. The target attributes at least comprise a shooting angle, a resolution, and a confidence parameter.
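This per-image fusion can be sketched as a normalized weighted average. The weight formula below (product of an angle score, resolution, and confidence) is an assumption; the patent only says the weights correspond to the target attributes:

```python
def fuse_features(feature_maps, attrs):
    """Normalized weighted average of per-image feature vectors.
    `attrs` holds one dict per image with the hypothetical keys
    'angle_score', 'resolution', and 'confidence'."""
    weights = [a["angle_score"] * a["resolution"] * a["confidence"] for a in attrs]
    total = sum(weights)
    fused = [0.0] * len(feature_maps[0])
    for fmap, w in zip(feature_maps, weights):
        for i, v in enumerate(fmap):
            fused[i] += v * w / total
    return fused
```

The same routine is applied once to the texture characteristics and once to the frequency-domain characteristics, yielding the two fused outputs named above.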
Optionally, in the watermark extraction apparatus of a three-dimensional model provided in another embodiment of the present application, the result generation unit includes:
And the reverse mapping unit is used for mapping the fusion frequency domain characteristic of the final watermark embedding region back to the model texture coordinate space to obtain a distribution diagram of the final watermark embedding region.
And the coordinate acquisition unit is used for acquiring the position information of each characteristic point in the distribution diagram of the final watermark embedding area.
Wherein the location information includes texture coordinates and physical coordinates.
And the confidence calculating unit is used for calculating the confidence of each characteristic point according to the total characteristic component and the maximum characteristic component of each characteristic point in the distribution diagram of the final watermark embedding area.
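The stated relation between a point's total characteristic component and the maximum component suggests a peak-normalized confidence. The normalization below is an assumed reading of that relation:

```python
def point_confidences(total_components):
    """Confidence of each feature point: its total characteristic component
    divided by the maximum component over the distribution map
    (assumed normalization; yields values in [0, 1])."""
    peak = max(total_components)
    return [c / peak if peak else 0.0 for c in total_components]
```

The strongest feature point thus receives confidence 1.0, and every other point is scored relative to it.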
And the report generating unit is used for generating and outputting a watermark extraction report by utilizing the position information, the confidence coefficient and the distribution map of the final watermark embedding area of each characteristic point.
It should be noted that, for the specific working process of each unit provided in the above embodiment of the present application, reference may be made correspondingly to the implementation process of the corresponding step in the above method embodiment, which is not repeated herein.
Those of skill in the art would further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative units and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (8)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510272854.3A CN119784568B (en) | 2025-03-10 | 2025-03-10 | Watermark extraction method and device of three-dimensional model |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119784568A CN119784568A (en) | 2025-04-08 |
| CN119784568B true CN119784568B (en) | 2025-06-10 |
Family
ID=95235770
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510272854.3A Active CN119784568B (en) | 2025-03-10 | 2025-03-10 | Watermark extraction method and device of three-dimensional model |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119784568B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117522666A (en) * | 2023-11-20 | 2024-02-06 | 深圳市证通电子股份有限公司 | A method and device for embedding and extracting invisible digital watermarks in images |
| CN117579836A (en) * | 2023-11-24 | 2024-02-20 | 中图科信数智技术(北京)有限公司 | Digital watermark video copyright protection system based on characteristics unchanged |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7158654B2 (en) * | 1993-11-18 | 2007-01-02 | Digimarc Corporation | Image processor and image processing method |
| JP4713691B2 (en) * | 2009-06-04 | 2011-06-29 | 国立大学法人 鹿児島大学 | Watermark information embedding device, watermark information processing system, watermark information embedding method, and program |
| CN104318505A (en) * | 2014-09-30 | 2015-01-28 | 杭州电子科技大学 | Three-dimensional mesh model blind watermarking method based on image discrete cosine transformation |
| CN110363697B (en) * | 2019-06-28 | 2023-11-24 | 北京字节跳动网络技术有限公司 | Image watermark steganography method, device, medium and electronic equipment |
| CN112801846B (en) * | 2021-02-09 | 2024-04-09 | 腾讯科技(深圳)有限公司 | Watermark embedding and extracting method and device, computer equipment and storage medium |
| CN115994849B (en) * | 2022-10-24 | 2024-01-09 | 南京航空航天大学 | Three-dimensional digital watermark embedding and extracting method based on point cloud up-sampling |
| CN119068321A (en) * | 2024-07-30 | 2024-12-03 | 辽宁师范大学 | A deep forgery detection method based on cross-domain feature fusion and separable watermark |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||