
CN119784568B - Watermark extraction method and device of three-dimensional model - Google Patents



Publication number
CN119784568B
CN119784568B
Authority
CN
China
Prior art keywords
frequency
texture
frequency domain
watermark embedding
characteristic
Prior art date
Legal status: Active
Application number
CN202510272854.3A
Other languages
Chinese (zh)
Other versions
CN119784568A
Inventor
王书浩
张斌
谢亮
何建
黄泱柯
李文学
陈云中
尹锋
李笑
Current Assignee
Hunan Mango Smart Art Technology Co ltd
Original Assignee
Hunan Mango Smart Art Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hunan Mango Smart Art Technology Co ltd
Priority to CN202510272854.3A
Publication of CN119784568A
Application granted
Publication of CN119784568B
Legal status: Active


Classifications

    • Y: General tagging of new technological developments and of cross-sectional technologies spanning several sections of the IPC
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02T: Climate change mitigation technologies related to transportation
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The application discloses a watermark extraction method and device for a three-dimensional model. The method comprises: obtaining a plurality of images of the three-dimensional model shot from multiple angles; for each image, screening potential watermark embedding areas according to the gray gradient change characteristics of the image in the model texture coordinate space; performing multi-scale analysis on the potential watermark embedding areas to obtain their high-frequency texture characteristics; performing frequency domain transformation on the potential watermark embedding areas and analyzing their high-frequency domain characteristics based on the frequency domain data and the high-frequency texture characteristics; fusing the high-frequency texture characteristics and the high-frequency domain characteristics of the potential watermark embedding areas of the images; performing statistical analysis on the two fused characteristics to determine a final watermark embedding area; and mapping the fused frequency domain characteristics of the final watermark embedding area back to the model texture coordinate space to obtain the watermark extraction result.

Description

Watermark extraction method and device of three-dimensional model
Technical Field
The application relates to the technical field of watermark extraction, in particular to a watermark extraction method and device of a three-dimensional model.
Background
With the development of three-dimensional digital technology, three-dimensional models are widely applied in fields such as industrial design, cultural heritage protection and virtual reality. To protect the intellectual property of three-dimensional digital assets, digital watermarking has gradually become an effective means of protection. Digital watermarking embeds hidden information in the texture or geometric characteristics of a three-dimensional model, so that copyright information is preserved without significantly affecting the model's appearance. When verification is later required, the watermark is extracted from the three-dimensional model and then verified.
Current watermark extraction methods for three-dimensional models mainly rely on two-dimensional image processing and frequency domain analysis. Specifically, the surface of the three-dimensional model is first unfolded into a two-dimensional texture image by texture mapping. The data of the two-dimensional texture image is then transformed into the frequency domain, the high-frequency vectors are extracted, and the watermark is determined from the distribution of these high-frequency vectors.
However, because the texture mapping of a three-dimensional model is complex and nonlinear, shooting angle, texture occlusion, mapping distortion and the like easily cause loss of texture information, and a single mapped image cannot guarantee the accuracy of the model's global texture. Moreover, such methods simply analyze frequency domain characteristics without effective verification, so the accuracy of the extracted watermark cannot be guaranteed.
Disclosure of Invention
Based on the defects of the prior art, the application provides a watermark extraction method and device for a three-dimensional model, which are used for solving the problem that the prior art cannot guarantee accurate watermark extraction of the three-dimensional model.
In order to achieve the above object, the present application provides the following technical solutions:
the first aspect of the present application provides a watermark extraction method for a three-dimensional model, including:
acquiring a plurality of target images of a target three-dimensional model shot by multiple visual angles;
for each target image respectively, screening out a potential watermark embedding region of the target image according to the gray gradient change characteristics of the pixel points of the target image in a model texture coordinate space;
Carrying out multi-scale high-frequency texture characteristic analysis on the potential watermark embedding region to obtain the high-frequency texture characteristic of the potential watermark embedding region;
performing frequency domain transformation on the potential watermark embedding region, and analyzing the high-frequency domain characteristic of the potential watermark embedding region based on the frequency domain data of the potential watermark embedding region and the high-frequency texture characteristic;
Respectively fusing the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding area of each target image to obtain fused texture characteristics and fused frequency domain characteristics;
Determining a final watermark embedding area by carrying out statistical analysis on the fusion texture characteristics and the fusion frequency domain characteristics;
and mapping the fused frequency domain characteristic of the final watermark embedding region back to the model texture coordinate space to obtain a watermark extraction result of the target three-dimensional model.
Optionally, in the method for extracting a watermark from a three-dimensional model, after obtaining a plurality of target images of a target three-dimensional model photographed by multiple perspectives, the method further includes:
Respectively carrying out contrast enhancement on each piece of target image data by using a histogram equalization technology to obtain each piece of enhanced target image;
Gamma correction processing is carried out on each enhanced target image so as to adjust the gray dynamic range of the target image;
Correcting the geometric distortion of each adjusted target image through homography transformation;
And carrying out noise filtering on each corrected target image by adopting an adaptive median filtering algorithm.
Optionally, in the method for extracting a watermark of a three-dimensional model, before screening out a potential watermark embedding area of the target image according to a gray gradient change characteristic of a pixel point of the target image in a model texture coordinate space, the method further includes:
Extracting characteristic points in the target images respectively aiming at each target image;
Calculating characteristic description information of each characteristic point, wherein the characteristic description information of the characteristic points is information representing local textures of the characteristic points;
Establishing a mapping model of each characteristic point and the model texture coordinate space based on texture mapping parameters of the target three-dimensional model and camera calibration parameters when shooting the target image;
optimizing the mapping model by minimizing a total error function;
Calculating the local consistency degree of each feature point in the optimized mapping model;
and carrying out optimization processing on each characteristic point of which the local consistency degree does not accord with a preset condition.
Optionally, in the method for extracting a watermark of a three-dimensional model, the step of screening, for each target image, a potential watermark embedding area of the target image according to a gray gradient change characteristic of a pixel point of the target image in a model texture coordinate space includes:
determining a target texture region of the target image in the model texture coordinate space for each target image respectively;
Extracting gray gradient change characteristics of each pixel point in a target texture area of the target image;
And screening out a potential watermark embedding region of the target image through a region growing criterion based on the gray gradient change characteristics of each pixel point in the target texture region of the target image.
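The three screening steps above can be sketched in NumPy as gradient-driven region growing. The thresholds and the 4-neighbour growing criterion below are illustrative assumptions; the claim does not specify the exact region growing criterion or parameter values.

```python
import numpy as np
from collections import deque

def gradient_magnitude(img):
    """Central-difference gray-gradient magnitude of a 2-D image."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def grow_regions(img, seed_thresh=30.0, join_thresh=15.0):
    """Region growing on gradient magnitude: pixels whose gradient exceeds
    seed_thresh seed a region; 4-neighbours above join_thresh are absorbed.
    Returns a boolean mask of candidate (potential) watermark areas."""
    g = gradient_magnitude(img)
    mask = g >= seed_thresh
    h, w = g.shape
    seeds = deque(zip(*np.where(mask)))
    while seeds:
        y, x = seeds.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and g[ny, nx] >= join_thresh:
                mask[ny, nx] = True
                seeds.append((ny, nx))
    return mask
```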
Optionally, in the method for extracting a watermark of a three-dimensional model, the performing multi-scale high-frequency texture characteristic analysis on the potential watermark embedding region to obtain the high-frequency texture characteristic of the potential watermark embedding region includes:
performing multi-scale characteristic analysis on the potential watermark embedding region to obtain characteristics of multiple scales;
Calculating the difference value of the characteristics of each two adjacent layers to obtain the initial high-frequency texture characteristics of multiple layers;
and combining the gray gradient change characteristic of the potential watermark embedding region with the initial high-frequency texture characteristics of each layer to obtain an optimized high-frequency texture characteristic enhancement map.
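One common way to realize the multi-scale analysis and the adjacent-scale differencing described above is to blur the region at several Gaussian scales and subtract adjacent levels. The Gaussian scales and the function names below are assumptions for illustration; the claim does not mandate a particular scale space.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with reflected borders (NumPy only)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img.astype(float), radius, mode='reflect')
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, tmp)

def highfreq_layers(region, sigmas=(1.0, 2.0, 4.0)):
    """Differences of adjacent Gaussian scales give the per-layer
    initial high-frequency texture maps of the region."""
    pyramid = [gaussian_blur(region, s) for s in sigmas]
    return [pyramid[i] - pyramid[i + 1] for i in range(len(pyramid) - 1)]
```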
Optionally, in the method for extracting a watermark of a three-dimensional model, after the gray gradient change characteristic of the potential watermark embedding region is combined with the high-frequency texture information of each layer to obtain an optimized high-frequency texture characteristic enhancement map, the method further includes:
And carrying out boundary extraction and confirmation on the high-frequency texture characteristic enhancement map, and carrying out region consistency verification.
Optionally, in the method for extracting a watermark of a three-dimensional model, the analyzing the high-frequency domain characteristic of the potential watermark embedding region based on the frequency domain data of the potential watermark embedding region and the high-frequency texture characteristic includes:
extracting high-frequency domain data from the frequency domain data of the potential watermark embedding region;
weighting the high-frequency domain data by using the high-frequency texture characteristic enhancement map to obtain high-frequency energy of the potential watermark embedding region;
performing discrete wavelet transform on the frequency domain data of the potential watermark embedding region, and decomposing the frequency domain data of the potential watermark embedding region into multi-level frequency domain components, wherein the frequency domain components comprise a low-frequency component, a horizontal high-frequency component, a vertical high-frequency component and a diagonal high-frequency component;
Calculating multi-level energy ratio by utilizing the high-frequency texture characteristic enhancement map and the frequency domain components of each level;
And combining the high-frequency texture characteristic enhancement map, the high-frequency energy and the energy ratio of each level to obtain a total characteristic component.
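The single-level wavelet decomposition into a low-frequency and three high-frequency components, and the energy ratios computed from it, can be illustrated with a Haar wavelet. The patent does not name a specific wavelet, and naming conventions for the horizontal/vertical components vary, so this is a sketch rather than the claimed implementation.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns (LL, HL, LH, HH), i.e. the
    low-frequency component and three high-frequency components."""
    a = img.astype(float)
    a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]  # force even dims
    r_lo = (a[0::2] + a[1::2]) / 2   # averages over row pairs
    r_hi = (a[0::2] - a[1::2]) / 2   # differences over row pairs
    ll = (r_lo[:, 0::2] + r_lo[:, 1::2]) / 2
    hl = (r_lo[:, 0::2] - r_lo[:, 1::2]) / 2
    lh = (r_hi[:, 0::2] + r_hi[:, 1::2]) / 2
    hh = (r_hi[:, 0::2] - r_hi[:, 1::2]) / 2
    return ll, hl, lh, hh

def energy_ratios(img):
    """Share of total energy carried by each wavelet component."""
    names = ('LL', 'HL', 'LH', 'HH')
    e = {n: float(np.sum(c**2)) for n, c in zip(names, haar_dwt2(img))}
    total = sum(e.values()) or 1.0
    return {n: v / total for n, v in e.items()}
```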
Optionally, in the above method for extracting a watermark of a three-dimensional model, the fusing the high-frequency texture characteristic and the high-frequency domain characteristic of each potential watermark embedding area of each target image to obtain a fused texture characteristic and a fused frequency domain characteristic includes:
And respectively fusing the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding area of each target image according to the weight corresponding to the target attribute of each target image to obtain fused texture characteristics and fused frequency domain characteristics, wherein the target attribute at least comprises shooting angle, resolution and confidence coefficient parameters.
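A minimal sketch of the attribute-weighted fusion described above: the cosine-based angle weighting is an assumption of this sketch, as the claim only states that the weights depend on shooting angle, resolution and confidence coefficient parameters.

```python
import numpy as np

def fuse_features(feature_maps, angles_deg, resolutions, confidences):
    """Weighted average of per-image feature maps. Weights combine viewing
    angle (frontal views favoured via cos^2), resolution and confidence;
    the exact weighting in the patent is unspecified."""
    w = (np.cos(np.radians(angles_deg)) ** 2
         * np.asarray(resolutions, float)
         * np.asarray(confidences, float))
    w = w / w.sum()
    stack = np.stack([np.asarray(f, float) for f in feature_maps])
    return np.tensordot(w, stack, axes=1)
```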
Optionally, in the above method for extracting a watermark of a three-dimensional model, mapping the fused frequency domain characteristic of the final watermark embedding region back into the model texture coordinate space to obtain a watermark extraction result of the target three-dimensional model, including:
mapping the fused frequency domain characteristic of the final watermark embedding region back to the model texture coordinate space to obtain a distribution diagram of the final watermark embedding region;
acquiring position information of each characteristic point in a distribution diagram of the final watermark embedding area, wherein the position information comprises texture coordinates and physical coordinates;
Calculating the confidence coefficient of each characteristic point according to the total characteristic component and the maximum characteristic component of each characteristic point in the distribution diagram of the final watermark embedding area;
and generating a watermark extraction report and outputting the watermark extraction report by using the position information and the confidence coefficient of each characteristic point and the distribution diagram of the final watermark embedding area.
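The confidence computed from the total characteristic component and the maximum characteristic component could, for example, be a simple normalisation. The exact formula is not given in the claim, so the function below is illustrative.

```python
import numpy as np

def point_confidences(total_components):
    """Confidence of each feature point: its total characteristic component
    normalised by the maximum component over the final embedding area."""
    t = np.asarray(total_components, float)
    m = t.max()
    return t / m if m > 0 else np.zeros_like(t)
```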
A second aspect of the present application provides a watermark extraction apparatus for a three-dimensional model, including:
The image acquisition unit is used for acquiring a plurality of target images of the target three-dimensional model shot by multiple visual angles;
The primary screening unit is used for screening, for each target image respectively, a potential watermark embedding area of the target image according to gray gradient change characteristics of pixel points of the target image in a model texture coordinate space;
The texture characteristic analysis unit is used for carrying out multi-scale high-frequency texture characteristic analysis on the potential watermark embedding region to obtain the high-frequency texture characteristic of the potential watermark embedding region;
the frequency domain transformation unit is used for carrying out frequency domain transformation on the potential watermark embedding area;
A frequency domain characteristic analysis unit configured to analyze a high frequency domain characteristic of the potential watermark embedding region based on the frequency domain data of the potential watermark embedding region and the high frequency texture characteristic;
The image characteristic fusion unit is used for respectively fusing the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding area of each target image to obtain fusion texture characteristics and fusion frequency domain characteristics;
The characteristic verification unit is used for determining a final watermark embedding area through carrying out statistical analysis on the fusion texture characteristic and the fusion frequency domain characteristic;
And the result generating unit is used for mapping the fused frequency domain characteristic back to the model texture coordinate space to obtain a watermark extraction result of the target three-dimensional model.
Optionally, the watermark extraction apparatus of a three-dimensional model further includes:
the enhancement unit is used for carrying out contrast enhancement on each piece of target image data by utilizing a histogram equalization technology respectively to obtain each piece of enhanced target image;
the dynamic adjustment unit is used for carrying out gamma correction processing on each enhanced target image so as to adjust the gray dynamic range of the target image;
The correction unit is used for correcting the geometric distortion of each adjusted target image through homography transformation;
and the filtering unit is used for filtering noise of each corrected target image by adopting an adaptive median filtering algorithm.
Optionally, the watermark extraction apparatus of a three-dimensional model further includes:
the characteristic point extraction unit is used for extracting characteristic points in the target images for each target image respectively;
A description information calculation unit for calculating the characteristic description information of each feature point, wherein the characteristic description information of the feature point is information representing the local texture of the feature point;
the mapping model establishing unit is used for establishing a mapping model of each characteristic point and the model texture coordinate space based on texture mapping parameters of the target three-dimensional model and camera calibration parameters when shooting the target image;
An optimizing unit, configured to optimize the mapping model by minimizing a total error function;
the consistency degree calculation unit is used for calculating the local consistency degree of each feature point in the optimized mapping model;
and the optimizing unit is used for optimizing the characteristic points of which the local consistency degree does not meet the preset conditions.
Optionally, in the above three-dimensional model watermark extraction apparatus, the preliminary screening unit includes:
A region determining unit configured to determine, for each of the target images, a target texture region of the target image in the model texture coordinate space;
A gradient characteristic extraction unit for extracting gray gradient change characteristics of each pixel point in a target texture region of the target image;
and the characteristic screening unit is used for screening out the potential watermark embedding region of the target image through a region growing criterion based on the gray gradient change characteristic of each pixel point in the target texture region of the target image.
Optionally, in the above-mentioned watermark extraction apparatus of a three-dimensional model, the texture characteristic analysis unit includes:
the multi-scale analysis unit is used for carrying out multi-scale characteristic analysis on the potential watermark embedding region to obtain characteristics of multiple scales;
The texture characteristic calculation unit is used for calculating the difference value of the characteristics of the adjacent two layers of scales to obtain the initial high-frequency texture characteristics of multiple layers;
and the texture characteristic combination unit is used for combining the gray gradient change characteristic of the potential watermark embedding region with the initial texture high-frequency information of each layer to obtain an optimized high-frequency texture characteristic enhancement map.
Optionally, the watermark extraction apparatus of a three-dimensional model further includes:
and the region verification unit is used for carrying out boundary extraction and confirmation on the high-frequency texture characteristic enhancement map and carrying out region consistency verification.
Optionally, in the above-mentioned watermark extraction apparatus of a three-dimensional model, the frequency domain characteristic analysis unit includes:
the frequency domain data extraction unit is used for extracting high-frequency domain data from the frequency domain data of the potential watermark embedding area;
The frequency domain energy calculating unit is used for weighting the frequency domain data of the high frequency by utilizing the high frequency texture characteristic enhancement graph to obtain the high frequency energy of the potential watermark embedding region;
The decomposition unit is used for performing discrete wavelet transformation on the frequency domain data of the potential watermark embedding region and decomposing the frequency domain data of the potential watermark embedding region into multi-level frequency domain components, wherein the frequency domain components comprise a low-frequency component, a horizontal high-frequency component, a vertical high-frequency component and a diagonal high-frequency component;
the energy ratio calculating unit is used for calculating multi-level energy ratios by utilizing the high-frequency texture characteristic enhancement map and the frequency domain components of each level;
and the total characteristic component calculation unit is used for combining the high-frequency texture characteristic enhancement graph, the high-frequency energy and the energy ratio of each layer to obtain a total characteristic component.
Optionally, in the above three-dimensional model watermark extraction apparatus, the image characteristic fusion unit includes:
The image characteristic fusion subunit is used for respectively fusing the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding area of each target image according to the weight corresponding to the target attribute of each target image to obtain a fused texture characteristic and a fused frequency domain characteristic, wherein the target attribute at least comprises shooting angle, resolution and confidence coefficient parameters.
Optionally, in the above-mentioned watermark extraction apparatus of a three-dimensional model, the result generation unit includes:
the inverse mapping unit is used for mapping the fused frequency domain characteristic of the final watermark embedding region back to the model texture coordinate space to obtain a distribution diagram of the final watermark embedding region;
the coordinate acquisition unit is used for acquiring the position information of each characteristic point in the distribution diagram of the final watermark embedding area, wherein the position information comprises texture coordinates and physical coordinates;
The confidence coefficient calculating unit is used for calculating the confidence coefficient of each characteristic point according to the total characteristic component and the maximum characteristic component of each characteristic point in the distribution diagram of the final watermark embedding area;
and the report generating unit is used for generating and outputting a watermark extraction report by utilizing the position information, the confidence coefficient and the distribution diagram of the final watermark embedding area of each characteristic point.
According to the watermark extraction method for a three-dimensional model provided by the application, a plurality of target images of the target three-dimensional model shot from multiple viewing angles are obtained, so that the textures of the three-dimensional model are accurately reflected through images from multiple viewing angles and more comprehensive and accurate characteristics are provided. The characteristics of the images can also verify one another, ensuring their accuracy and thereby improving the accuracy of watermark extraction. Then, for each target image, a potential watermark embedding region is screened out according to the gray gradient change characteristics of its pixel points in the model texture coordinate space, and multi-scale high-frequency texture characteristic analysis is performed on the potential watermark embedding region to obtain its high-frequency texture characteristic, so that accurate texture characteristics are extracted through multi-scale analysis. Next, frequency domain transformation is performed on the potential watermark embedding region, and its high-frequency domain characteristic is analyzed based on the frequency domain data and the high-frequency texture characteristic, so that texture characteristic information and frequency domain characteristic information are analyzed in combination and the watermark can be located more accurately. The high-frequency texture characteristics and the high-frequency domain characteristics of the potential watermark embedding areas of the target images are then fused to obtain fused texture characteristics and fused frequency domain characteristics, so that the characteristics of every image are comprehensively considered.
A final watermark embedding area is then determined by statistical analysis of the fused texture characteristics and the fused frequency domain characteristics, so that the accuracy of the analyzed characteristics is verified and an accurate final watermark embedding area is obtained. Finally, the fused frequency domain characteristic is mapped back to the model texture coordinate space to obtain the watermark extraction result of the target three-dimensional model, realizing a method that accurately extracts the watermark of a three-dimensional model.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a watermark extraction method of a three-dimensional model according to an embodiment of the present application;
FIG. 2 is a flowchart of a target image preprocessing method according to an embodiment of the present application;
FIG. 3 is a flow chart of a method for mapping a target image into a model texture coordinate space according to an embodiment of the present application;
fig. 4 is a flowchart of a method for screening out a potential watermark embedding area of a target image according to an embodiment of the present application;
FIG. 5 is a flow chart of a method for analyzing texture characteristics according to an embodiment of the present application;
FIG. 6 is a flow chart of a method for analyzing frequency domain characteristics according to an embodiment of the present application;
Fig. 7 is a flowchart of a method for generating watermark extraction results of a target three-dimensional model according to an embodiment of the present application;
Fig. 8 is a schematic architecture diagram of a watermark extraction apparatus for a three-dimensional model according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the present application, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The embodiment of the application provides a watermark extraction method of a three-dimensional model, as shown in fig. 1, comprising the following steps:
S101, acquiring a plurality of target images of a target three-dimensional model shot by multiple visual angles.
The target three-dimensional model refers to a three-dimensional model which is required to be subjected to watermark extraction at present.
In order to accurately recover the texture of the three-dimensional model and obtain an accurate model, in the embodiment of the application the target three-dimensional model is photographed from multiple viewing angles to obtain images of the model under a plurality of viewing angles. The full texture can thus be obtained from these images, the texture of the three-dimensional model can be accurately recovered, and the accuracy of watermark extraction is improved. Moreover, the texture characteristics of the images can verify and supplement one another, further improving the accuracy of the finally extracted watermark.
Optionally, the shooting angles jointly cover the watermark embedding area. Specifically, the area can be photographed over a range of 0-180 degrees, so that key texture details are not lost to occlusion or projection distortion.
Optionally, in order to improve the quality of the target image and thus the accuracy of watermark extraction, in another embodiment of the present application, the target image may be further preprocessed after performing step S101. As shown in fig. 2, a preprocessing method for a target image provided by an embodiment of the present application includes:
s201, contrast enhancement is carried out on each piece of target image data by utilizing a histogram equalization technology, and each piece of enhanced target image is obtained.
Specifically, the contrast enhancement is performed on the target image data by using the histogram equalization technology, so that the pixel value of the enhanced image data can be obtained as follows:

I'(x, y) = 255 × (I(x, y) − I_min) / (I_max − I_min)

Wherein, I(x, y) is the gray value of the original target image, and I_min and I_max are the minimum gray value and the maximum gray value on the target image.
S202, gamma correction processing is carried out on each enhanced target image so as to adjust the gray scale dynamic range of the target image.
Specifically, the data of the target image is adjusted through a correction coefficient γ, and the pixel value of the adjusted image data is obtained as follows:

I''(x, y) = 255 × (I'(x, y) / 255)^γ

Wherein γ is the correction coefficient, which can be adjusted according to the characteristics of the image capturing apparatus: γ < 1 is used for enhancing dark areas of the image, and γ > 1 is used to correct the details of bright areas of the image.
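The two preprocessing steps above (linear contrast stretching in S201 and gamma correction in S202) can be sketched in Python with numpy. The function names and the example gamma value of 0.5 are illustrative choices, not taken from the patent.

```python
import numpy as np

def stretch_contrast(img):
    """Linearly stretch the gray range of the image to [0, 255] (step S201)."""
    i_min, i_max = img.min(), img.max()
    return (img.astype(np.float64) - i_min) / (i_max - i_min) * 255.0

def gamma_correct(img, gamma=0.5):
    """Gamma correction adjusting the gray dynamic range (step S202).
    gamma < 1 brightens dark regions; gamma > 1 compresses bright regions."""
    normalized = img / 255.0
    return 255.0 * normalized ** gamma

img = np.array([[10, 60], [120, 250]], dtype=np.uint8)
enhanced = stretch_contrast(img)       # full [0, 255] range after stretching
adjusted = gamma_correct(enhanced, gamma=0.5)  # dark pixels lifted
```

With γ = 0.5 every mid-gray value is raised, which matches the stated purpose of enhancing dark areas.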
S203, correcting the geometric distortion of each adjusted target image through homography transformation.
A change in the shooting angle may cause geometric distortion in the image; in particular, when the texture and shape of the target object are projected onto the image plane at different viewing angles, straight lines or planes in the image may exhibit curved or irregular forms. Therefore, the images are geometrically corrected based on homography transformation, and the distorted images are adjusted back to a uniform viewing angle, so that the texture mapping between the images and the three-dimensional model is consistent.
S204, adopting an adaptive median filtering algorithm to filter noise of each corrected target image.
Specifically, the pixel value of the target image may be filtered according to a filter window having a certain size. If the pixel value is within the gray scale range of the filter window, the pixel value is kept unchanged, and if the pixel value is beyond the gray scale range, the pixel value can be replaced by using the median of the pixel value in the window.
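The window rule described in S204 can be sketched as a simplified adaptive median filter in numpy: a pixel strictly inside the gray range of its window is kept, otherwise it is replaced by the window median. The fixed 3×3 window and function name are assumptions for illustration; a full adaptive median filter would also grow the window size.

```python
import numpy as np

def adaptive_median_filter(img, window=3):
    """Simplified adaptive median filter (step S204): keep a pixel when it
    lies strictly inside the gray range of its window, otherwise replace it
    with the window median (suppresses impulse noise)."""
    pad = window // 2
    padded = np.pad(img, pad, mode='edge')
    out = img.copy()
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            win = padded[r:r + window, c:c + window]
            w_min, w_max = win.min(), win.max()
            if not (w_min < img[r, c] < w_max):
                out[r, c] = np.median(win)
    return out
```

A single bright impulse in a flat region is removed because it equals the window maximum and is therefore replaced by the median.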
It should be noted that, in order to accurately analyze the texture characteristics of each target image and facilitate the subsequent fusion of the characteristics of each target image, after step S101 is performed, before step S102 is performed, each target image needs to be mapped into the model texture coordinate space.
Specifically, a mapping relation between each shot target image and the texture coordinates of the three-dimensional model is established according to the texture mapping parameters used when the target three-dimensional model is generated, and each target image is uniformly mapped into the model texture coordinate space by adopting a projection transformation matrix.
Optionally, in another embodiment of the present application, mapping each target image into a specific implementation in the texture coordinate space of the model, as shown in fig. 3, includes the following steps:
S301, extracting characteristic points in each target image.
Optionally, a plurality of feature points p_i = [x_i, y_i, 1]^T may be extracted from the target image for subsequent processing, where x_i and y_i are the abscissa and ordinate values of the feature point. Specifically, a feature point detection algorithm may be used to identify key feature points in the texture region of the target image.
S302, calculating characteristic description information of each characteristic point.
The characteristic description information of the feature points is information of local textures of the feature points, and the characteristic description information of the feature points can be composed of information of multiple dimensions.
S303, establishing a mapping model of each feature point and a model texture coordinate space based on texture mapping parameters of the target three-dimensional model and camera calibration parameters when shooting a target image.
Specifically, based on the texture mapping parameters of the target three-dimensional model and the camera calibration parameters when the target image is shot, a projection transformation matrix can be obtained, and then the data of the target image can be mapped to the model texture coordinate space through the projection transformation matrix, namely, a mapping model of each feature point and the model texture coordinate space is established, so that for any point the mapping relation can be expressed as:

u_i = H_enh · D(p_i)

Wherein H_enh is the projective transformation matrix, D(p_i) is the coordinates of the feature point, and u_i is the mapped texture coordinate.
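Applying a 3×3 projective matrix to homogeneous points p_i = [x_i, y_i, 1]^T can be sketched as below. The function name is illustrative, and a simple identity/scaling H stands in for the patent's H_enh, whose actual values come from the calibration and texture mapping parameters.

```python
import numpy as np

def map_to_texture_space(H, points):
    """Map image points into the model texture coordinate space with a 3x3
    projective matrix H: lift to homogeneous coordinates, multiply, then
    divide by the homogeneous component w."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # [x, y, 1]
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # perspective division
```

With H equal to the identity the points are unchanged; a diagonal scaling matrix scales the texture coordinates accordingly.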
Alternatively, the mapping may be performed using the coordinates of the distortion-corrected feature points, that is, D(p_i) is the coordinates of the feature point after distortion correction. If the distortion coefficients are k_1, k_2, p_1 and p_2, the coordinates of the feature point after distortion correction can be expressed as:

x_i' = x_c + (x_i − x_c)(1 + k_1·r² + k_2·r⁴) + 2p_1(x_i − x_c)(y_i − y_c) + p_2(r² + 2(x_i − x_c)²)
y_i' = y_c + (y_i − y_c)(1 + k_1·r² + k_2·r⁴) + p_1(r² + 2(y_i − y_c)²) + 2p_2(x_i − x_c)(y_i − y_c)

Wherein x_i and y_i are the coordinates of feature point i before correction, x_c and y_c are the camera principal point coordinates, and r² = (x_i − x_c)² + (y_i − y_c)².
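The radial (k_1, k_2) and tangential (p_1, p_2) terms form the standard Brown–Conrady distortion model; a direct evaluation of it for one feature point can be sketched as follows. The function name is an assumption, and note that the patent leaves open whether the model is applied forward or inverted, so this is only an illustrative single-pass evaluation.

```python
def correct_distortion(x, y, xc, yc, k1, k2, p1, p2):
    """Evaluate the Brown-Conrady radial/tangential distortion terms at a
    feature point (x, y), relative to the principal point (xc, yc)."""
    dx, dy = x - xc, y - yc
    r2 = dx * dx + dy * dy                       # r^2 from the formula above
    radial = 1 + k1 * r2 + k2 * r2 ** 2          # radial factor 1 + k1 r^2 + k2 r^4
    x_corr = xc + dx * radial + 2 * p1 * dx * dy + p2 * (r2 + 2 * dx * dx)
    y_corr = yc + dy * radial + p1 * (r2 + 2 * dy * dy) + 2 * p2 * dx * dy
    return x_corr, y_corr
```

With all coefficients zero the point is unchanged; a positive k_1 pushes off-center points outward, as expected of barrel-type radial distortion.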
S304, optimizing the mapping model by minimizing the total error function.
In order to ensure the quality after mapping, in the embodiment of the application, further optimization is needed. The mapping relationship can be optimized by minimizing the total error function E as follows:

E = Σ_{i=1}^{N} ‖u_i − H_enh·D(p_i)‖² + α · Σ_{i=1}^{N} ‖G(u_i) − G(H_enh·D(p_i))‖²

Wherein N is the number of feature points, α is the weight coefficient balancing the reprojection error and the geometric consistency, and G is the geometric feature function, which extracts geometric attributes such as curvature or edge direction at the corresponding texture coordinates. The optimization is carried out on the parameters of the enhanced projection transformation matrix H_enh.
S305, calculating the local consistency degree of each feature point in the optimized mapping model.
In order to verify the geometric topological consistency of the texture of the target three-dimensional model, the local consistency degree of each characteristic point is calculated:

C(p_i) = Σ_{j∈N(i)} w_ij · ‖u_i − u_j‖

Where N(i) is the neighborhood set of feature point p_i, w_ij is the weight, and u_i and u_j represent the coordinates of the i-th and j-th feature points mapped into the model texture coordinate space.
S306, removing each characteristic point whose local consistency degree does not meet the preset condition.
Specifically, each feature point whose local consistency degree does not meet the preset condition can be removed, so that noise interference of such abnormal points on the mapping accuracy is avoided, an accurate mapping result is obtained, and the texture of the target three-dimensional model is accurately reflected.
S102, respectively aiming at each target image, screening out a potential watermark embedding area of the target image according to the gray gradient change characteristics of the pixel points of the target image in the model texture coordinate space.
Because the position where the watermark is embedded causes a relatively large change in the gray value of the image, in the embodiment of the application, gray gradient change characteristics are calculated from the gray values of the pixel points of the target image mapped into the model texture coordinate space, and the regions of the target image possibly embedded with the watermark can then be screened out by an image segmentation algorithm based on these gray gradient change characteristics.
Optionally, in another embodiment of the present application, a specific implementation of step S102, as shown in fig. 4, includes the following steps:
S401, determining a target texture region of the target image in the model texture coordinate space for each target image.
In order to increase the processing speed, in the embodiment of the present application, the area most likely to contain the watermark is determined as the target texture area; or, if the watermark embedding area is fixed, that area may be directly determined as the target texture area.
S402, extracting gray gradient change characteristics of each pixel point in a target texture area of a target image.
Specifically, the gray gradient change is calculated based on the gray value of each point on the target image to obtain the gradient vector ∇I(u_i) = (∂I/∂x, ∂I/∂y), and the gradient magnitude |∇I(u_i)| = √((∂I/∂x)² + (∂I/∂y)²) is then obtained by approximate calculation with the Sobel operator, thereby obtaining the gray gradient change characteristic of each point.
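The Sobel approximation of the gradient magnitude in S402 can be sketched directly in numpy; the explicit per-pixel loop and the function name are illustrative (a production version would use a vectorized convolution).

```python
import numpy as np

def sobel_gradient(img):
    """Approximate the gray-gradient magnitude |grad I| with 3x3 Sobel
    kernels (step S402); returns the magnitude on the valid interior."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T                      # vertical kernel is the transpose
    img = img.astype(np.float64)
    rows, cols = img.shape
    mag = np.zeros((rows - 2, cols - 2))
    for r in range(rows - 2):
        for c in range(cols - 2):
            win = img[r:r + 3, c:c + 3]
            gx = (win * kx).sum()  # horizontal derivative estimate
            gy = (win * ky).sum()  # vertical derivative estimate
            mag[r, c] = np.hypot(gx, gy)
    return mag
```

On a vertical step edge the magnitude is zero in flat areas and peaks at the edge, which is exactly the behavior the region screening in S403 relies on.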
S403, based on the gray gradient change characteristics of each pixel point in the target texture region of the target image, the potential watermark embedding region of the target image is screened out through a region growing criterion.
Specifically, pixel points meeting the region growing criterion are screened out, and the region where these points are located is the potential watermark embedding region. The region growing criterion is defined as:

|I(u_i) − I_seed| < T and |∇I(u_i)| < T_g

Wherein I_seed is the gray value of the selected seed point, I(u_i) is the gray value of the i-th pixel point, and T and T_g are the gray difference threshold and the gradient magnitude threshold, respectively.
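A minimal breadth-first region-growing sketch of the criterion follows; for brevity only the gray-difference test against the seed is implemented, and the gradient test, the function name, and the default threshold are assumptions.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, t_gray=20):
    """Grow the potential watermark-embedding region from a seed pixel:
    a 4-neighbour joins when |I(u) - I_seed| < t_gray (the gradient part
    of the criterion is omitted in this sketch)."""
    rows, cols = img.shape
    i_seed = float(img[seed])
    visited = np.zeros(img.shape, dtype=bool)
    visited[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not visited[nr, nc]:
                if abs(float(img[nr, nc]) - i_seed) < t_gray:
                    visited[nr, nc] = True
                    queue.append((nr, nc))
    return visited
```

Seeding in a low-gray area grows a mask over the similar pixels and stops at the bright region, illustrating how the criterion delimits a candidate region.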
S103, carrying out multi-scale high-frequency texture characteristic analysis on the potential watermark embedding region to obtain the high-frequency texture characteristic of the potential watermark embedding region.
In the implementation of the present application, the multi-scale high-frequency texture characteristics of the potential watermark embedding region are analyzed to separate the background variation from the embedded information in the texture of the target image, thereby obtaining the high-frequency texture characteristic of the potential watermark embedding region. The regions where these characteristics are located are more accurate watermark embedding regions; that is, the potential watermark embedding region can be narrowed to a more accurate range through the texture characteristic analysis.
Alternatively, in another embodiment of the present application, a specific implementation of step S103, as shown in fig. 5, includes the following steps:
S501, multi-scale characteristic analysis is carried out on the potential watermark embedding area, and characteristics of multiple scales are obtained.
Optionally, the multi-scale characteristic analysis performed on the potential watermark embedding region can be specifically represented by a multi-scale Gaussian pyramid:

L_k(u_i) = G(u_i, σ_k) ∗ I(u_i)

Wherein ∗ denotes a convolution operation, G(u_i, σ_k) is the Gaussian kernel, and σ_k is the scale parameter of the k-th pyramid layer.
S502, calculating the difference value of the characteristics of the dimensions of every two adjacent layers to obtain the initial high-frequency texture characteristics of the multiple layers.
Specifically, by calculating the high-frequency characteristics H_k(u_i) at different scales, the high-frequency texture characteristics related to watermark embedding are initially extracted:

H_k(u_i) = L_k(u_i) − L_{k+1}(u_i)
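Steps S501 and S502 together form a difference-of-Gaussians decomposition, which can be sketched as below. The sigma values and function names are illustrative; the patent does not specify the pyramid depth or scale parameters.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian smoothing used to build one pyramid level L_k."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    # convolve rows then columns; edge padding keeps the image size
    padded = np.pad(img.astype(np.float64), radius, mode='edge')
    tmp = np.apply_along_axis(
        lambda m: np.convolve(m, kernel, mode='valid'), 1, padded)
    return np.apply_along_axis(
        lambda m: np.convolve(m, kernel, mode='valid'), 0, tmp)

def dog_high_frequency(img, sigmas=(1.0, 2.0, 4.0)):
    """High-frequency maps H_k as differences of adjacent Gaussian scales
    (steps S501-S502); one map per adjacent scale pair."""
    levels = [gaussian_blur(img, s) for s in sigmas]
    return [levels[k] - levels[k + 1] for k in range(len(levels) - 1)]
```

A constant image yields all-zero high-frequency maps, confirming that only texture variation (e.g. an embedded signal) survives the difference.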
S503, combining the gray gradient change characteristic of the potential watermark embedding region with the initial high-frequency texture characteristics of each layer to obtain an optimized high-frequency texture characteristic enhancement map.
Specifically, after the high-frequency texture characteristics are obtained preliminarily, the gray gradient change is combined with the high-frequency texture characteristics, so that the screened potential watermark embedding area is further optimized and narrowed, which can be specifically expressed as:

E(u_i) = Σ_k w_k · H_k(u_i) · |∇I(u_i)|

Wherein w_k is the weight of the high-frequency texture characteristic at each scale.
It should be noted that the optimization process only further removes regions with low confidence, i.e. regions where the potential of watermark embedding is limited, so that the remaining region becomes the final candidate watermark embedding region.
Optionally, in another embodiment of the present application, after performing step S503, the method may further include:
and carrying out boundary extraction and confirmation on the high-frequency texture characteristic enhancement map, and carrying out region consistency verification.
Specifically, boundary extraction and confirmation are carried out on the high-frequency texture characteristic enhancement graph, boundary points are detected, and then region consistency verification is carried out.
S104, performing frequency domain transformation on the potential watermark embedding area.
In order to perform watermark analysis from frequency, in the embodiment of the present application, the frequency domain transform is performed on the potential watermark embedding region, so that frequency domain data of the potential watermark embedding region can be obtained.
Alternatively, the gray signals of the potential watermark embedding region in the texture coordinate space may be converted from the spatial domain to the frequency domain by discrete cosine transformation, thereby obtaining the frequency domain data of the potential watermark embedding region.
Specifically, for each gray signal of the potential watermark embedding region in the texture coordinate space, its two-dimensional frequency domain transform, i.e., the DCT transform, is:

C(u, v) = α(u)·α(v) · Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} I(x, y) · cos[(2x+1)uπ / (2M)] · cos[(2y+1)vπ / (2N)]

Wherein C(u, v) is the frequency coefficient matrix, namely the transformed frequency domain data; α(u) = √(1/M) when u = 0 and √(2/M) otherwise, and α(v) is defined analogously with N; M and N respectively represent the numbers of rows and columns of the pixel points in the region.
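The 2-D DCT-II above can be computed directly from its definition; this numpy sketch (function name assumed) builds the cosine basis for the rows and columns and applies the orthonormal α scale factors.

```python
import numpy as np

def dct2(block):
    """2-D DCT-II of a gray block, following the C(u, v) definition with
    orthonormal scale factors alpha(u), alpha(v) on an M x N region."""
    m, n = block.shape
    x, y = np.arange(m), np.arange(n)
    # cosine basis matrices: entry [u, x] = cos((2x+1) u pi / (2M)), etc.
    cos_m = np.cos((2 * x[None, :] + 1) * np.arange(m)[:, None] * np.pi / (2 * m))
    cos_n = np.cos((2 * y[None, :] + 1) * np.arange(n)[:, None] * np.pi / (2 * n))
    coeffs = cos_m @ block @ cos_n.T
    alpha_m = np.full(m, np.sqrt(2.0 / m)); alpha_m[0] = np.sqrt(1.0 / m)
    alpha_n = np.full(n, np.sqrt(2.0 / n)); alpha_n[0] = np.sqrt(1.0 / n)
    return alpha_m[:, None] * coeffs * alpha_n[None, :]
```

For a constant block only the DC coefficient C(0, 0) is non-zero, which is why watermark energy shows up specifically in the higher-frequency coefficients.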
S105, analyzing the high-frequency domain characteristics of the potential watermark embedding region based on the frequency domain data and the high-frequency texture characteristics of the potential watermark embedding region.
Since the region where the watermark is located also exhibits high frequency in the frequency domain, it is necessary to analyze the high-frequency characteristics of the frequency domain. The high-frequency texture characteristics reflect the high-frequency nature of the texture, so the high-frequency domain characteristics of the potential watermark embedding area can be analyzed with the high-frequency texture characteristics as a reference. By combining the high-frequency texture characteristics, the analyzed high-frequency domain characteristics are also more accurate.
Alternatively, in another embodiment of the present application, a specific implementation of step S105, as shown in fig. 6, includes the following steps:
S601, extracting high-frequency domain data from the frequency domain data of the potential watermark embedding area.
Specifically, the high-frequency components of the frequency domain data can be analyzed and the high-frequency domain data screened out, so that the area where the high-frequency domain data is located is obtained and the specific position of watermark embedding can be further determined.
S602, weighting the high-frequency domain data by utilizing the high-frequency texture characteristic enhancement map to obtain the high-frequency energy of the potential watermark embedding region.
Specifically, in the embodiment of the present application, the energy in the frequency domain is used as the characteristic of the frequency domain, so that the embedding position of the watermark is reflected through the distribution condition of the energy. The high-frequency texture characteristic enhancement map also reflects the position condition of watermark embedding, so comprehensive consideration is needed, and therefore, the high-frequency energy of the potential watermark embedding region is calculated by combining the high-frequency texture characteristic enhancement map with the high-frequency domain data.
Specifically, the definition of the high-frequency energy is:

E_high = Σ_{u>u_t} Σ_{v>v_t} E(u_i) · |C(u, v)|²

Where u_t and v_t are frequency domain thresholds, and the high-frequency energy is weighted by the high-frequency texture characteristic enhancement map E(u_i).
S603, performing discrete wavelet transformation on the frequency domain data of the potential watermark embedding region, and decomposing the frequency domain data of the potential watermark embedding region into multi-level frequency domain components.
Wherein the frequency domain components include a low frequency component, a horizontal high frequency component, a vertical high frequency component, and a diagonal high frequency component.
Specifically, the frequency domain data of the watermark embedding region is decomposed into a low frequency component LL_k, a horizontal high frequency component LH_k, a vertical high frequency component HL_k, and a diagonal high frequency component HH_k, which can be specifically expressed as:

DWT(C(u, v)) = {LL_k, LH_k, HL_k, HH_k}, k = 1, …, K

Wherein K is the number of decomposition levels.
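A single level of the decomposition in S603 can be sketched with the Haar wavelet, the simplest DWT basis (the patent does not name a wavelet, so Haar is an assumption; sub-band naming follows the text above). Each side of the input must be even.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform (step S603): splits the
    region into LL (low), LH (horizontal high), HL (vertical high) and
    HH (diagonal high) sub-bands."""
    a = img.astype(np.float64)
    # horizontal pass: average / difference of adjacent columns
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2
    # vertical pass: average / difference of adjacent rows
    ll = (lo_r[0::2, :] + lo_r[1::2, :]) / 2
    lh = (lo_r[0::2, :] - lo_r[1::2, :]) / 2
    hl = (hi_r[0::2, :] + hi_r[1::2, :]) / 2
    hh = (hi_r[0::2, :] - hi_r[1::2, :]) / 2
    return ll, lh, hl, hh
```

On a constant region only LL carries energy, so any non-zero LH/HL/HH energy signals embedded high-frequency content, which is what the energy ratio in S604 measures.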
S604, calculating multi-level energy ratio by utilizing the high-frequency texture characteristic enhancement map and the frequency domain components of each level.
In order to represent the distribution of the embedded signal in the high frequency components, a multi-level energy ratio is calculated based on the high-frequency texture characteristic enhancement map, specifically:

R_k = (‖LH_k‖² + ‖HL_k‖² + ‖HH_k‖²) / ‖LL_k‖²

Wherein ‖·‖² denotes the energy (sum of squared coefficients) of the corresponding sub-band at level k, each sub-band energy being weighted by the enhancement map E(u_i).
s605, combining the high-frequency texture characteristic enhancement map, the high-frequency energy and the energy ratio of each layer to obtain a total characteristic component.
In order to comprehensively consider the characteristics of each analyzed part so that the watermark can finally be extracted, the analyzed high-frequency texture characteristic enhancement map, the high-frequency energy and the energy ratios of each level are combined to obtain a total characteristic component. Specifically, this can be expressed as:

T_c(u_i) = β_1 · E(u_i) + β_2 · E_high + β_3 · Σ_k R_k

Wherein β_1, β_2 and β_3 are weight coefficients, which may be optimized and adjusted based on the gray scale, frequency domain and high-frequency gradient characteristics of the embedded signal.
S106, respectively fusing the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding area of each target image to obtain fused texture characteristics and fused frequency domain characteristics.
After the analysis, the characteristics of each potential watermark embedding area in each target image are obtained, so that in order to comprehensively consider the characteristics of each potential watermark embedding area of each target image, the position of the final watermark is accurately analyzed, and therefore the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding area of each target image are fused respectively to obtain the fused texture characteristics and the fused frequency domain characteristics.
Optionally, for the features belonging to the same region, weighting may be performed according to the confidence level, and features of different regions may be combined, so as to obtain a fused texture characteristic and a fused frequency domain characteristic, that is, a fused texture characteristic distribution diagram and a fused frequency domain characteristic distribution diagram.
Optionally, in another embodiment of the present application, a specific implementation of step S106 includes:
and respectively fusing the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding area of each target image according to the weight corresponding to the target attribute of each target image to obtain fused texture characteristics and fused frequency domain characteristics.
The target attribute at least comprises shooting angle, resolution and confidence coefficient parameters.
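The attribute-weighted fusion of per-view characteristic maps in S106 can be sketched as a normalized weighted sum; the function name is illustrative, and in practice the weights would be derived from the shooting angle, resolution and confidence parameters named above.

```python
import numpy as np

def fuse_maps(maps, weights):
    """Weighted fusion of per-view characteristic maps (step S106):
    weights are normalized so the fused map stays in the input range."""
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()            # normalize attribute weights
    stacked = np.stack([np.asarray(m, dtype=np.float64) for m in maps])
    return np.tensordot(weights, stacked, axes=1)  # per-pixel weighted sum
```

Equal weights reduce to a plain average; a larger weight on one view pulls the fused map toward that view's characteristic values.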
S107, determining a final watermark embedding area by carrying out statistical analysis on the fusion texture characteristics and the fusion frequency domain characteristics.
It should be noted that both the fused texture characteristic and the fused frequency domain characteristic can reflect the position of the watermark. Therefore, if both characteristics indicate that a certain position is a watermark position, that is, if the distributions of the fused texture characteristic and the fused frequency domain characteristic at that position are consistent, that position is the position of the watermark. The fused texture characteristic is thus used to verify whether the distribution area of the fused frequency domain characteristic accurately reflects the watermark embedding area. Specifically, statistics can be made on the fused texture characteristic and the fused frequency domain characteristic, and verification performed according to the statistical results, so that the final watermark embedding area can be determined: the area whose characteristics pass the verification analysis is the area where the watermark is located, and the next step can then be performed.
Specifically, the frequency domain energy and texture change characteristics of the watermark embedding region can be obtained according to the fused high-frequency texture characteristic enhancement map and the fused total characteristic component, and whether the two are consistent is verified. The region where the two are consistent is the watermark embedding region.
S108, mapping the fused frequency domain characteristics back to a model texture coordinate space to obtain a watermark extraction result of the target three-dimensional model.
The texture characteristic and the frequency domain characteristic are integrated by fusing the frequency domain characteristic, so that the watermark position can be accurately reflected, and the distribution position is verified to be the watermark position through the last step. However, the fused frequency domain characteristic is a frequency domain characteristic, and the position of the watermark cannot be intuitively reflected, so that the fused frequency domain characteristic needs to be mapped back into a model texture coordinate space, so that a characteristic distribution diagram of a final watermark embedding area is obtained, and the characteristic distribution diagram obviously presents the watermark in the target three-dimensional model. Optionally, different colors can be used for rendering according to different eigenvalues, so that the watermark in the target three-dimensional model can be presented more intuitively, and verification can be performed on the watermark. It is possible to generate and output a watermark extraction result of the target three-dimensional model containing detailed information based on the distribution map of the final watermark embedding area.
Alternatively, in another embodiment of the present application, a specific implementation of step S108, as shown in fig. 7, includes:
S701, mapping the fused frequency domain characteristic of the final watermark embedding region back to the model texture coordinate space to obtain a distribution diagram of the final watermark embedding area.
S702, acquiring position information of each characteristic point in a distribution diagram of a final watermark embedding area.
Wherein the location information includes texture coordinates and physical coordinates.
S703, calculating the confidence coefficient of each feature point according to the total feature component and the maximum feature component of each feature point in the distribution diagram of the final watermark embedding area.
Wherein the maximum characteristic component refers to the maximum value among the total characteristic components of the respective feature points. The confidence of each feature point can be expressed as:

Conf(u_i) = T_c(u_i) / max(T_c)

Wherein T_c(u_i) is the total characteristic component of the feature point, and max(T_c) is the maximum characteristic component.
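The confidence ratio in S703 is a simple normalization by the maximum total characteristic component; a one-line numpy sketch (function name assumed):

```python
import numpy as np

def confidence(total_components):
    """Per-feature-point confidence Conf(u_i) = T_c(u_i) / max(T_c)."""
    total_components = np.asarray(total_components, dtype=np.float64)
    return total_components / total_components.max()
```

The feature point with the largest total characteristic component receives confidence 1, and all others are scaled proportionally into (0, 1].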
S704, generating a watermark extraction report and outputting the watermark extraction report by utilizing the position information, the confidence coefficient and the distribution map of the final watermark embedding area of each feature point.
The embodiment of the application provides a watermark extraction method of a three-dimensional model, which is used for acquiring a plurality of target images of a target three-dimensional model shot by multiple visual angles, so that the textures of the three-dimensional model can be accurately reflected through the images of the multiple visual angles, and more comprehensive and accurate characteristics can be provided. And the characteristics of each image can be verified, and the accuracy of the characteristics is ensured, so that the accuracy of watermark extraction is improved. And then, respectively aiming at each target image, screening out a potential watermark embedding region of the target image according to the gray gradient change characteristics of points of the target image in a model texture coordinate space, and carrying out multi-scale high-frequency texture characteristic analysis on the potential watermark embedding region to obtain the high-frequency texture characteristic of the potential watermark embedding region, so that the accurate texture characteristic is extracted through multi-scale analysis. Then, the frequency domain transformation is carried out on the potential watermark embedding region, and the high-frequency domain characteristic of the potential watermark embedding region is analyzed based on the frequency domain data and the high-frequency texture characteristic of the potential watermark embedding region, so that the texture characteristic and the frequency domain characteristic are combined and analyzed, and the watermark can be more accurately positioned. And respectively fusing the high-frequency texture characteristics and the high-frequency domain characteristics of each potential watermark embedding area of each target image to obtain fused texture characteristics and fused frequency domain characteristics, so that the characteristics of each image can be comprehensively considered. 
And then, determining a final watermark embedding area by carrying out statistical analysis on the fusion texture characteristics and the fusion frequency domain characteristics, so that the accuracy of the analyzed characteristics can be verified, and an accurate final watermark embedding area is obtained. Finally, the fusion frequency domain characteristic is mapped back to the model texture coordinate space to obtain the watermark extraction result of the target three-dimensional model, thereby realizing a method capable of accurately extracting the watermark of the three-dimensional model.
Another embodiment of the present application provides a watermark extraction apparatus for a three-dimensional model, as shown in fig. 8, including:
an image acquisition unit 801, configured to acquire a plurality of target images of a target three-dimensional model photographed at multiple angles.
The primary screening unit 802 is configured to screen, for each target image, a potential watermark embedding area of the target image according to a gray gradient change characteristic of a point of the target image in a texture coordinate space of the model.
The texture characteristic analysis unit 803 is configured to perform multi-scale high-frequency texture characteristic analysis on the potential watermark embedding region, so as to obtain the high-frequency texture characteristic of the potential watermark embedding region.
A frequency domain transforming unit 804, configured to perform frequency domain transformation on the latent watermark embedding region.
The frequency domain characteristic analysis unit 805 is configured to analyze the high frequency domain characteristic of the potential watermark embedding region based on the frequency domain data and the high frequency texture characteristic of the potential watermark embedding region.
The image characteristic fusion unit 806 is configured to fuse the high-frequency texture characteristic and the high-frequency domain characteristic of each potential watermark embedding region of each target image, so as to obtain a fused texture characteristic and a fused frequency domain characteristic.
And a characteristic verification unit 807 for determining a final watermark embedding region by performing statistical analysis on the fused texture characteristic and the fused frequency domain characteristic.
And the result generating unit 808 is used for mapping the fused frequency domain characteristic back to the model texture coordinate space to obtain the watermark extraction result of the target three-dimensional model.
Optionally, in the watermark extraction apparatus for a three-dimensional model provided in another embodiment of the present application, the watermark extraction apparatus further includes:
And the enhancement unit is used for carrying out contrast enhancement on each piece of target image data by utilizing a histogram equalization technology respectively to obtain each piece of enhanced target image.
And the dynamic adjustment unit is used for carrying out gamma correction processing on each enhanced target image so as to adjust the gray dynamic range of the target image.
And the correction unit is used for correcting the geometric distortion of each adjusted target image through homography transformation.
And the filtering unit is used for filtering noise of each corrected target image by adopting an adaptive median filtering algorithm.
Optionally, in the watermark extraction apparatus for a three-dimensional model provided in another embodiment of the present application, the watermark extraction apparatus further includes:
and the characteristic point extraction unit is used for extracting characteristic points in the target images for each target image respectively.
And a description information calculation unit for calculating the characteristic description information of each feature point. The characteristic description information of the feature points is information of local textures of the feature points.
The mapping model establishing unit is used for establishing a mapping model of each feature point and a model texture coordinate space based on texture mapping parameters of the target three-dimensional model and camera calibration parameters when shooting a target image.
And the optimizing unit is used for optimizing the mapping model by minimizing the total error function.
And the consistency degree calculation unit is used for calculating the local consistency degree of each characteristic point in the optimized mapping model.
And the removal unit is used for removing each characteristic point whose local consistency degree does not meet the preset condition.
Optionally, in the watermark extraction apparatus for a three-dimensional model provided in another embodiment of the present application, the preliminary screening unit includes:
And the region determining unit is used for determining the target texture region of the target image in the model texture coordinate space for each target image.
And the gradient characteristic extraction unit is used for extracting the gray gradient change characteristic of each pixel point in the target texture area of the target image.
And the characteristic screening unit is used for screening out the potential watermark embedding region of the target image through a region growing criterion based on the gray gradient change characteristics of each pixel point in the target texture region of the target image.
Optionally, in the watermark extraction apparatus for a three-dimensional model provided in another embodiment of the present application, the texture characteristic analysis unit includes:
and the multi-scale analysis unit is used for carrying out multi-scale characteristic analysis on the potential watermark embedding region to obtain characteristics of multiple scales.
And the texture characteristic calculation unit is used for calculating the difference between the characteristics of every two adjacent scale layers to obtain the initial high-frequency texture characteristics of multiple layers.
And the texture characteristic combining unit is used for combining the gray gradient change characteristic of the potential watermark embedding region with the initial high-frequency texture characteristics of each layer to obtain an optimized high-frequency texture characteristic enhancement map.
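The three texture sub-units above can be sketched as follows. The patent does not name the multi-scale operator; a separable box blur at increasing widths stands in for a Gaussian pyramid, and differencing adjacent layers then approximates a difference-of-Gaussians high-frequency response. Function names and the multiplicative combination rule are illustrative assumptions.

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur of odd width k (a stand-in for a Gaussian)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode='valid'), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode='valid'), 0, tmp)

def highfreq_layers(img, widths=(1, 3, 5, 7)):
    """Blur at several scales, then difference every two adjacent layers:
    each difference keeps the detail lost between the two scales."""
    pyramid = [box_blur(img.astype(float), k) for k in widths]
    return [pyramid[i] - pyramid[i + 1] for i in range(len(pyramid) - 1)]

def enhance(layers, grad):
    """Combine the gradient map with the per-layer high-frequency
    responses into a single enhancement map (product rule assumed)."""
    return grad * sum(np.abs(l) for l in layers)
```

With four blur widths this yields three high-frequency layers, matching the "difference of every two adjacent layers" wording.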
Optionally, in the watermark extraction apparatus for a three-dimensional model provided in another embodiment of the present application, the watermark extraction apparatus further includes:
And the region verification unit is used for carrying out boundary extraction and confirmation on the high-frequency texture characteristic enhancement map and carrying out region consistency verification.
Optionally, in the watermark extraction apparatus for a three-dimensional model provided in another embodiment of the present application, the frequency domain characteristic analysis unit includes:
And the frequency domain data extraction unit is used for extracting high-frequency domain data from the frequency domain data of the potential watermark embedding area.
And the frequency domain energy calculating unit is used for weighting the high-frequency domain data by utilizing the high-frequency texture characteristic enhancement graph to obtain the high-frequency energy of the potential watermark embedding region.
And the decomposition unit is used for performing discrete wavelet transformation on the frequency domain data of the potential watermark embedding region and decomposing the frequency domain data of the potential watermark embedding region into multi-level frequency domain components. Wherein the frequency domain components include a low frequency component, a horizontal high frequency component, a vertical high frequency component, and a diagonal high frequency component.
And the energy ratio calculating unit is used for calculating multi-level energy ratios by utilizing the high-frequency texture characteristic enhancement graph and the frequency domain components of each level.
And the total characteristic component calculation unit is used for combining the high-frequency texture characteristic enhancement graph, the high-frequency energy and the energy ratio of each layer to obtain a total characteristic component.
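The decomposition and energy-ratio units above can be illustrated with a one-level 2-D Haar transform, which splits a block into exactly the four components the text names: low-frequency (LL), horizontal high-frequency (LH), vertical high-frequency (HL) and diagonal high-frequency (HH). The sub-band labelling convention, the nearest-style downsampling of the enhancement map, and the energy formula are all assumptions; the patent specifies only "discrete wavelet transformation" and a weighted multi-level energy ratio.

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2-D Haar wavelet transform on an even-sized array.
    Returns (LL, LH, HL, HH)."""
    a, b = x[0::2, :], x[1::2, :]
    lo, hi = (a + b) / 2, (a - b) / 2            # pairwise along rows
    LL = (lo[:, 0::2] + lo[:, 1::2]) / 2
    LH = (lo[:, 0::2] - lo[:, 1::2]) / 2
    HL = (hi[:, 0::2] + hi[:, 1::2]) / 2
    HH = (hi[:, 0::2] - hi[:, 1::2]) / 2
    return LL, LH, HL, HH

def weighted_energy(comp, enh):
    """Energy of one sub-band, weighted by the texture enhancement map
    (downsampled by simple striding to match the sub-band size)."""
    w = enh[::2, ::2]
    return float(np.sum(w * comp ** 2))

def energy_ratio(components, enh):
    """Share of weighted energy carried by each sub-band."""
    e = np.array([weighted_energy(c, enh) for c in components])
    return e / e.sum()
```

Summing the enhancement map, the weighted high-frequency energy and these per-level ratios would then give one realisation of the "total characteristic component".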
Optionally, in the watermark extraction apparatus for a three-dimensional model provided in another embodiment of the present application, the image characteristic fusion unit includes:
And the image characteristic fusion subunit is used for respectively fusing the high-frequency texture characteristic and the high-frequency domain characteristic of each potential watermark embedding area of each target image according to the weight corresponding to the target attribute of each target image to obtain the fused texture characteristic and the fused frequency domain characteristic. The target attribute at least comprises shooting angle, resolution and confidence coefficient parameters.
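A minimal sketch of attribute-weighted fusion follows. The patent only states that the weights correspond to shooting angle, resolution and confidence; the product rule used to combine those attributes, and the dictionary keys, are assumptions made here for illustration.

```python
import numpy as np

def attribute_weights(attrs):
    """Turn per-image (angle quality, resolution, confidence) attributes
    into normalised fusion weights (product rule assumed)."""
    raw = np.array(
        [a['angle'] * a['resolution'] * a['confidence'] for a in attrs],
        dtype=float)
    return raw / raw.sum()

def fuse(features, attrs):
    """Weighted average of per-image feature maps for one candidate
    watermark embedding region."""
    w = attribute_weights(attrs)
    stack = np.stack([np.asarray(f, dtype=float) for f in features])
    # contract the weight vector against the image axis of the stack
    return np.tensordot(w, stack, axes=1)
```

The same routine would be applied once to the high-frequency texture maps and once to the frequency domain maps, yielding the fused texture and fused frequency domain characteristics respectively.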
Optionally, in the watermark extraction apparatus of a three-dimensional model provided in another embodiment of the present application, the result generation unit includes:
and the reverse mapping unit is used for mapping the fusion frequency domain characteristic of the final watermark embedding region back to the model texture coordinate space to obtain a distribution diagram of the final watermark embedding region.
And the coordinate acquisition unit is used for acquiring the position information of each characteristic point in the distribution diagram of the final watermark embedding area.
Wherein the location information includes texture coordinates and physical coordinates.
And the confidence calculating unit is used for calculating the confidence of each characteristic point according to the total characteristic component and the maximum characteristic component of each characteristic point in the distribution diagram of the final watermark embedding area.
And the report generating unit is used for generating and outputting a watermark extraction report by utilizing the position information, the confidence coefficient and the distribution map of the final watermark embedding area of each characteristic point.
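The confidence and report steps above can be sketched as follows. Taking confidence as the ratio of a point's total characteristic component to the region-wide maximum is one plausible reading of "according to the total characteristic component and the maximum characteristic component"; the exact formula, and the report fields, are assumptions.

```python
import numpy as np

def confidence_map(total, maximum, eps=1e-9):
    """Per-point confidence as total component over the maximum
    component, clipped to [0, 1] (formula assumed, not specified)."""
    return np.clip(np.asarray(total, dtype=float) / (maximum + eps), 0.0, 1.0)

def extraction_report(points, total):
    """Assemble a minimal watermark extraction report: texture (uv) and
    physical (xyz) coordinates plus confidence for each feature point."""
    conf = confidence_map(total, float(np.max(total)))
    return [{'uv': p['uv'], 'xyz': p['xyz'], 'confidence': float(c)}
            for p, c in zip(points, conf)]
```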
It should be noted that, for the specific working process of each unit provided in the above embodiment of the present application, reference may be made correspondingly to the implementation process of the corresponding step in the above method embodiment, which is not repeated herein.
Those of skill would further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative units and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A watermark extraction method for a three-dimensional model, comprising:
acquiring multiple target images of a target three-dimensional model captured from multiple viewing angles;
for each of the target images, screening out a potential watermark embedding region of the target image according to the grayscale gradient variation characteristics of the pixel points of the target image in the model texture coordinate space;
performing multi-scale high-frequency texture characteristic analysis on the potential watermark embedding region to obtain the high-frequency texture characteristics of the potential watermark embedding region, comprising: performing multi-scale characteristic analysis on the potential watermark embedding region to obtain characteristics at multiple scales; calculating the difference between the characteristics of every two adjacent scale layers to obtain initial high-frequency texture characteristics of multiple layers; and combining the grayscale gradient variation characteristics of the potential watermark embedding region with the initial high-frequency texture characteristics of each layer to obtain an optimized high-frequency texture characteristic enhancement map;
performing a frequency domain transform on the potential watermark embedding region, and analyzing the high-frequency frequency domain characteristics of the potential watermark embedding region based on the frequency domain data of the potential watermark embedding region and the high-frequency texture characteristics, wherein the analyzing comprises: extracting high-frequency frequency domain data from the frequency domain data of the potential watermark embedding region; weighting the high-frequency frequency domain data with the high-frequency texture characteristic enhancement map to obtain the high-frequency energy of the potential watermark embedding region; performing a discrete wavelet transform on the frequency domain data of the potential watermark embedding region to decompose it into multi-level frequency domain components, wherein the frequency domain components include a low-frequency component, a horizontal high-frequency component, a vertical high-frequency component and a diagonal high-frequency component; calculating multi-level energy ratios using the high-frequency texture characteristic enhancement map and the frequency domain components of each level; and combining the high-frequency texture characteristic enhancement map, the high-frequency energy and the energy ratios of each level to obtain a total characteristic component;
fusing the high-frequency texture characteristics and the high-frequency frequency domain characteristics of each potential watermark embedding region of each target image to obtain fused texture characteristics and fused frequency domain characteristics;
determining a final watermark embedding region by statistically analyzing the fused texture characteristics and the fused frequency domain characteristics; and
mapping the fused frequency domain characteristics of the final watermark embedding region back into the model texture coordinate space to obtain a watermark extraction result of the target three-dimensional model.

2. The method according to claim 1, wherein after acquiring the multiple target images of the target three-dimensional model captured from multiple viewing angles, the method further comprises:
performing contrast enhancement on each of the target images using histogram equalization to obtain enhanced target images;
performing gamma correction on each enhanced target image to adjust the grayscale dynamic range of the target image;
correcting the geometric distortion of each adjusted target image by homography transformation; and
filtering noise from each corrected target image using an adaptive median filtering algorithm.

3. The method according to claim 1, wherein before screening out the potential watermark embedding region of each target image according to the grayscale gradient variation characteristics of the pixel points of the target image in the model texture coordinate space, the method further comprises:
extracting feature points from each of the target images;
calculating characteristic description information of each feature point, wherein the characteristic description information of a feature point is information characterizing the local texture of the feature point;
establishing a mapping model between each feature point and the model texture coordinate space based on the texture mapping parameters of the target three-dimensional model and the camera calibration parameters used when capturing the target image;
optimizing the mapping model by minimizing a total error function;
calculating the local consistency degree of each feature point in the optimized mapping model; and
performing optimization processing on each feature point whose local consistency degree does not meet a preset condition.

4. The method according to claim 1, wherein the screening out of the potential watermark embedding region of each target image comprises:
for each target image, determining the target texture region of the target image in the model texture coordinate space;
extracting the grayscale gradient variation characteristics of each pixel point in the target texture region of the target image; and
screening out the potential watermark embedding region of the target image through a region growing criterion based on the grayscale gradient variation characteristics of each pixel point in the target texture region of the target image.

5. The method according to claim 1, wherein after combining the grayscale gradient variation characteristics of the potential watermark embedding region with the initial high-frequency texture characteristics of each layer to obtain the optimized high-frequency texture characteristic enhancement map, the method further comprises:
performing boundary extraction and confirmation on the high-frequency texture characteristic enhancement map, and performing region consistency verification.

6. The method according to claim 1, wherein the fusing of the high-frequency texture characteristics and the high-frequency frequency domain characteristics comprises:
fusing the high-frequency texture characteristics and the high-frequency frequency domain characteristics of each potential watermark embedding region of each target image according to weights corresponding to the target attributes of each target image, to obtain the fused texture characteristics and the fused frequency domain characteristics, wherein the target attributes include at least a shooting angle, a resolution and a confidence parameter.

7. The method according to claim 1, wherein the mapping of the fused frequency domain characteristics of the final watermark embedding region back into the model texture coordinate space to obtain the watermark extraction result of the target three-dimensional model comprises:
mapping the fused frequency domain characteristics of the final watermark embedding region back into the model texture coordinate space to obtain a distribution map of the final watermark embedding region;
acquiring position information of each feature point in the distribution map of the final watermark embedding region, wherein the position information includes texture coordinates and physical coordinates;
calculating the confidence of each feature point according to the total characteristic component and the maximum characteristic component of each feature point in the distribution map of the final watermark embedding region; and
generating and outputting a watermark extraction report using the position information and confidence of each feature point and the distribution map of the final watermark embedding region.

8. A watermark extraction apparatus for a three-dimensional model, comprising:
an image acquisition unit, configured to acquire multiple target images of a target three-dimensional model captured from multiple viewing angles;
a preliminary screening unit, configured to, for each target image, screen out a potential watermark embedding region of the target image according to the grayscale gradient variation characteristics of the pixel points of the target image in the model texture coordinate space;
a texture characteristic analysis unit, configured to perform multi-scale high-frequency texture characteristic analysis on the potential watermark embedding region to obtain the high-frequency texture characteristics of the potential watermark embedding region; wherein the texture characteristic analysis unit comprises a multi-scale analysis unit, a texture characteristic calculation unit and a texture characteristic combining unit; the multi-scale analysis unit is configured to perform multi-scale characteristic analysis on the potential watermark embedding region to obtain characteristics at multiple scales; the texture characteristic calculation unit is configured to calculate the difference between the characteristics of every two adjacent scale layers to obtain initial high-frequency texture characteristics of multiple layers; and the texture characteristic combining unit is configured to combine the grayscale gradient variation characteristics of the potential watermark embedding region with the initial high-frequency texture characteristics of each layer to obtain an optimized high-frequency texture characteristic enhancement map;
a frequency domain transform unit, configured to perform a frequency domain transform on the potential watermark embedding region;
a frequency domain characteristic analysis unit, configured to analyze the high-frequency frequency domain characteristics of the potential watermark embedding region based on the frequency domain data of the potential watermark embedding region and the high-frequency texture characteristics; wherein the frequency domain characteristic analysis unit comprises a frequency domain data extraction unit, a frequency domain energy calculation unit, a decomposition unit, an energy ratio calculation unit and a total characteristic component calculation unit; the frequency domain data extraction unit is configured to extract high-frequency frequency domain data from the frequency domain data of the potential watermark embedding region; the frequency domain energy calculation unit is configured to weight the high-frequency frequency domain data with the high-frequency texture characteristic enhancement map to obtain the high-frequency energy of the potential watermark embedding region; the decomposition unit is configured to perform a discrete wavelet transform on the frequency domain data of the potential watermark embedding region to decompose it into multi-level frequency domain components, wherein the frequency domain components include a low-frequency component, a horizontal high-frequency component, a vertical high-frequency component and a diagonal high-frequency component; the energy ratio calculation unit is configured to calculate multi-level energy ratios using the high-frequency texture characteristic enhancement map and the frequency domain components of each level; and the total characteristic component calculation unit is configured to combine the high-frequency texture characteristic enhancement map, the high-frequency energy and the energy ratios of each level to obtain a total characteristic component;
an image characteristic fusion unit, configured to fuse the high-frequency texture characteristics and the high-frequency frequency domain characteristics of each potential watermark embedding region of each target image to obtain fused texture characteristics and fused frequency domain characteristics;
a characteristic verification unit, configured to determine a final watermark embedding region by statistically analyzing the fused texture characteristics and the fused frequency domain characteristics; and
a result generation unit, configured to map the fused frequency domain characteristics of the final watermark embedding region back into the model texture coordinate space to obtain a watermark extraction result of the target three-dimensional model.
CN202510272854.3A 2025-03-10 2025-03-10 Watermark extraction method and device of three-dimensional model Active CN119784568B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510272854.3A CN119784568B (en) 2025-03-10 2025-03-10 Watermark extraction method and device of three-dimensional model


Publications (2)

Publication Number Publication Date
CN119784568A CN119784568A (en) 2025-04-08
CN119784568B true CN119784568B (en) 2025-06-10

Family

ID=95235770


Country Status (1)

Country Link
CN (1) CN119784568B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117522666A (en) * 2023-11-20 2024-02-06 深圳市证通电子股份有限公司 A method and device for embedding and extracting invisible digital watermarks in images
CN117579836A (en) * 2023-11-24 2024-02-20 中图科信数智技术(北京)有限公司 Digital watermark video copyright protection system based on characteristics unchanged

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7158654B2 (en) * 1993-11-18 2007-01-02 Digimarc Corporation Image processor and image processing method
JP4713691B2 (en) * 2009-06-04 2011-06-29 国立大学法人 鹿児島大学 Watermark information embedding device, watermark information processing system, watermark information embedding method, and program
CN104318505A (en) * 2014-09-30 2015-01-28 杭州电子科技大学 Three-dimensional mesh model blind watermarking method based on image discrete cosine transformation
CN110363697B (en) * 2019-06-28 2023-11-24 北京字节跳动网络技术有限公司 Image watermark steganography method, device, medium and electronic equipment
CN112801846B (en) * 2021-02-09 2024-04-09 腾讯科技(深圳)有限公司 Watermark embedding and extracting method and device, computer equipment and storage medium
CN115994849B (en) * 2022-10-24 2024-01-09 南京航空航天大学 Three-dimensional digital watermark embedding and extracting method based on point cloud up-sampling
CN119068321A (en) * 2024-07-30 2024-12-03 辽宁师范大学 A deep forgery detection method based on cross-domain feature fusion and separable watermark




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant