EP1034635A1 - Watermarking of digital image data - Google Patents
Info
- Publication number
- EP1034635A1 (application EP98956257A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- watermark
- image
- dct
- frame
- video
- Prior art date
- 1997-10-27
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0021—Image watermarking
- G06T1/0028—Adaptive watermarking, e.g. Human Visual System [HVS]-based watermarking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/32—Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
- H04N1/32101—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
- H04N1/32144—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
- H04N1/32149—Methods relating to embedding, encoding, decoding, detection or retrieval operations
- H04N1/32154—Transform domain methods
- H04N1/32165—Transform domain methods using cosine transforms
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/32—Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
- H04N1/32101—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
- H04N1/32144—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
- H04N1/32149—Methods relating to embedding, encoding, decoding, detection or retrieval operations
- H04N1/32154—Transform domain methods
- H04N1/32187—Transform domain methods with selective or adaptive application of the additional information, e.g. in selected frequency coefficients
- H04N1/32192—Transform domain methods with selective or adaptive application of the additional information, e.g. in selected frequency coefficients according to calculated or estimated visibility of the additional information in the image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/467—Embedding additional information in the video signal during the compression process characterised by the embedded information being invisible, e.g. watermarking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2201/00—General purpose image data processing
- G06T2201/005—Image watermarking
- G06T2201/0052—Embedding of the watermark in the frequency domain
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2201/00—General purpose image data processing
- G06T2201/005—Image watermarking
- G06T2201/0053—Embedding of the watermark in the coding stream, possibly without decoding; Embedding of the watermark in the compressed domain
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20052—Discrete cosine transform [DCT]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2201/00—Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
- H04N2201/32—Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
- H04N2201/3201—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
- H04N2201/3225—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to an image, a page or a document
- H04N2201/3233—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to an image, a page or a document of authentication information, e.g. digital signature, watermark
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2201/00—Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
- H04N2201/32—Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
- H04N2201/3201—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
- H04N2201/3269—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of machine readable codes or marks, e.g. bar codes or glyphs
- H04N2201/327—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of machine readable codes or marks, e.g. bar codes or glyphs which are undetectable to the naked eye, e.g. embedded codes
Abstract
Digital watermarks can serve to indicate copyright ownership of digitized video. When video images are transmitted as transformed by discrete cosine transformation (DCT) for compression, with or without motion compensation, it is advantageous to include a watermark after transformation. To this end, a DCT watermark is generated for optimal visibility based on the original image data, and the generated watermark is superposed on the transformed data.
Description
WATERMARKING OF DIGITAL IMAGE DATA
Priority is claimed based on U.S. Provisional Application No. 60/063,509, filed October 27, 1997.
Technical Field
This invention relates to providing digital image data with a watermark, and, more particularly, where the image data are video data.
Background of the Invention
A conventional watermark, on a paper document, may consist of a translucent design which is visible when the document is held to the light. Or, more generally, a watermark may be viewed under certain lighting conditions or at certain viewing angles. Such watermarks, which are difficult to forge, can be included for the sake of authentication of documents such as bank notes, checks and stock certificates, for example.
In digital video technology, watermarks are being used to betoken certain proprietary rights such as a copyright, for example. Here, the watermark is a visible or invisible pattern which is superposed on an image, and which is not readily removable without leaving evidence of tampering. Resistance to tampering is called "robustness".
One robust way of including a visible watermark in a digitized image is described by Braudaway et al., "Protecting Publically Available Images with a Visible Image Watermark", IBM Research Division, T. J. Watson Research Center, Technical Report 96A000248. A luminance level, ΔL, is selected for the strength of the watermark, and the luminance of each individual pixel of the image is modified by ΔL and a nonlinear function. For increased security, the level ΔL is randomized over all the pixels in the image.
Summary of the Invention
When images are transmitted as transformed by discrete cosine transformation (DCT) for compression, with or without motion compensation, it is advantageous to include a watermark after transformation. To this end, (i) a DCT watermark is generated for optimal visibility based on the original image data, and (ii) the generated watermark is superposed on the transformed data.
Brief Description of the Drawing
Fig. 1 is an illustration for motion-compensated discrete cosine transformation (MC-DCT). Fig. 2a is a watermark mask. Fig. 2b is an original image.
Fig. 2c is a superposition of the original image and the watermark mask.
Fig. 3 is a flow diagram of initial processing. Fig. 4 is a flow diagram of watermark superposition processing.
Fig. 5 is a flow diagram of scaling for a region.
Detailed Description
A Mask Generation Module generates a DCT watermark mask based on the original video content. A Motion Compensation Module efficiently inserts the watermark in the DCT domain and outputs a valid video bitstream at a specified bit rate. The following description applies specifically to image data in MPEG format.
MPEG video consists of groups of pictures (GOP) as described in document ISO/IEC 13818-2 Committee Draft (MPEG-2). Each GOP starts with an intra coded "I-frame", followed by a number of forward-referencing "P-frames" and bidirectionally-referencing "B-frames".
With motion compensation, when a watermark is inserted in an I-frame, the P- and B-frames in the GOP will be changed also. For such correction, the motion compensation on the watermark in an anchor or base frame must be subtracted when the watermark is added to a current frame. For such subtraction, the technique of motion compensation in the DCT domain can be used as described by S.-F. Chang et al., "Manipulation and Compositing of MC-DCT Compressed Video", IEEE Journal of Selected Areas in Communications, Special Issue on Intelligent Signal Processing, pp. 1-11, January 1995.
In a video sequence, the image content changes from frame to frame. Thus, to keep a watermark sufficiently visible throughout the video, the watermark must be adapted to the video contents. For example, when an image is complicated or "busy", i.e., when it has many high-frequency components, the watermark should be stronger. For different regions in the same video frame, the watermark should be scaled regionally, thereby enhancing the security against tampering.
(i) Mask Generation Module
In this module, as illustrated by Section (i) of Fig. 4, a watermark mask image is first generated for each GOP, or for the first P-frame after a scene cut.
This is based on the fact that video content tends to be consistent within a GOP which is usually about 15 frames or 0.5 second long. But, when there is a scene cut within a GOP, visual content will change significantly, and a new mask is used to adapt to the new visual content. Thus, the watermark mask is superposed on the I-frame, or on the first P-frame after a scene cut.
To generate the mask, as illustrated by Fig. 3, the input watermark image is first converted to a gray scale image. Only the luminance channel of each image is modified. A transparent color (background color) is chosen. The luminance of all watermark pixels having the transparent color value is set to 0. Optionally, the mask image is randomly shifted in both the x- and y-directions. A DCT is applied to obtain the DCT mask of the watermark. The luminance of the mask will be scaled adaptively according to the input image content before it is added to the input image. In the pixel domain, the following formulae have been proposed in the above-referenced report by G. W. Braudaway et al.:
w'_nm = w_nm · (y_w / 38.667) · (y_nm / y_w)^(2/3) · ΔL    for y_nm / y_w > 0.008856,
w'_nm = w_nm · (y_w / 903.3) · ΔL    for y_nm / y_w ≤ 0.008856    (1)
where w'_nm is the scaled watermark mask that will be added to the original image, w_nm is the non-transparent watermark pixel value at pixel (n,m), y_w is the scene white, y_nm is the luminance value of the input image at image coordinates (n,m), and ΔL is the scaling factor which controls the watermark strength.
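For illustration only, a minimal NumPy sketch of this pixel-domain scaling might look as follows. The function name, the default y_w = 235, and the 2/3 exponent (consistent with the E[y^(2/3)] term in Equation 2) are assumptions of this example rather than text of the patent.

```python
import numpy as np

def scale_watermark_pixels(w, y, delta_L, y_w=235.0):
    """Hypothetical sketch of Equation 1: scale the non-transparent watermark
    pixels w by the local image luminance y so the perceived change stays
    near delta_L. w and y are 2-D arrays of the same shape."""
    ratio = y / y_w
    bright = ratio > 0.008856
    w_scaled = np.empty_like(w, dtype=float)
    # Cube-root branch for normally lit pixels (first line of Equation 1).
    w_scaled[bright] = (w[bright] * (y_w / 38.667)
                        * ratio[bright] ** (2.0 / 3.0) * delta_L)
    # Linear branch for very dark pixels (second line of Equation 1).
    w_scaled[~bright] = w[~bright] * (y_w / 903.3) * delta_L
    return w_scaled
```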
In accordance with an aspect of the present invention, for scaling in the DCT domain, a stochastic approximation can be used. If y_nm and w_nm are considered as independent random variables, if y is normalized to the luminance range used in MPEG, namely from [0, 255] to [16, 235], and if y_w = 235, then, based on Equations 1, the expected values of w' are
E[w'] = 0.1607 · E[w] · E[y^(2/3)] · ΔL    for ȳ > 17.9319,
E[w'] = 0.2602 · E[w] · ΔL    for ȳ ≤ 17.9319    (2)
Assuming that y has a normal distribution with mean α and variance β², the E[y^(2/3)] term in Equation (2) can be represented as a function of α and β² (Equation 3). Thus, E[y^(2/3)] is a function of the mean and the variance of the pixel values.
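One standard approximation with exactly this property, assumed here for illustration rather than quoted from the patent, is the second-order Taylor expansion of y^(2/3) about its mean:

```latex
% Assumed candidate for Equation 3: second-order Taylor expansion of
% E[y^{2/3}] for y ~ N(alpha, beta^2); not the patent's verbatim formula.
E\!\left[y^{2/3}\right] \approx \alpha^{2/3} - \frac{\beta^{2}}{9\,\alpha^{4/3}}
```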
Equation (2) specifies a relationship between the moments of random variables w, w' and y. This relationship can be extended to the deterministic case to simplify Equation (2), resulting in a linear approximation.
For each 8 by 8 image block, the mean and variance of the block are used to approximate α and β² in Equation 3, and the mean α is used to approximate ȳ in deciding which of the formulae to use in Equation 2:
w'_ijk = 0.1607 · w_ijk · f(α_ij, β²_ij) · ΔL    for α_ij > 17.9319,
w'_ijk = 0.2602 · w_ijk · ΔL    for α_ij ≤ 17.9319    (4)
where f(α_ij, β²_ij) denotes the approximation of E[y^(2/3)] obtained by substituting α_ij and β²_ij into Equation 3, and where, for k = 0, ..., 63, w_ijk is the k-th pixel of the i,j-th 8 by 8 block in the watermark image and w'_ijk is the corresponding pixel of the scaled watermark.
Equation 4 approximates the nonlinear function according to Equation 2, by linear functions block by block. The scaled watermark strength depends on the mean and variance of the image block. For each image block, the higher the mean (i.e. the brighter), and the higher the variance (i.e. the more cluttered), the greater the required strength of the watermark for maintaining consistent visibility of the watermark. The DCT of Equation 4 can be used to obtain the DCT of the watermark mask, which can be inserted in the image in the DCT domain. The mean and variance of the input image can be derived from the DCT coefficients:
α ≈ Y_DC / 8    (5)
β² ≈ Var(y) ≈ (1/64) · Σ_k (Y_AC,k)²    (6)
where Y_DC and Y_AC,k are the DC- and AC-DCT coefficients, respectively, of the image block Y.
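A minimal sketch of this block-wise DCT-domain scaling (Equations 4 to 6) is given below. The orthonormal 8x8 DCT convention, the helper names, and the use of the Taylor approximation above in place of Equation 3 are assumptions of the example, not the patent's implementation.

```python
import numpy as np

def block_stats_from_dct(Y_dct):
    """Equations 5 and 6: estimate block mean and variance directly from an
    8x8 orthonormal DCT block (DC term = 8 * mean; Parseval for the AC energy)."""
    alpha = Y_dct[0, 0] / 8.0                                  # Equation 5
    beta2 = (np.sum(Y_dct ** 2) - Y_dct[0, 0] ** 2) / 64.0     # Equation 6
    return alpha, beta2

def scale_watermark_dct_block(W_dct, Y_dct, delta_L):
    """Sketch of Equation 4 applied in the DCT domain: the per-block scaling is
    linear, so the same factor multiplies the watermark DCT block directly."""
    alpha, beta2 = block_stats_from_dct(Y_dct)
    if alpha > 17.9319:
        # Assumed Equation-3 approximation of E[y^(2/3)] (see above).
        e_y23 = alpha ** (2.0 / 3.0) - beta2 / (9.0 * alpha ** (4.0 / 3.0))
        factor = 0.1607 * e_y23 * delta_L
    else:
        factor = 0.2602 * delta_L
    return factor * W_dct
```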
A new watermark mask is calculated for each I-frame, and for each P-frame that follows a scene cut. For I-frames, all DCT coefficients are readily accessible after minimal decoding of the MPEG sequence, i.e. inverse variable length coding, inverse run length coding and inverse quantization. For P-frames following a scene cut, most blocks are intra coded, so their DCT coefficients can be used immediately. For non-intra coded blocks, the average DC and AC energy obtained from intra coded blocks are substituted.
For further speed-up, the block-based (α_ij, β_ij) pair can be replaced by the average (α, β) over the whole image or over certain regions. In the following, a multi-region approach is described.
The input image can be separated into many rectangular regions. As illustrated by Fig. 5, for each region an average (α, β) pair is calculated, and the mask is generated accordingly. Typically, the watermark is divided into top and bottom regions. This is suitable for most outdoor views with sky in the upper half of the frame and darker scenery in the lower half, as shown in Fig. 2a, for example. Each region will have a relatively visible watermark using a different (α, β) pair.
To enhance the security of the watermark further, a randomized location shift can be applied to the watermark image before applying the DCT. This makes removal of the watermark more difficult for attackers who are in possession of the original watermark image, e.g. when a known logo is used for watermark purposes. Sub-pixel randomized location shifting will make it very difficult for the attacker to remove the watermark without leaving some error residue.
The following can be used for shifting. Two random numbers, for the x- and y-direction, respectively, are generated and normalized to lie between -1.00 and 1.00. In the spatial domain, sub-pixel shifting is effected by bi-linear interpolation, which involves only linear scaling and addition. In the DCT domain, a similar bi-linear operation can be used.
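A spatial-domain sketch of the randomized sub-pixel shift by bilinear interpolation might look as follows; the function name and the wrap-around edge handling are conveniences of the example, and a DCT-domain version would apply the same linear weights to shifted blocks.

```python
import numpy as np

def random_subpixel_shift(mask, rng=None):
    """Sketch: shift a watermark mask by random sub-pixel amounts (within one
    pixel) in x and y, using bilinear interpolation of integer-shifted copies."""
    if rng is None:
        rng = np.random.default_rng()
    dx, dy = rng.uniform(-1.0, 1.0, size=2)      # random shifts in [-1.00, 1.00]
    fx, fy = abs(dx), abs(dy)                    # fractional interpolation weights
    sx, sy = int(np.sign(dx)), int(np.sign(dy))
    shift_x = np.roll(mask, sx, axis=1)          # one-pixel shift in x
    shift_y = np.roll(mask, sy, axis=0)          # one-pixel shift in y
    shift_xy = np.roll(shift_x, sy, axis=0)
    # Bilinear combination of the four integer-shifted copies.
    return ((1 - fx) * (1 - fy) * mask
            + fx * (1 - fy) * shift_x
            + (1 - fx) * fy * shift_y
            + fx * fy * shift_xy)
```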
(ii) Motion Compensation Module
Once the DCT blocks of the watermark have been obtained, they are inserted into the DCT frames of the input video in one of three ways, as illustrated by Fig. 4, Section (ii). For I-frames, or for intra coded blocks in B- or P-frames, the DCT of the scaled watermark is added directly:
E'_ij = E_ij + W'_ij    (7)
where E'_ij is the i,j-th resulting DCT block, E_ij the original DCT block, and W'_ij the scaled watermark DCT according to Equation 6.
For blocks with a forward motion vector in a P-frame, or with a backward motion vector only in a B-frame, the watermark added in the anchor frame has to be removed when adding the current watermark. The resulting DCT error residue is:
E'_ij = E_ij - MCDCT(W'_F, V_F,ij) + W'_ij    (8)
where MCDCT is the motion compensation function in the DCT domain as described in the above-referenced paper by S.-F. Chang et al., W'_F is the watermark DCT used in the forward anchor frame, and V_F,ij is the forward motion vector, as shown in Fig. 1. For bidirectionally predicted blocks in a B-frame, both forward and backward motion compensation have to be averaged and subtracted when adding the current watermark:
E'_ij = E_ij - (MCDCT(W'_F, V_F,ij) + MCDCT(W'_B, V_B,ij))/2 + W'_ij    (9)
where V_F,ij and V_B,ij are the forward and backward motion vectors, respectively, as shown in Fig. 1. For skipped blocks, which are the 0-motion, 0-residue error blocks in B- and P-frames, no operations are necessary, as the watermark inserted in the anchor frame will be carried over.
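The three insertion cases (Equations 7, 8 and 9) reduce to a few block-level operations. The sketch below assumes a helper mcdct(wm_dct, mv) performing DCT-domain motion compensation in the manner of the Chang et al. paper; that helper and the block-type bookkeeping are placeholders for illustration, not the patent's implementation.

```python
def insert_watermark_block(E, W_cur, block_type, mcdct,
                           W_fwd=None, mv_fwd=None, W_bwd=None, mv_bwd=None):
    """Sketch of Equations 7-9: add the scaled watermark DCT block W_cur to the
    coded DCT block E, compensating for the watermark already carried by the
    anchor frame(s). All W_* and E arguments are 8x8 DCT blocks."""
    if block_type == "intra":            # Equation 7: I-frames and intra blocks
        return E + W_cur
    if block_type == "forward":          # Equation 8: forward (or backward-only) prediction
        return E - mcdct(W_fwd, mv_fwd) + W_cur
    if block_type == "bidirectional":    # Equation 9: interpolated B-frame blocks
        return E - (mcdct(W_fwd, mv_fwd) + mcdct(W_bwd, mv_bwd)) / 2.0 + W_cur
    return E                             # skipped blocks: anchor watermark carries over
```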
For control of the final bit rate, one or more of the following features can be included (a sketch of the first two is given below):
1. Quantize/inverse-quantize the DCT coefficients of the watermark so that most high-frequency coefficients become zero. The result is a coarser watermark, using fewer bits.
2. Cut off high-frequency coefficients. The effect is similar to low-pass filtering in the pixel domain. The result is a smoother watermark with more rounded edges.
3. Motion vector selection: set the motion vector of a macroblock in a P-frame to 0 when the error residue from using motion compensation with this motion vector is larger than without its use.
If the motion vector is used, the residual error is
E'_ij = E_ij - MCDCT(W'_F, V_F,ij) + W'_ij;
otherwise, setting V_F,ij = 0,
E''_ij = E_ij - MCDCT(I_F, V_F,ij) + W'_ij
where I_F is the DCT of the anchor frame. If |E''_ij| < |E'_ij|, set V_F,ij = 0.
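A sketch of the first two bit-rate controls (coarse quantization and a high-frequency cutoff of the watermark DCT block) is shown below; the quantization step and cutoff index are illustrative values, not taken from the patent.

```python
import numpy as np

def coarsen_watermark_dct(W_dct, qstep=16.0, keep_diagonals=10):
    """Sketch of bit-rate controls 1 and 2: quantize/inverse-quantize the
    watermark DCT coefficients, then zero everything above a low-frequency
    band (a crude stand-in for a zig-zag cutoff)."""
    Wq = np.round(W_dct / qstep) * qstep            # control 1: coarser watermark
    rows, cols = np.indices(Wq.shape)
    Wq[(rows + cols) >= keep_diagonals] = 0.0       # control 2: drop high frequencies
    return Wq
```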
Figs. 2a, 2b and 2c illustrate the use of the adaptive watermarking techniques. Fig. 2a shows the original watermark mask. While a binary version is shown here, the algorithm is capable of handling gray scale with any specified transparent color. Fig. 2b shows an
original image. Fig. 2c shows the new watermarked image.
The watermarking algorithm was tested on an HP J210 workstation, achieving a rate of 6 frames/second. Most of the computational effort went into the MC-DCT operations. If all possible MC-DCT blocks were precomputed, real-time performance would be possible. This would require 12 megabytes of memory for a 352x240 image size.
In accordance with an aspect of the invention, preferred watermarks offer robustness in that they are not easily defeated or removed by tampering. For example, if a watermark is inserted in MPEG video by the method described above, an attacker would need to recover the watermark mask, estimate the embedding locations by extensive sub-pixel block matching, and then estimate the (α, β) factors for each watermark region. In experiments, there always remained noticeable traces in the tampered video, which can be used to reject false claims of ownership and to deter piracy.
For robustness, a watermark should not be binary, but should have texture which is similar to that of the scene on which it is placed. This can be accomplished by arbitrarily choosing an I-frame from the scene, decoding it by inverse DCT transform to obtain pixel values, and masking out the watermark from the decoded video frame. When there is camera motion such as panning and zooming in a video sequence, an inserted watermark may be defeated by applying video mosaicing, i.e. by assembling a large image from small portions of multiple image frames. The watermark then can be filtered out as an outlier. However, this technique will fail when there are actually moving objects in the foreground, as the watermark will be embedded in the moving foreground objects as well. As a countermeasure in accordance with a further embodiment of the invention, a watermark can be used which appears static relative to over-all or background motion. Such a watermark can be obtained by estimating the camera motion using a 2-D affine model, and then translating and scaling the watermark using the estimated camera motion. The affine model is described below.
The motion vectors in MPEG are usually generated by block matching: finding a block in the reference frame such that the mean square error is minimized. Although the motion vectors do not represent the true optical flow, they are still adequate in most cases for estimating the camera parameters in sequences that do not contain large dark or uniform regions.
When the distance between the object/background and the camera is large, it is usually sufficient to use a 6-parameter affine transform to describe the global motion of the current frame, where (x, y) is the coordinate of a macroblock in the current frame.
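A standard 6-parameter form, assumed here for illustration rather than quoted from the patent, maps the macroblock coordinate (x, y) to its motion vector (v_x, v_y):

```latex
% Assumed standard 6-parameter affine motion model (not the patent's
% verbatim notation).
\begin{aligned}
v_x(x, y) &= a_1 + a_2\,x + a_3\,y,\\
v_y(x, y) &= a_4 + a_5\,x + a_6\,y.
\end{aligned}
```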
Given the motion vector for each macroblock, the global parameters are found using Least Squares (LS) estimation, that is, by finding the set of parameters a that minimizes the error S(a) between the motion vectors predicted by the affine model and the actual motion vectors obtained from the MPEG stream. To solve for a, the first derivative of S(a) is set to 0, which yields a system of linear equations in the affine parameters.
The summations in the LS estimate are over all valid macroblocks whose motion vectors survive the nonlinear noise reduction process. After the first LS estimation, motion vectors that have a large distance from the estimated ones are filtered out before a second LS estimation. The estimation process is iterated several times to refine the accuracy.
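A compact sketch of this iterative, outlier-rejecting least-squares fit might look as follows; the affine parameterization follows the assumed model above, and the rejection threshold is an illustrative choice.

```python
import numpy as np

def fit_global_affine(coords, mvs, iters=3, reject_factor=2.0):
    """Sketch: robust LS fit of 6 affine parameters to MPEG motion vectors.
    coords: (N, 2) macroblock coordinates (x, y); mvs: (N, 2) motion vectors."""
    coords, mvs = np.asarray(coords, float), np.asarray(mvs, float)
    keep = np.ones(len(coords), dtype=bool)
    X_all = np.column_stack([np.ones(len(coords)), coords[:, 0], coords[:, 1]])
    params = np.zeros(6)
    for _ in range(iters):
        X = X_all[keep]
        # v_x and v_y share the design matrix, so solve two small LS systems.
        ax, *_ = np.linalg.lstsq(X, mvs[keep, 0], rcond=None)
        ay, *_ = np.linalg.lstsq(X, mvs[keep, 1], rcond=None)
        params = np.concatenate([ax, ay])
        # Filter out motion vectors far from the current affine prediction.
        pred = np.column_stack([X_all @ ax, X_all @ ay])
        err = np.linalg.norm(mvs - pred, axis=1)
        keep = err <= reject_factor * (np.median(err) + 1e-9)
    return params
```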
Dominant motion can be estimated using clustering as follows (a sketch of the procedure is given after these steps): For each B- or P-frame, obtain the forward motion vectors.
Assign each motion vector to one of a number (e.g. 4) of pre-defined classes.
Perform one round of global affine parameter estimation.
Assign the global affine parameter to the first class and assign zero to all other classes.
Iterate a number of times, e.g. 20, or until the residual error stabilizes: assign each motion vector to the class that minimizes the Euclidean distance, and recalculate the 2-D affine parameters for each class using its member motion vectors.
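Reusing fit_global_affine from the previous sketch, the clustering procedure could be sketched as follows; the class count and iteration limit are the example values mentioned in the steps above.

```python
import numpy as np

def cluster_dominant_motion(coords, mvs, n_classes=4, iters=20):
    """Sketch: class 0 starts from the global affine estimate, the other
    classes start at zero motion; vectors are reassigned by Euclidean distance
    to each class's predicted motion, and the affine parameters are refit."""
    coords, mvs = np.asarray(coords, float), np.asarray(mvs, float)
    X = np.column_stack([np.ones(len(coords)), coords[:, 0], coords[:, 1]])
    params = [fit_global_affine(coords, mvs)] + [np.zeros(6) for _ in range(n_classes - 1)]
    labels = np.zeros(len(coords), dtype=int)
    for _ in range(iters):
        preds = [np.column_stack([X @ p[:3], X @ p[3:]]) for p in params]
        dists = np.stack([np.linalg.norm(mvs - pr, axis=1) for pr in preds])
        labels = np.argmin(dists, axis=0)
        for c in range(n_classes):
            members = labels == c
            if members.sum() >= 3:        # enough vectors to refit 6 parameters
                params[c] = fit_global_affine(coords[members], mvs[members], iters=1)
    return params, labels
```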
Claims
1. A method for including a watermark in a digital image, comprising: obtaining digital data of a transformed representation of the image; determining a transformed representation of the watermark for optimized visibility of the watermark in the image; and superposing the transformed representation of the watermark on the transformed representation of the image.
2. The method in accordance with claim 1, wherein the transformed representation of the image is a compressed representation.
3. The method in accordance with claim 1, wherein the transformed representation of the image is a discrete cosine transformed representation.
4. The method in accordance with claim 1, wherein the image is one of a sequence of video images .
5. The method in accordance with claim 3, wherein the transformed representation includes motion compensation.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US6350997P | 1997-10-27 | 1997-10-27 | |
US63509P | 1997-10-27 | ||
PCT/US1998/022790 WO1999022480A1 (en) | 1997-10-27 | 1998-10-27 | Watermarking of digital image data |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1034635A1 true EP1034635A1 (en) | 2000-09-13 |
Family
ID=22049688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP98956257A Pending EP1034635A1 (en) | 1997-10-27 | 1998-10-27 | Watermarking of digital image data |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP1034635A1 (en) |
JP (1) | JP2001522165A (en) |
KR (1) | KR20010031526A (en) |
CA (1) | CA2308402A1 (en) |
WO (1) | WO1999022480A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001061052A (en) | 1999-08-20 | 2001-03-06 | Nec Corp | Method for inserting electronic watermark data, its device and electronic watermark data detector |
KR100472072B1 (en) * | 2001-11-05 | 2005-03-08 | 한국전자통신연구원 | Apparatus and method of injecting and detecting time-domain local mean value removed watermark signal for watermarking system |
US7352374B2 (en) * | 2003-04-07 | 2008-04-01 | Clairvoyante, Inc | Image data set with embedded pre-subpixel rendered image |
GB2421136A (en) * | 2004-12-09 | 2006-06-14 | Sony Uk Ltd | Detection of code word coefficients in a watermarked image |
JP2008225904A (en) * | 2007-03-13 | 2008-09-25 | Sony Corp | Data processing system and data processing method |
KR101729032B1 (en) | 2015-08-17 | 2017-04-21 | (주)스토리허브 | Method and device of generating synthesis image file comprising additional information and computer readable program for the same |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5488664A (en) * | 1994-04-22 | 1996-01-30 | Yeda Research And Development Co., Ltd. | Method and apparatus for protecting visual information with printed cryptographic watermarks |
US5530759A (en) * | 1995-02-01 | 1996-06-25 | International Business Machines Corporation | Color correct digital watermarking of images |
US5664018A (en) * | 1996-03-12 | 1997-09-02 | Leighton; Frank Thomson | Watermarking process resilient to collusion attacks |
-
1998
- 1998-10-27 CA CA002308402A patent/CA2308402A1/en not_active Abandoned
- 1998-10-27 WO PCT/US1998/022790 patent/WO1999022480A1/en not_active Application Discontinuation
- 1998-10-27 KR KR1020007004567A patent/KR20010031526A/en not_active Withdrawn
- 1998-10-27 JP JP2000518471A patent/JP2001522165A/en active Pending
- 1998-10-27 EP EP98956257A patent/EP1034635A1/en active Pending
Non-Patent Citations (1)
Title |
---|
See references of WO9922480A1 * |
Also Published As
Publication number | Publication date |
---|---|
JP2001522165A (en) | 2001-11-13 |
KR20010031526A (en) | 2001-04-16 |
WO1999022480A1 (en) | 1999-05-06 |
CA2308402A1 (en) | 1999-05-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7154560B1 (en) | Watermarking of digital image data | |
Biswas et al. | An adaptive compressed MPEG-2 video watermarking scheme | |
Meng et al. | Embedding visible video watermarks in the compressed domain | |
Hsu et al. | DCT-based watermarking for video | |
Noorkami et al. | Compressed-domain video watermarking for H. 264 | |
Noorkami et al. | A framework for robust watermarking of H. 264-encoded video with controllable detection performance | |
US6285775B1 (en) | Watermarking scheme for image authentication | |
PL183090B1 (en) | Method of concealing data and method of dispensing concealed data | |
Hzu et al. | Digital watermarking for video | |
EP1639829A2 (en) | Optical flow estimation method | |
EP2011075B1 (en) | Digital watermarking method | |
US20040005077A1 (en) | Anti-compression techniques for visual images | |
Wang et al. | High-capacity data hiding in MPEG-2 compressed video | |
Thiemert et al. | Applying interest operators in semi-fragile video watermarking | |
EP1034635A1 (en) | Watermarking of digital image data | |
Lin et al. | An embedded watermark technique in video for copyright protection | |
Golikeri et al. | Robust digital video watermarking scheme for H. 264 advanced video coding standard | |
Lee et al. | Adaptive video watermarking using motion information | |
Kim et al. | A robust video watermarking method | |
Mohankumar et al. | VLSI architecture for compressed domain video watermarking | |
Ghosh et al. | Watermarking compressed video stream over Internet | |
Li et al. | Based on motion characteristics to calculate the adaptive embedding tolerance for imperceptible video watermarking | |
Kang et al. | Real-time video watermarking for MPEG streams | |
Lei et al. | A blind and robust watermarking scheme for H. 264 video | |
Lu et al. | Real-time frame-dependent watermarking in MPEG-2 video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20000517 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
18D | Application deemed to be withdrawn |
Effective date: 20050503 |
|
D18D | Application deemed to be withdrawn (deleted) |