
CN119011867A - Image processing method and related equipment - Google Patents

Image processing method and related equipment

Info

Publication number
CN119011867A
CN119011867A (application CN202310576718.4A)
Authority
CN
China
Prior art keywords
image
watermark
video
target
target element
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310576718.4A
Other languages
Chinese (zh)
Inventor
肖鑫雨
崔宗阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202310576718.4A
Publication of CN119011867A
Legal status: Pending

Landscapes

  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract


The embodiment of the present application discloses an image processing method and related equipment, the image processing method comprising: displaying a first service interface; obtaining a first image to be processed and watermark indication information configured in the first service interface; displaying a second image with a watermark, wherein the second image is obtained after watermarking a target element of the first image based on the watermark indication information, and the target element refers to any one or more elements contained in the first image. Through the embodiment of the present application, the personalized needs of watermarking can be met, and the anti-attack capability of watermarks in multimedia data can be improved.

Description

Image processing method and related equipment
Technical Field
The present application relates to the field of computer technology, and in particular, to an image processing method, an image processing apparatus, a computer device, a computer readable storage medium, and a computer program product.
Background
With the development of computer technology, multimedia data such as video and pictures spread ever more widely. Based on corresponding service requirements, a computer device can add a digital watermark to multimedia data before distributing it, thereby protecting and maintaining the data. However, multimedia data carrying a digital watermark is often subjected to geometric attacks (such as cropping, scaling, and translation), which severely affect the amount and position of the watermark information in the data and thus weaken the watermark's protective effect. At present, watermarking of multimedia data cannot well satisfy certain personalized configuration requirements, and resistance to various geometric attacks still needs improvement. Therefore, how to meet personalized watermarking requirements and improve the attack resistance of watermarks in multimedia data has become a current research hotspot.
Disclosure of Invention
The embodiment of the application provides an image processing method and related equipment, which can meet the personalized requirement of watermark addition and improve the attack resistance of watermarks in multimedia data.
In one aspect, an embodiment of the present application provides an image processing method, including:
displaying a first service interface;
Acquiring a first image to be processed and watermark indicating information configured in a first service interface;
and displaying a second image with the watermark, wherein the second image is obtained by watermarking a target element of the first image based on the watermark indicating information, and the target element is any one or more elements contained in the first image.
In one aspect, an embodiment of the present application provides an image processing apparatus, including:
The display module is used for displaying the first service interface;
The acquisition module is used for acquiring a first image to be processed and watermark indicating information configured in the first service interface;
The display module is further configured to display a second image with a watermark, where the second image is obtained after watermarking a target element of the first image based on the watermark indication information, and the target element is any one or more elements included in the first image.
Accordingly, an embodiment of the present application provides a computer device, including:
A processor adapted to execute a computer program;
A computer-readable storage medium having stored therein a computer program which, when executed by the processor, performs an image processing method of an embodiment of the present application.
Accordingly, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the image processing method of the embodiment of the present application.
Accordingly, embodiments of the present application provide a computer program product comprising a computer program or computer instructions which, when executed by a processor, implement the image processing method of embodiments of the present application.
In the embodiment of the application, a first service interface can be displayed. The first service interface provides configuration functions related to watermarking and supports configuring, on the interface, both the first image to which a watermark is to be added and the watermark indicating information required by the watermarking process, which better satisfies personalized watermarking requirements and enriches the available watermarking modes. The watermark indicating information and the first image to be processed are obtained from the first service interface; a target element of the first image is then watermarked based on the watermark indicating information to obtain a second image with a watermark, and the second image is displayed. The target element is any one or more elements included in the first image. Because the target element comprises relatively valuable image content, adding the watermark to one or more such elements allows the watermark to withstand geometric attacks well, thereby ensuring the watermark's practical value.
Drawings
FIG. 1a is a block diagram of an image processing system according to an exemplary embodiment of the present application;
FIG. 1b is an interactive schematic diagram of an image processing provided by an exemplary embodiment of the present application;
FIG. 2 is a flow chart of an image processing method according to an exemplary embodiment of the present application;
FIG. 3a is a schematic illustration of a first service interface provided by an exemplary embodiment of the present application;
FIG. 3b is a schematic diagram of a display watermark list provided by an exemplary embodiment of the application;
FIG. 3c is a schematic diagram of a display watermark component provided in accordance with an exemplary embodiment of the application;
FIG. 4a is a schematic diagram of indicating information for setting a target element according to an exemplary embodiment of the present application;
FIG. 4b is a schematic diagram of another indication of setting target elements provided by an exemplary embodiment of the present application;
FIG. 4c is a schematic diagram of a list of types of elements provided by an exemplary embodiment of the present application;
FIG. 4d is a schematic diagram of a list of specific elements provided by an exemplary embodiment of the present application;
FIG. 4e is a schematic diagram of a setup addition mode indication message according to an exemplary embodiment of the present application;
FIG. 5a is a schematic flow diagram of an object detection process according to an exemplary embodiment of the present application;
fig. 5b is a schematic flow diagram of a watermarking process according to an exemplary embodiment of the present application;
fig. 5c is a flow diagram of another watermarking process provided by an exemplary embodiment of the present application;
FIG. 6 is a flow chart of another image processing method according to an exemplary embodiment of the present application;
FIG. 7a is a schematic diagram of a second service interface provided by an exemplary embodiment of the present application;
FIG. 7b is a schematic diagram of a detected watermark combined with a reference watermark provided in accordance with an exemplary embodiment of the application;
FIG. 7c is a schematic diagram of another detected watermark combined with a reference watermark provided in accordance with an exemplary embodiment of the application;
FIG. 7d is a flow chart of a watermark detection process provided by an exemplary embodiment of the present application;
FIG. 8 is a flow chart of an image processing provided by an exemplary embodiment of the present application;
Fig. 9 is a schematic structural view of an image processing apparatus according to an exemplary embodiment of the present application;
Fig. 10 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
For ease of understanding, the terms involved in the embodiments of the present application are explained first.
1. Watermarking
A watermark is information used to mark multimedia data (e.g., images, video, or audio). A watermark can be a pattern, text, or the like. Marking with a watermark, on the one hand, prevents the corresponding multimedia data from being counterfeited and improves the authenticity and reliability of the marked data; on the other hand, it supports copyright maintenance and claims for the multimedia data, thereby protecting the copyright of digital multimedia content (such as game content, video content, or article content).
Depending on whether it is visible, a watermark is either a visible watermark or an invisible digital watermark. A visible watermark is, for example, a watermark added to a confidential file, or an anti-counterfeiting mark added to electronic resources. An invisible digital watermark is hidden in the object (or carrier) to which it is added and can only be detected by a corresponding digital watermark detection technique. Digital watermark detection is a detection algorithm designed to match the watermarking process: it judges whether a watermark exists using the original watermark (such as its text or icon content) or a detection result based on statistical principles, for example by extracting the watermark and comparing it with the original watermark.
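The extract-and-compare step described above can be sketched as follows. This is a minimal illustrative example, not the detection algorithm of this application; the function names and the bit-error-rate threshold are assumptions.

```python
def bit_error_rate(extracted: list, reference: list) -> float:
    """Fraction of mismatched bits between an extracted and a reference watermark."""
    errors = sum(e != r for e, r in zip(extracted, reference))
    return errors / len(reference)

def watermark_present(extracted: list, reference: list, threshold: float = 0.2) -> bool:
    """Declare the watermark present when the bit-error rate stays below a threshold.
    The 0.2 threshold is a hypothetical choice for illustration."""
    return bit_error_rate(extracted, reference) < threshold
```

In practice the threshold would be tuned against the expected attack strength, and a statistical test (e.g., normalized correlation) may replace the raw bit comparison.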
Digital watermarking is an information hiding technique that protects the copyright of an original carrier, or authenticates its content, by embedding secret information into the carrier (such as an image, audio, or video). The embedded secret information may be a signature, a serial number, or other specific text or image; it is embedded into the original carrier by a specific watermark embedding algorithm and neither affects the use of the carrier nor is easily perceived. When a digital watermark is added to video, the technique is a digital watermarking technique for video carriers, which may simply be called video digital watermarking. In a typical implementation, a watermark signal is generated according to a secret key (password), and the watermark is added to the original video by an embedding algorithm, finally yielding a video embedded with the watermark (i.e., a video containing the watermark). A well-embedded digital watermark gives the watermarked video high fidelity; video fidelity measures the similarity between the video before and after watermarking, i.e., how distinguishable the original video is from the watermarked video. Higher fidelity avoids affecting the viewing experience.
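The fidelity measure mentioned above can be made concrete with the standard peak signal-to-noise ratio (PSNR), a common similarity metric between original and watermarked frames. The patent does not specify which metric is used, so this is only an illustrative sketch on flattened pixel sequences.

```python
import math

def psnr(original: list, watermarked: list, max_val: int = 255) -> float:
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences.
    Higher values mean the watermarked frame is closer to the original."""
    mse = sum((a - b) ** 2 for a, b in zip(original, watermarked)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_val ** 2 / mse)
```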
The watermark mentioned in the embodiments of the present application may be a digital watermark. An added digital watermark is hidden in the multimedia data (such as an image or video) and cannot easily be discovered, so it affects neither the visual presentation of the image nor the use value of the original carrier; at the same time, it cannot easily be located or modified, i.e., it cannot be removed by simple technical means, providing more reliable protection for the multimedia data.
2. Element(s)
In the embodiment of the application, elements refer to the constituent parts of an image. An object instance in an image can be understood as an element, such as virtual game gear or a virtual game character. The target elements mentioned in the embodiments of the present application refer to one or more elements in an image; they may be instances of predefined object classes that are accurately and efficiently identified and located in an image or video by object detection techniques. For example, a target element may be a life bar appearing in a game video, which indicates the life value of a virtual game character during a match. A target element may also be a category of element in the image, for example a virtual game character included in a frame taken from a game video. If the image is a frame of a video, the target element may also be an element whose occurrence frequency in the video exceeds a preset frequency threshold, for example a virtual character A that appears in at least a preset number of frames of the video.
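The frequency criterion above, where an element qualifies as a target element when it appears in at least a preset number of frames, can be sketched as follows. The per-frame label lists would come from an object detector; the function name is hypothetical.

```python
from collections import Counter

def frequent_elements(frame_labels: list, min_frames: int) -> set:
    """Labels of elements that appear in at least `min_frames` video frames.
    `frame_labels` holds, per frame, the element labels detected in that frame."""
    counts = Counter()
    for labels in frame_labels:
        counts.update(set(labels))  # count each element at most once per frame
    return {label for label, n in counts.items() if n >= min_frames}
```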
In the embodiment of the application, watermarking the target element increases the robustness of the watermark, so that the watermark can still be detected even if the watermarked multimedia data (such as a video) is attacked, whether intentionally or not.
The application provides an image processing scheme involving an image processing system, an image processing method, and related equipment. The scheme displays a first service interface and obtains a first image to be processed and watermark indicating information configured in that interface. A second image with a watermark is then displayed; the second image is obtained by watermarking a target element of the first image based on the watermark indicating information. In the watermarking process, the watermark is added to the target element of the first image. Because the target element is valuable image content in the first image, adding the watermark to it better improves the watermark's attack resistance and thus better protects digitized multimedia data (such as images and videos). Further, since one or more previously unwatermarked elements of the first image are watermarked, the second image contains elements carrying the watermark; the watermarked area can be located based on the target element, so the position of the embedded watermark can be detected more reliably, and the watermark can still be detected quickly even after geometric attacks such as cropping. If the watermark is a digital watermark, the images before and after watermarking are visually identical, i.e., the first and second images show little apparent difference, thereby preserving the fidelity of the multimedia data.
The architecture of the image processing system according to the embodiment of the present application will be described below with reference to the accompanying drawings.
Referring to fig. 1a, an architecture diagram of an image processing system according to an exemplary embodiment of the present application is shown. As shown in fig. 1a, the image processing system comprises a database 101 and a computer device 102; the database 101 may establish a communication connection with the computer device 102 by wire or wirelessly. Wherein the computer device 102 is configured to perform an image processing procedure; database 101 is used to provide data support for image processing by computer device 102, for example, one or more of images and videos may be stored in database 101 to facilitate direct retrieval of images or videos by computer device 102 therefrom to determine a first image to be processed.
The database 101 may be a local database of the computer device 102 or a cloud database capable of establishing a connection with the computer device 102, according to the deployment location division. According to the attribute division, the database 101 may be a public database, i.e., a database opened to all computer devices; but may also be a private database, i.e., a database that is open only to specific computer devices, such as computer device 102. The database 101 includes a content database, a watermark database, an element database, and a result database according to the difference of stored contents. Wherein the content database is for storing one or more of images and videos. The watermark database is used to store predefined watermarks or components that make up the watermark that may be provided to a user for selection to quickly determine a reference watermark. The element database is used for storing the elements identified from the image. The result database is used for storing images or videos obtained after watermark adding processing and watermark detection results obtained by watermark detection processing on any image or video.
Computer device 102 includes one or both of a terminal device and a server. Terminal devices include, but are not limited to: smart phones, tablet computers, smart wearable devices, smart voice interaction devices, smart home appliances, personal computers, vehicle-mounted terminals, and the like; the application is not limited in this respect, nor in the number of terminal devices. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDNs), big data, and artificial intelligence platforms, but is not limited thereto. The application is likewise not limited in the number of servers.
The flow of the computer device 102 performing the image processing method generally includes steps ①-④ described below. ① A first service interface is displayed; this refers to any interface provided by a service program for performing watermarking, where the service may be an offline service or an online service. The watermark indicating information and the first image to be processed may be configured in the first service interface. Further, ② the first image to be processed and the watermark indicating information configured in the first service interface are obtained. The first image may be any one of the following: an image input in the first service interface; an image downloaded based on an image address input in the first service interface; any frame selected from a video downloaded based on a video address input in the first service interface; or any frame selected from a video input in the first service interface, including a frame specified in the video. The application is not limited in this regard. The watermark indicating information refers to information related to the watermarking process. It includes a reference watermark, i.e., the watermark to be used during watermarking; illustratively, the reference watermark may be a digital watermark. In one implementation, the watermark indicating information may instead include an identification of the reference watermark, such as its name or a pattern identifier, based on which the reference watermark can subsequently be acquired for the watermarking process.
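The watermark indicating information described in ② — either the reference watermark itself or an identification from which it can be fetched — could be modelled as below. This is a hypothetical data structure for illustration, not the application's actual format; names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WatermarkInfo:
    """Watermark indicating information configured in the first service interface."""
    reference_watermark: Optional[bytes] = None  # the watermark content itself
    watermark_id: Optional[str] = None           # or an identification to look it up

def resolve_watermark(info: WatermarkInfo, watermark_db: dict) -> bytes:
    """Return the reference watermark, fetching it by identification from a
    watermark database (cf. fig. 1a) when only an identification is given."""
    if info.reference_watermark is not None:
        return info.reference_watermark
    return watermark_db[info.watermark_id]
```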
③ And watermarking the target element of the first image based on the watermark indication information to obtain a second image with the watermark, wherein the target element refers to any one or more elements contained in the first image.
When the computer device 102 includes a terminal device, and the service program corresponding to the first service interface is an offline service program deployed locally on the terminal device, the computer device 102 can support an offline watermarking service. In this case, the terminal device may perform step ③ after acquiring the first image and the watermark indicating information. In one possible implementation, the terminal device first extracts an added-area image corresponding to the target element from the first image; the added-area image is an area image containing the target element, generally the image of the local region occupied by the target element in the first image. The terminal device then adds the reference watermark included in the watermark indicating information to the extracted added-area image to obtain a watermarked area image, and replaces the added-area image in the first image with the watermarked area image, finally obtaining the second image with the watermark.
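The extract-embed-replace sequence of the previous paragraph can be sketched on a grayscale frame represented as nested lists. The least-significant-bit (LSB) embedding used here is only a stand-in for the unspecified embedding algorithm; all function names are hypothetical.

```python
def extract_region(image, top, left, height, width):
    """Copy the rectangular added-area image covering the target element."""
    return [row[left:left + width] for row in image[top:top + height]]

def embed_lsb(region, bits):
    """Hide watermark bits in the least-significant bits of the region's pixels
    (an illustrative embedding, not the application's actual algorithm)."""
    flat = [p for row in region for p in row]
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & ~1) | b
    width = len(region[0])
    return [flat[i:i + width] for i in range(0, len(flat), width)]

def replace_region(image, watermarked, top, left):
    """Write the watermarked area image back into the full frame."""
    out = [row[:] for row in image]
    for r, row in enumerate(watermarked):
        out[top + r][left:left + len(row)] = row
    return out
```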
In another implementation, the computer device 102 includes a terminal device and a server, and the service program providing the watermarking service is an online service program and is deployed at the server side. Based on this, the computer device 102 supports an online watermarking service. The terminal device may send the acquired first image and watermark indicating information to the server. And then, the server carries out watermark adding processing on the target element of the first image based on the watermark indication information to obtain a second image with the watermark, and returns the second image to the terminal equipment for display.
As one implementation, the server may also perform ② and ③ described above. The terminal device may transmit watermark indicating information configured in the first service interface to the server. For the first image, then, any of the following may be used: the terminal equipment directly sends the image input in the first service interface to the server as a first image; or directly transmitting the video input in the first service interface to a server and determining any frame of image from the video by the server as a first image; or transmitting the image address input in the first service interface to the server and downloading the corresponding image as the first image by the server based on the image address; or directly transmitting the video address input in the first service interface to the server, and downloading the corresponding video by the server based on the video address, so as to select any frame of image from the video as the first image.
④ Displaying the second image with the watermark.
The computer device 102 may display the second image directly, or may output address information and download the second image for display based on that address information. In one implementation, if the first image is a frame of a video, the output address information is a video address; the video downloaded from that address contains the second image with the watermark.
In the image processing system provided by the embodiment of the application, the computer device can display a first service interface that provides configuration functions related to watermarking and supports configuring, on the interface, both the first image to which a watermark is to be added and the watermark indicating information required by the watermarking process, thereby better satisfying personalized watermarking requirements and enriching the available watermarking modes. The watermark indicating information and the first image to be processed are obtained from the first service interface; a target element of the first image is then watermarked based on the watermark indicating information to obtain a second image with a watermark, which is displayed. The target element is any one or more elements included in the first image. Because the target element comprises relatively valuable image content, adding the watermark to one or more such elements allows the watermark to withstand geometric attacks well, thereby ensuring the watermark's practical value.
The image processing system and the image processing method provided by the embodiments of the application can be applied to various sharing scenarios, including but not limited to game sharing, live-broadcast sharing, and content sharing; the application is not limited in this regard. In a game sharing scenario, a user can upload a game video and watermark its image frames, obtaining a watermarked game video. The user may then share the watermarked game video to other platforms, thereby protecting the game video and safeguarding the user's interests.
In one embodiment, based on the image processing system described above, an exemplary interaction diagram of image processing as shown in fig. 1b may also be provided. The sharing object refers to a user who shares and propagates multimedia data (such as video or images), and the watermark system supports both watermarking and watermark detection. Watermark detection may, on the one hand, be performed on a watermarked image to determine whether the watermark was added successfully; on the other hand, it may be performed on any image or video to determine whether a watermark has been added and, if so, to trace the content based on the watermark.
As shown in fig. 1b, when the sharing object has a sharing and propagation requirement, a watermark may be added to the video to secure the copyright of the video it propagates. The sharing object uploads the video to be watermarked via the first service interface; the watermark system then acquires the corresponding images from the video along with the watermark indicating information. The watermark system adds watermarks to the corresponding images based on the watermark indicating information, producing a new video containing the watermark, which the sharing object can then share and propagate. When the sharing object has a detection requirement, such as verifying whether a given video is its own, the watermark detection function provided by the watermark system can detect whether the video contains the previously added watermark, and the detection result is fed back. Taking a game video as an example, the watermark may be added when the computer device detects that the game video needs to be shared and propagated, so that the game video distributed has the watermark added to the corresponding image frames. Based on this, the watermark in the corresponding image frames can be detected at any point during the video's propagation, so that the authenticity of the game video can be confirmed from the watermark detection result and the copyright and claims of the corresponding sharing object can be maintained.
Next, an image processing method provided by an embodiment of the present application will be described.
Fig. 2 is a flowchart of an image processing method according to an exemplary embodiment of the present application. The image processing method may be performed by a computer device, such as the computer device 102 in the system shown in fig. 1a, and may comprise the following steps S201-S203.
S201, displaying a first service interface.
The first service interface refers to any interface provided by a service program for performing watermarking. The first service interface may be used to configure the first image to be processed and the watermark indicating information. Illustratively, as shown in fig. 3a, the first service interface 300 may include a file configuration item 301, an address configuration item 302, and a watermark configuration item 303; an image to be watermarked may be configured via the file configuration item 301, and a watermark may be added as needed via the watermark configuration item 303. When an uploaded image or video is received, the corresponding image address or video address may be displayed in the address configuration item 302. When auto-configuration is enabled, the watermark may be configured randomly by the system.
S202, a first image to be processed and watermark indicating information configured in a first service interface are acquired.
The computer device may obtain the first image configured in the first service interface, the first image being the image to be watermarked. Specifically, the first image may be an image uploaded in the first service interface, an image extracted from an uploaded video, or an image acquired based on a corresponding address.
In one implementation, a specific implementation of the computer device to obtain the first image to be processed configured in the first service interface may include, but is not limited to, any of the following ①-④:
① A first image input in the first service interface is acquired. The first image may be any picture. The first service interface may include an input configuration item, for example the file upload option shown in fig. 3a; by triggering the input configuration item, a local or cloud image may be uploaded into the first service interface.
② A first video imported in the first service interface is acquired, and the first image is extracted from the first video, the first image being any frame image in the first video. The imported first video may be a video stored locally on the computer device or a video stored in the cloud; the application is not limited in this regard. For example, the first service interface may include an import configuration item, such as an import button, through which the corresponding storage space (e.g., local to the computer device or in the cloud) can be connected and the corresponding video selected for import. Then, the computer device may randomly select a frame image from the first video as the first image, or may extract the corresponding frame image from the first video as the first image according to an indication of a watermarking time interval in the video; for details, reference may be made to the time-interval processing of the second video described in ④ below, which is not repeated here.
③ And acquiring an image address input in the first service interface, and downloading the first image according to the image address. Wherein the image address is an address for storing an image and can be used for addressing the image. The image address includes, but is not limited to, any of the following: the application is not limited herein as to the mailbox address for storing the image, the website link for storing the image, the storage path for the image, etc. The computer equipment can find the first image according to the image address, and then downloads the first image.
④ A video address of a second video input in the first service interface is acquired, and the first image is extracted from the second video according to the video address, the first image being any frame image in the second video. The video address is an address where the video is stored and can be used to address the second video. Video addresses include, but are not limited to, any of the following: a mailbox address storing the second video, a website link storing the second video, a storage path of the second video, etc.; the application is not limited herein. The computer device can find the second video according to the video address, download it, and then intercept any frame image from the second video as the first image. In one implementation, the computer device may also obtain an indication of a watermarking time interval in the video and extract the corresponding image from the second video as the first image based on that indication. The indication of the watermarking time interval may be preconfigured in the first service interface. For example, if the watermarking time interval configured in the first service interface is 1 second, a watermark is added every 1 second of the video: one frame image may be selected from the frame images corresponding to the 1st second of the second video as the first image; after its watermarking is completed, one frame image is selected from the frame images corresponding to the 2nd second as a new first image and watermarked, and so on, until the whole video is processed. For the input of address information in the first service interface, an address configuration item 302 may be provided as shown in fig. 3a, and the address configuration item may support input of one or more of a video address and an image address.
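The per-interval frame selection described above can be sketched in a few lines. This is an illustrative sketch assuming a constant frame rate; the function name is hypothetical and not part of the described system, and it simply picks the first frame of each interval (the text equally allows any frame within the interval).

```python
def watermark_frame_indices(total_frames, fps, interval_seconds):
    """Pick one frame index per watermarking time interval.

    Chooses the first frame of each interval as the image to watermark;
    after that frame is watermarked, the next index is the new first image.
    """
    frames_per_interval = max(1, int(fps * interval_seconds))
    return list(range(0, total_frames, frames_per_interval))

# A 10-second clip at 30 fps with a 1-second interval yields 10 candidate frames.
indices = watermark_frame_indices(total_frames=300, fps=30, interval_seconds=1)
```

In a real pipeline these indices would drive frame seeking in the downloaded second video.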
In addition, for the image or video input in the first service interface, a corresponding address can be analyzed and displayed in the address configuration item.
It will be appreciated that if the computer device includes a terminal device and a server, the content shown in ①-④ may be implemented through cooperation of the terminal device and the server: the terminal device may directly send the content information configured in the first service interface (such as an image, a video, an image address, or a video address) to the server, and the server then obtains the first image based on the received content information, for example by acquiring a video according to the video address, extracting a frame image from the video, and taking that frame image as the first image. In one possible implementation, if the content information is an image address or a video address, the content information may be written into the corresponding field of a script; the server then only needs to run the script to obtain the corresponding image or video to be watermarked according to the content information and perform the watermarking process. In another possible implementation, the server may send the content information to a watermarking service; a service program supporting the watermarking service acquires the image and the watermark and performs the watermarking process, and finally the watermarking result is fed back to the terminal device through a communication protocol.
As described in ①-④ above, the computer device can acquire the first image based on any one of an image, an image address, a video, a video address, and the like. Supporting diversified inputs in the first service interface greatly enriches the manner of acquiring the first image and expands its acquisition paths.
In one implementation, the watermark indicating information includes a reference watermark, where the reference watermark is a watermark that is needed for the watermarking process, and may be, illustratively, a digital watermark, and may be added to the first image. The reference watermark is, for example, a text pattern, a floral pattern, etc., and the application is not limited thereto. The specific implementation of obtaining, by the computer device, watermark indicating information configured in the first service interface includes any one of the following ⑤-⑦:
⑤ A reference watermark input in the first service interface is obtained. The first service interface may include a watermark configuration item, such as the watermark configuration item 303 in the first service interface shown in fig. 3a. A picture can be selected from local or cloud storage as the reference watermark through the watermark configuration item. In this way, the user may input any combination containing one or more of text and images as the reference watermark.
⑥ A watermark list is displayed in the first service interface, and the selected watermark in the watermark list is acquired as the reference watermark. In one possible implementation, the watermark list may be displayed in the first service interface in the form of a floating window, and the watermark list includes a plurality of watermarks arranged in order. When any watermark is confirmed as selected, the computer device may use the selected watermark as the reference watermark. Each watermark carries a watermark identifier, so the watermark list may also include the identifiers of the watermarks, and the identifier of the selected watermark can be added to the watermark indication information so that the computer device can acquire the reference watermark according to the identifier. Illustratively, fig. 3b shows a schematic diagram of a displayed watermark list. When the watermark configuration item 303 in fig. 3a is clicked, the watermark list 320 shown in fig. 3b may be displayed in the first service interface; the watermark list 320 comprises a plurality of watermarks, each with a check box, and ticking a check box indicates that the corresponding watermark is selected.
By providing a watermark list for the user to select from, the reference watermark can be chosen among the watermarks provided by the system. If a watermark in the list was already used when watermarking other images, its related information (such as a scrambled watermark matrix) can be obtained directly to complete the watermark addition, which improves processing efficiency to a certain extent.
⑦ Watermark components are displayed in the first service interface, and a reference watermark formed by a combination of one or more watermark components is obtained. Watermark components are the elements that make up a watermark; they may be provided by the service program and include, but are not limited to: patterns, colors, backgrounds, signatures, etc. A signature is an identity, such as a user ID, and can be used to identify the copyright owner of the watermarked object (e.g., a video). For example, if user A records a game video in which he or she participates, the watermark added to the game video may include user A's ID, thereby indicating that the copyright of the game video belongs to user A. The watermark components may be displayed upon triggering a custom watermark button in the first service interface, or displayed directly when the first service interface is displayed; they may be shown in a fixed area of the first service interface or in a floating window. Schematically, fig. 3c shows the watermark components: when the custom watermark control 320 in the first service interface shown in (1) of fig. 3c is checked, the watermark components shown in (2) of fig. 3c may be displayed in the floating window 321, including various patterns, characters, and signatures in various forms.
In one possible implementation, L (an integer greater than 0) watermark components may be displayed in the first service interface, and at least one watermark component can be selected from the L watermark components to generate the reference watermark. When a plurality of watermark components are selected, the manner of combining them may also be specified by the user. Combination manners include, but are not limited to: random stacking, sequential stitching, intelligent combining, and so on; the application is not limited in this regard. By providing watermark components, the reference watermark can be customized by the user, forming a personalized reference watermark with better recognizability.
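The combination manners named above (sequential stitching, random stacking) can be illustrated with a minimal sketch. Here components are plain strings standing in for real pattern or signature images, and all names and the combination logic are illustrative assumptions, not the system's actual implementation.

```python
import random

def combine_components(components, mode="sequential"):
    """Combine selected watermark components into one reference watermark.

    mode="sequential" mirrors sequential stitching; mode="random" mirrors
    random stacking (here reduced to a random ordering of the parts).
    """
    if mode == "sequential":
        return "".join(components)
    if mode == "random":
        shuffled = components[:]
        random.shuffle(shuffled)
        return "".join(shuffled)
    raise ValueError(f"unknown combination mode: {mode}")

# A signature (user ID) stitched with a text pattern forms the reference watermark.
mark = combine_components(["UserA", "-", "2024"])
```

A real implementation would composite image layers rather than concatenate strings; the selection-then-combine flow is the point being illustrated.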
As described in ⑤-⑦ above, the reference watermark includes text, an image, or a combination of text and image. The computer device can accept a directly input reference watermark, provide a watermark list for the user to select from, or provide watermark components for the user to customize a watermark, which greatly enriches the acquisition manners of the reference watermark and allows a personalized watermark to be generated. In addition, in the embodiments of the present application, sequence numbers such as ① and ② do not necessarily limit the execution order, and contents denoted by consecutive sequence numbers are not necessarily related.
Further, in an implementation, the watermark indicating information further includes indicating information of the target element; the computer apparatus may specifically further execute any of the following (1.1) to (1.5) when acquiring watermark indicating information configured in the first service interface.
(1.1) Receiving a watermarking area set for the first image, setting the watermarking area as indication information of the target element.
The watermarking area is used to indicate the position information of the target element in the first image. The watermarking area contains the target element and may be a local image area of the first image. Since the reference watermark may be added to this local image area during watermarking, the watermarking area is also the image area that needs to contain the reference watermark after the watermarking process. If the target element refers to a plurality of elements in the first image, the watermarking area may be used to indicate the position information of each element in the first image. The first service interface supports setting the watermarking area: for example, as shown in fig. 4a, after the computer device acquires the first image, the watermarking area control 410 may be clicked to display the first image, and the image area containing the target element can be selected directly from the first image as the watermarking area 411. The watermarking area is represented by a coordinate range, and the coordinate range may be displayed in the first service interface during the selection process.
(1.2) Displaying an element list of the first image in the first service interface, and setting the identification of the selected element in the element list as the indication information of the target element.
The element list of the first image includes an identification of each element in the first image. The identification of an element may be the name of the element, the sequence number of the element, a thumbnail of the element, and so on. An element configuration item can be provided in the first service interface, and triggering the element configuration item displays the element list, from which the indication information of the target element can be set. For example, if the first image is a frame image taken from a game video, the elements in the first image include player-controlled virtual game characters, non-player characters, buildings in the virtual game environment, and so on. Clicking the watermarking effect element control 420 (the element configuration item) shown in (1) of fig. 4b may display the element list 421 of the first image shown in (2) of fig. 4b. The element list includes the names of the virtual game characters, the names of the non-player characters, and the names of the buildings in the first image. The identifiers of the elements may be arranged randomly in the element list, or classified according to element type.
As a possible implementation, the computer device may identify the first image, so as to obtain each element included in the first image, and further form an element list of the first image according to the identifier of each element. And the identity of each element in the list of elements is supported to be selected to indicate that the corresponding element is selected as the target element. The computer device may set the identity of the selected element in the list of elements as the indication information of the target element. Illustratively, the identifier of the selected element includes names of 3 elements, and then three elements may be used as indication information of the target element, so that the target element to be subjected to watermarking processing in the first image may be determined based on the indication information of the target element. In this way, the target element to be watermarked in the first image can be specified by the user by providing the element list of the first image for selection, so that the personalized setting of the target element is realized.
(1.3) Displaying a type list of elements of the first image in the first service interface, and setting a selected element type in the type list as indication information of the target element, the selected element type being used to indicate the element type of the target element.
The type list of elements of the first image includes one or more element types. An element type may be determined according to any one or more of the following: the role of the element, the content of the element, and the object that controls the element. There may be one or more elements under each element type. Illustratively, fig. 4c shows a schematic diagram of a type list of elements. The displayed element type list 431 divides the elements of the first image, according to the object controlling each element, into player-controlled virtual characters (player characters for short) and non-player-controlled virtual characters (non-player characters for short). When any element type is confirmed as selected, the computer device may set the selected element type as the indication information of the target element. The target element to be watermarked in the first image can then be determined based on this indication information, the target element comprising all elements of the selected element type. Through selection of element types in the element type list, the indication information of the target element can be obtained, and the elements on which the watermarking acts can be conveniently set.
And (1.4) if the first image is a frame of image in the video, displaying a specific element list in the video in the first service interface, and setting the identification of the selected specific element in the specific element list as the indication information of the target element.
The video may be any video, such as a game video, a live video, a shopping-sharing video, a make-up video, and so on. The specific element list includes the identification of one or more specific elements, such as the names of the specific elements; see the specific element list 440 shown in fig. 4d. A specific element is an element whose occurrence frequency in the video is greater than a preset frequency threshold, where the occurrence frequency of an element can be determined based on the number of frame images in the video that contain the element. For example, if the virtual game character S1 exists in 60 of the 120 frame images of the video, i.e., 60 frame images contain the virtual game character S1, then the occurrence frequency of S1 is 60. By setting the identification of elements with higher occurrence frequency as the indication information of the target element, the target element contains the elements that appear frequently in the video, so that after watermarking more images in the video contain the watermark, which further improves resistance to geometric attacks.
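The occurrence-frequency rule above can be sketched directly: count, for each element, the number of frames containing it, and keep those above the threshold. Names and the per-frame data shape are illustrative assumptions.

```python
from collections import Counter

def frequent_elements(frame_elements, frequency_threshold):
    """Return the specific elements: those appearing in more frames than the threshold.

    frame_elements maps each frame to the set of element identifiers detected
    in it; an element's occurrence frequency is its number of containing frames.
    """
    counts = Counter(e for elems in frame_elements for e in elems)
    return {e for e, n in counts.items() if n > frequency_threshold}

# Character "S1" appears in 3 of 4 frames; "tree" and "S2" in only 1 each.
frames = [{"S1", "tree"}, {"S1"}, {"S1"}, {"S2"}]
specific = frequent_elements(frames, frequency_threshold=2)
```

With threshold 2, only "S1" qualifies as a specific element, so watermarks anchored to it appear in most frames of the video.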
(1.5) If the first image is a frame of image in the game video, setting the identification of the virtual game character in the first image as the indication information of the target element.
The game video may be any of the following: a live game video, a game competition video, a game video obtained by recording a game process, a video obtained by processing a game video, etc.; the application is not limited in this regard. In a game video, the presence of virtual game characters is what gives the video its value, so the virtual game characters are the valuable elements in a game video. Based on this, the computer device may set the identification of the virtual game characters in the first image as the indication information of the target element. Based on the set identifications of the virtual game characters, watermarks can subsequently be added to the indicated virtual game characters, thereby better resisting geometric attacks such as cropping. In one implementation, the computer device may first identify the virtual game characters in the first image and display the corresponding identifications, and then use the identification of the selected virtual game character as the indication information of the target element. In another implementation, the computer device may directly set the identifications of all virtual game characters in the first image as the indication information of the target element; in subsequent processing, the determined target element then includes all virtual game characters in the first image.
Alternatively, if the number of elements in the first image is greater than a number threshold, the identifications of those elements may also be used as the indication information of the target element, so that after watermarking a second image containing a corresponding number of watermarks is obtained. Increasing the number of watermarks in the second image can also resist geometric attacks to a certain extent.
In one implementation, the watermark indicating information further includes addition mode indicating information; the addition mode indication information may be used to indicate a specific mode of adding the reference watermark to the target element. The addition mode indication information includes at least one of the following (2.1) to (2.3).
(2.1) Indicating the number of reference watermarks added for the target element.
The addition manner indication information includes information indicating the number of reference watermarks to add for the target element. If the number of target elements is 1, the corresponding number of reference watermarks may be added to that target element based on this information. If the number of target elements is greater than 1, the corresponding number of reference watermarks may be added for each target element based on this information. Illustratively, if the indicated number of reference watermarks is 2 and the target element includes 2 elements, then 2 reference watermarks can be added for each of the 2 elements, which enhances the robustness of the watermark.
(2.2) If the number of target elements is greater than 1, indicating that the same reference watermark is added for each target element, or indicating that different reference watermarks are added for each target element.
The number of target elements is greater than 1, i.e. a target element refers to a plurality of (i.e. at least two) elements in the first image. The addition mode indication information includes information for indicating that the same reference watermark is added for each target element. And carrying out watermarking processing according to the information, wherein the reference watermarks of all target elements in the finally obtained second image are the same. Or the addition mode indication information includes information for indicating that different reference watermarks are added for respective target elements. And carrying out watermarking processing according to the information, wherein the reference watermarks of all target elements in the finally obtained second image are different. By setting the same or different reference watermarks for each element to be subjected to watermark adding processing, the personalized processing effect of the watermark can be improved.
And (2.3) if the number of the target elements is greater than 1 and each target element belongs to one or more element types, indicating that the same reference watermark is added for the target elements of the same element type, and adding different reference watermarks for the target elements of different element types.
When the number of target elements is greater than 1 and the target elements belong to one or more element types, the addition manner indication information includes information indicating that the same reference watermark is added for target elements of the same element type and that different reference watermarks are added for target elements of different element types. Watermarking according to this information means that, after watermarking, target elements of the same element type in the first image carry the same reference watermark, while target elements of different element types carry different reference watermarks. Illustratively, a player-controlled virtual game character in the first image may be given the watermark characters AA, while a non-player-controlled virtual game character may be given the watermark characters TT. Indicating different reference watermarks for target elements of different element types is also a form of personalized watermarking, and the target elements of different element types can be distinguished based on the added watermarks.
In one implementation, the content indicated by the addition manner indication information shown in (2.1)-(2.3) above may be set through a watermarking manner configuration item provided in the first service interface. Illustratively, as shown in fig. 4e, clicking the watermarking manner control 450 may trigger display of the watermarking manner selection interface 451, in which the number of watermarks may be set, indicating the number of reference watermarks to add for the target element. Whether the same or different watermarks are added may also be set, and this works together with the setting of the addition dimension: for example, if "same" is selected and the addition dimension is "element", the same watermark may be added for different elements in the first image; if "different" is selected and the addition dimension is "element type", different watermarks may be added for target elements of different element types in the first image.
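The interplay between the same/different setting and the addition dimension described in (2.1)-(2.3) can be sketched as a small assignment function. The watermark strings are placeholders for real watermark images, and all names are illustrative assumptions.

```python
def assign_watermarks(elements, same=True, dimension="element"):
    """Assign a reference watermark to each target element.

    elements maps element id -> element type. With same=True, every target
    element shares one watermark; with same=False and dimension="element type",
    elements of the same type share a watermark while different types differ;
    with same=False and dimension="element", each element gets its own.
    """
    if same:
        return {e: "wm-shared" for e in elements}
    if dimension == "element type":
        return {e: f"wm-{etype}" for e, etype in elements.items()}
    return {e: f"wm-{e}" for e in elements}

elems = {"hero": "player character", "guard": "non-player character"}
per_type = assign_watermarks(elems, same=False, dimension="element type")
```

Here the player character and the non-player character receive different watermarks, matching the AA/TT example in (2.3).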
S203, displaying a second image with a watermark, where the second image is obtained by watermarking a target element of the first image based on watermark instruction information, and the target element is any one or more elements included in the first image.
If the added reference watermark is a digital watermark, the second image is almost indistinguishable from the first image visually: the elements contained in the first image are the same as those in the second image. In the data dimension, however, the pixel values of the second image differ from those of the first image in the watermarked area. Specifically, the pixels covered by the image area corresponding to the target element of the first image carry information representing the watermark, and this information is what the watermark detection process detects. After the target element of the first image is watermarked, a first image containing the reference watermark is obtained and output as the result of the watermarking process, i.e., the second image.
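The embodiment does not specify the embedding algorithm. Purely to illustrate how pixel values can differ in the data dimension while the image stays visually unchanged, the following toy sketch embeds watermark bits in the least-significant bit of each pixel (classic LSB embedding, used here only as a stand-in for the system's actual method).

```python
def embed_bits(pixels, bits):
    """Embed watermark bits into the least-significant bits of pixel values.

    Each pixel value changes by at most 1, so the watermarked image looks
    the same while the covered pixels now carry recoverable watermark bits.
    """
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_bits(pixels, n):
    """Read back the first n embedded bits (the detection side)."""
    return [p & 1 for p in pixels[:n]]

region = [200, 201, 198, 197]        # pixels covering the target element
marked = embed_bits(region, [1, 0, 1, 1])
recovered = extract_bits(marked, 4)  # the watermark detection process recovers the bits
```

Real digital watermarking schemes embed in transform domains for robustness; LSB is shown only because it makes the "visually identical, different in the data dimension" property concrete.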
In one implementation, the image address of the second image may be output, the second image may be downloaded based on the image address of the second image for display, or the second image may be directly displayed.
The image processing method provided by the embodiments of the present application displays a first service interface that provides the configuration functions related to watermarking and supports configuring, in the first service interface, the first image to be watermarked and the watermark indication information required for the watermarking process; this better satisfies personalized watermarking requirements and enriches the watermarking manners. The watermark indication information and the first image configured in the first service interface are obtained, the target element of the first image is watermarked based on the watermark indication information to obtain a second image with the watermark, and the second image is displayed; the target element is any one or more elements included in the first image. Because the watermark is added to the target element contained in the first image, and the target element, being one or more elements of the first image, carries relatively valuable image content, adding the watermark on the target element can well resist the influence of geometric attacks and thus preserve the usefulness of the watermark.
In one embodiment, the computer device may specifically perform the following steps 11-13 when watermarking the target element of the first image based on the watermark indicative information.
Step 11, extracting an added area image corresponding to the target element from the first image, wherein the added area image refers to an area image containing the target element.
The first image is the original image before watermarking. The image area covered by the added area image is a partial image area in the first image compared to the first image, in which the target element is contained, so that the reference watermark is added therein to exert a corresponding protective effect.
In one embodiment, the computer device may specifically perform the following steps (11-1) -step (11-2) when extracting the added area image corresponding to the target element from the first image: and (11-1) calling a target detection model to perform target detection processing on the first image to obtain a detection result. And (11-2) determining a target element from the N elements, and extracting an added area image from the first image according to the position information of the target element in the first image.
The detection result obtained through the target detection processing includes N elements and the corresponding position information (box) of each element in the first image, where N is a positive integer. The N elements may refer to all elements detected based on the detection capability of the target detection model. The target detection model is a trained detection model, which may illustratively be an improved deep model based on YOLO (You Only Look Once), a one-stage target detection model. A YOLO-based model can identify the classes and positions of objects in an image in a single pass: it does not need to find in advance the image regions where targets may exist, but directly predicts the classes and positions of all elements in the image.
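Steps (11-1)-(11-2) can be sketched as follows: given detection results in the (name, box) form described above, select the target elements and crop their added-area images. The data shapes and names are illustrative assumptions; a real detector returns class indices and confidence scores as well.

```python
def extract_added_area(image, detections, target_names):
    """Pick target elements from detection results and crop their regions.

    detections mimics the N detected elements: (element name, (x1, y1, x2, y2))
    pairs in pixel coordinates; the added-area image for each target element
    is the sub-image inside its box.
    """
    crops = []
    for name, (x1, y1, x2, y2) in detections:
        if name in target_names:
            crops.append([row[x1:x2] for row in image[y1:y2]])
    return crops

img = [[10 * r + c for c in range(5)] for r in range(5)]   # toy 5x5 image
dets = [("hero", (0, 0, 2, 2)), ("tree", (2, 2, 5, 5))]
areas = extract_added_area(img, dets, {"hero"})            # only the target element's box
```

Only the "hero" box is cropped, so the reference watermark would be embedded inside the region that contains the target element.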
The target detection model can be trained in a supervised manner. Under supervised training, a training image set is given together with the class label and position label of the target elements in each training image; the model parameters are adjusted based on the differences between the predicted class information and the given class labels, and between the predicted position information and the given position labels. In this way the model learns the features of elements of different element types, and based on this learned knowledge it can perform more accurate detection in the inference stage.
As a possible implementation, the target detection model includes an encoder, a decoder, and a loss end. The structure and function of each part are described in detail below.
1. Encoder
The encoder comprises a plurality of encoding layers, the encoding layers are connected by adopting a full-cross-layer connection mode, and a local cross-layer fusion mode is supported among the encoding layers.
The full cross-layer connection mode means that any two encoding layers can be connected, not just adjacent ones. With full cross-layer connection, cross-layer learning can be applied to the output of each encoding layer to increase the utilization of image features; the cross-layer connections accelerate the flow of information and shorten the information path between low-level features and high-level features, thereby improving the learning efficiency of the target detection model. The local cross-layer fusion mode means that the fusion of encoding results from different layers is realized through cross-layer connections between only some of the encoding layers.
Each coding layer of the encoder encodes the first image to obtain a coding result, thereby extracting the visual information of the first image. To keep learning and inference efficient, the coding layer may use a shallow network to learn visual information from the training images. In one implementation, the coding layer may be a local fusion network based on CSPNet (Cross Stage Partial Network, a lightweight network). CSPNet is a cross-stage local network that preserves gradient variability by integrating the feature maps from the beginning and end of a network stage. To reduce memory consumption, the coding layer may be an improved CSPNet local fusion network that adopts a partial (local) cross-layer fusion mode: by fusing the outputs of only a small number of coding layers, it balances processing precision against the number of network parameters, i.e., it reduces parameters as much as possible while maintaining precision, enabling more efficient computation. It is understood that the coding layer may also use other network structures, such as the residual network ResNet, the densely connected network DenseNet, or a convolutional neural network (CNN).
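To make the full cross-layer connection concrete, the toy sketch below (with made-up single-value "layers", not the patent's CSPNet-based encoder) feeds the outputs of all earlier layers into every subsequent layer:

```python
def make_layer(weight):
    # A stand-in "coding layer": scales the sum of all values it receives.
    def layer(features):
        return [weight * sum(sum(f) for f in features)]
    return layer

def encode_full_cross_layer(input_features, layers):
    # Full cross-layer connection: every layer sees the outputs of ALL
    # earlier layers (and the raw input), not just its immediate predecessor.
    outputs = [input_features]
    for layer in layers:
        outputs.append(layer(outputs))
    return outputs

outs = encode_full_cross_layer([1.0, 2.0, 3.0],
                               [make_layer(0.5), make_layer(2.0)])
# outs[1] sees only the input; outs[2] sees the input AND outs[1]
```

In a real network each "layer" would be a convolutional block and the fusion a feature-map concatenation, but the wiring pattern is the same.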
2. Decoder
The decoder comprises a plurality of decoding layers, which are also connected in a full cross-layer manner. The decoder fuses the coding results of the coding layers of the encoder to obtain a decoding result.
In one implementation, the decoder may adopt a spatial pyramid pooling structure, which fixes the size of the decoder output: different sizes of the same image can be taken as input and still yield pooled features of the same dimension, so fixed-size feature vectors can be extracted from multi-scale features. The full cross-layer connections between decoding layers likewise speed up the flow of information and shorten the path between low-level and high-level features, improving the learning efficiency of the network.
The decoder fuses the coding results output by different coding layers to obtain the decoding result. The decoding result may be a feature vector representation of the input image, and it is further processed by the loss end to obtain the final detection result.
3. Loss end
The loss end predicts the elements in the first image and their corresponding position information based on the decoding result of the decoder. In the model training stage, the loss end predicts the category information of each element and its position in the image, computes a loss value against the corresponding category label and position label, and adjusts the model parameters based on that loss, so that the characteristics of elements of different element types are learned during training. In the model inference stage, the loss end predicts the elements in the first image and their corresponding position information from the decoding result output by the decoder, and takes these as the final detection result.
Thanks to the cross-layer connections adopted by the coding and decoding layers, the target detection model achieves both a high detection speed and high detection precision. The local cross-layer fusion mode strikes a good balance between the number of model parameters and prediction precision, speeding up computation while preserving prediction accuracy.
Based on the above description of the structure and function of the target detection model, the flow of target detection performed by the model may be illustrated as in fig. 5a. The first image is an input video frame. The video frame is fed into the encoder of the target detection model and encoded by the coding layers in sequence to obtain a coding result; the coding result of the last coding layer is then input to the first decoding layer of the decoder and decoded by the decoding layers in sequence; the decoding result is output to the loss end, which predicts the detection result of the video frame, including the elements present in the video frame and the position information of each element.
Further, after the detection result is obtained, if the watermark indication information includes indication information for the target element, the computer device may determine the target element from the N elements based on that indication information. For example, if the indication information identifies elements of a certain element type, the elements of that type are determined to be the target elements. Target elements include, but are not limited to: game icons, progress bars identifying a hero's vital values, virtual characters specific to the game, and so on. If the watermark indication information does not include indication information for the target element, the computer device may determine all N detected elements to be target elements, or select target elements according to a selection policy, for example taking elements whose occurrence frequency exceeds a preset frequency threshold. Since each element in the detection result has corresponding position information, the computer device may extract the region image containing the target element according to the position information of the target element in the first image, and use this region image as the added area image.
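Extracting the added area image from a detection box can be sketched as a simple crop; the (x, y, w, h) box format and the toy frame below are illustrative assumptions, not the model's actual output format:

```python
def extract_region(image, box):
    # Crop the added-area image out of the frame, given a box (x, y, w, h)
    # in pixel coordinates.
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

frame = [[10 * r + c for c in range(6)] for r in range(4)]  # toy 4x6 "frame"
region = extract_region(frame, (2, 1, 3, 2))
# region covers columns 2..4 of rows 1..2
```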
Step 12: add a reference watermark into the added area image to obtain a target area image.
The target area image is the added area image with the reference watermark embedded. In one implementation, the computer device may specifically execute the following steps (12-1)-(12-3) when executing step 12.
Step (12-1): decompose the added area image to obtain a plurality of subgraphs of the added area image under different image channels.
The plurality of subgraphs includes detail subgraphs and an approximation subgraph. When decomposing the added area image, the computer device may decompose it separately over three image channels, obtaining the detail subgraphs and approximation subgraph of the added area image under each of the three channels. The three image channels used for decomposition may be channels divided by RGB color, or channels determined in other ways, such as channels divided by YUV color.
In a specific implementation of the decomposition, the computer device may first preprocess the added area image to obtain a processed added area image, and then decompose the processed image. The preprocessing includes one or both of color space conversion and image resizing. Color space conversion converts the added area image X from a first color space (the original color space of X, e.g., RGB) to a second color space (e.g., YUV), yielding a converted added area image. Color space conversion can reduce the impact of watermarking on image texture. Image resizing adjusts the size of the extracted added area image X, for example to an image of size H*H. Resizing normalizes the image size and thus improves the efficiency of the watermarking process.
If the preprocessing includes only color space conversion, the converted added area image is used as the processed added area image and decomposed. If it includes only image resizing, the resized added area image is used as the processed added area image and decomposed. If it includes both, the added area image may optionally be resized first, the resized image then converted from the first color space to the second color space, and the converted region image used as the processed added area image for decomposition.
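As a sketch of the color space conversion step, the per-pixel BT.601 RGB-to-YUV formula below is one common choice; the patent does not fix a particular conversion matrix:

```python
def rgb_to_yuv(r, g, b):
    # BT.601 full-range RGB -> YUV for a single pixel; the watermark is
    # later embedded in these channels rather than directly in RGB.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v

y, u, v = rgb_to_yuv(255, 255, 255)  # pure white
# luma of white is 255; both chroma components are (approximately) zero
```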
As one implementation, the computer device may apply a one-level Discrete Wavelet Transform (DWT) when decomposing the added area image. DWT refers to a wavelet transform that performs scaling and translation over a particular subset; it is a signal processing technique with multi-resolution capability in both the time and frequency domains. The frequency of an image is an index of the intensity of gray-level variation, i.e., the gradient of gray levels in the plane. A one-level DWT divides the wavelet domain of the added area image into four sub-bands, each containing both a frequency-domain component and a corresponding spatial-domain component of the added area image.
After one-level DWT processing, the main information of the added area image under any image channel is concentrated in the LL sub-band (the approximation subgraph), while the other sub-bands (the detail subgraphs) are the HL sub-band (vertical detail subgraph), the LH sub-band (horizontal detail subgraph), and the HH sub-band (diagonal detail subgraph). In the image labeled 51 in fig. 5a, the HL, LH, and HH sub-bands are not drawn in detail; they are shown only to indicate the presence of these three sub-bands (detail subgraphs) after one-level DWT processing. Performing a one-level DWT on the added area image yields, for each image channel c, the detail subgraphs D_c and the approximation subgraph A_c of the added area image after the wavelet transform. The approximation subgraph A_c is the base-layer subgraph from which the added area image can be restored, and it retains the more important information of the added area image.
Illustratively, the one-level DWT processing of the added area image X, which yields detail subgraphs and an approximation subgraph under each image channel, may be written as formula 1):

(A_c, D_c) = DWT(X_c)   formula 1)

where c denotes an image channel of the added area image X, its value indicating which channel; for example, if the image channels are YUV channels, c may take the values c = y, u, v. D_c denotes the detail subgraphs under image channel c (collecting the HL, LH, and HH sub-bands), and A_c denotes the approximation subgraph under image channel c. The approximation subgraph under each channel is the low-frequency map of that channel; for YUV channels these are the Y-channel, U-channel, and V-channel low-frequency maps.
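A minimal sketch of the one-level DWT, assuming a Haar wavelet (the patent does not name the wavelet used), decomposes an even-sized channel into the LL/LH/HL/HH sub-bands:

```python
def haar_dwt2(img):
    # One-level 2D Haar DWT of an even-sized grayscale channel, returning
    # (LL, LH, HL, HH): the approximation sub-image and the horizontal /
    # vertical / diagonal detail sub-images (orthonormal convention).
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 2.0   # low frequency
            LH[i // 2][j // 2] = (a + b - c - d) / 2.0   # horizontal detail
            HL[i // 2][j // 2] = (a - b + c - d) / 2.0   # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 2.0   # diagonal detail
    return LL, LH, HL, HH

LL, LH, HL, HH = haar_dwt2([[10, 10], [10, 10]])
# a flat block concentrates all energy in LL; every detail coefficient is 0
```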
Step (12-2): using the matched watermarking rule, add the reference watermark to any one of the plurality of subgraphs of the added area image under each image channel, obtaining a watermarked subgraph under each channel.
The reference watermark may be a carrier of watermark information, such as a character image input by the user according to requirements, or a digital watermark randomly configured by the computer device when the user does not specify one. For each image channel, one subgraph may be selected for adding the reference watermark, yielding a watermarked subgraph under that channel.
In one possible implementation, the computer device may first obtain the watermark matrix elements of the watermark matrix corresponding to the reference watermark. The watermark matrix may be obtained as follows. The computer device performs gray-level conversion on the reference watermark to obtain its grayscale image, then converts the grayscale image into a binary matrix W0 according to a preset threshold; the matrix elements of W0 are 0 or 1. Optionally, W0 may be resized to obtain a watermark matrix W matching the size of the decomposed added area image, for example a binary watermark matrix W of size H*H whose elements are still only 0 and 1. The watermark matrix W may then be divided into blocks to obtain a plurality of matrix sub-blocks, each containing several matrix elements and representing one watermark sub-block; illustratively, the watermark may be divided evenly into n*n sub-blocks. The matrix sub-blocks are then permuted to obtain a scrambled watermark matrix WL. For scrambling, the computer device may use a scrambling algorithm (e.g., the Arnold scrambling algorithm) to obtain a scrambled sub-block queue containing the matrix sub-blocks in sequence, rearrange the sub-blocks of the queue according to the new order, and splice them into the scrambled watermark matrix WL, which represents the scrambled watermark. The matrix elements of WL are then used as the watermark matrix elements.
In another implementation, the watermark matrix corresponding to the reference watermark may be used without block decomposition or scrambling (collectively, block-reset processing): the values in the watermark matrix directly update the values of the pixels of the corresponding subgraph.
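The binarization and block scrambling of the watermark matrix can be sketched as below; a seeded random permutation stands in for the Arnold scrambling named in the text, and the block size and seed are illustrative:

```python
import random

def binarize(gray, threshold=128):
    # Grayscale watermark -> binary matrix W0 with elements 0 or 1.
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

def scramble_blocks(w, bs, seed=7):
    # Split the matrix into bs x bs sub-blocks and shuffle them with a
    # seeded permutation; the fixed seed lets a detector undo the shuffle.
    n = len(w)
    coords = [(r, c) for r in range(0, n, bs) for c in range(0, n, bs)]
    perm = coords[:]
    random.Random(seed).shuffle(perm)
    out = [[0] * n for _ in range(n)]
    for (dr, dc), (sr, sc) in zip(coords, perm):
        for i in range(bs):
            for j in range(bs):
                out[dr + i][dc + j] = w[sr + i][sc + j]
    return out

w0 = binarize([[0, 200], [255, 10]])   # -> [[0, 1], [1, 0]]
wl = scramble_blocks(w0, 1)
# scrambling only permutes the 0/1 entries; it never changes their counts
```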
After this preparation, the computer device may obtain the watermarking rule corresponding to the watermark matrix elements, together with the watermarking strength, and then use the matched watermarking rule to update the added object according to the watermarking strength, thereby embedding the watermark into it. The added object here refers to the image matrix elements, in the image matrix corresponding to the subgraph to be watermarked, at the same positions as the watermark matrix elements; this is well-defined because the image matrix and the watermark matrix have the same size.
The computer device may obtain the watermarking strength alpha from the watermark indication information. The calculation principle of the watermarking strength may be as shown in formula 2), in which Int ensures that the obtained result is an integer, lambda_1 is a first hyper-parameter and lambda_2 a second hyper-parameter, Mean denotes the mean operation, TanH is the activation function, Abs takes the absolute value, point multiplication is applied to the corresponding values of the vectors, and the operand is the added object.
The matched watermarking rule may be a first or a second watermarking rule, determined by the values in the scrambled watermark matrix WL. Further, the computer device updates the values of the pixels in the corresponding subgraph according to the watermarking strength under the matched rule, thereby adding the reference watermark to the approximation subgraph or to a detail subgraph. Illustratively, taking the addition of the reference watermark to the approximation subgraph as an example, the watermarking process may be defined as in formula 3), where the left-hand side is the value of the i-th pixel in the watermarked approximation subgraph of image channel c, alpha is the watermarking strength, the two branches are the first and second watermarking rules, // denotes integer division (returning the largest integer not greater than the quotient), and WL_i is the watermark information added to the pixel at the corresponding position, i.e., the value of the watermark matrix element at position i of the scrambled watermark matrix. The size of the scrambled watermark matrix WL matches the size of the image matrix of the subgraph, e.g., both are H*H. Thus, according to whether the value in WL is 0 or 1, one of the two watermarking rules is applied to update each pixel of the approximation subgraph, embedding the reference watermark into it. The flow is illustrated in the schematic diagram of the watermarking process in fig. 5b, where an image block labeled 53 in the approximation subgraph may contain one or more pixels, and the image labeled 500 is the watermarked second image obtained after the digital watermark is added.
The above process of adding the reference watermark to the approximation subgraph of each image channel realizes the parity remainder correction of the three-channel low-frequency image values. It will be appreciated that if the reference watermark is to be added to a detail subgraph instead, the computer device may likewise apply the matched watermarking rule, according to the scrambled watermark matrix and the watermarking strength, to update the values of the pixels at the corresponding positions of the detail subgraph under any image channel, completing the watermarking and yielding a watermarked detail subgraph.
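One plausible reading of the parity remainder correction (a hedged sketch, not the exact rules of formula 3)) is quantization with step alpha, where the parity of the quotient encodes the watermark bit:

```python
def embed_bit(pixel, bit, alpha):
    # Quantize with step alpha and force the quotient's parity to carry the
    # watermark bit; the pixel lands at the centre of the chosen cell.
    q = int(pixel // alpha)
    if q % 2 != bit:
        q += 1
    return q * alpha + alpha / 2.0

def extract_bit(pixel, alpha):
    # Detection only needs the parity of the quotient.
    return int(pixel // alpha) % 2

alpha = 8.0
marked = embed_bit(37.0, 1, alpha)   # 37 sits in cell 4 (even), bit is 1
```

Larger alpha makes the bit survive stronger distortion at the cost of a larger pixel change, which matches the role of the watermarking strength above.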
Step (12-3): fuse the watermarked subgraphs with the subgraphs that were not watermarked under each image channel to obtain the target area image.
If the watermarked subgraph is the approximation subgraph and the unwatermarked subgraphs are the detail subgraphs, the watermarked approximation subgraph and the unwatermarked detail subgraphs under each image channel can be fused to obtain the target area image, i.e., the added area image with the watermark. Illustratively, the fusion may be implemented with an inverse wavelet transform, as defined in formula 4).
X'_c = IDWT(A'_c, D_c)   formula 4)

where X' denotes the watermarked area image, IDWT denotes the inverse wavelet transform, A'_c denotes the approximation subgraph after the watermark is added, D_c denotes the detail subgraphs that were not watermarked, and c denotes an image channel of the added area image X, its value indicating which channel. For example, if the image channels are YUV channels, c may take the values c = y, u, v.
In one implementation, if the decomposition was performed directly on the added area image X, the watermarked area image X' obtained here is used as the target area image. In another implementation, if the decomposition was performed after preprocessing the added area image X, then after the inverse wavelet transform yields X', the preprocessing is inverted to obtain the target area image. Illustratively, if during preprocessing the added area image was resized and converted from RGB space to YUV space, then after the inverse wavelet transform X' is converted from YUV space back to RGB space and resized back, finally yielding a target area image with the same image size as the added area image, so that it can later replace the added area image to produce the new watermarked image.
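The inverse wavelet transform of step (12-3) can be sketched for the Haar case (orthonormal convention assumed, an illustrative choice since the patent does not name the wavelet); rebuilding from an approximation-only set of sub-bands restores a flat block:

```python
def haar_idwt2(LL, LH, HL, HH):
    # Inverse one-level 2D Haar DWT: rebuilds the (possibly watermarked)
    # region image from its four sub-bands.
    h, w = len(LL), len(LL[0])
    img = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in range(h):
        for j in range(w):
            ll, lh, hl, hh = LL[i][j], LH[i][j], HL[i][j], HH[i][j]
            img[2 * i][2 * j] = (ll + lh + hl + hh) / 2.0
            img[2 * i][2 * j + 1] = (ll + lh - hl - hh) / 2.0
            img[2 * i + 1][2 * j] = (ll - lh + hl - hh) / 2.0
            img[2 * i + 1][2 * j + 1] = (ll - lh - hl + hh) / 2.0
    return img

rebuilt = haar_idwt2([[20.0]], [[0.0]], [[0.0]], [[0.0]])
# a lone LL coefficient of 20 rebuilds a flat 2x2 block of 10s
```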
Step 13: replace the added area image in the first image with the target area image to obtain the second image.
After the target area image is obtained, it can replace the added area image in the first image, so that the region originally containing the target element is transformed into a region image carrying the watermark while the image content of the other regions of the first image remains unchanged, yielding the watermarked second image.
It can be understood that if there are multiple target elements in the first image, the corresponding target area image can be obtained in the above manner for the added area image of each target element, and each added area image is then replaced with its target area image, finally yielding the watermarked second image.
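Replacing the added area image with the target area image amounts to pasting the watermarked patch back at the same box; the (x, y, w, h) box convention below is an illustrative assumption:

```python
def replace_region(image, region, box):
    # Paste the watermarked target-area image back over the frame;
    # pixels outside the box stay untouched.
    x, y, w, h = box
    out = [row[:] for row in image]   # keep the original frame intact
    for i in range(h):
        out[y + i][x:x + w] = region[i][:w]
    return out

frame = [[0] * 4 for _ in range(3)]
patched = replace_region(frame, [[9, 9], [9, 9]], (1, 1, 2, 2))
# only the 2x2 block at (1, 1) changes; the source frame is untouched
```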
For the watermarking flow shown in steps 11-13 above, a schematic diagram is given in fig. 5c. Taking a game image as the first image (i.e., the original image frame): the target image area (corresponding to the added area image) is extracted, then YUV channel conversion and discrete wavelet transform are applied in turn to obtain the low-frequency map under each channel. Combined with the scrambled watermark, parity remainder correction (the concrete implementation of adding the reference watermark) is applied to the three-channel low-frequency values to obtain updated low-frequency maps, which are then passed through the inverse discrete wavelet transform to obtain an image region containing the watermark. This region replaces the target image region in the original image, finally yielding the second image (i.e., the watermarked image frame).
In the above flow, the reference watermark may be added to any subgraph obtained by decomposing the extracted added image region, and may be added after scrambling, realizing watermarking of the target element of the first image. The overall complexity is low, the robustness of the watermark is enhanced, and a balance between watermarking speed and robustness can be maintained. Using a digital watermark avoids affecting the visual appearance of the first image before and after watermarking. Because the target element is valuable content in the image (e.g., a regular logo or an indispensable character), adding the watermark to the target element resists geometric attacks well, secures the amount and position of watermark information in the video, facilitates detection of the watermark information, and thus ensures good application of digital watermarks in video.
If the target element and its position information are included in the detection result, the detection result indicates that a detection target is present in the first image, i.e., the target element is detectable there. Illustratively, the first image is one image frame of a game video and the detection target is hero character A in that video. The detection result of the image frame, obtained through target detection, comprises the category of each hero character and its position in the frame. If the detection result contains hero character A, it indicates that a detection target exists. When a detection target is present, watermarking of the target element can proceed according to the process described in steps 11-13 above. When no detection target is present, watermarking may be performed in other ways to preserve the watermarking effect.
In one implementation, the first image is any frame of a game video, and the watermark indication information further comprises interval indication information indicating a watermarking time interval for the game video. The computer device may obtain image frames of the game video to be watermarked as the first image according to the adding time interval indicated by the interval indication information. Illustratively, if the watermark interval information is 1 second, the video needs to be watermarked every 1 second; if the frame rate of the video is 30 fps, at least one image frame may be randomly selected from every 30 frames per second as the frame to be watermarked. Image frames obtained from the video may also be called video frames. Through the interval indication information, only designated image frames of the game video are watermarked rather than every frame; when the game video contains a relatively large number of frames, this increases watermarking speed, so the watermarking result is fed back to the user quickly, improving the user experience and the adoption of the watermarking capability.

If no target element is detected in the first image, i.e., no target element exists among the N elements of the detection result, the reference watermark cannot be added to an element, and the computer device may further perform either of the following (1)-(2).
(1) Add the reference watermark to the entire image area of the first image.
That is, the reference watermark may be added to the whole of the first image, for example by covering the first image with the reference watermark. Even under geometric attacks such as cropping, the reference watermark can then be recovered by corresponding technical means, providing better protection.
(2) Determine the time period corresponding to the watermarking time interval to which the first image belongs, and re-select one frame from that period of the game video as the new first image.
Since the first image was determined based on the interval indication information, the timeline of the game video can be divided into multiple time periods according to the adding time interval indicated by that information. The period corresponding to the watermarking time interval of the first image may be described in the time dimension (e.g., seconds) or in the frame-number dimension. Illustratively, the interval indication information is 1 second, meaning one frame is selected for watermarking per 1-second interval. Suppose the game video lasts 10 seconds at 60 fps (i.e., 60 frames per second); one frame may then be selected from the 1st second as the first image. If no target element exists in that first image, it is determined to belong to the first watermarking time interval, whose corresponding period is 0-1 seconds. A frame is then re-selected from the 60 frames of 0-1 seconds, for example a frame adjacent to the previous first image, or a frame a preset number of frames away, as the new first image, and the computer device performs watermarking according to the steps described in S201-S203.
In short, if the detection result indicates that no detection target is present in the first image, the computer device either watermarks the entire image area of the first image, or re-selects an image frame from the video and repeats the detection processing. Thus, when the originally designated frame of the game video cannot be watermarked, an adjacent frame is selected for watermarking, ensuring that the number of watermarked frames in the game video matches expectations and preserving the robustness of watermarking the game video.
Fig. 6 is a flowchart of another image processing method according to an exemplary embodiment of the present application. The image processing method may be performed by a computer device, such as the computer device 102 in the system shown in fig. 1a, and may comprise the following steps S601-S603.
S601, displaying a second service interface.
The second service interface is any interface provided by a service program for watermark detection. As a possible implementation, the service program for watermarking and the service program for watermark detection are the same program, or different programs. If they are the same, one service program integrates both the watermarking function and the watermark detection function. If they are different, the two service programs have different functions, e.g., service program AP1 has the watermarking function and service program AP2 the watermark detection function. When performing watermarking, a service program may add a digital watermark to the target element of the first image according to the steps shown in S201-S203 above. When performing watermark detection, a service program may detect the watermark present in the second image according to the steps S601-S603 of this embodiment.
The service program may be an offline service program or an online service program. An online service program provides online services (e.g., one or both of an online watermarking service and an online watermark detection service) and may be deployed on the server side; it can handle the service requests of multiple terminal devices and process the data carried by each request, such as performing watermark detection on the second image carried by a watermark detection request. An offline service program provides offline services and is deployed locally on the computer device (e.g., a terminal device), so watermarking/watermark detection can be performed locally.
When displaying the second service interface, the computer device can provide a configuration function for the multimedia data on which watermark detection is required. Illustratively, a second service interface is shown in fig. 7a. The second service interface 710 includes a file configuration item 701, an address configuration item 702, and a watermark configuration item 703: the picture or video to be detected can be uploaded through the file configuration item 701, the address of a file such as a picture or video can be provided through the address configuration item 702, and the watermark used for comparison can be uploaded through the watermark configuration item 703 or selected directly from the watermarks provided by the detection system. Triggering the "start detection" control then obtains the detection information and starts the watermark detection flow.
S602, acquiring detection information set in the second service interface, where the detection information includes the second image.
The computer device may receive a second image input in the second service interface; or receive an image address input in the second service interface and acquire the second image based on that address; or receive a video (or video address) input in the second service interface, where the video (or the video corresponding to the video address) contains the second image, and acquire the second image from the video. In one implementation, after the watermarking processing has produced the second image, the computer device may provide the second service interface to detect the watermark in the second image. Optionally, the detection information may also include a reference watermark, so that a subsequently detected watermark can be compared against it to determine whether the image or video contains the reference watermark.
S603, displaying the watermark detected from the second image, wherein the detected watermark is obtained by watermark detection processing on the target element in the second image.
In one embodiment, before performing S603, the computer device may also: calculate the similarity between the detected watermark and the reference watermark; if the similarity is greater than or equal to a preset similarity threshold, perform S603. The similarity reflects the degree of resemblance between the detected watermark and the reference watermark added during the watermarking process. The detected watermark and the reference watermark each correspond to a watermark matrix, and the similarity between them can be calculated based on the values of the two watermark matrices at the same positions. In one implementation, the similarity may be a gray-level similarity calculated based on gray values, and the preset similarity threshold may be a gray-level threshold. When the similarity is greater than or equal to the preset similarity threshold, the two watermarks can be considered sufficiently similar, the detected watermark can be determined to be the reference watermark used in the watermarking process, and the detected watermark is displayed. If the similarity is less than the preset similarity threshold, indicating that the detected watermark differs significantly from the reference watermark, the computer device may not display the watermark and may instead output the similarity between the two. In another embodiment, the detected watermark may be displayed regardless of whether its similarity exceeds the preset similarity threshold, so that it is returned to the user, who can then confirm whether it is the added reference watermark.
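The similarity comparison described above can be sketched as follows. This is an illustrative assumption: the patent does not fix the exact metric, so a simple position-by-position agreement ratio between two binarized watermark matrices stands in for the gray-level similarity, and the function names are hypothetical.

```python
def watermark_similarity(detected, reference):
    """Fraction of positions where two equal-sized binarized watermark matrices agree."""
    total = 0
    matches = 0
    for row_d, row_r in zip(detected, reference):
        for v_d, v_r in zip(row_d, row_r):
            total += 1
            if v_d == v_r:
                matches += 1
    return matches / total if total else 0.0

detected = [[1, 0], [0, 1]]
reference = [[1, 0], [1, 1]]
sim = watermark_similarity(detected, reference)  # 3 of 4 positions agree -> 0.75
threshold = 0.7  # stand-in for the preset similarity threshold
should_display = sim >= threshold  # decide whether to perform S603
```

In practice a gray-level metric over non-binary values (or a normalized correlation) could replace the bit-agreement ratio without changing the thresholding logic.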
In one embodiment, the computer device may jointly display the reference watermark while displaying the watermark detected from the second image. In this way, the detected watermark and the reference watermark are presented together, so the user can compare them more intuitively and confirm by themselves whether the watermark detected from the second image is the added reference watermark. The joint display may take one or more of the following forms ①-③: ① The detected watermark and the reference watermark are displayed in the same interface or in different interfaces. For example, they may be displayed in different areas of the second service interface, or in two different interfaces that can be switched between. As shown in fig. 7b, the detected watermark is displayed in interface 720, the reference watermark is displayed in interface 721, and the two interfaces can be switched to show the different watermarks. ② The detected watermark and the reference watermark are displayed for comparison; specifically, they may be placed in the same interface for comparative display. ③ The detected watermark and the reference watermark are presented simultaneously together with the similarity between them. The similarity may be a percentage value: the higher the similarity, the more alike the two watermarks are, i.e., the higher the probability that the detected watermark is the reference watermark. Illustratively, a schematic diagram of a joint display is shown in FIG. 7c, in which the computer device compares the detected watermark with the reference watermark and also shows that their similarity is 95%.
The similarity value indicates the computer device's judgment of the detected watermark against the reference watermark more intuitively; in addition, the similarity can be used for defending the copyright of, and filing complaints about, the second image or the video to which it belongs.
The above ①-③ describe several ways of jointly displaying the detected watermark and the reference watermark, enriching the display modes. Furthermore, the joint display makes the detection result more intuitive.
It should be noted that, according to the content described in S601-S603, watermark detection processing can be performed on any image or video to obtain a watermark detection result, which indicates whether the detected image or video contains a watermark. Further, if the watermark carries a signature, the detected image or video can be traced according to the watermark, so as to learn the rights holder to which the image or video belongs. If the detection information includes a watermark for comparison, the computer device can also obtain that watermark via the second service interface and then calculate the similarity between the watermark used for comparison and the detected watermark.
The image processing method provided by the embodiment of the present application can display a second service interface that supports configuring the information required for detection, including the image to be detected and, optionally, a watermark for reference comparison. Further, the detection information configured in the second service interface can be obtained, watermark detection can be performed on the target element of the second image included in the detection information to obtain the detected watermark, and the detected watermark can be displayed. It can be seen that, through the watermark detection processing, on the one hand it can be determined whether the watermark was successfully added to the image produced by the watermarking processing; on the other hand, watermark detection can be performed on any image or video, quickly yielding a detection result for the target element in the corresponding image, which indicates whether the image includes the watermark used for reference comparison. Furthermore, the copyright of the image/video can be traced based on the detected watermark, achieving better protection of images, videos, and other multimedia data.
Illustratively, the reference watermark added in the embodiment of the present application is a digital watermark, so the watermark detection process mentioned in the embodiment of the present application is also the detection of a digital watermark. In one embodiment, the computer device may specifically perform the following steps 21-22 when performing the watermark detection process on the target element of the second image.
Step 21, extracting a detection area image corresponding to the target element from the second image, wherein the detection area image refers to an area image containing the target element.
If the target element includes one element, the computer device may extract a region image containing the target element, and if the target element includes a plurality of elements, the computer device may extract a plurality of detection region images, each containing one element.
In one embodiment, the computer device extracts the detection area image corresponding to the target element from the second image as follows: target detection processing is performed on the second image to obtain the target element and its position information, and the detection area image is extracted from the second image based on the position information of the target element. As a possible implementation, the computer device may call the target detection model to perform target detection processing on the second image to obtain a detection result, where the detection result includes N elements and the position information of each element. The target element may then be determined from the N elements. For this process, reference may be made to the above description of the watermarking processing of the first image, which will not be repeated here.
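Extracting the detection area image from the position information can be sketched as a simple crop. The (x, y, w, h) box format and the list-of-rows image representation are illustrative assumptions, not the patent's data layout.

```python
def crop_region(image, box):
    """Crop a rectangular region from an image stored as a 2-D list of pixels.

    box is assumed to be (x, y, w, h): top-left corner plus width and height,
    as might be returned by a target detection model.
    """
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

# Toy 4x6 "image" whose pixel value encodes its (row, column) position.
image = [[c + 10 * r for c in range(6)] for r in range(4)]
region = crop_region(image, (1, 2, 3, 2))  # detection area image for one element
```

With multiple target elements, calling this once per bounding box yields one detection area image per element, as described above.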
And step 22, watermark detection processing is carried out on the detection area image.
In one possible implementation, the computer device may specifically perform the following steps (22-1)-(22-3) when performing watermark detection processing on the detection area image.
And (22-1) decomposing the detection area image to obtain a plurality of subgraphs of the detection area image under different image channels.
During the decomposition of the detection area image, the detection area image may likewise be preprocessed, where the preprocessing includes one or more of image resizing and color space transformation. Illustratively, the detection area image is resized (e.g., to H×H) to obtain an image area X, and the image area X is converted from the RGB space to the YUV space, so that the influence of the added watermark on the image texture is reduced. Then one-level DWT decomposition is performed on the converted image area X to obtain the detail sub-graphs and the approximation sub-graph of the image area under each image channel of the wavelet transform. Reference may be made to the above formula 1): substituting the detection area image (or the preprocessed detection area image) into formula 1) yields the detail sub-graphs and approximation sub-graph of the detection area image under the different image channels.
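The one-level DWT decomposition described above can be sketched with the Haar wavelet; this is an illustrative stand-in, since the text does not fix which wavelet formula 1) uses. Each channel decomposes into one approximation (low-frequency) sub-graph and three detail sub-graphs at half the resolution.

```python
def haar_dwt2(channel):
    """One-level 2-D Haar DWT of a single image channel (even dimensions).

    Returns (LL, LH, HL, HH): the approximation sub-graph and three
    detail sub-graphs, each half the size of the input channel.
    """
    h, w = len(channel), len(channel[0])
    LL, LH, HL, HH = [], [], [], []
    for r in range(0, h, 2):
        ll, lh, hl, hh = [], [], [], []
        for c in range(0, w, 2):
            a, b = channel[r][c], channel[r][c + 1]
            d, e = channel[r + 1][c], channel[r + 1][c + 1]
            ll.append((a + b + d + e) / 2.0)  # low-frequency (approximation)
            lh.append((a + b - d - e) / 2.0)  # horizontal detail
            hl.append((a - b + d - e) / 2.0)  # vertical detail
            hh.append((a - b - d + e) / 2.0)  # diagonal detail
        LL.append(ll); LH.append(lh); HL.append(hl); HH.append(hh)
    return LL, LH, HL, HH

# A flat 2x2 block concentrates all its energy in the approximation sub-graph.
LL, LH, HL, HH = haar_dwt2([[3.0, 3.0], [3.0, 3.0]])
```

Running this per Y/U/V channel yields the per-channel sub-graphs on which the detection rules below operate; a library such as PyWavelets would provide the same decomposition for other wavelet families.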
And (22-2) watermark detection is carried out on a plurality of subgraphs of the detection area image in each image channel by adopting the matched detection rule, so as to obtain watermark information.
Using the corresponding detection rules, the computer device can perform watermark detection processing on the detail sub-graphs and the approximation sub-graph under each image channel to obtain watermark information representing the watermark in the corresponding sub-graph. In another implementation, the processed sub-graph may be marked during the watermarking process, and during watermark detection the corresponding sub-graph can be watermark-detected directly according to that mark.
In one implementation, the computer device may first determine, from the sub-graphs, the detection object on which watermark detection is currently to be performed, and obtain the watermarking strength used when the watermark was added. Then, using the matched detection rule, watermark detection is performed on the sub-graph containing the watermark based on the watermarking strength and the detection object, obtaining the watermark information. The matched detection rules include a first detection rule and a second detection rule, and which rule matches can be determined based on the comparison between a value calculated from the watermarked sub-graph and the watermarking strength, and a preset value.
Illustratively, the subgraph containing the watermark is an approximated subgraph of the detection region image under each image channel. The computer device may perform watermark detection processing on the approximated sub-graph, and in a specific implementation, may determine watermark information in the approximated sub-graph according to the definition shown in equation 5) below.
Where WL_i represents the value of the pixel at the i-th position in the watermark matrix, the corresponding term in equation 5) represents the value of the approximation sub-graph of image channel c at the i-th pixel, and α represents the watermarking strength; the two branches of equation 5) correspond to the first detection rule and the second detection rule, respectively, and % represents the remainder operation. It can be understood that the calculation according to equation 5) above is a process of summing the image values of the three-channel low-frequency maps.
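A common quantization-style reading of the remainder operation just described is sketched below. The exact thresholds of the patent's two detection rules are not given in the text, so the `2α` window with an `α` split point is an assumption for illustration.

```python
def extract_bit(value, alpha):
    """Recover one watermark bit from a summed low-frequency coefficient.

    Assumes the embedder quantized the coefficient with step alpha:
    a remainder in the upper half of a 2*alpha window reads as 1
    (first detection rule), the lower half as 0 (second detection rule).
    """
    return 1 if (value % (2 * alpha)) >= alpha else 0

alpha = 10  # watermarking strength used when the watermark was added
coefficients = (5, 15, 23, 37)  # summed three-channel low-frequency values
bits = [extract_bit(v, alpha) for v in coefficients]
```

Applying `extract_bit` to every pixel position i of the summed low-frequency map yields the complete watermark matrix WL.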
Through the above equation 5), a complete watermark matrix WL can be obtained, and the watermark matrix can then be divided into blocks to obtain a plurality of matrix sub-blocks. Each matrix sub-block contains part of the matrix elements in the watermark matrix and represents one watermark sub-block. For example, the watermark matrix WL is equally divided into n×n matrix sub-blocks, indicating that the watermark is divided into n×n watermark sub-blocks. Inverse scrambling is performed on the matrix sub-blocks to obtain an inversely scrambled watermark matrix. In a specific implementation, an inverse scrambling algorithm (for example, the inverse of the Arnold scrambling algorithm) is used to inversely scramble the n×n watermark sub-blocks, obtaining an inversely scrambled sub-block queue containing the matrix sub-blocks in their restored order; arranging the sub-blocks of this queue according to the new order yields the inversely scrambled watermark matrix W, which is the watermark information. It can be seen that, in this process, the handling of the reference watermark during watermark detection and during watermarking are inverse operations of each other.
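The Arnold inverse scrambling can be sketched on sub-block coordinates: the forward cat map shuffles block positions during watermarking, and applying its inverse the same number of times restores them. The specific map and a single iteration are illustrative assumptions.

```python
def arnold_forward(blocks_side, pos):
    """One Arnold cat-map step on a sub-block coordinate (x, y)."""
    x, y = pos
    return ((x + y) % blocks_side, (x + 2 * y) % blocks_side)

def arnold_inverse(blocks_side, pos):
    """Inverse of one Arnold step, used to undo the scrambling."""
    x, y = pos
    return ((2 * x - y) % blocks_side, (y - x) % blocks_side)

n = 4  # watermark divided into n x n sub-blocks
positions = [(x, y) for x in range(n) for y in range(n)]
scrambled = [arnold_forward(n, p) for p in positions]   # as done when embedding
restored = [arnold_inverse(n, p) for p in scrambled]    # as done when detecting
```

Because the map's matrix has determinant 1, it permutes the n×n positions without collisions, so detection exactly reverses the embedding-side scrambling.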
And (22-3) performing watermark restoration processing according to the watermark information to obtain the watermark contained in the detection area image.
Because the watermark undergoes gray-scale conversion during the watermarking process to obtain a binarized watermark matrix, the inversely scrambled watermark matrix W is also a binarized watermark matrix. This watermark matrix is further converted into the image space to obtain the final watermark matrix; the watermark can then be generated from this matrix and visually presented in the form of an image.
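The conversion of the binarized watermark matrix back to the image space can be sketched as a 0/255 gray-scale mapping; mapping a 1-bit to the value 255 is an assumption about how the watermark is visualized, not a detail fixed by the text.

```python
def binary_matrix_to_gray(wl):
    """Map a binarized watermark matrix to 8-bit gray-scale pixel values."""
    return [[255 if bit else 0 for bit in row] for row in wl]

pixels = binary_matrix_to_gray([[1, 0], [0, 1]])  # displayable watermark image
```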
If the second image is an image frame of a video to be detected, a probability of whether the video carries a watermark can be obtained, and this probability can be determined from the similarity. The higher the probability, the more likely a watermark is present; the lower the probability, the less likely. The watermark detection result can be used for defending the copyright of the video and for filing complaints.
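One way to turn per-frame similarities into a video-level probability is the fraction of sampled frames whose detected watermark passes the threshold; this aggregation rule is an assumption for illustration, as the text does not specify how the probability is derived from the similarity.

```python
def video_watermark_probability(frame_similarities, threshold):
    """Fraction of sampled frames whose detected watermark passes the threshold."""
    if not frame_similarities:
        return 0.0
    hits = sum(1 for s in frame_similarities if s >= threshold)
    return hits / len(frame_similarities)

# Similarities computed for four sampled frames of the video to be detected.
prob = video_watermark_probability([0.95, 0.40, 0.90, 0.85], 0.8)
```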
In one possible implementation, a detection area image corresponding to each element may be extracted from the second image, where the detection area image corresponding to one element includes one element. Watermark detection processing can be performed on each detection area image, so that the detection area image with the watermark added can be determined, and finally, the watermark in the corresponding detection area image can be extracted.
Through the above steps (22-1)-(22-3), the flow of the watermark detection process shown in fig. 7d can be provided. For example, for an image frame to be detected, a target image area (corresponding to the detection area image) can be extracted and transformed to obtain the low-frequency map under each image channel; a remainder calculation is then performed on the image values of the low-frequency map of each image channel to obtain the watermark information it contains; the watermark information is then merged, inversely scrambled in blocks, and reassembled, finally yielding the detected watermark. Further, the gray-level similarity between the detected watermark and a reference watermark (e.g., a watermark pattern) can be calculated and compared with the gray-level threshold to determine whether the detected image frame is a watermark frame (i.e., an image containing the watermark).
It can be seen that whether multimedia data contains a watermark can be determined by performing watermark detection processing on images or video frames. In the specific watermark detection process, target detection processing can be performed on the image to extract the target image area; transformation, watermark information extraction, and other processing can then be performed with the target image area as the processing object, so that the watermark contained in that image area can be detected quickly. The similarity between the detected watermark and a reference watermark can be computed to determine whether the image to be detected contains the reference watermark.
In connection with the embodiments described above with respect to fig. 2 and fig. 6, the computer device can provide two experience flows for the corresponding user:
In the first flow, the user provides the address of the video to which the digital watermark is to be added, the reference watermark to be added, and the watermarking interval, which are written into the corresponding fields of a script (e.g., the path of the video file). The user then only needs to start the script: the computer device directly sends the video address, the digital watermark to be added, and the watermarking interval to the online watermarking service, and finally feeds the address of the computed watermarked video back to the user via the HTTP protocol. HTTP (Hypertext Transfer Protocol) is a transfer protocol for transferring hypertext from a web server to a local browser.
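The fields a user fills into the script before starting it might be organized as below. The field names and the serialized body are hypothetical, and no HTTP request is actually sent in this sketch.

```python
import json

def build_watermarking_request(video_url, watermark_path, interval_seconds):
    """Assemble the payload for a (hypothetical) online watermarking service."""
    return {
        "video_address": video_url,             # video to be watermarked
        "reference_watermark": watermark_path,  # watermark to embed
        "interval": interval_seconds,           # watermarking time interval
    }

payload = build_watermarking_request("https://example.com/game.mp4", "logo.png", 5)
body = json.dumps(payload)  # in the real flow, sent to the service over HTTP
```

The second flow's detection request would differ only in its fields (video address plus reference watermark to detect), with the matching degree returned in the response.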
In the second flow, the user provides the address of the video to be checked for a watermark and the reference watermark to be detected; the computer device writes these two pieces of information into the corresponding fields of the script. The user then only needs to start the script: the computer device directly sends the video address and the reference watermark to be detected to the online watermark detection service, and finally feeds the matching degree between the detection result and the reference watermark back to the user via the HTTP protocol.
Finally, the user can obtain, as required, the watermarked video and the probability that the reference watermark exists in the video to be detected. After obtaining the watermarked video, the user can directly use it as a created work for their own needs and share it to other platforms, and the watermark detection result can also be used for defending their copyright and filing complaints. The service supports watermarking and detection for general game videos, has low requirements on computing power, supports fast computation without a GPU (Graphics Processing Unit), and meets watermarking and detection requirements in various scenarios.
It can be seen that the watermarking function can be provided by an online watermarking service and the calculated watermarking result or the address of the watermarking result is fed back. The online watermark detection service provides a watermark detection function and feeds back the detection result.
The image processing method provided by the application can support the watermarking and watermark detection of various game videos (games with fixed icons). Based on the above, after the game video is obtained, the watermark adding model (providing the watermark adding service) may be used to watermark the game video, so as to obtain the game video after the watermark is added, as shown in the flow chart of the image processing shown in fig. 8. After the game video with the watermark is subjected to editing, special effect processing and other processing, the game video with the watermark can be released to a corresponding content platform. Further, the computer device may obtain the video (e.g., obtain processed game video from multiple platforms), and further perform watermark detection processing using a watermark detection model (to provide watermark detection service), so as to obtain a watermark detection result. And then, based on the detection result, whether the acquired video is the video containing the watermark can be judged, so that the source of the video material can be traced back based on the watermark, and the detection result of the video is counted into a list.
The method has been tested on multiple game videos according to user requirements and achieves a good watermarking effect with essentially no impact on video fidelity. In practical tests, unedited videos reach a recall rate of more than 80% after attack, and edited videos reach a recall rate of more than 70% after attack, showing strong robustness against geometric attacks such as cropping. Therefore, the image processing method provided by the present application realizes fast, target-detection-based digital watermarking of game videos that is robust to cropping; it solves the watermarking of game videos in a manner that is invisible, visually lossless, low-cost, and efficient; it can respond quickly on common computer devices; and it enables real-time watermarking or detection in videos.
The image processing apparatus provided by the embodiment of the present application is explained in the following.
Referring to fig. 9, fig. 9 is a schematic diagram of an image processing apparatus according to an exemplary embodiment of the present application. The image processing apparatus may be a computer program (including program code) running on a computer device (e.g. a computer device in an image processing system), for example the image processing apparatus is an application software; the image processing device can be used for executing corresponding steps in the image processing method provided by the embodiment of the application. As shown in fig. 9, the image processing apparatus 900 includes: a display module 901, an acquisition module 902 and a processing module 903. Wherein:
The display module 901 is configured to display a first service interface;
an acquiring module 902, configured to acquire the first image to be processed and the watermark indication information configured in the first service interface;
the display module 901 is further configured to display a second image with a watermark, where the second image is obtained after watermarking a target element of the first image based on watermark indication information, and the target element is any one or more elements included in the first image.
In one embodiment, the obtaining module 902 is specifically configured to: acquiring a first image input in a first service interface; or acquiring a first video imported in the first service interface, extracting a first image from the first video, wherein the first image is any frame image in the first video; or acquiring an image address input in the first service interface, and downloading a first image according to the image address; or acquiring a video address of the second video input in the first service interface, and extracting a first image from the second video according to the video address, wherein the first image is any frame image in the second video.
In one embodiment, the watermark indicating information includes a reference watermark; the acquiring module 902 is specifically configured to: acquiring a reference watermark input in a first service interface; or displaying a watermark list in the first service interface, and acquiring the selected watermark in the watermark list as a reference watermark; or displaying watermark components in the first service interface, and acquiring a reference watermark formed by combining one or more watermark components; wherein the reference watermark comprises text, an image, or a combination of text and an image.
In one embodiment, the watermark indicating information further comprises indicating information of the target element; the acquiring module 902 is specifically configured to: receiving a watermarking area set for the first image, and setting the watermarking area as indication information of a target element, wherein the watermarking area is used for indicating position information of the target element in the first image; or displaying an element list of the first image in the first service interface, and setting the identification of the selected element in the element list as the indication information of the target element; or displaying a type list of the elements of the first image in the first service interface, and setting a selected element type in the type list as indication information of the target element, wherein the selected element type is used for indicating the element type of the target element; or if the first image is a frame of image in the video, displaying a specific element list in the video in the first service interface, and setting the identification of the selected specific element in the specific element list as the indication information of the target element; the specific elements comprise elements with occurrence frequency larger than a preset frequency threshold value in the video; or if the first image is one frame of image in the game video, setting the identification of the virtual game role in the first image as the indication information of the target element.
In one embodiment, the watermark indicating information further includes addition mode indicating information; the addition mode indication information comprises at least one of the following: indicating the number of reference watermarks added for the target element; if the number of the target elements is greater than 1, indicating to add the same reference watermark for each target element; if the number of the target elements is greater than 1, different reference watermarks are indicated to be added for each target element; if the number of the target elements is greater than 1 and each target element belongs to one or more element types, the same reference watermark is indicated to be added for the target elements of the same element type, and different reference watermarks are indicated to be added for the target elements of different element types.
In one embodiment, the display module 901 is further configured to: displaying a second service interface; the obtaining module 902 is further configured to: acquiring detection information set in a second service interface, wherein the detection information comprises a second image; the display module 901 is further configured to: and displaying the watermark detected from the second image, wherein the detected watermark is obtained by watermark detection processing on the target element in the second image.
In one embodiment, the processing module 903 is configured to: calculating the similarity between the detected watermarks and the reference watermark; and if the similarity is greater than or equal to a preset similarity threshold, executing the step of displaying the watermark detected from the second image.
In one embodiment, the display module 901 is specifically configured to: jointly displaying the reference watermarks while displaying the watermarks detected from the second image; wherein: the joint display mode comprises any one or more of the following: displaying the detected watermark and the reference watermark in the same or different interfaces; comparing and displaying the detected watermark with a reference watermark; and presenting the detected watermark and the reference watermark simultaneously, and displaying the similarity between the detected watermark and the reference watermark.
In one embodiment, the first service interface refers to any interface provided by a service program that performs watermarking; the second service interface refers to any interface provided by a service program for executing watermark detection; the service program for executing watermark adding and the service program for executing watermark detection are the same service program or are different service programs; wherein the service program includes any one of an offline service program and an online service program.
In one embodiment, the watermark indicating information includes a reference watermark; the processing module 903 is specifically configured to: extracting an added area image corresponding to the target element from the first image, wherein the added area image refers to an area image containing the target element; adding a reference watermark in the added region image to obtain a target region image; and replacing the added area image in the first image with the target area image to obtain a second image.
In one embodiment, the processing module 903 is specifically configured to: invoking a target detection model to perform target detection processing on the first image to obtain a detection result, wherein the detection result comprises N elements and corresponding position information of each element in the first image; n is a positive integer; and determining target elements from the N elements, and extracting a target area image from the first image according to the position information of the target elements in the first image.
In one embodiment, the object detection model includes an encoder, a decoder, and a loss end; the encoder comprises a plurality of encoding layers, the encoding layers are connected in a full cross-layer connection manner, and a local cross-layer fusion manner is supported among the encoding layers; each encoding layer of the encoder is used for encoding the first image to obtain an encoding result so as to extract visual information of the first image; the decoder comprises a plurality of decoding layers, and the decoding layers are connected in a full cross-layer connection manner; the decoder is used for performing fusion processing on the encoding results of the encoding layers of the encoder to obtain a decoding result; the loss end is used for predicting the elements in the first image and their corresponding position information in the first image based on the decoding result of the decoder.
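The "full cross-layer connection" between layers can be sketched in a framework-free way: each layer receives the concatenation of the input and the outputs of all earlier layers (DenseNet-style). The toy "layers" here are plain functions standing in for real convolutional layers, so this only illustrates the wiring, not the model itself.

```python
def dense_forward(layers, x):
    """Run layers with full cross-layer connections: layer i sees the
    concatenated features of the input and all previous layers' outputs."""
    features = [x]  # the input counts as the first feature list
    for layer in layers:
        concatenated = [v for feat in features for v in feat]
        features.append(layer(concatenated))
    return features[-1]

# Toy layers: each maps its (growing) input to a short feature list.
def layer1(xs):
    return [sum(xs)]

def layer2(xs):
    return [min(xs), max(xs)]

out = dense_forward([layer1, layer2], [1.0, 2.0])
```

Here layer2 sees both the raw input and layer1's output, which is the property the full cross-layer connection provides: later layers can fuse features from every earlier stage.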
In one embodiment, the processing module 903 is specifically configured to: decomposing the added area image to obtain a plurality of subgraphs of the added area image under different image channels; adding the reference watermark into any one of a plurality of subgraphs of the added area image in each image channel by adopting a matched watermark adding rule to obtain a watermarked subgraph under each image channel; and carrying out fusion processing according to the subgraphs added with the watermark and the subgraphs not added with the watermark under each image channel to obtain the target area image.
In one embodiment, the first image is any frame of image in the game video; the watermark indicating information also comprises interval indicating information, wherein the interval indicating information is used for indicating the watermarking time interval in the game video; a processing module 903 configured to: if the target element is not detected in the first image, adding a reference watermark in the whole image area of the first image; or if the target element is not detected in the first image, determining a time period corresponding to the watermarking time interval to which the first image belongs, and re-selecting one frame of image from the time period in the game video to determine the new first image.
In one embodiment, the processing module 903 is configured to: extracting a detection area image corresponding to the target element from the second image, wherein the detection area image refers to an area image containing the target element; watermark detection processing is carried out on the detection area image.
In one embodiment, the processing module 903 is specifically configured to: decomposing the detection area image to obtain a plurality of subgraphs of the detection area image under each image channel; watermark detection is carried out on a plurality of subgraphs of the detection area image under each image channel by adopting the matched detection rule, so as to obtain watermark information; and performing watermark restoration processing according to the watermark information to obtain the watermark contained in the detection area image.
It may be understood that the functions of each functional module of the image processing apparatus described in the embodiments of the present application may be specifically implemented according to the method in the embodiments of the method, and the specific implementation process may refer to the relevant description of the embodiments of the method and will not be repeated herein. In addition, the description of the beneficial effects of the same method is omitted.
Computer devices provided by embodiments of the present application are described in connection with the following.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present application. In one embodiment, the computer device may be a terminal device. As shown in fig. 10, the computer device 1000 may include an input device 1001, an output device 1002, a processor 1003, a memory 1004, a network interface 1005, and at least one communication bus 1006. Wherein: the processor 1003 may be a Central Processing Unit (CPU). The processor may further comprise a hardware chip. The hardware chip may be an Application-Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), or the like. The PLD may be a Field-Programmable Gate Array (FPGA), Generic Array Logic (GAL), or the like.
The memory 1004 may include Volatile Memory, such as Random-Access Memory (RAM); the memory 1004 may also include Non-Volatile Memory, such as Flash Memory, a Solid-State Drive (SSD), etc.; the memory 1004 may be a high-speed RAM memory, or a non-volatile memory such as at least one disk memory. The memory 1004 may also optionally be at least one storage device located remotely from the processor 1003, and may also include a combination of the above types of memory. As shown in fig. 10, the memory 1004, which is one type of computer-readable storage medium, may contain an operating system, a network communication module, an interface module, and a device control application.
The network interface 1005, as a communication interface, may include a standard wired interface or a wireless interface (e.g., a Wi-Fi interface), and is operable to provide data communication functionality; the communication bus 1006 is responsible for connecting the various communication elements; the input device 1001 receives input instructions to generate signal inputs related to the settings and function control of the computer device, and in one embodiment includes one or more of a touch panel, a physical or virtual keyboard, function keys, a mouse, etc.; the output device 1002 is configured to output data information, and in embodiments of the present application may be configured to display the first service interface/second service interface, display a watermark, and so on; the output device 1002 may include a display screen or other display device; the processor 1003 is the control center of the computer device, connects the parts of the entire computer device by various interfaces and lines, and executes various functions by scheduling the execution of a computer program stored in the memory 1004.
The processor 1003 may be used to invoke a computer program in the memory 1004 to perform the following operations: displaying a first service interface; acquiring a first image to be processed and watermark indicating information configured in a first service interface; and displaying a second image with the watermark, wherein the second image is obtained by watermarking a target element of the first image based on the watermark indicating information, and the target element is any one or more elements contained in the first image.
In one embodiment, the processor 1003 is specifically configured to: acquiring a first image input in a first service interface; or acquiring a first video imported in the first service interface, extracting a first image from the first video, wherein the first image is any frame image in the first video; or acquiring an image address input in the first service interface, and downloading a first image according to the image address; or acquiring a video address of the second video input in the first service interface, and extracting a first image from the second video according to the video address, wherein the first image is any frame image in the second video.
In one embodiment, the watermark indicating information includes a reference watermark; the processor 1003 is specifically configured to: acquiring a reference watermark input in a first service interface; or displaying a watermark list in the first service interface, and acquiring the selected watermark in the watermark list as a reference watermark; or displaying watermark components in the first service interface, and acquiring a reference watermark formed by combining one or more watermark components; wherein the reference watermark comprises text, an image, or a combination of text and an image.
In one embodiment, the watermark indicating information further comprises indicating information of the target element; the processor 1003 is specifically configured to: receiving a watermarking area set for the first image, and setting the watermarking area as indication information of a target element, wherein the watermarking area is used for indicating position information of the target element in the first image; or displaying an element list of the first image in the first service interface, and setting the identification of the selected element in the element list as the indication information of the target element; or displaying a type list of the elements of the first image in the first service interface, and setting a selected element type in the type list as indication information of the target element, wherein the selected element type is used for indicating the element type of the target element; or if the first image is a frame of image in the video, displaying a specific element list in the video in the first service interface, and setting the identification of the selected specific element in the specific element list as the indication information of the target element; the specific elements comprise elements with occurrence frequency larger than a preset frequency threshold value in the video; or if the first image is one frame of image in the game video, setting the identification of the virtual game role in the first image as the indication information of the target element.
In one embodiment, the watermark indicating information further includes addition mode indicating information; the addition mode indication information comprises at least one of the following: indicating the number of reference watermarks added for the target element; if the number of the target elements is greater than 1, indicating to add the same reference watermark for each target element; if the number of the target elements is greater than 1, different reference watermarks are indicated to be added for each target element; if the number of the target elements is greater than 1 and each target element belongs to one or more element types, the same reference watermark is indicated to be added for the target elements of the same element type, and different reference watermarks are indicated to be added for the target elements of different element types.
In one embodiment, the processor 1003 is further configured to: displaying a second service interface; acquiring detection information set in a second service interface, wherein the detection information comprises a second image; and displaying the watermark detected from the second image, wherein the detected watermark is obtained by watermark detection processing on the target element in the second image.
In one embodiment, the processor 1003 is configured to: calculating the similarity between the detected watermarks and the reference watermark; and if the similarity is greater than or equal to a preset similarity threshold, executing the step of displaying the watermark detected from the second image.
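Purely as an illustrative sketch (the disclosure does not fix a similarity metric), the check above can be realized as a bit-agreement ratio between the detected and reference watermarks, compared against a preset threshold; the function names and the 0.9 threshold are assumptions, not part of the embodiments:

```python
import numpy as np

def watermark_similarity(detected: np.ndarray, reference: np.ndarray) -> float:
    """Fraction of agreeing pixels between two equally sized watermark arrays."""
    assert detected.shape == reference.shape
    return float(np.mean(detected == reference))

def should_display(detected: np.ndarray, reference: np.ndarray,
                   threshold: float = 0.9) -> bool:
    # Display the detected watermark only when its similarity to the
    # reference watermark reaches the preset similarity threshold.
    return watermark_similarity(detected, reference) >= threshold
```

Other metrics (normalized correlation, bit error rate) would slot into `watermark_similarity` without changing the surrounding flow.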
In one embodiment, the processor 1003 is specifically configured to: jointly displaying the reference watermarks while displaying the watermarks detected from the second image; wherein: the joint display mode comprises any one or more of the following: displaying the detected watermark and the reference watermark in the same or different interfaces; comparing and displaying the detected watermark with a reference watermark; and presenting the detected watermark and the reference watermark simultaneously, and displaying the similarity between the detected watermark and the reference watermark.
In one embodiment, the first service interface refers to any interface provided by a service program that performs watermarking; the second service interface refers to any interface provided by a service program for executing watermark detection; the service program for executing watermark adding and the service program for executing watermark detection are the same service program or are different service programs; wherein the service program includes any one of an offline service program and an online service program.
In one embodiment, the watermark indicating information includes a reference watermark; the processor 1003 is specifically configured to: extracting an added area image corresponding to the target element from the first image, wherein the added area image refers to an area image containing the target element; adding a reference watermark in the added region image to obtain a target region image; and replacing the added area image in the first image with the target area image to obtain a second image.
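The three-step flow above (extract the added-area image, watermark it, replace it in the first image) can be sketched as follows; the bounding-box convention and the `embed` callback are illustrative assumptions, not details taken from the disclosure:

```python
import numpy as np

def watermark_region(first_image: np.ndarray, box, embed):
    """Extract the added-area image given by `box` (x0, y0, x1, y1),
    watermark it with `embed`, and paste the target-area image back."""
    x0, y0, x1, y1 = box
    added_area = first_image[y0:y1, x0:x1].copy()   # region image containing the target element
    target_area = embed(added_area)                 # add the reference watermark
    second_image = first_image.copy()
    second_image[y0:y1, x0:x1] = target_area        # replace added-area with target-area image
    return second_image
```

Only the region holding the target element is modified, which is what ties the watermark to the element rather than to a fixed screen position.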
In one embodiment, the processor 1003 is specifically configured to: invoking a target detection model to perform target detection processing on the first image to obtain a detection result, wherein the detection result comprises N elements and corresponding position information of each element in the first image; n is a positive integer; and determining target elements from the N elements, and extracting a target area image from the first image according to the position information of the target elements in the first image.
In one embodiment, the object detection model includes an encoder, a decoder, and a loss side; the encoder comprises a plurality of encoding layers, the encoding layers are connected by adopting a full-cross-layer connection mode, and a local cross-layer fusion mode is supported among the encoding layers; each coding layer of the coder is used for coding the first image to obtain a coding result so as to extract visual information of the first image; the decoder comprises a plurality of decoding layers, and the decoding layers are connected by adopting a full-cross-layer connection mode; the decoder is used for carrying out fusion processing on the coding results of all the coding layers of the coder to obtain decoding results; the loss end is used for predicting elements in the first image and corresponding position information of the elements in the first image based on a decoding result of the decoder.
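The "full cross-layer connection" between layers can be illustrated with a toy dense-connectivity sketch, in which every encoding layer consumes the concatenation of the input and all earlier layers' outputs, and the decoder fuses the encoding results of all layers; the layer count, widths, and random weights are placeholders and do not reflect the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_encoder(x: np.ndarray, n_layers: int = 3, width: int = 8):
    """Each encoding layer sees the concatenation of the input and all
    earlier layers' outputs (full cross-layer connection)."""
    feats = [x]
    for _ in range(n_layers):
        inp = np.concatenate(feats, axis=-1)          # full cross-layer input
        w = rng.standard_normal((inp.shape[-1], width))
        feats.append(np.maximum(inp @ w, 0.0))        # simple ReLU layer
    return feats[1:]                                   # outputs of every coding layer

def decoder_fuse(layer_outputs):
    # The decoder fuses the encoding results of all the coding layers.
    return np.concatenate(layer_outputs, axis=-1)
```

In a real detector the loss end would consume the fused decoding result to predict elements and their positions; here the sketch only shows the connectivity pattern.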
In one embodiment, the processor 1003 is specifically configured to: decomposing the added area image to obtain a plurality of subgraphs of the added area image under different image channels; adding the reference watermark into any one of a plurality of subgraphs of the added area image in each image channel by adopting a matched watermark adding rule to obtain a watermarked subgraph under each image channel; and carrying out fusion processing according to the subgraphs added with the watermark and the subgraphs not added with the watermark under each image channel to obtain the target area image.
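As a hedged example of the channel-wise scheme above, the sketch below decomposes an H×W×C added-area image into per-channel subgraphs, writes watermark bits into one subgraph using a least-significant-bit rule (one possible "matched watermark adding rule" — the disclosure does not specify the rule), and fuses the watermarked and untouched subgraphs back together:

```python
import numpy as np

def embed_in_channel(image: np.ndarray, watermark_bits: np.ndarray, channel: int = 0):
    """Decompose `image` (H x W x C) into per-channel subgraphs, embed the
    watermark bits into one subgraph's least significant bits, and fuse
    the watermarked and non-watermarked subgraphs into the target image."""
    subgraphs = [image[..., c].copy() for c in range(image.shape[-1])]  # decomposition
    sub = subgraphs[channel]
    sub = (sub & 0xFE) | watermark_bits.astype(sub.dtype)              # LSB embedding rule
    subgraphs[channel] = sub
    return np.stack(subgraphs, axis=-1)                                # fusion
```

Any per-channel rule (DCT-domain, spread-spectrum, etc.) could replace the LSB line; the decompose/embed/fuse skeleton stays the same.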
In one embodiment, the first image is any frame of image in the game video; the watermark indicating information also comprises interval indicating information, wherein the interval indicating information is used for indicating the watermarking time interval in the game video; the processor 1003 is configured to: if the target element is not detected in the first image, adding a reference watermark in the whole image area of the first image; or if the target element is not detected in the first image, determining a time period corresponding to the watermarking time interval to which the first image belongs, and re-selecting one frame of image from the time period in the game video to determine the new first image.
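The interval-based fallback can be sketched as simple index arithmetic: determine which watermarking time period the current frame belongs to, then offer the remaining frames of that same period as replacement candidates. The frame rate and interval values below are illustrative assumptions:

```python
def reselect_frame(frame_idx: int, fps: int, interval_s: int, total_frames: int):
    """If no target element is found in frame `frame_idx`, yield the other
    frames of the same watermarking time period of the game video."""
    frames_per_period = fps * interval_s
    period = frame_idx // frames_per_period       # time period the frame belongs to
    start = period * frames_per_period
    end = min(start + frames_per_period, total_frames)
    for candidate in range(start, end):           # re-select within the same period
        if candidate != frame_idx:
            yield candidate
```

A caller would run target detection on each candidate in turn until a frame containing the target element is found, falling back to whole-image watermarking otherwise.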
In one embodiment, the processor 1003 is configured to: extracting a detection area image corresponding to the target element from the second image, wherein the detection area image refers to an area image containing the target element; watermark detection processing is carried out on the detection area image.
In one embodiment, the processor 1003 is specifically configured to: decomposing the detection area image to obtain a plurality of subgraphs of the detection area image under each image channel; watermark detection is carried out on a plurality of subgraphs of the detection area image under each image channel by adopting the matched detection rule, so as to obtain watermark information; and performing watermark restoration processing according to the watermark information to obtain the watermark contained in the detection area image.
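Mirroring the embedding sketch, detection can be illustrated by reading the least-significant-bit plane of the matched channel back out of the detection-area image. The LSB rule is an assumption (the "matched detection rule" is not specified in the disclosure), and real watermark restoration would additionally decode or reshape the recovered bits:

```python
import numpy as np

def detect_watermark(detection_area: np.ndarray, channel: int = 0):
    """Decompose the detection-area image into per-channel subgraphs and
    read the watermark information from the matched channel's LSB plane."""
    subgraphs = [detection_area[..., c] for c in range(detection_area.shape[-1])]
    watermark_info = subgraphs[channel] & 1   # matched detection rule: LSB plane
    return watermark_info                     # restoration would decode these bits
```

The detection rule must pair with whatever adding rule was used at embedding time; using a mismatched channel or rule would recover noise rather than the watermark.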
It should be understood that the computer device 1000 described in the embodiments of the present application may perform the description of the image processing method in the foregoing embodiments, and may also perform the description of the image processing apparatus 900 in the embodiment corresponding to fig. 9, which is not repeated herein. In addition, the description of the beneficial effects of the same method is omitted.
In addition, it should be noted that, in an exemplary embodiment of the present application, a storage medium is further provided, in which a computer program of the foregoing image processing method is stored; the computer program includes program instructions, and when one or more processors load and execute the program instructions, the description of the image processing method in the foregoing embodiments may be implemented. The beneficial effects of the same method are not repeated herein. It will be appreciated that the program instructions may be executed on one or more computer devices that are capable of communicating with each other.
The computer-readable storage medium may be the image processing apparatus provided in any one of the foregoing embodiments or an internal storage unit of the computer device, for example, a hard disk or a memory of the computer device. The computer-readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, etc. provided on the computer device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the computer device. The computer-readable storage medium is used to store the computer program and other programs and data required by the computer device, and may also be used to temporarily store data that has been output or is to be output.
In one aspect of the application, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method provided in an aspect of the embodiment of the present application.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The modules in the device of the embodiment of the application can be combined, divided and deleted according to actual needs.
The above disclosure is only a few examples of the present application and is not intended to limit the scope of the application; it will be understood by those skilled in the art that all or part of the above embodiments may be implemented, and equivalent modifications made thereto, without departing from the scope of the present application.

Claims (20)

1. An image processing method, comprising:
displaying a first service interface;
acquiring a first image to be processed and watermark indicating information configured in the first service interface;
And displaying a second image with a watermark, wherein the second image is obtained by watermarking a target element of the first image based on the watermark indication information, and the target element is any one or more elements contained in the first image.
2. The method of claim 1, wherein acquiring the first image to be processed configured in the first service interface comprises:
acquiring a first image input in the first service interface; or alternatively
Acquiring a first video imported in the first service interface, and extracting a first image from the first video, wherein the first image is any frame image in the first video; or alternatively
Acquiring an image address input in the first service interface, and downloading the first image according to the image address; or alternatively
And acquiring a video address of a second video input in the first service interface, and extracting a first image from the second video according to the video address, wherein the first image is any frame image in the second video.
3. The method of claim 1, wherein the watermark indicating information comprises a reference watermark; obtaining watermark indicating information configured in the first service interface, including:
Acquiring a reference watermark input in the first service interface; or alternatively
Displaying a watermark list in the first service interface, and acquiring the selected watermark in the watermark list as a reference watermark; or alternatively
Displaying watermark components in the first service interface, and acquiring a reference watermark formed by combining one or more watermark components;
Wherein the reference watermark comprises text, an image, or a combination of text and an image.
4. The method of claim 3, wherein the watermark indicating information further comprises indicating information of a target element; the obtaining watermark indicating information configured in the first service interface further includes:
receiving a watermarking area set for the first image, and setting the watermarking area as indication information of the target element, wherein the watermarking area is used for indicating position information of the target element in the first image; or alternatively
Displaying an element list of the first image in the first service interface, and setting the identification of the selected element in the element list as indication information of a target element; or alternatively
Displaying a type list of elements of the first image in the first service interface, and setting a selected element type in the type list as indication information of a target element, wherein the selected element type is used for indicating the element type of the target element; or alternatively
If the first image is a frame of image in the video, displaying a specific element list in the video in the first service interface, and setting the identification of the selected specific element in the specific element list as indication information of a target element; the specific elements comprise elements with occurrence frequency greater than a preset frequency threshold in the video; or alternatively
And if the first image is one frame of image in the game video, setting the identification of the virtual game role in the first image as the indication information of the target element.
5. The method of claim 3, wherein the watermark indicating information further comprises addition mode indicating information; the addition mode indication information comprises at least one of the following:
Indicating a number of reference watermarks added for the target element;
if the number of the target elements is greater than 1, indicating to add the same reference watermark for each target element;
If the number of the target elements is greater than 1, different reference watermarks are indicated to be added for each target element;
If the number of the target elements is greater than 1 and each target element belongs to one or more element types, the same reference watermark is indicated to be added for the target elements of the same element type, and different reference watermarks are indicated to be added for the target elements of different element types.
6. The method of claim 1, wherein the method further comprises:
displaying a second service interface;
acquiring detection information set in the second service interface, wherein the detection information comprises the second image;
displaying the watermark detected from the second image, wherein the detected watermark is obtained by watermark detection processing on the target element in the second image.
7. The method of claim 6, wherein the method further comprises:
calculating the similarity between the detected watermarks and the reference watermark;
And if the similarity is greater than or equal to a preset similarity threshold, executing the step of displaying the watermark detected from the second image.
8. The method of claim 6, wherein the method further comprises:
jointly displaying the reference watermarks while displaying watermarks detected from the second image;
Wherein: the joint display mode comprises any one or more of the following: displaying the detected watermark and the reference watermark in the same or different interfaces; comparing and displaying the detected watermark with the reference watermark; and presenting the detected watermark and the reference watermark simultaneously, and displaying the similarity between the detected watermark and the reference watermark.
9. The method of claim 6, wherein the first service interface is any interface provided by a service program that performs watermarking;
the second service interface refers to any interface provided by a service program for executing watermark detection;
the service program for executing watermark adding and the service program for executing watermark detection are the same service program or different service programs;
Wherein the service program includes any one of an offline service program and an online service program.
10. The method of claim 1, wherein the watermark indicating information comprises a reference watermark; the watermarking process for the target element of the first image based on the watermark indicating information comprises the following steps:
Extracting an added area image corresponding to a target element from the first image, wherein the added area image refers to an area image containing the target element;
adding the reference watermark in the added region image to obtain a target region image;
and replacing the added area image in the first image with the target area image to obtain the second image.
11. The method of claim 10, wherein extracting the added area image corresponding to the target element from the first image comprises:
Invoking a target detection model to perform target detection processing on the first image to obtain a detection result, wherein the detection result comprises N elements and corresponding position information of each element in the first image; n is a positive integer;
and determining a target element from the N elements, and extracting a target area image from the first image according to the position information of the target element in the first image.
12. The method of claim 11, wherein the object detection model comprises an encoder, a decoder, and a loss side;
the encoder comprises a plurality of encoding layers, wherein the encoding layers are connected in a full-cross-layer connection mode, and a local cross-layer fusion mode is supported among the encoding layers; each coding layer of the coder is used for coding the first image to obtain a coding result so as to extract visual information of the first image;
the decoder comprises a plurality of decoding layers, and the decoding layers are connected by adopting a full-cross-layer connection mode; the decoder is used for carrying out fusion processing on the coding results of all the coding layers of the coder to obtain decoding results;
the loss end is used for predicting elements in the first image and corresponding position information of the elements in the first image based on a decoding result of the decoder.
13. The method of claim 10, wherein the adding the reference watermark in the added region image comprises:
Decomposing the added region image to obtain a plurality of subgraphs of the added region image under different image channels;
Adding the reference watermark to any one of a plurality of subgraphs of the added region image in each image channel by adopting a matched watermark adding rule to obtain the watermarked subgraph under each image channel;
And carrying out fusion processing according to the subgraphs added with the watermark and the subgraphs not added with the watermark under each image channel to obtain the target area image.
14. The method of claim 10, wherein the first image is any frame of image in a game video; the watermark indicating information also comprises interval indicating information, wherein the interval indicating information is used for indicating the watermark adding time interval in the game video; the method further comprises the steps of:
If no target element is detected in the first image, adding the reference watermark in the whole image area of the first image; or alternatively
If the target element is not detected in the first image, determining a time period corresponding to a watermark adding time interval to which the first image belongs, and reselecting a frame of image from the time period in the game video to determine the frame of image as a new first image.
15. The method of claim 6, wherein the watermark detection process for the target element in the second image comprises:
extracting a detection area image corresponding to a target element from the second image, wherein the detection area image refers to an area image containing the target element;
and watermark detection processing is carried out on the detection area image.
16. The method of claim 15, wherein watermark detection processing is performed on the detection area image, comprising:
decomposing the detection area image to obtain a plurality of subgraphs of the detection area image under each image channel;
watermark detection is carried out on a plurality of subgraphs of the detection area image under each image channel by adopting the matched detection rule, so as to obtain watermark information;
And performing watermark restoration processing according to the watermark information to obtain the watermark contained in the detection area image.
17. An image processing apparatus, comprising:
The display module is used for displaying the first service interface;
the acquisition module is used for acquiring a first image to be processed and watermark indicating information configured in the first service interface;
the display module is further configured to display a second image with a watermark, where the second image is obtained after watermarking a target element of the first image based on the watermark indication information, and the target element is any one or more elements included in the first image.
18. A computer device, comprising:
A processor adapted to execute a computer program;
a computer readable storage medium having stored therein a computer program which, when executed by the processor, performs the image processing method according to any one of claims 1-16.
19. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, performs the image processing method of any one of claims 1 to 16.
20. A computer program product, characterized in that the computer program product comprises a computer program or computer instructions which, when executed by a processor, is adapted to the image processing method according to any one of claims 1-16.
CN202310576718.4A 2023-05-19 2023-05-19 Image processing method and related equipment Pending CN119011867A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310576718.4A CN119011867A (en) 2023-05-19 2023-05-19 Image processing method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310576718.4A CN119011867A (en) 2023-05-19 2023-05-19 Image processing method and related equipment

Publications (1)

Publication Number Publication Date
CN119011867A true CN119011867A (en) 2024-11-22

Family

ID=93492698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310576718.4A Pending CN119011867A (en) 2023-05-19 2023-05-19 Image processing method and related equipment

Country Status (1)

Country Link
CN (1) CN119011867A (en)

Similar Documents

Publication Publication Date Title
Wu et al. Steganography using reversible texture synthesis
US20150170312A1 (en) System for determining an illegitimate three dimensional video and methods thereof
CA3041791C (en) Process for defining, capturing, assembling, and displaying customized video content
CN109309842B (en) Live broadcast data processing method and device, computer equipment and storage medium
CN106713964A (en) Method of generating video abstract viewpoint graph and apparatus thereof
EP2836953A1 (en) Method and device for generating a code
CN105556574A (en) Rendering apparatus, rendering method thereof, program and recording medium
CN109923543B (en) Method, system, and medium for detecting stereoscopic video by generating fingerprints of portions of video frames
CN110650350B (en) Method and device for displaying coded image and electronic equipment
US20190141028A1 (en) System and Methods for Authentication and/or Identification
CN108134945B (en) AR service processing method, AR service processing device and terminal
CN115190345B (en) Coordinated control method for display media, client device and storage medium
US12074907B2 (en) Systems and methods of detecting anomalous websites
US20240276058A1 (en) Video-based interaction method and apparatus, computer device, and storage medium
CN104281865B (en) A kind of method and apparatus for generating Quick Response Code
CN115527101A (en) Image tampering detection method and processor
WO2021137209A1 (en) System and method for dynamic images virtualisation
CN107301333A (en) Copyright information protection, really power method, device, system and digital equipment
CN107817999A (en) The generation method and terminal of a kind of dynamic wallpaper
CN110381338B (en) Video data processing and analyzing method, device, equipment and medium
CN119011867A (en) Image processing method and related equipment
CN110719415B (en) Video image processing method and device, electronic equipment and computer readable medium
CN107948700A (en) Program commending method, system and computer-readable recording medium
Shirali-Shahreza et al. Collage steganography
CN116208719A (en) Image transmission method, device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication