
CN121098995A - Data generation method, image processing method, data and electronic equipment - Google Patents


Info

Publication number
CN121098995A
Authority
CN
China
Prior art keywords
image
graphic element
information
tone mapping
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410741736.8A
Other languages
Chinese (zh)
Inventor
徐巍炜
张秀峰
杨长久
余全合
王弋川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202410741736.8A priority Critical patent/CN121098995A/en
Priority to PCT/CN2025/093183 priority patent/WO2025251827A1/en
Publication of CN121098995A publication Critical patent/CN121098995A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract


This application provides a data generation method, an image processing method, data, and an electronic device. The data generation method includes: first, acquiring a first image; then, acquiring description information of a graphic element in the first image, where the description information is used for tone mapping the region where the graphic element is located in the first image; and finally, generating target data based on the first image and the description information of the graphic element. This improves image quality when converting an HDR image containing graphic elements into an SDR image.

Description

Data generation method, image processing method, data and electronic equipment
Technical Field
The present application relates to the field of multimedia, and in particular, to a data generating method, an image processing method, data and an electronic device.
Background
In the prior art, a high dynamic range (HDR) image is converted into a target image of smaller dynamic range by tone mapping the HDR image with a tone mapping curve corresponding to that image. The target image is a standard dynamic range (SDR) image or a second HDR image of smaller dynamic range (hereinafter both referred to as an SDR image). Typically, the tone mapping curve corresponding to an HDR image is determined from the image content of that HDR image: for individual HDR images, different images correspond to different tone mapping curves, and for HDR video, HDR images belonging to different scenes correspond to different tone mapping curves.
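The content-derived global curve described above can be sketched in a few lines. The Reinhard operator below is just one well-known illustrative curve, not the one this application uses, and all names are illustrative.

```python
def tone_map(pixels, curve):
    """Apply a tone-mapping curve to every pixel.

    `pixels` is a list of normalized luminance values (HDR values may
    exceed 1.0); `curve` maps one value to another.
    """
    return [curve(p) for p in pixels]

# One common global curve (Reinhard): y = x / (1 + x) compresses
# highlights into [0, 1) while leaving dark values almost unchanged.
reinhard = lambda x: x / (1.0 + x)

hdr_pixels = [0.0, 0.5, 1.0, 4.0]
sdr_pixels = tone_map(hdr_pixels, reinhard)  # 4.0 maps to 0.8
```

Applying one such curve to the whole image is exactly what causes the problems described next for graphic-element regions: the curve is derived from the scene content, not from the overlay.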
Graphic elements such as watermarks, maps, and user interfaces (UIs) may be added to the HDR image before it is converted into an SDR image. These graphic elements are located in an SDR image layer, and the tone mapping curves suited to them differ from the tone mapping curve corresponding to the HDR image. If the regions where the graphic elements are located are tone mapped with the curve corresponding to the HDR image, image quality problems such as detail loss, reduced contrast, color distortion, overexposure, underexposure, or halation may occur in those regions of the resulting SDR image.
Disclosure of Invention
In view of the above, the present application provides a data generation method, an image processing method, data, and an electronic device, so as to improve image quality when converting an HDR image with graphic elements into an SDR image; that is, to avoid image quality problems in the region where the graphic elements are located in the generated SDR image, such as detail loss, reduced contrast, color distortion, overexposure, underexposure, halation, flicker, inconsistent brightness after multiple conversions, excessively dark brightness after conversion, or brightness inconsistent with the graphical user interface after conversion.
In a first aspect, the application provides a data generation method, which includes: first, acquiring a first image; then, acquiring description information of a graphic element in the first image, where the description information of the graphic element is used for tone mapping the region where the graphic element is located in the first image; and then generating target data based on the first image and the description information of the graphic element.
The target data may be used for display, transmission, distribution, or storage. When the first image later needs to be processed, the tone mapping of the region where the graphic element is located can be separated from the tone mapping of the other regions: for the graphic-element region, tone mapping information (such as a tone mapping curve) corresponding to the graphic element is determined based on the description information in the target data, and that region is then tone mapped with this information. Because tone mapping information appropriate to the graphic element is applied to its region, quality problems such as detail loss, reduced contrast, color distortion, overexposure or underexposure, halation, flicker, inconsistent brightness after multiple rounds of processing, excessively dark brightness after processing, or brightness inconsistent with the graphical user interface do not occur in that region of the processed image.
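The separation just described, with one curve for the graphic-element region and another for the rest of the image, can be sketched as follows. The per-pixel mask representation and both curves are illustrative assumptions, not the application's format.

```python
def region_tone_map(image, mask, elem_curve, image_curve):
    """Tone map the graphic-element region with one curve and the rest
    of the image with another. `image` and `mask` are equal-length
    lists of luminances and booleans; mask[i] is True where pixel i
    belongs to the graphic element."""
    return [elem_curve(p) if m else image_curve(p)
            for p, m in zip(image, mask)]

identity = lambda x: x            # e.g. leave graphic-element pixels as-is
reinhard = lambda x: x / (1 + x)  # content-derived curve for the rest

img  = [0.2, 0.9, 3.0, 0.4]
mask = [False, True, True, False]
out  = region_tone_map(img, mask, identity, reinhard)
# graphic-element pixels (indices 1 and 2) keep their values
```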
By way of example, the graphic elements may include elements such as watermarks, logos, or maps, which may be used for identification, decoration, copyright protection, or providing information, as well as user interfaces (UIs) and similar elements related to user interaction.
Illustratively, the first image may include one or more kinds of the graphic elements described above, and each kind of graphic element in the first image may appear one or more times.
The first image may be an HDR image or an SDR image, for example.
The first image may be, for example, an image decoded from a code stream, a screenshot obtained by screen capture, or a photograph obtained by photographing.
For example, the descriptive information of the graphical element in the first image may be received from other electronic devices.
For example, the description information of the graphic element in the first image may be generated based on the first image.
Illustratively, tone mapping of the present application may refer to a process of converting an HDR image into a target image of a smaller dynamic range (e.g., an SDR image or another HDR image of a dynamic range less than the HDR image), or may refer to a process of converting an SDR image into a target image of a larger dynamic range (e.g., an HDR image or another SDR image of a dynamic range greater than the SDR image).
For example, for other regions in the first image (i.e., regions other than the region in which the graphic element is located), tone mapping may be performed in a different tone mapping manner than the region in which the graphic element is located. For example, tone mapping information corresponding to the first image is used to tone map other areas in the first image, wherein the tone mapping information corresponding to the first image is different from the tone mapping information corresponding to the graphic element. For example, tone mapping information corresponding to the first image may be determined based on pixel values of pixel points in other regions in the first image.
In one possible approach, the first image may be encoded to obtain a code stream, and then the description information of the graphic element may be written into the code stream to obtain the first target data. For example, the code stream may be encapsulated according to a video file format to obtain first intermediate data, and then the first intermediate data is encapsulated according to a network transmission protocol to obtain first target data.
In one possible approach, the first image may be encoded to obtain a code stream, then the description information of the code stream and the graphic element is encapsulated according to a video file format to obtain second intermediate data, and then the second intermediate data is encapsulated according to a network transmission protocol to obtain the first target data.
In one possible approach, the first image may be encoded to obtain a code stream, then the code stream is encapsulated in a video file format to obtain third intermediate data, and then the third intermediate data and the description information of the graphic element are encapsulated in a network transmission protocol to obtain first target data.
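The three encapsulation options above share one idea: the description information travels alongside the encoded image. A minimal stand-in, assuming a JSON payload appended to the code stream with a trailing length field; a real system would instead use the codec's or container's own metadata mechanism (e.g. an SEI-like message or a file-format box), so this byte layout is purely illustrative.

```python
import json
import struct

def build_target_data(code_stream: bytes, description: dict) -> bytes:
    """Append the description info to the encoded stream as a JSON
    payload followed by a 4-byte big-endian length, so a parser can
    locate the payload from the end of the data."""
    payload = json.dumps(description).encode("utf-8")
    return code_stream + payload + struct.pack(">I", len(payload))

def parse_target_data(data: bytes):
    """Recover (code_stream, description) from target data."""
    n = struct.unpack(">I", data[-4:])[0]
    description = json.loads(data[-4 - n:-4].decode("utf-8"))
    return data[:-4 - n], description
```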
The first image may be, for example, a frame of an image in a video sequence, and the above-described data generating method may be performed for all or part of the frames of the video sequence to obtain a new video file (i.e., target data).
According to a first aspect, the first image is a high dynamic range HDR image, the first image comprises an HDR layer and a standard dynamic range SDR layer, the SDR layer comprises the graphic element, the HDR layer comprises a second image, the second image is an HDR image, and the dynamic range of the pixel value of the pixel point in the region where the graphic element is located is smaller than the dynamic range of the pixel value of the pixel point in the second image.
According to the first aspect, or any implementation manner of the first aspect, the tone mapping manner of the second image is different from the tone mapping manner of the region where the graphic element is located.
For example, tone mapping information corresponding to the second image may be used to tone map the second image. For example, tone mapping information corresponding to the second image may be determined based on pixel values of pixel points in the second image.
According to the first aspect, or any implementation manner of the first aspect, the graphic element comprises at least one of a watermark, a logo, a user interface, a map.
According to the first aspect, or any implementation manner of the first aspect, the description information of the graphic element includes at least one of category information of the graphic element, region information of a region where the graphic element is located, or tone mapping information corresponding to the graphic element.
The region where the graphic element is located may refer to the region covered by the pixel points included in the graphic element (for example, the region where the watermark "HUAWEImate60" is located may refer to the region covered by the pixel points of the character string "HUAWEImate"), or to a circumscribed box of that region (for example, the region where the watermark "HUAWEImate" is located may refer to the circumscribed box of the region covered by the pixel points of the character string "HUAWEImate").
The category information of the graphic element may be a category indicator or a category index, for example.
The tone mapping information corresponding to the graphic element may be, for example, a tone mapping table or a tone mapping curve.

According to the first aspect, or any implementation manner of the first aspect, the region information of the region where the graphic element is located includes at least one of: the shape of the region, the size of the region, the position of the region, and pixel-position mask information of the region.
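The description-information fields listed above can be grouped into a simple record. Every field name below is illustrative, not a normative syntax, and all fields are optional to match the "at least one of" wording.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class GraphicElementInfo:
    """Description info of a graphic element (illustrative field names)."""
    category: Optional[str] = None                 # e.g. "watermark", "ui"
    region_shape: Optional[str] = None             # e.g. "rect"
    region_size: Optional[Tuple[int, int]] = None  # (width, height)
    region_pos: Optional[Tuple[int, int]] = None   # (x, y) top-left corner
    pixel_mask: Optional[List[int]] = None         # per-pixel membership
    tone_curve: Optional[List[float]] = None       # tone mapping table

info = GraphicElementInfo(category="watermark",
                          region_shape="rect",
                          region_size=(120, 32),
                          region_pos=(8, 8))
```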
The shape of the region where the graphic element is located may refer to shape description information of the region where the graphic element is located.
According to a first aspect, or any implementation manner of the first aspect, generating first target data based on the first image and the description information of the graphic element includes encoding the first image to obtain a code stream, and writing the description information of the graphic element into the code stream to obtain the first target data.
Illustratively, the description information of the graphic element may be regarded as metadata, and the description information of the graphic element may be written into the code stream in a coding manner of the metadata.
According to the first aspect, or any implementation manner of the first aspect, encoding the first image to obtain a code stream includes: tone mapping the region where the graphic element is located in the first image based on the description information of the graphic element to obtain a third image; generating enhancement data based on the first image and the third image, where the enhancement data includes enhancement values of all pixel points in the first image; adjusting the enhancement values of the pixel points in the region where the graphic element is located to a preset value to obtain adjusted enhancement data; and encoding the third image and the adjusted enhancement data to obtain the code stream. In this way, a terminal device that only supports SDR display can, after receiving the target data, decode the base layer code stream to obtain an SDR image (i.e., a reconstructed image of the third image) and display it. A terminal device that supports HDR display can decode the base layer code stream to obtain the SDR image and decode the enhancement layer code stream to obtain the enhancement data, then synthesize and display the HDR image (i.e., a reconstructed image of the first image) based on the SDR image and the enhancement data. This improves the compatibility of the target data.
In addition, adjusting the enhancement values of the pixel points in the region where the graphic element is located to a preset value makes the HDR image synthesized from the SDR image and the enhancement data closer to the HDR image before encoding (i.e., the first image).
The third image may be regarded as base data: the third image may be input to an encoder to obtain a base layer bitstream (which may also be called the third-image bitstream), and the enhancement data may be input to the encoder to obtain an enhancement layer bitstream (which may also be called the enhancement-data bitstream). A code stream including both a base layer bitstream and an enhancement layer bitstream may be referred to as a dual-layer code stream.
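The dual-layer scheme can be sketched on per-pixel luminances. The additive enhancement model (HDR pixel = SDR pixel + enhancement value) and the preset value of 0 are assumptions for illustration; the application does not fix the exact formula.

```python
def make_enhancement_data(first_image, third_image, mask, preset=0.0):
    """Per-pixel enhancement value = HDR pixel minus tone-mapped SDR
    pixel, with values inside the graphic-element region forced to the
    preset, so the synthesized HDR keeps the SDR rendering of the
    graphic element."""
    enh = [h - s for h, s in zip(first_image, third_image)]
    return [preset if m else e for e, m in zip(enh, mask)]

def synthesize_hdr(third_image, enhancement):
    """What an HDR-capable receiver does: base SDR plus enhancement."""
    return [s + e for s, e in zip(third_image, enhancement)]

first = [2.0, 1.5, 3.0]             # HDR luminances (first image)
third = [0.8, 0.9, 0.75]            # tone-mapped SDR (third image)
mask  = [False, True, False]        # pixel 1 belongs to a graphic element
enh   = make_enhancement_data(first, third, mask)
hdr   = synthesize_hdr(third, enh)  # pixel 1 stays at its SDR value 0.9
```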
In one possible way, the first image may be directly encoded to obtain a code stream with a single-layer structure. Specifically, the first image may be input to the encoder to obtain the code stream of the first image output by the encoder, where the code stream of the first image has a single-layer structure (it should be understood that the single-layer structure is named in contrast to the dual-layer structure).
According to the first aspect, or any implementation manner of the first aspect, tone mapping the region where the graphic element is located in the first image based on the description information of the graphic element to obtain a third image includes: when the description information of the graphic element is category information of the graphic element or region information of the region where the graphic element is located, determining tone mapping information corresponding to the graphic element based on the description information, and tone mapping the region where the graphic element is located in the first image based on that tone mapping information to obtain the third image. In this case, different categories of graphic elements may correspond to different tone mapping curves.
According to the first aspect, or any implementation manner of the first aspect, determining tone mapping information corresponding to the graphic element based on the description information of the graphic element includes determining preset tone mapping information as tone mapping information corresponding to the graphic element. In this case, different graphic elements may correspond to the same tone mapping curve, i.e. a preset tone mapping curve, such as a curve corresponding to the function y=x.
According to the first aspect, or any implementation manner of the first aspect, when the description information of the graphic element is category information of the graphic element, determining tone mapping information corresponding to the graphic element based on the description information includes: obtaining a first mapping relationship, which includes correspondences between a plurality of category information items and a plurality of tone mapping information items, where one category information item corresponds to one graphic element and one tone mapping information item; selecting, based on the first mapping relationship, the tone mapping information corresponding to the category information of the graphic element; and determining it as the tone mapping information corresponding to the graphic element. In this way, even when the description information includes only the category information of the graphic element, the corresponding tone mapping information can be obtained. Moreover, compared with description information consisting of region information, category information requires a lower code rate overhead.
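The first mapping relationship (category information to tone mapping information) is essentially a lookup table. The categories and curves below are illustrative assumptions, with the identity curve y = x serving as the preset fallback mentioned earlier.

```python
identity = lambda x: x  # the preset curve y = x mentioned above

# First mapping relationship: one category -> one tone-mapping curve.
CATEGORY_TO_CURVE = {
    "watermark": identity,
    "ui":        identity,
    "logo":      lambda x: min(x, 1.0),  # illustrative clipping curve
}

def curve_for_category(category, default=identity):
    """Select tone mapping info by the element's category info,
    falling back to the preset curve for unknown categories."""
    return CATEGORY_TO_CURVE.get(category, default)
```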
According to the first aspect, or any implementation manner of the first aspect, when the description information of the graphic element is region information of the region where the graphic element is located, determining tone mapping information corresponding to the graphic element based on the description information includes: obtaining a second mapping relationship, which includes correspondences between a plurality of region information items and a plurality of tone mapping information items, where one tone mapping information item corresponds to one or more region information items; selecting, based on the second mapping relationship, the tone mapping information corresponding to the region information of the region where the graphic element is located; and determining it as the tone mapping information corresponding to the graphic element. In this way, even when the description information includes only the region information, the corresponding tone mapping information can be obtained. Moreover, compared with description information consisting of category information or tone mapping information, region information allows the region where the graphic element is located to be determined more accurately later, which facilitates accurate tone mapping and guarantees the quality of the resulting image.
According to a first aspect, or any implementation manner of the first aspect, the acquiring the description information of the graphic element in the first image includes generating the description information of the graphic element based on the first image.
Thus, whether the graphic element in the first image is a watermark added during photographing, a UI intercepted during screen capturing, a graphic element added based on a history editing operation, or a newly added graphic element, the description information of the graphic element can be generated based on the first image.
Wherein the history editing operation may refer to an editing operation for the first image before the data generating process of the present application is performed. The history editing operation may be performed by the electronic device that performs the data generating method, or may be performed by another electronic device.
According to a first aspect, or any implementation of the first aspect above, the method further comprises receiving an editing operation for the first image, the editing operation for adding the graphical element to the first image. Wherein, after the first image is acquired, an editing operation for the first image may be received; the electronic device may add a graphical element in the first image based on the editing operation.
According to a first aspect or any implementation manner of the first aspect, the method further includes receiving second target data, where the second target data includes the first image and description information of the graphic element, acquiring the first image includes acquiring the first image from the second target data, and acquiring the description information of the graphic element in the first image includes acquiring the description information of the graphic element from the second target data. That is, the description information of the graphic element may be received from other electronic devices, and the electronic device performing the data generating method does not need to generate the description information of the graphic element, so that the efficiency of acquiring the description information of the graphic element may be improved.
According to a first aspect or any implementation manner of the first aspect, generating the description information of the graphic element based on the first image includes determining type information of the graphic element and/or region information of a region where the graphic element is located based on the first image, determining tone mapping information corresponding to the graphic element based on at least one of the type information of the graphic element or region information of the region where the graphic element is located, and generating the description information of the graphic element according to at least one of the type information of the graphic element, region information of the region where the graphic element is located or tone mapping information corresponding to the graphic element.
According to the first aspect, or any implementation manner of the first aspect, determining, based on the first image, the category information of the graphic element and/or the region information of the region where the graphic element is located includes: identifying a first region in the first image whose pixel values meet a preset condition, and determining the category information of the graphic element and/or the region information of the region where the graphic element is located based on at least one of the size of the first region, the shape of the first region, the position of the first region, the pixel values of the pixel points in the first region, or pixel-position mask information of the first region.
According to the first aspect, or any implementation manner of the first aspect, determining, based on the first image, the category information of the graphic element and/or the region information of the region where the graphic element is located includes: comparing the first image with a plurality of reference images for similarity, where one reference image corresponds to one preset graphic element; and, when a second region whose similarity to a first reference image is greater than a similarity threshold exists in the first image, determining the category information of the preset graphic element corresponding to the first reference image as the category information of the graphic element, and/or determining the region information of the second region as the region information of the region where the graphic element is located, where the first reference image is any one of the plurality of reference images.
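The similarity comparison can be sketched as naive template matching over a grayscale image stored as nested lists. The similarity metric (1 minus mean absolute difference) and the threshold are assumptions; an implementation might use any comparable metric.

```python
def find_element(image, template, threshold=0.95):
    """Slide a reference template over a grayscale image (nested lists,
    values in [0, 1]) and return region info for the first window whose
    similarity exceeds the threshold, or None."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            diff = sum(abs(image[y + i][x + j] - template[i][j])
                       for i in range(h) for j in range(w))
            if 1.0 - diff / (h * w) > threshold:
                return {"pos": (x, y), "size": (w, h)}
    return None

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
logo = [[1, 1],
        [1, 1]]
region = find_element(img, logo)  # {'pos': (1, 1), 'size': (2, 2)}
```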
According to the first aspect or any implementation manner of the first aspect, determining, based on the first image, category information of the graphic element and/or region information of a region where the graphic element is located includes determining, based on an editing operation for the first image, category information of the graphic element and/or region information of the region where the graphic element is located. The editing operation may be a current editing operation or a history editing operation.
In a second aspect, the application provides an image processing method, which includes: first, acquiring a first image and description information of a graphic element in the first image; then, determining, based on the description information, the region where the graphic element is located in the first image and the tone mapping information corresponding to the graphic element; and then tone mapping the region where the graphic element is located based on the tone mapping information corresponding to the graphic element to obtain a second image.
In this method, the description information of the graphic element in the first image may be acquired from received target data, and the tone mapping of the region where the graphic element is located can be separated from the tone mapping of the other regions: for the graphic-element region, tone mapping information (such as a tone mapping curve) corresponding to the graphic element is determined based on the description information in the target data, and that region is then tone mapped with this information. Because tone mapping information that suits the graphic element, and that differs from the information used for the other regions, is applied to its region, quality problems such as detail loss, reduced contrast, color distortion, overexposure or underexposure, halation, inconsistent brightness after multiple rounds of processing, excessively dark brightness after processing, or brightness inconsistent with the graphical user interface do not occur in that region of the processed image.
The image processing described above may be, for example, image conversion.
The first image may be an HDR image or an SDR image, for example.
The first image may be, for example, an image decoded from a code stream, a screenshot obtained by screen capture, or a photograph obtained by photographing.
For example, the descriptive information of the graphical element in the first image may be received from other electronic devices.
For example, the description information of the graphic element in the first image may be generated based on the first image.
Illustratively, tone mapping of the present application may refer to a process of converting an HDR image into a target image of a smaller dynamic range (e.g., an SDR image or another HDR image of a dynamic range less than the HDR image), or may refer to a process of converting an SDR image into a target image of a larger dynamic range (e.g., an HDR image or another SDR image of a dynamic range greater than the SDR image).
In a possible manner, tone mapping information corresponding to the graphic element may be used to tone-map an area where the graphic element is located in the first image, and tone mapping information corresponding to the first image may be used to tone-map other areas in the first image, so as to obtain the target image. The tone mapping information corresponding to the graphic element is different from the tone mapping information corresponding to the first image, and it is also understood that the tone mapping manner of the region where the graphic element is located in the first image is different from the tone mapping manner of other regions in the first image.
For example, tone mapping information corresponding to the graphic element may be used to tone map the pixel values of all pixel points in the region where the graphic element is located in the first image, obtaining first pixel values for those pixel points. In one possible manner, these first pixel values are used directly as the pixel values of the corresponding pixel points in the target image.

In another possible manner, tone mapping information corresponding to the first image may additionally be used to tone map the pixel values of the pixel points in the region where the graphic element is located, obtaining second pixel values for those pixel points. A weighted combination of the first and second pixel values then gives the pixel values of the region in the target image. In this second manner, the transition between the region where the graphic element is located and the other regions of the target image can be smoother. The tone mapping information corresponding to the first image may be received from other electronic devices, or generated based on the pixel values of the pixel points in the other regions of the first image.

In a further possible manner, tone mapping information corresponding to the graphic element may be used to tone map the region where the graphic element is located, while the other regions of the first image are not tone mapped, so as to obtain the target image.
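The weighted combination in the second manner is a per-pixel blend of the two tone-mapped results. The equal weighting below is an assumption; the application leaves the weighting open.

```python
def blend_graphic_region(first_vals, second_vals, weight=0.5):
    """Weighted combination of the element-curve result (first pixel
    values) and the image-curve result (second pixel values) for the
    pixels in the graphic-element region."""
    return [weight * a + (1.0 - weight) * b
            for a, b in zip(first_vals, second_vals)]

first_vals  = [0.9, 0.8]  # from the graphic element's tone mapping info
second_vals = [0.5, 0.6]  # from the first image's tone mapping info
blended = blend_graphic_region(first_vals, second_vals)
```

A weight near 1.0 favors the graphic element's own curve; lowering it pulls the region toward the surrounding image's rendering, smoothing the seam.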
That is, the region in which the graphic element is located in the target image is obtained by tone mapping the region in which the graphic element is located in the first image, and other regions in the target image are the same as other regions in the first image.
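The weighted-blend variant described above can be sketched as follows. This is a hypothetical illustration only: the two curves `tm_element` and `tm_image`, the dictionary-based image representation, and the blend weight are assumptions made for the example, not details specified by the application.

```python
# Hypothetical sketch of the weighted-blend variant: pixels inside the
# graphic-element region blend two tone mapping curves; pixels outside
# it use the image's own curve alone.

def tm_element(v):
    # stand-in for the tone mapping information of the graphic element
    return 0.5 * v

def tm_image(v):
    # stand-in for the tone mapping information of the first image
    return 0.8 * v

def blend_region(first_image, region, weight=0.7):
    """first_image: dict of (x, y) -> pixel value; region: set of
    (x, y) positions covered by the graphic element."""
    target = {}
    for pos, v in first_image.items():
        if pos in region:
            first_val = tm_element(v)   # "first pixel value"
            second_val = tm_image(v)    # "second pixel value"
            target[pos] = weight * first_val + (1 - weight) * second_val
        else:
            target[pos] = tm_image(v)
    return target
```

Raising `weight` toward 1 makes the element region follow its own curve more closely; lowering it pulls the region toward the surrounding mapping, which is what smooths the transition.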
According to a second aspect, the first image is a high dynamic range HDR image, the first image comprises an HDR layer and a standard dynamic range SDR layer, the SDR layer comprises the graphic element, the HDR layer comprises a second image, the second image is an HDR image, and the dynamic range of the pixel values of the pixel points in the region where the graphic element is located is smaller than the dynamic range of the pixel values of the pixel points in the second image.
According to the second aspect, or any implementation manner of the second aspect, tone mapping the area where the graphic element is located in the first image based on tone mapping information corresponding to the graphic element to obtain the target image includes: tone mapping the SDR layer based on first tone mapping information corresponding to the graphic element, and tone mapping the HDR layer based on second tone mapping information corresponding to the second image, so as to obtain the target image, where the first tone mapping information is different from the second tone mapping information.
That is, the SDR layer and the HDR layer of the first image are tone mapped separately, i.e., tone mapping is performed layer by layer. In this way, the transition at the junction between the region where the graphic element is located and the other regions in the target image can be smoother.
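The layer-by-layer tone mapping described above can be sketched roughly as follows; the curve shapes, the `(value, alpha)` pixel representation, and the alpha-compositing step are illustrative assumptions rather than the mandated implementation.

```python
# Hypothetical layer-by-layer sketch: the SDR layer (carrying the
# graphic element) and the HDR layer are tone-mapped with different
# curves and then alpha-composited into the target image.

def tone_map_layers(sdr_layer, hdr_layer, first_tm, second_tm):
    """Each layer is a list of (value, alpha) pixels; alpha > 0 marks
    pixels covered by the graphic element in the SDR layer."""
    target = []
    for (sv, sa), (hv, _) in zip(sdr_layer, hdr_layer):
        mapped_sdr = first_tm(sv)    # first tone mapping information
        mapped_hdr = second_tm(hv)   # second tone mapping information
        # composite the two separately mapped layers
        target.append(sa * mapped_sdr + (1 - sa) * mapped_hdr)
    return target
```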
According to a second aspect, or any implementation manner of the second aspect, the first tone mapping information is tone mapping information corresponding to the SDR layer, or the first tone mapping information is generated based on the second tone mapping information and third tone mapping information, and the third tone mapping information is tone mapping information corresponding to the SDR layer.
Illustratively, the tone mapping information corresponding to the SDR layer may be tone mapping information corresponding to the graphic element.
According to a second aspect, or any implementation manner of the second aspect, the acquiring the first image and the description information of the graphic element in the first image includes receiving target data, where the target data includes the first image and the description information of the graphic element in the first image, and acquiring the first image and the description information of the graphic element in the first image from the target data.
According to a second aspect, or any implementation manner of the second aspect, the description information of the graphic element includes at least one of category information of the graphic element, region information of a region where the graphic element is located, or tone mapping information corresponding to the graphic element.
According to a second aspect or any implementation manner of the second aspect, when the description information of the graphic element is category information of the graphic element, determining tone mapping information corresponding to the graphic element based on the description information of the graphic element includes obtaining a first mapping relationship, where the first mapping relationship includes correspondence between a plurality of category information and a plurality of tone mapping information, one category information corresponds to one graphic element and one tone mapping information, selecting tone mapping information corresponding to the category information of the graphic element from the plurality of tone mapping information based on the first mapping relationship, and determining tone mapping information corresponding to the category information of the graphic element as tone mapping information corresponding to the graphic element.
According to a second aspect or any implementation manner of the second aspect, when the description information of the graphic element is the region information of the region where the graphic element is located, determining tone mapping information corresponding to the graphic element based on the description information of the graphic element includes obtaining a second mapping relationship, where the second mapping relationship includes correspondence between a plurality of region information and a plurality of tone mapping information, one tone mapping information corresponds to one or more region information, selecting tone mapping information corresponding to the region information of the region where the graphic element is located from the plurality of tone mapping information based on the second mapping relationship, and determining tone mapping information corresponding to the region information of the region where the graphic element is located as tone mapping information corresponding to the graphic element.
According to the second aspect or any implementation manner of the second aspect, when the description information of the graphic element is the category information of the graphic element, determining the region where the graphic element is located in the first image based on the description information of the graphic element includes: obtaining a third mapping relationship, where the third mapping relationship includes a correspondence between a plurality of category information and a plurality of region information, one category information corresponds to one category of graphic element, and one category information corresponds to one or more region information; selecting the region information corresponding to the category information of the graphic element from the plurality of region information based on the third mapping relationship; and determining the region where the graphic element is located in the first image based on the region information corresponding to the category information of the graphic element.
According to the second aspect or any implementation manner of the second aspect, when the description information of the graphic element is tone mapping information corresponding to the graphic element, determining, based on the description information of the graphic element, the area where the graphic element is located in the first image includes: obtaining a fourth mapping relationship, where the fourth mapping relationship includes a correspondence between a plurality of tone mapping information and a plurality of area information, and one tone mapping information corresponds to one or more area information; selecting, based on the fourth mapping relationship, the area information corresponding to the tone mapping information corresponding to the graphic element from the plurality of area information; and determining, based on the area information corresponding to the tone mapping information corresponding to the graphic element, the area where the graphic element is located in the first image.
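The first and third mapping relationships above amount to lookup tables keyed by category information; the sketch below pictures them as plain dictionaries. The category names, curve identifiers, and region boxes are invented for the example.

```python
# Illustrative first mapping relationship:
# category information -> tone mapping information.
FIRST_MAPPING = {
    "watermark": "tm_curve_a",
    "logo": "tm_curve_b",
    "ui_element": "tm_curve_c",
}

# Illustrative third mapping relationship:
# category information -> one or more region information entries,
# each region given as an (x0, y0, x1, y1) box.
THIRD_MAPPING = {
    "watermark": [(900, 40, 1020, 80)],
    "ui_element": [(0, 0, 1920, 60), (0, 1020, 1920, 1080)],
}

def tone_mapping_for(category):
    """Select tone mapping information by category (first mapping)."""
    return FIRST_MAPPING[category]

def regions_for(category):
    """Select region information by category (third mapping)."""
    return THIRD_MAPPING[category]
```

The second and fourth mapping relationships would follow the same pattern with region information or tone mapping information as the lookup key.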
The second aspect and any implementation manner of the second aspect correspond to the first aspect and any implementation manner of the first aspect, respectively. For the technical effects of the second aspect and any implementation manner of the second aspect, reference may be made to the technical effects of the first aspect and the corresponding implementation manner of the first aspect, which are not repeated here.
In a third aspect, the present application provides data comprising a first image and descriptive information of a graphic element in the first image, the descriptive information of the graphic element being used to tone map an area of the first image in which the graphic element is located.
According to a third aspect, the data comprises a code stream comprising encoded data of the first image and encoded data of the description information of the graphic element.
According to a third aspect, the data comprises a code stream comprising encoded data of the first image.
The code stream may be a two-layer code stream or a single-layer code stream, for example. If the code stream is a two-layer structure code stream including a base layer code stream and an enhancement layer code stream, the encoded data of the base data included in the base layer code stream and the encoded data of the enhancement data included in the enhancement layer code stream may be referred to as encoded data of the first image.
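A minimal container sketch of the two-layer code stream described above is given below; the class and field names are assumptions for illustration, not a format defined by the application.

```python
# Toy model of a two-layer-structure code stream: the base-layer and
# enhancement-layer payloads together form the encoded data of the
# first image; the description information of the graphic element may
# ride alongside them.

from dataclasses import dataclass

@dataclass
class TwoLayerCodeStream:
    base_layer: bytes                  # encoded base data
    enhancement_layer: bytes           # encoded enhancement data
    element_description: bytes = b""   # encoded description information

    def encoded_image_data(self):
        """Both layer payloads together are referred to as the
        encoded data of the first image."""
        return self.base_layer + self.enhancement_layer
```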
In a fourth aspect, the present application provides a data generating apparatus, which may include:
the image acquisition module is used for acquiring a first image;
the description information acquisition module is used for acquiring the description information of the graphic elements in the first image, wherein the description information of the graphic elements is used for tone mapping the region where the graphic elements in the first image are located;
and the data generation module is used for generating first target data based on the first image and the description information of the graphic element.
The above-described data generating apparatus may be used for performing the method of the first aspect or any possible implementation of the first aspect.
In a fifth aspect, the present application provides an image processing apparatus, which may include:
the acquisition module is used for acquiring the first image and the description information of the graphic elements in the first image;
The information determining module is used for determining the region where the graphic element is located in the first image and tone mapping information corresponding to the graphic element based on the description information of the graphic element;
and the tone mapping module is used for tone mapping the region where the graphic element is positioned in the first image based on tone mapping information corresponding to the graphic element so as to obtain a target image.
The image processing apparatus described above may be used for performing the method of the second aspect or any possible implementation of the second aspect.
In a sixth aspect, the application provides an electronic device comprising a memory and a processor, the memory being coupled to the processor, the memory storing program instructions that when executed by the processor cause the electronic device to perform the method of the first aspect or any of the possible implementations of the first aspect or perform the method of the second aspect or any of the possible implementations of the second aspect.
In a seventh aspect, the present application provides a chip comprising one or more interface circuits and one or more processors, the one or more processors receiving or transmitting data via the one or more interface circuits, the one or more processors, when executing computer instructions, causing the chip to perform the method of the first aspect or any of the possible implementations of the first aspect, or to perform the method of the second aspect or any of the possible implementations of the second aspect.
In an eighth aspect, the present application provides a computer readable storage medium storing a computer program which when run on a computer or processor causes the computer or processor to perform the method of the first aspect or any possible implementation of the first aspect or to perform the method of the second aspect or any possible implementation of the second aspect.
In a ninth aspect, the application provides a computer program product comprising computer instructions which, when executed by a computer or processor, cause the computer or processor to perform the method of the first aspect or any possible implementation of the first aspect, or to perform the method of the second aspect or any possible implementation of the second aspect.
In a tenth aspect, the present application provides a computer readable storage medium storing data, the data including a first image and description information of a graphic element in the first image, the description information of the graphic element being used for tone mapping an area in which the graphic element is located in the first image.
In an eleventh aspect, the present application provides an apparatus for storing data, the apparatus comprising a receiver for receiving data and at least one storage medium for storing data, the data being data in any one of the above third aspect and implementations of the third aspect.
In a twelfth aspect, the present application provides an apparatus for transmitting data, where the apparatus includes a transmitter and at least one storage medium, where the at least one storage medium is configured to store data, where the data is data in any implementation manner of the third aspect and the third aspect, and the transmitter is configured to obtain the data from the storage medium and send the data to an end-side device through the transmission medium.
In a thirteenth aspect, the present application provides a system for distributing data, where the system includes at least one storage medium configured to store at least one data, where the at least one data is data in any one of the foregoing third aspect and the third aspect, and a streaming media device configured to obtain target data from the at least one storage medium and send the target data to an end-side device, where the streaming media device includes a content server or a content distribution server.
In a fourteenth aspect, the present application provides a system for displaying, the system comprising at least one storage medium for storing at least one target data, the at least one target data being generated according to the first aspect and any one of the implementations of the first aspect, a display device for obtaining the target data from the at least one storage medium and decoding the target data for display.
The electronic device, the computer readable storage medium, the computer program product, the chip or codec, the system, and the like provided in the embodiments are used to execute the corresponding methods provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which are not repeated here.
Drawings
FIG. 1A is a schematic diagram of an application scenario according to an embodiment of the present application;
FIG. 1B is a schematic diagram of another application scenario according to an embodiment of the present application;
FIG. 1C is a schematic diagram of another application scenario according to an embodiment of the present application;
FIG. 1D is a schematic diagram of another application scenario according to an embodiment of the present application;
FIG. 1E is a schematic diagram of yet another application scenario according to an embodiment of the present application;
FIG. 1F is a block diagram of a system 100 according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a data generation process 200 according to an embodiment of the present application;
FIG. 3A is a schematic diagram of another data generation process 300 according to an embodiment of the present application;
FIG. 3B is a schematic diagram of a data generation process according to an embodiment of the present application;
FIG. 3C is a schematic diagram of a data generation process according to an embodiment of the present application;
FIG. 4A is a schematic diagram of yet another data generation process 400 according to an embodiment of the present application;
FIG. 4B is a schematic diagram of a data generation process according to an embodiment of the present application;
FIG. 4C is a schematic diagram of a data generation process according to an embodiment of the present application;
FIG. 5A is a schematic diagram of yet another data generation process 500 according to an embodiment of the present application;
FIG. 5B is a schematic diagram of a data generation process according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an image processing process 600 according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another image processing process 700 according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a data generating apparatus 800 according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an image processing apparatus 900 according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present application with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may mean that A exists alone, A and B exist together, or B exists alone.
The terms first and second and the like in the description and in the claims of embodiments of the application, are used for distinguishing between different objects and not necessarily for describing a particular sequential order of objects. For example, the first target object and the second target object, etc., are used to distinguish between different target objects, and are not used to describe a particular order of target objects.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more. For example, a plurality of processing units refers to two or more processing units, and a plurality of systems refers to two or more systems.
In the embodiments of the present application, the modules/components shown in the block diagrams (or structure diagrams or system diagrams) are only an example of the present application; an actual framework (or structure or system) may include more or fewer modules/components than those shown, or may have a different component configuration. The components/modules shown in the schematic diagrams may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
Fig. 1A is a schematic diagram of an application scenario according to an embodiment of the present application. The application scene shown in fig. 1A is an image editing scene, specifically, editing an image in the image editing interface 101.
The image editing interface 101 in FIG. 1A may include, but is not limited to, elements such as a crop option, a map option 102, a filter option, a watermark option, a completion option 103, and so forth. The user may edit the HDR image, for example, by clicking on the map option 102 to add a map 104 to the HDR image. After the user finishes editing the HDR image, the user can click on the completion option 103, and the mobile phone can, in response to the user's operation, generate target data by adopting the data generation method provided in the present application. The target data may include the HDR image after the map is added and description information of the map, and the description information of the map may be used to tone-map the area corresponding to the map in the HDR image. When another electronic device needs to convert the HDR image with the added map into an SDR image, it can use the description information of the map obtained from the target data to apply, to the area corresponding to the map, tone mapping different from that applied to other areas in the HDR image, so as to obtain the SDR image.
Fig. 1B is a schematic diagram of another application scenario according to an embodiment of the present application. The application scene shown in fig. 1B is a photographing scene.
The photo interface 105 in (1) of FIG. 1B may include, but is not limited to, elements such as a photo frame, a photo option 106, an image preview interface, and the like. The user clicks the photo option 106, and the mobile phone can take a photograph in response to the user's operation, save the photograph in the album, and display a thumbnail of the photograph on the image preview interface 107, as shown in (2) of FIG. 1B. Illustratively, on some mobile phones, after a photograph is taken, a watermark (e.g., the mobile phone model) is added to the photograph to obtain a photograph with the watermark, where the watermark is shown as 108 in (2) of FIG. 1B.
For example, after obtaining a photo with a watermark, the mobile phone may generate target data by using the data generation method of the present application. The target data may include a photo (HDR image) after watermarking and description information of the watermark, where the description information of the watermark may be used to tone map an area corresponding to the watermark in the photo after watermarking. When the other electronic equipment has the requirement of converting the photo added with the watermark into an SDR image, tone mapping different from other areas in the HDR image can be carried out on the area corresponding to the watermark in the photo added with the watermark by utilizing the description information of the watermark obtained from the target data so as to obtain the SDR image.
Fig. 1C is a schematic diagram of another application scenario according to an embodiment of the present application. The application scene shown in fig. 1C is a screen capturing scene.
The video playback interface 109 in FIG. 1C may include, but is not limited to, video frames, UI elements (e.g., pause/play buttons, progress bars, next set buttons, last set buttons, fast forward buttons, fast reverse buttons, etc.), and the like. In the process of video playing, a user can execute a screen capturing operation, at this time, the mobile phone can respond to the operation behavior of the user to capture a screen to obtain a screen capturing image 110, wherein the screen capturing image 110 comprises a UI element 111. After the screen capturing image 110 is obtained, the mobile phone can generate the target data by adopting the data generation method provided by the application. In this case, the target data may include description information of the screen shot image 110 (HDR image) and the UI element 111, and the description information of the UI element may be used to tone-map the region corresponding to the UI element 111 in the screen shot image 110. In this way, when other electronic devices have the need of converting the screen capturing image 110 into an SDR image, the description information of the UI element obtained from the target data can be adopted to perform tone mapping on the region corresponding to the UI element in the screen capturing image 110, which is different from other regions in the HDR image, so as to obtain the SDR image.
Fig. 1D is a schematic diagram of another application scenario according to an embodiment of the present application. The application scenario illustrated in fig. 1D is an image forwarding scenario.
The image preview interface 112 in FIG. 1D may include, but is not limited to, a photo 113, a sharing option 114, a favorites option, an editing option, a delete option, and the like. When the user wants to share the photo 113, the user can click the sharing option 114; at this time, the mobile phone can, in response to the user's operation, generate target data by adopting the data generation method provided in the present application, and send the target data to another electronic device. In this case, the target data is similar to the target data in FIG. 1B: a watermark (such as the mobile phone model) may be added to the photo to obtain a photo with the watermark, and details are not repeated here. Thus, when another electronic device needs to convert the photo 113 into an SDR image, the description information of the watermark obtained from the target data may be used to tone-map the region corresponding to the watermark in the photo 113, so as to obtain the SDR image.
Fig. 1E is a schematic diagram of another application scenario according to an embodiment of the present application. The application scenario illustrated in fig. 1E is an image processing scenario. For example, the image processing scene may be an image conversion scene. Image conversion may refer to converting an image from one format to another, e.g., converting an HDR image to an SDR image.
Illustratively, the personal computer 11 transmits target data (target data includes a code stream obtained by directly encoding an HDR image and description information of graphic elements) to a plurality of terminal devices, such as a mobile phone 13, VR (Virtual Reality)/AR (Augmented Reality) glasses 14, a personal computer 15, a tablet computer 16, and the like, through a cloud server.
In a possible manner, for a terminal device (such as the mobile phone 13 and the tablet computer 16) only supporting displaying an SDR image, the image processing method provided by the application may be executed, that is, the description information of the graphic element and the HDR image are obtained from the target data, then, tone mapping is performed on the region where the graphic element is located in the HDR image based on the description information of the graphic element, and tone mapping is performed on other regions in the HDR image based on tone mapping information corresponding to the HDR image, so as to obtain and display the SDR image.
In a possible manner, for a terminal device (such as the AR/VR glasses 14 and the personal computer 15) supporting displaying an HDR image and simultaneously displaying an SDR image, the image processing method provided by the present application may also be executed, that is, the description information of the graphic element and the HDR image are obtained from the target data, then, based on the description information of the graphic element, tone mapping is performed on the area where the graphic element is located in the HDR image, and tone mapping is not performed on other areas in the HDR image (that is, other areas in the HDR image are not converted into the SDR image), so as to obtain the target image and display the target image. In this case, the region where the graphic element is located in the target image is an SDR image, and the other regions are HDR images.
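The behavior described for HDR-capable devices — tone-mapping only the graphic-element region while leaving the remaining pixels as HDR — can be sketched as follows; the list-of-values image representation and the example curve are assumptions made for the illustration.

```python
# Hypothetical sketch: only pixels covered by the graphic element are
# tone-mapped (becoming SDR); all other pixels keep their HDR values.

def map_element_region_only(pixels, element_mask, tm_element):
    """pixels: list of values; element_mask: parallel list of bools
    (True where the graphic element is located)."""
    return [tm_element(v) if inside else v
            for v, inside in zip(pixels, element_mask)]
```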
It should be understood that the present application may also include other application scenarios, for example, a scenario in which a third party platform adds a platform logo to a video or image (in which target data (including description information of the video or image and the platform logo after adding the platform logo) may also be generated), which is not limited by the present application.
It should be understood that the application scenario described above is illustrated by taking an image as an example, and the present application may also include application scenarios for video, such as a video editing scenario, a video recording scenario, a recording screen scenario, a video forwarding scenario, a video processing scenario, and the like.
Fig. 1F is a block diagram of a system 100 according to an embodiment of the application.
The system 100 in FIG. 1F may include a terminal device 11, a terminal device 12, ..., a terminal device 1N, a server, a terminal device 21, a terminal device 22, ..., a terminal device 2M, where N and M are positive integers.
In a possible manner, the data generating method of the present application may be performed by any one of the terminal devices 11, 12, 1N to generate target data, and then the target data is transmitted to a server, which distributes the target data to one or more of the terminal devices 21, 22, 2M.
It should be understood that, after any one of the terminal devices 11, 12, 1N executes the data generation method of the present application to generate the target data, the target data may be directly transmitted to one or more of the terminal devices 21, 22, 2M by Bluetooth, Huawei Share, or the like.
In a possible manner, the first image may be transmitted to the server by any one of the terminal apparatuses 11, 12, 1N, and the data generating method of the present application is performed by the server to generate the target data (including the first image), and then the target data is distributed to one or more of the terminal apparatuses 21, 22, 2M by the server.
Illustratively, after any one of the terminal apparatuses 21, 22, 2M receives the target data, the image processing method of the present application may be performed to obtain a target image, and then the target image may be displayed.
By way of example, the server may include, but is not limited to, a cloud server, a physical (standalone) server, a server cluster, and the like; the application is not limited in this regard.
By way of example, the terminal device may include, but is not limited to, a personal computer, a computer workstation, a smart phone, a tablet computer, a smart camera, a smart car, another type of cellular telephone, a media consumption device, a wearable device, a set-top box, a game console, and the like.
The data generation method and the image processing method according to the present application will be described below by taking one image as an example.
Fig. 2 is a schematic diagram of a data generation process 200 according to an embodiment of the application. The data generation process 200 may be performed by any one of the terminal devices 11, 12, 1N in fig. 1F, or may be performed by the server in fig. 1F.
S201, acquiring a first image.
Illustratively, the first image may be an image obtained by decoding a code stream (where the code stream may also be referred to as a bitstream or an encoded bitstream), a screenshot image obtained by taking a screenshot, or a photograph obtained by taking a photo.
The application does not limit whether the first image is an HDR image or an SDR image.
S202, acquiring description information of graphic elements in a first image, wherein the description information of the graphic elements is used for tone mapping of an area where the graphic elements in the first image are located.
Illustratively, the graphic elements may include elements used for identification, decoration, copyright protection, or information provision, such as watermarks, logos, maps, and the like.
A watermark may refer to a translucent mark superimposed on an image or video, typically used to prove ownership, prevent copying, or serve as a copyright statement.
A logo is a visual identifier of a company, brand, organization, or product. It is typically composed of text, graphics, or a combination of the two, and is intended to convey specific information and establish brand recognition.
A map refers to a visual element (e.g., text, a graphic, etc.) added to an image or video during video or image editing.
Illustratively, the graphic elements may also include user interface (UI) elements and similar elements related to user interaction.
UI elements refer to graphical components and controls for interacting with a user in software, applications, websites, or other interactive systems, such as progress bars, fast forward buttons, fast reverse buttons, search boxes, notification bars, and the like.
Illustratively, the first image may include one or more of the graphical elements described above, and there may be one or more instances of each graphical element in the first image.
Illustratively, tone mapping in the present application may refer to a process of converting an HDR image into a target image with a smaller dynamic range (e.g., an SDR image, or another HDR image whose dynamic range is smaller than that of the HDR image), or may refer to a process of converting an SDR image into a target image with a larger dynamic range (e.g., an HDR image, or another SDR image whose dynamic range is larger than that of the SDR image).
Illustratively, the description information of the graphic element includes at least one of type information of the graphic element, region information of the region in which the graphic element is located, or tone mapping information corresponding to the graphic element.
Tone mapping may be performed for other regions in the first image (i.e., regions other than the region in which the graphic element is located) in a different tone mapping manner than the region in which the graphic element is located. For example, tone mapping information corresponding to the first image is used to tone map other areas in the first image, wherein the tone mapping information corresponding to the first image is different from the tone mapping information corresponding to the graphic element. For example, tone mapping information corresponding to the first image may be determined based on pixel values of pixel points in other regions in the first image.
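The separate tone mapping of the graphic element's region and the other regions can be illustrated with the following Python sketch. It is a minimal, non-authoritative example: the function name, the use of simple linear curves, and the assumption of luminance values in [0, 1] are all illustrative assumptions, not the patented method.

```python
# Hypothetical sketch: apply one tone mapping curve to the region where a
# graphic element is located and a different curve to the other regions.

def tone_map_image(image, element_mask, element_curve, image_curve):
    """image: 2D list of luminance values in [0, 1].
    element_mask: 2D list, 1 where the graphic element is located, else 0.
    Each curve is a function mapping one luminance value to a new value."""
    return [
        [element_curve(v) if m else image_curve(v)
         for v, m in zip(row, mask_row)]
        for row, mask_row in zip(image, element_mask)
    ]

# Example: the identity curve (y = x) preserves the graphic element's
# brightness, while the rest of the image is compressed (y = 0.5x).
img = [[0.8, 0.8], [0.8, 0.8]]
mask = [[1, 0], [0, 0]]
out = tone_map_image(img, mask, lambda x: x, lambda x: 0.5 * x)
# out[0][0] stays 0.8; the other three pixels become 0.4
```

This keeps the graphic element (e.g., a watermark or logo) at its original brightness while the surrounding image content is tone mapped with its own information, as described above.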
S203, generating first target data based on the description information of the first image and the graphic element.
In one possible approach, the first image may be encoded to obtain a code stream, and then the description information of the graphic element may be written into the code stream to obtain the first target data. For example, the code stream may be encapsulated according to a video file format to obtain first intermediate data, and then the first intermediate data is encapsulated according to a network transmission protocol to obtain first target data.
In one possible approach, the first image may be encoded to obtain a code stream, then the description information of the code stream and the graphic element is encapsulated according to a video file format to obtain second intermediate data, and then the second intermediate data is encapsulated according to a network transmission protocol to obtain the first target data.
In one possible approach, the first image may be encoded to obtain a code stream, then the code stream is encapsulated in a video file format to obtain third intermediate data, and then the third intermediate data and the description information of the graphic element are encapsulated in a network transmission protocol to obtain first target data.
The first target data generated in S203 may be used for display, transmission, distribution, or storage. When the first image needs to be processed later, the tone mapping of the region where the graphic element is located can be separated from the tone mapping of the other regions of the first image: for the region where the graphic element is located, tone mapping information (such as a tone mapping curve) corresponding to the graphic element may be determined based on the description information of the graphic element in the target data, and tone mapping may then be performed on that region based on this tone mapping information, so as to implement the processing of the first image. That is, by adopting appropriate tone mapping information for the region where the graphic element is located, quality problems such as loss of detail, reduced contrast, color distortion, overexposure or underexposure, halo effects, flicker, inconsistent brightness after multiple rounds of processing, excessively dark brightness after processing, or brightness inconsistent with that of the graphical user interface can be avoided in that region of the processed image.
Illustratively, the first image may be an HDR image, the first image may include an HDR layer and an SDR layer, the SDR layer may include a graphic element, the HDR layer may include a second image, the second image is an HDR image, and a dynamic range of pixel values of pixel points in an area where the graphic element is located is smaller than a dynamic range of pixel values of pixel points in the second image. The first image will be described below as an example of an HDR image.
Fig. 3A is a schematic diagram of another data generation process 300 according to an embodiment of the application. The data generation process 300 describes a process of acquiring the description information of the graphic element in the first image, on the basis of the data generation process 200 described above, in a case where the acquired first image contains the graphic element, but the description information of the graphic element is not received from the other electronic device. The data generation process 300 may be performed by any one of the terminal devices 11, 12, 1N in fig. 1F, or may be performed by the server in fig. 1F.
S301, acquiring a first image.
Illustratively, the first image acquired in S301 is an HDR image.
S302, based on the first image, descriptive information of the graphic element is generated.
For example, it may be determined whether the first image includes a graphic element, and the description information of the graphic element may be generated when it is determined that the first image includes a graphic element.
Illustratively, the graphic element in the first image may be a watermark added at the time of photographing, a UI captured at the time of screen capture, or a graphic element added by a history editing operation. A history editing operation may refer to an editing operation performed on the first image before the data generation process of the present application is performed; it may be performed by the electronic device that performs the data generation process or by another electronic device. The following description takes the generation of description information for one graphic element as an example.
Illustratively, one way of determining whether the first image contains a graphical element may be as follows:
First, a first region in a first image satisfying a preset condition may be identified. In one possible manner, the preset condition may be that brightness values of all pixel points in the area are the same. In one possible manner, the preset condition may be that a difference between a maximum luminance value and a minimum luminance value in the region is smaller than a difference threshold. Wherein the difference threshold may be set as desired.
Next, it may be determined whether the first image includes a graphic element based on at least one of a size of the first region, a shape of the first region, a position of the first region, or a pixel value of a pixel point in the first region.
In one possible manner, it is determined whether the first image contains a graphic element based on the size of the first region. For example, it may be determined whether the size of the first region is equal to a preset size, and when the size of the first region is equal to the preset size, it may be determined that the first image contains a graphic element. Illustratively, the preset size may be set according to the size of a preset graphic element (the graphic elements included in the first image may be one or more of the preset graphic elements). The preset graphic elements may include at least one of watermarks, logos, UIs, and stickers, and there may be one or more of each kind. For example, the preset graphic elements may include a watermark of mobile phone model A, a watermark of mobile phone model B, a station logo of television station A, a station logo of television station B, a search box, a smiling face sticker, and the like. For example, the preset size may be set according to the size of a logo: if the size of the station logo of television station A is 12×15, the preset size may be set to 12×15. For another example, the preset size may be set according to the size of a UI element: if the size of a progress bar is 5×70, the preset size may be set to 5×70.
In one possible manner, it is determined whether the first image contains a graphic element based on the shape of the first region. For example, it may be determined whether the shape of the first region is a preset shape, and when the shape of the first region is the preset shape, it may be determined that the first image contains a graphic element. Illustratively, the preset shape may be set according to the shape of a preset graphic element. For example, the preset shape may be set according to the shape of a logo: if the station logo of television station B is triangular, the preset shape may be set to a triangle. For another example, the preset shape may be set according to the shape of a UI element: if the shape of a search box is rectangular, the preset shape may be set to a rectangle.
In one possible manner, it is determined whether the first image contains a graphic element based on the position of the first region. For example, it may be determined whether the position of the first region is a preset position, and when the position of the first region is the preset position, it may be determined that the first image contains a graphic element. Illustratively, the preset position may be set to a position in the image where preset graphic elements frequently appear. For example, a station logo in an image generally appears in the upper left corner, so the upper left corner may be set as a preset position. For another example, a watermark of a phone model in a photograph typically appears in the lower right corner, so the lower right corner may be set as a preset position.
In one possible manner, it is determined whether the first image includes a graphic element based on pixel values of pixel points in the first region. For example, it may be determined whether the brightness of the pixel points in the first region satisfies the brightness condition, and when the brightness of the pixel points in the first region satisfies the brightness condition, it may be determined that the first image contains a graphic element.
For example, the luminance condition may be that the average luminance value (or the maximum or minimum luminance value) of all pixel points in the first region is greater than a first luminance threshold, where the first luminance threshold may be a linear value (the value of the Y component) or a nonlinear value (such as a value in (0, 1)).
For example, the luminance condition may be that the average luminance value (or the maximum or minimum luminance value) of all pixel points in the first region is equal to a second luminance threshold, where the second luminance threshold may be a linear value (the value of the Y component) or a nonlinear value (such as a value in (0, 1)). The second luminance threshold may be the system reference white.
For example, the luminance condition may be that the average luminance value (or the maximum or minimum luminance value) of all pixel points in the first region is smaller than a third luminance threshold, where the third luminance threshold may be a linear value (the value of the Y component) or a nonlinear value (such as a value in (0, 1)).
Wherein the first luminance threshold is greater than the second luminance threshold, and the second luminance threshold is greater than the third luminance threshold. It should be understood that other brightness conditions may be included, as the application is not limited in this regard.
It should be appreciated that at least two of the above-described ways may also be combined to determine whether the first image contains a graphical element.
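The checks above (size, shape, position, luminance) and their combination can be sketched as follows. The dictionary layout, the particular preset values, and the rule that size must hold together with at least one other condition are all illustrative assumptions for this sketch, not the method itself.

```python
# Illustrative sketch combining the size, shape, position, and luminance
# checks described above; presets and thresholds are assumed values.

def region_is_graphic_element(region, presets):
    """region: dict with 'size', 'shape', 'position', and 'luma'
    (a list of pixel luminance values in [0, 1]).
    presets: dict of preset values to compare against."""
    size_ok = region["size"] in presets["sizes"]
    shape_ok = region["shape"] in presets["shapes"]
    position_ok = region["position"] in presets["positions"]
    mean_luma = sum(region["luma"]) / len(region["luma"])
    luma_ok = mean_luma > presets["first_luma_threshold"]
    # Combine at least two of the ways: size plus any other condition.
    return size_ok and (shape_ok or position_ok or luma_ok)

presets = {
    "sizes": {(12, 15), (5, 70)},              # e.g. station logo, progress bar
    "shapes": {"triangle", "rectangle"},
    "positions": {"upper-left", "lower-right"},
    "first_luma_threshold": 0.75,
}
station_logo = {"size": (12, 15), "shape": "triangle",
                "position": "upper-left", "luma": [0.9, 0.95, 0.88]}
background = {"size": (3, 3), "shape": "blob",
              "position": "center", "luma": [0.2, 0.3]}
```

Here the station logo passes the size check plus the shape check, while a generic background region fails the size check and is not treated as a graphic element.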
By way of example, another way of determining whether the first image contains a graphic element may be to compare the first image with a plurality of reference images in terms of similarity, where each reference image corresponds to one preset graphic element. When there is a second region in the first image whose similarity to a first reference image is greater than a similarity threshold, it is determined that the first image contains a graphic element.
For example, for each preset graphic element, a corresponding reference image may be generated. Then, a window (with the same size as the reference image) may be slid over the first image according to a preset step size to extract a plurality of comparison regions. Illustratively, the preset step size may include a lateral step size and a longitudinal step size, which refer to the distances by which the window is moved laterally and longitudinally, respectively, on the first image each time. Next, for one reference image, the reference image may be successively compared for similarity with the plurality of comparison regions. Specifically, the similarity between each of the plurality of comparison regions and the reference image may be determined; when the similarity between a comparison region and the reference image is greater than a similarity threshold, that comparison region may be referred to as the second region, and the reference image may be referred to as the first reference image. The similarity threshold may be set as desired, for example, 0.9, which is not limited by the present application.
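The sliding-window comparison described above can be sketched as follows. The similarity metric here (1 minus the mean absolute difference of luminance values), the step sizes, and the threshold are assumptions made for this example only.

```python
# Minimal sliding-window similarity sketch: slide a window the size of the
# reference image over the first image and report the first comparison
# region whose similarity to the reference exceeds the threshold.

def find_second_region(image, reference, step=(1, 1), threshold=0.9):
    """image, reference: 2D lists of luminance values in [0, 1].
    Returns the (row, col) of the second region's upper-left corner,
    or None if no comparison region is similar enough."""
    rh, rw = len(reference), len(reference[0])
    for r in range(0, len(image) - rh + 1, step[0]):        # longitudinal step
        for c in range(0, len(image[0]) - rw + 1, step[1]):  # lateral step
            diffs = [abs(image[r + i][c + j] - reference[i][j])
                     for i in range(rh) for j in range(rw)]
            similarity = 1.0 - sum(diffs) / len(diffs)
            if similarity > threshold:
                return (r, c)
    return None

image = [[0.0, 0.0, 0.0],
         [0.0, 1.0, 1.0],
         [0.0, 1.0, 1.0]]
reference = [[1.0, 1.0],
             [1.0, 1.0]]
# The 2x2 block of ones starting at (1, 1) matches the reference exactly.
```

In practice a more robust metric (e.g., normalized cross-correlation) would likely be used, but the control flow of the comparison is the same.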
The electronic device may record editing information corresponding to editing operations performed on the first image. For example, the editing information may include a processing manner (such as adding a sticker, deleting a sticker, or adding a filter), a processing position, and type information of the processing object, so that which graphic element was added where in the first image can be determined from the editing information. Accordingly, another way of determining whether the first image contains a graphic element may be to receive history editing information of the first image from another electronic device, or to acquire history editing information recorded by the electronic device executing the above data generation method for history editing operations on the first image, and then determine, based on the history editing information, whether the first image contains a graphic element.
It should be appreciated that other implementations of determining whether the first image includes a graphical element are possible, as the application is not limited in this regard.
For example, when it is determined that the first image contains a graphic element, the following steps S3021 to S3023 may be performed to realize generating the description information of the graphic element.
S3021, determining, based on the first image, type information of the graphic element and/or area information of an area in which the graphic element is located.
In a possible manner, the type information of the graphic element may be determined based on at least one of the size of the first region, the shape of the first region, the position of the first region, the pixel values of the pixel points in the first region, or pixel mask (mask) information of the first region.
In one possible manner, the type information of the graphic element is determined based on the size of the first region. For example, the first preset relationship may be established based on the preset graphic element and the size of the preset graphic element. Wherein a preset graphic element may correspond to one or more sizes. In this way, the kind information of the graphic element can be determined based on the size of the first region and the first preset relationship.
In one possible manner, the type information of the graphic element is determined based on the shape of the first region. For example, the second preset relationship may be established based on the preset graphic element and the shape of the preset graphic element. Wherein, a preset graphic element can correspond to one or more shapes. In this way, the kind information of the graphic element can be determined according to the shape of the first region and the second preset relationship.
In one possible manner, the type information of the graphic element is determined based on the position of the first region. For example, the third preset relationship may be established based on the preset graphic element and a position in the image where the occurrence frequency of the preset graphic element is relatively high. One of the preset graphic elements may correspond to one or more positions in the image where the occurrence frequency of the preset graphic element is high. In this way, the kind information of the graphic element can be determined according to the position of the first area and the third preset relationship.
In a possible manner, the type information of the graphic element is determined based on the pixel values of the pixel points in the first area. Illustratively, each preset graphic element typically has a fixed luminance characteristic, e.g., the average value (or maximum luminance value or minimum luminance value) of the pixel values of all the pixel points in the station logo of the television station A is the preset luminance value. Accordingly, the fourth preset relationship may be established based on the preset graphic elements and the luminance characteristics of the preset graphic elements. Wherein a predetermined graphic element may correspond to a luminance feature. In this way, when it is determined that the first image includes the graphic element based on the pixel values of the pixel points in the first region, the luminance feature of the first region may be determined according to the pixel values of the pixel points in the first region, and then, the category information of the graphic element may be determined according to the luminance feature of the first region and the fourth preset relationship.
In one possible manner, the type information of the graphic element is determined based on pixel mask (mask) information of the first region. The pixel mask information of the first area may be used to indicate a position of a pixel point included in the first area in the first image, or indicate a position of a pixel point of a graphic element in the first area in the first image. For example, a fifth preset relationship may be established based on the preset graphic element and the corresponding pixel mask information. Wherein one preset graphic element may correspond to one or more pixel mask information. In this way, when it is determined that the first image contains a graphic element, the kind information of the graphic element can be determined according to the pixel mask information of the first area and the fifth preset relationship.
It should be appreciated that at least two of the above-described various ways may also be combined to determine the type information of the graphic element.
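The "preset relationships" described above are essentially lookup tables from an observed attribute of the first region to a preset graphic element type. The following sketch shows one hedged way to express the first, second, and third preset relationships; all table entries and the fallback order are illustrative assumptions.

```python
# Illustrative preset relationships: each table maps an observed attribute
# of the first region to the type of a preset graphic element.

SIZE_TO_TYPE = {(12, 15): "logo", (5, 70): "UI"}           # first preset relationship
SHAPE_TO_TYPE = {"triangle": "logo", "rectangle": "UI"}    # second preset relationship
POSITION_TO_TYPE = {"upper-left": "logo",                  # third preset relationship
                    "lower-right": "watermark"}

def element_type(size=None, shape=None, position=None):
    """Return the type that the first available attribute resolves to,
    or None if no preset relationship matches."""
    for table, key in ((SIZE_TO_TYPE, size),
                       (SHAPE_TO_TYPE, shape),
                       (POSITION_TO_TYPE, position)):
        if key is not None and key in table:
            return table[key]
    return None
```

A combined determination (per the note above) would consult several tables and reconcile their answers; this sketch simply takes the first match.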
In one possible manner, the region information of the region in which the graphic element is located may be determined based on at least one of the size of the first region, the shape of the first region, the position of the first region, the pixel value of the pixel point in the first region, or the pixel mask information of the first region.
For example, the area information of the area where the graphic element is located may be determined based on the position of the first area. In one possible manner, the position of the first region (i.e., the position information) may be used as the region information of the region in which the graphic element is located.
For example, the area information of the area where the graphic element is located may be determined based on the shape of the first area. For example, the shape description information may be generated based on the shape of the first region and the size of the first region, and the shape description information may be used as region information of the region where the graphic element is located.
The manner of determining the region information of the region where the graphic element is located based on the size of the first region, the pixel values of the pixel points in the first region, or the pixel mask information of the first region is similar to the manner of determining the type information of the graphic element based on those same attributes. That is, a preset relationship between size and region information, between luminance features and region information, or between pixel mask information and region information is established in advance, and the region information of the region where the graphic element is located is then determined according to the preset relationship together with the size of the first region, the pixel values of the pixel points in the first region, or the pixel mask information of the first region. Details are not repeated here.
In a possible manner, the type information of the preset graphic element corresponding to the first reference image may be determined as the type information of the graphic element, and/or the area information of the area where the graphic element is located may be determined based on the position of the second area.
In a possible manner, the type information of the graphic element and/or the area information of the area where the graphic element is located may be determined according to the history editing information.
S3022, determining tone mapping information corresponding to the graphic element based on at least one of the first image, the type information of the graphic element, and the area information of the area in which the graphic element is located.
In one possible approach, tone mapping information corresponding to the graphic element may be determined based on the first image. Specifically, the pixel values of the pixel points in the region where the graphic element is located in the first image may be used to generate tone mapping information corresponding to the graphic element. In this case, the tone mapping information corresponding to the graphic element may be a tone mapping curve or a tone mapping table.
In one possible manner, tone mapping information corresponding to the graphic element may be determined based on the kind information of the graphic element. Specifically, a first mapping relationship may be obtained, where the first mapping relationship includes correspondence between a plurality of kinds of information and a plurality of tone mapping information, and one kind of information corresponds to one graphic element and one tone mapping information. And then, selecting tone mapping information corresponding to the type information of the graphic element from the plurality of tone mapping information based on the first mapping relation, and determining the tone mapping information corresponding to the type information of the graphic element as the tone mapping information corresponding to the graphic element.
For example, the tone mapping curve corresponding to the watermark in the first mapping relationship is a curve corresponding to the function y=x, the tone mapping curve corresponding to the mark in the first mapping relationship is a curve corresponding to the function y=0.9x, the tone mapping curve corresponding to the UI in the first mapping relationship is a curve corresponding to the function y=x, and the tone mapping curve corresponding to the map in the first mapping relationship is a curve corresponding to the function y=0.8x.
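The first mapping relationship from the example above can be sketched as a lookup from type information to a tone mapping curve, with each curve given as a function. The curves match the example (y=x for watermarks and UIs, y=0.9x for logos, y=0.8x for stickers/maps); the key names are illustrative.

```python
# First mapping relationship: type information -> tone mapping curve,
# using the example curves given in the text above.

FIRST_MAPPING = {
    "watermark": lambda x: x,        # y = x
    "logo":      lambda x: 0.9 * x,  # y = 0.9x
    "UI":        lambda x: x,        # y = x
    "sticker":   lambda x: 0.8 * x,  # y = 0.8x (the "map" in the text)
}

def curve_for_element(type_info):
    """Select the tone mapping information corresponding to the type
    information of the graphic element from the first mapping relationship."""
    return FIRST_MAPPING[type_info]

# A logo pixel with luminance 1.0 maps to 0.9; a watermark is unchanged.
```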
In one possible manner, tone mapping information corresponding to the graphic element may be determined based on region information of a region in which the graphic element is located. Specifically, a second mapping relationship may be obtained, where the second mapping relationship includes correspondence between a plurality of region information and a plurality of tone mapping information, and one tone mapping information corresponds to one or more region information. And then, based on the second mapping relation, selecting tone mapping information corresponding to the region information of the region where the graphic element is located from the plurality of tone mapping information, and determining the tone mapping information corresponding to the region information of the region where the graphic element is located as tone mapping information corresponding to the graphic element.
For example, in the second mapping relationship, the tone mapping curve corresponding to the upper left corner (which can be understood as the region information of the upper left corner) is the curve corresponding to the function y=x, the tone mapping curve corresponding to the upper right corner is the curve corresponding to the function y=x, the tone mapping curve corresponding to the lower left corner is the curve corresponding to the function y=0.7x, and the tone mapping curve corresponding to the lower right corner is the curve corresponding to the function y=0.9x.
In a possible manner, the tone mapping information corresponding to different preset graphic elements is preset tone mapping information. In this case, when the tone mapping information corresponding to the graphic element is determined based on the type information of the graphic element and/or the region information of the region where the graphic element is located, the preset tone mapping information may be used as the tone mapping information corresponding to the graphic element. The preset tone mapping information may refer to a preset tone mapping curve, such as the curve corresponding to y=x.
S3023, generating description information of the graphic element based on at least one of the type information of the graphic element, the region information of the region in which the graphic element is located, or tone mapping information corresponding to the graphic element.
It should be appreciated that when S3021 is a necessary step, S3022 may be an optional step. Conversely, when S3022 is a necessary step, S3021 may be an optional step; alternatively, within S3021, "determining the type information of the graphic element based on the first image" may be an optional step, or "determining the region information of the region where the graphic element is located based on the first image" may be an optional step.
The region where the graphic element is located may refer to the region covered by the pixel points included in the graphic element (for example, the region where the watermark "HUAWEImate60" is located may refer to the region covered by the pixel points included in the character string "HUAWEImate60"), or to a circumscribed frame (bounding box) of the region covered by the pixel points included in the graphic element (for example, the region where the watermark "HUAWEImate60" is located may refer to the circumscribed frame of the region covered by the pixel points included in the character string "HUAWEImate60").
The type information of the graphic element may be expressed as follows:
In one possible way, the type information of the graphic element may be an identification flag. For example, a value of 1 indicates a graphic element and a value of 0 indicates a non-graphic element.
In one possible way, the type information of the graphic element may be a type indicator. For example, the type indicator corresponding to a UI is UI, the type indicator corresponding to a sticker is MP, the type indicator corresponding to a watermark is WM, the type indicator corresponding to a logo is LO, and the type indicator corresponding to other types of graphic elements is O.
In one possible way, the type information of the graphic element may be a type index. A type list may be pre-established, which may include the names (or type indicators) of a plurality of preset graphic elements and the corresponding type indexes. For example, the type index corresponding to a UI is 0, the type index corresponding to a sticker is 1, the type index corresponding to a watermark is 2, the type index corresponding to a logo is 3, and the type index corresponding to other types of graphic elements is 4. Alternatively, the names (or type indicators) of the plurality of preset graphic elements in the type list may be arranged in sequence from bottom to top, and the position of each preset graphic element in the type list may be used as the index corresponding to that preset graphic element.
In a possible manner, the type information of the graphic element may be implicitly conveyed through the enhancement layer data of a dual-layer code stream, where the enhancement layer data of the pixels in the region where the graphic element is located is a preset value (for example, 0 or 1, indicating that the HDR data of the current region does not change in brightness or value relative to the SDR data). At the receiving end, continuous regions in which the enhancement layer data equals the preset value are identified, and any continuous region whose number of pixels is greater than a preset threshold (0 or another preset value) is regarded as a graphic element region.
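The receiving-end check just described amounts to connected-region labeling over the enhancement layer. The following sketch uses a simple 4-neighbour flood fill; the preset value, the pixel-count threshold, and the list-of-lists representation of the enhancement layer are assumptions for illustration.

```python
# Sketch: find connected regions whose enhancement layer data equals a
# preset value and whose pixel count exceeds a preset threshold; such
# regions are treated as graphic element regions.

def graphic_element_regions(enh, preset_value=0, min_pixels=3):
    """enh: 2D list of enhancement layer values. Returns a list of regions,
    each region being a list of (row, col) pixel coordinates."""
    h, w = len(enh), len(enh[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for r in range(h):
        for c in range(w):
            if enh[r][c] != preset_value or seen[r][c]:
                continue
            stack, pixels = [(r, c)], []
            seen[r][c] = True
            while stack:                     # 4-neighbour flood fill
                y, x = stack.pop()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and enh[ny][nx] == preset_value:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            if len(pixels) > min_pixels:     # preset threshold on pixel count
                regions.append(pixels)
    return regions

enh = [[0, 0, 5],
       [0, 0, 5],
       [5, 5, 5]]
regions = graphic_element_regions(enh)
# The 2x2 block of zeros (4 pixels > 3) is flagged as one graphic element region.
```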
The representation of the region information of the region in which the graphic element is located may be as follows:
In a possible manner, the area information of the area where the graphic element is located may include a first size and a first mask (mask) map, where the first size is a size of the first image, and the size of the first mask map is equal to the first size. The first mask map may be used to indicate a location of an area where the graphic element is located. For example, in the first mask, the pixel value of the pixel point at the position corresponding to the region where the graphic element is located in the first image is 1, and the pixel values of the other pixel points are 0.
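A first mask map of the kind just described can be built as follows. The sketch assumes a rectangular graphic element region and (height, width) sizing conventions; both are illustrative choices.

```python
# Sketch of the first mask map: a map with the first size (the size of the
# first image) in which pixels at positions corresponding to the graphic
# element's region are 1 and all other pixels are 0.

def make_first_mask(first_size, region_top_left, region_size):
    """first_size, region_size: (height, width); region_top_left: (row, col).
    Assumes a rectangular region for simplicity."""
    h, w = first_size
    r0, c0 = region_top_left
    rh, rw = region_size
    return [[1 if r0 <= r < r0 + rh and c0 <= c < c0 + rw else 0
             for c in range(w)]
            for r in range(h)]

mask = make_first_mask((4, 4), (0, 0), (2, 2))
# The upper-left 2x2 block is 1; the remaining twelve pixels are 0.
```

The second mask map described next would be produced the same way, only at the size of the first image scaled by the preset multiple.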
In a possible manner, the area information of the area where the graphic element is located may include a second size and a second mask map, where the second size is the size obtained by scaling the size of the first image by a preset multiple (with the length and width scaled in equal proportion), and the size of the second mask map is equal to the second size. The second mask map may be used to indicate the position of the area where the graphic element is located in the first image scaled by the preset multiple.
In a possible manner, the area information of the area where the graphic element is located may include a first size and first transparency information, where the first size is a size of the first image, and the first transparency information includes transparency of all pixel points in the first image. The first transparency information may be used to indicate a location of an area where the graphic element is located. For example, in the first transparency information, the transparency of the pixel point at the position corresponding to the region where the graphic element is located is 1, and the transparency of the other pixels is 0.
In a possible manner, the area information of the area where the graphic element is located may include a second size and second transparency information, where the second size is a size obtained by scaling the size of the first image by a preset multiple (scaling the length and width in equal proportion), and the second transparency information includes the transparency of all pixel points in the first image scaled by the preset multiple. The second transparency information may be used to indicate the position of the area where the graphic element is located in the first image scaled by the preset multiple.
In a possible manner, the area information of the area where the graphic element is located may include a first size and first weight information, where the first size is a size of the first image, and the first weight information includes weight values of all pixel points in the first image. The first weight information may be used to indicate a location of an area where the graphic element is located. For example, in the first weight information, the weight value of the pixel point at the position corresponding to the region where the graphic element is located is 1, and the weight values of the other pixels are 0.
In a possible manner, the area information of the area where the graphic element is located may include a second size and second weight information, where the second size is a size obtained by scaling the size of the first image by a preset multiple (scaling the length and width in equal proportion), and the second weight information includes the weight values of all pixel points in the first image scaled by the preset multiple. The second weight information may be used to indicate the position of the area where the graphic element is located in the first image scaled by the preset multiple.
In a possible manner, the area information of the area where the graphic element is located may include shape description information of the area where the graphic element is located. For example, the area in which the graphic element is located is a rectangular area, and the area information of the area in which the graphic element is located may include the upper left corner coordinates of the rectangular area, the length of the rectangular area, and the width of the rectangular area. For example, the region where the graphic element is located is a circular region, and the region information of the region where the graphic element is located may include coordinates of a center of the circular region and a radius of the circular region.
In a possible manner, the region information of the region where the graphic element is located may include shape description information of the region where the graphic element is located and a third mask map, where the third mask map may be used to indicate a position of a pixel point of the graphic element.
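The mask-map representations above can be sketched as follows (a minimal illustration in pure Python; the image size and the rectangular region of the graphic element are hypothetical):

```python
# Hypothetical first size (height, width) of the first image.
height, width = 8, 12

# First mask map: same size as the first image; pixels inside the
# region where the graphic element is located are 1, others are 0.
mask = [[0] * width for _ in range(height)]
for y in range(1, 3):          # assumed rectangular region of the
    for x in range(2, 6):      # graphic element: rows 1-2, columns 2-5
        mask[y][x] = 1

# The receiving side recovers the region as the set of positions whose value is 1.
region = [(y, x) for y in range(height) for x in range(width) if mask[y][x] == 1]
print(len(region))  # 8 pixels
```

The transparency-based and weight-based variants differ only in what the per-pixel value means; the position of the region is recovered the same way.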
In a possible manner, in the case where local tone mapping (local tone mapping, LTM) is performed on the other areas in the first image (typically, the first image is divided into a plurality of blocks and a corresponding local tone mapping curve is determined for each block; subsequent tone mapping of each block uses its corresponding local tone mapping curve), the area information of the area where the graphic element is located may be the block information resulting from the LTM division. For example, the region information of the region in which the graphic element is located may be a block index. In this case, a block that lies within the region where the graphic element is located (that is, a block that does not include any area outside that region) is tone-mapped using the tone mapping information corresponding to the graphic element, rather than the local tone mapping curve corresponding to that block.
In a possible manner, the region information of the region where the graphic element is located may be implicitly transferred through the enhancement layer data of the dual-layer code stream. The enhancement layer data of the pixels in the region where the graphic element is located is set to a preset value (for example, 0 or 1, indicating that the HDR data of the current region does not change in brightness or value relative to the SDR data). The receiving end identifies continuous regions whose enhancement layer data is the preset value and whose number of pixels is greater than a preset threshold, and takes the information of those continuous regions as the region information of the region where the graphic element is located.
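A receiving-end sketch of this implicit transfer follows (pure Python; the 4-connectivity flood fill, the preset value 0, and the threshold are assumptions, not details fixed by the description above):

```python
from collections import deque

def find_graphic_regions(enh, preset=0, threshold=4):
    """Find 4-connected regions of enhancement-layer data equal to
    `preset` whose pixel count is greater than `threshold`."""
    h, w = len(enh), len(enh[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if enh[y][x] == preset and not seen[y][x]:
                comp, q = [], deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           enh[ny][nx] == preset and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > threshold:  # keep only regions above the preset threshold
                    regions.append(comp)
    return regions

# A 4x6 enhancement layer with a 2x3 block of zeros (a candidate graphic region).
enh = [[5, 5, 5, 5, 5, 5],
       [5, 0, 0, 0, 5, 5],
       [5, 0, 0, 0, 5, 5],
       [5, 5, 5, 5, 5, 5]]
regions = find_graphic_regions(enh, preset=0, threshold=4)
print(len(regions), len(regions[0]))  # 1 region of 6 pixels
```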
The representation of tone mapping information corresponding to the graphic element may be as follows:
In one possible manner, the tone mapping information corresponding to the graphic element is a tone mapping table (look-up table, LUT).
In a possible manner, the tone mapping information corresponding to the graphic element is a tone mapping function corresponding to a tone mapping curve.
In one possible manner, the tone mapping information corresponding to the graphic element is an index of a tone mapping curve.
In a possible manner, the tone mapping information corresponding to the graphic element is a parameter for determining a tone mapping curve.
For example, the local tone mapping information of the LTM may be multiplexed, and the local tone mapping information of one or more blocks included in the region where the graphic element is located may be adjusted to tone mapping information corresponding to the graphic element.
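Whichever representation is used, the receiving end ultimately evaluates a per-pixel mapping. A minimal sketch (Python; the curves, region, and pixel values are hypothetical) in which the region where the graphic element is located keeps the curve corresponding to y = x while the other areas use a different curve:

```python
def apply_tone_curves(pixels, region, element_curve, base_curve):
    """Apply `element_curve` inside `region` (a set of (y, x) positions)
    and `base_curve` elsewhere. Pixel values assumed normalized to [0, 1]."""
    h, w = len(pixels), len(pixels[0])
    return [[element_curve(pixels[y][x]) if (y, x) in region
             else base_curve(pixels[y][x])
             for x in range(w)] for y in range(h)]

identity = lambda v: v        # curve corresponding to the function y = x
gamma = lambda v: v ** 0.5    # assumed tone mapping curve for the other areas

img = [[0.25, 0.25], [0.25, 0.25]]
out = apply_tone_curves(img, {(0, 0)}, identity, gamma)
print(out)  # [[0.25, 0.5], [0.5, 0.5]]
```

Here the pixel of the graphic element keeps its original value while the other pixels are brightened, which matches the intent that graphic elements should not be distorted by the tone mapping applied to the rest of the image.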
S303, encoding the first image to obtain a code stream.
In a possible manner, S303 may refer to the following steps S3031 to S3034:
S3031, tone mapping is carried out on the region where the graphic element is located in the first image based on the description information of the graphic element so as to obtain a third image.
Illustratively, tone mapping information corresponding to the graphic element may be determined based on the description information of the graphic element, and the area where the graphic element is located in the first image may then be tone-mapped based on the tone mapping information corresponding to the graphic element to obtain the third image.
S3032, based on the first image and the third image, enhancement data is generated, where the enhancement data includes enhancement values of all pixels in the first image.
In one possible way, the enhancement data may be obtained by calculating the difference between the pixel values of the pixel points at corresponding positions in the first image and the third image. Specifically, the pixel value of the pixel point at the corresponding position in the third image may be subtracted from the pixel value of the pixel point in the first image to obtain the enhancement data.
In another possible way, the enhancement data may be obtained by calculating the quotient between the pixel values of the pixel points at corresponding positions in the first image and the third image. Specifically, the pixel value of the pixel point in the first image may be divided by the pixel value of the pixel point at the corresponding position in the third image to obtain the enhancement data.
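The two ways of generating enhancement data can be sketched as follows (NumPy; the pixel values are hypothetical):

```python
import numpy as np

first = np.array([[0.8, 0.6], [0.4, 0.2]])   # hypothetical first (HDR) image
third = np.array([[0.4, 0.3], [0.2, 0.1]])   # third image after tone mapping

# Difference form: enhancement value = first - third (decoder adds it back).
enh_diff = first - third

# Quotient form: enhancement value = first / third (decoder multiplies it back).
enh_quot = first / third

print(enh_diff.tolist())
print(enh_quot.tolist())  # every quotient is 2.0 in this example
```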
S3033, the enhancement values of the pixel points in the region where the graphic element is located in the enhancement data are adjusted to a preset value, so as to obtain the adjusted enhancement data.
In general, the decoding end obtains the reconstructed image of the first image by multiplying or adding the pixel values of the pixel points at corresponding positions in the third image and the enhancement data. When the first image does not contain graphic elements (i.e., all areas of the first image are tone-mapped using the tone mapping information corresponding to the first image to obtain the third image), the difference between the reconstructed image of the first image and the first image is small. However, in the present application, the region where the graphic element is located in the first image is tone-mapped according to the tone mapping information corresponding to the graphic element (the other regions are mapped using the tone mapping information corresponding to the first image). Therefore, in order to make the region where the graphic element is located in the reconstructed image closer to the corresponding region in the first image, the enhancement values of the pixel points in that region in the enhancement data may be adjusted to a preset value, so as to obtain the adjusted enhancement data. The preset value may be determined according to the tone mapping information corresponding to the graphic element. For example, if the tone mapping information corresponding to the graphic element is the curve corresponding to the function y=x, the preset value may be 1.0; if it is the curve corresponding to the function y=0.9x, the preset value may be 0.9.
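The adjustment of S3033 can be sketched as follows (NumPy; the enhancement values and mask are hypothetical, and the preset value 1.0 follows the y = x example above, assuming the quotient form of enhancement data):

```python
import numpy as np

# Quotient-form enhancement data (the decoder multiplies it back).
enh = np.array([[1.8, 1.7, 1.6],
                [1.5, 1.4, 1.3]])

# Mask of the region where the graphic element is located.
mask = np.array([[0, 1, 1],
                 [0, 1, 1]], dtype=bool)

# Preset value 1.0: multiplying by 1.0 leaves the graphic element region
# of the third image unchanged at the decoding end.
preset = 1.0
adjusted = np.where(mask, preset, enh)
print(adjusted.tolist())  # [[1.8, 1.0, 1.0], [1.5, 1.0, 1.0]]
```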
S3034, the third image and the adjusted enhancement data are encoded to obtain a code stream.
Illustratively, the third image may be regarded as base data. The third image may be input to an encoder to obtain a base layer bitstream (which may also be referred to as a third image bitstream) output by the encoder, and the enhancement data may be input to the encoder to obtain an enhancement layer bitstream (which may also be referred to as an enhancement data bitstream) output by the encoder. A code stream including a base layer bitstream and an enhancement layer bitstream may be referred to as a dual-layer code stream.
In one possible way, the first image may be directly encoded to obtain a code stream of a single-layer structure. Specifically, the first image may be input to an encoder to obtain the code stream of the HDR image output by the encoder; the code stream of the HDR image is a single-layer structure code stream (it should be understood that the single-layer structure code stream is named relative to the double-layer structure code stream).
S304, writing the description information of the graphic elements into the code stream to obtain first target data.
Illustratively, the description information of the graphic element may be regarded as metadata and written into the code stream in the coding manner of metadata. Metadata refers to data describing the key information and features required in video or image processing.
Fig. 3B and 3C below are illustrations of one implementation of S303 and S304 in the data generation process 300 of fig. 3A, taking the first image as an HDR image as an example.
Fig. 3B is a schematic diagram of yet another data generating process according to an embodiment of the present application.
The first image in fig. 3B is an HDR image, which may be obtained by photographing, screen capturing, or decoding from a code stream. The first target data generated according to the above-described data generation process 300 may include a single-layer structure of a code stream including a code stream of an HDR image and encoded data of metadata (encoded data including description information of graphic elements in the HDR image).
Fig. 3C is a schematic diagram of yet another data generating process according to an embodiment of the present application.
The first image in fig. 3C is an HDR image, which may be obtained by photographing, screen capturing, or decoding from a code stream. The first target data generated according to the above-described data generation process 300 may include a dual-layer structured code stream including an enhancement layer code stream, a base layer code stream, and encoded data of metadata (encoded data including description information of graphic elements in the HDR image).
For example, after the first image is acquired, the first image may be edited to newly add the graphic element in the first image, and the first target data may be generated with reference to the following data generation process 400.
Fig. 4A is a schematic diagram of yet another data generation process 400 according to an embodiment of the present application. The data generation process 400 describes a process of acquiring description information of a graphic element in a first image after the graphic element is newly added in the first image on the basis of the data generation process 200 described above. The data generation process 400 may be executed by any one of the terminal devices 11, 12, 1N in fig. 1F, or may be executed by the server in fig. 1F.
S401, acquiring a first image.
S402, receiving an editing operation for the first image, wherein the editing operation is used for adding graphic elements in the first image.
For example, editing operations for the first image may include adding watermarks, maps, LOGO, and the like in the first image.
Illustratively, after receiving the editing operation for the first image, a graphic element may be newly added in the first image, to obtain an edited first image.
S403, generating description information of the newly added graphic element based on the edited first image.
Illustratively, the newly added graphic element refers to a graphic element added in the first image based on the editing operation.
In a possible manner, the manner of determining the type information of the graphic element and/or the area information of the area where the graphic element is located "based on at least one of the size of the first area, the shape of the first area, the position of the first area, the pixel value of the pixel point in the first area, or the pixel mask information of the first area" and the manner of determining the tone mapping information corresponding to the graphic element "based on at least one of the first image, the type information of the graphic element, or the area information of the area where the graphic element is located" in S302 may be referred to, and will not be repeated here.
In a possible manner, the type information of the graphic element and/or the area information of the area in which the graphic element is located may be determined based on the editing operation for the first image. For example, upon receiving an editing operation for the first image, the mobile phone may perform the editing operation and record the corresponding editing information in response to the operation behavior of the user. The editing information may include the processing mode (such as adding a map, deleting a map, or adding a filter), the processing position, and the type information of the processing object. Accordingly, which graphic elements are added, and where they are added in the first image, can be known from the editing information corresponding to the editing operation; that is, the type information of the graphic element and/or the region information of the region where the graphic element is located may be determined according to the editing information corresponding to the editing operation.
S404, encoding the edited first image to obtain a code stream.
S405, writing the description information of the graphic element into the code stream to obtain first target data.
For example, S404 to S405 may refer to the descriptions of S303 to S304, which are not described herein.
Fig. 4B and 4C below are illustrations of one implementation of S402-S405 in the data generation process 400 of fig. 4A, taking the first image as an HDR image.
Fig. 4B is a schematic diagram of yet another data generating process according to an embodiment of the present application.
The first image in fig. 4B is HDR image 1, where HDR image 1 may be obtained by photographing, screen capturing, or decoding from a code stream. HDR image 1 is edited (including adding a new graphic element), and the resulting image may be referred to as HDR image 2 (i.e., the edited first image). The first target data generated according to the above-described data generation process 400 may include a single-layer structure of the code stream including the code stream of the HDR image 2 and the encoded data of the metadata (encoded data including the description information of the newly added graphic element).
Fig. 4C is a schematic diagram of yet another data generating process according to an embodiment of the present application.
The first image in fig. 4C is HDR image 1, and HDR image 1 may be obtained by photographing, screen capturing, or decoding from a code stream. The HDR image 1 is edited (including the newly added graphic elements), and the resulting image may be referred to as an HDR image 2 (i.e., an edited first image). The first target data generated according to the above-described data generation process 400 may include a dual-layer structured code stream including an enhancement layer code stream, a base layer code stream, and encoded data of metadata (encoded data including description information of newly added graphic elements).
Fig. 5A is a schematic diagram of yet another data generation process 500 according to an embodiment of the present application. The data generation process 500 describes, on the basis of the data generation process 200 described above, a process of acquiring the description information of the graphic element in the first image in a case where the acquired first image contains the graphic element and the description information of the graphic element is received from the other electronic device. The data generation process 500 may be performed by any one of the terminal devices 11, 12, 1N in fig. 1F, or may be performed by the server in fig. 1F.
S501, receiving second target data, wherein the second target data comprises a first image and description information of graphic elements in the first image.
The second target data is illustratively generated in the same manner as the first target data described above.
In a possible manner, the second target data includes a dual-layer structure of the code stream, and the dual-layer structure of the code stream includes the enhancement layer code stream, the base layer code stream, and the coding information of the description information of the graphic element.
In a possible manner, the second target data includes a single-layer structure of the code stream, and the single-layer structure of the code stream includes the code stream of the first image and the encoded information of the description information of the graphic element.
S502, acquiring a first image from second target data.
Fig. 5B below illustrates a manner of acquiring the first image and the description information of the graphic element from the second target data in the data generating process 500 of fig. 5A, as well as an implementation manner of S504 to S505 in the data generating process 500, taking the case where the first target data includes a code stream of a double-layer structure as an example.
Fig. 5B is a schematic diagram of yet another data generation process according to an embodiment of the present application.
In fig. 5B, the second target data includes a code stream of a double-layer structure. The reconstructed data of the second enhancement data may be decoded from the second enhancement layer bitstream, and the reconstructed data of the second base data may be decoded from the second base layer bitstream. After that, an HDR image is generated based on the reconstructed data of the second enhancement data and the reconstructed data of the second base data (the HDR image is the first image; strictly speaking, it is a reconstructed image of the first image in the second target data, but the present application does not distinguish by name between the image before encoding and the decoded image). Specifically, the reconstructed data of the second enhancement data may be added to, or multiplied with, the pixel values of the pixel points at corresponding positions in the reconstructed data of the second base data to obtain the first image.
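The reconstruction described above can be sketched as follows (NumPy; the values are hypothetical, and whether the decoder adds or multiplies depends on whether the enhancement data was generated as a difference or as a quotient):

```python
import numpy as np

base = np.array([[0.4, 0.3], [0.2, 0.1]])  # reconstructed second base data
enh = np.array([[2.0, 2.0], [1.0, 1.0]])   # reconstructed second enhancement data

# Quotient-form enhancement: multiply pixel values at corresponding positions.
hdr_mul = base * enh

# Difference-form enhancement: add pixel values at corresponding positions.
hdr_add = base + enh
```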
For example, when the second target data includes a single-layer structure of the code stream, the first image may be decoded from the code stream of the first image.
S503, acquiring the description information of the graphic element from the second target data.
In fig. 5B, the encoded data of the second metadata in the bitstream of the double-layer structure may be decoded to obtain the description information of the graphic element (here, the description information of the graphic element is essentially the reconstructed data of the description information of the graphic element in the second target data; the present application does not distinguish by name between the description information before encoding and that obtained by decoding).
Similarly, for a single-layer code stream, the encoded data of metadata in the single-layer code stream may also be decoded to obtain the description information of the graphic element.
S504, the first image is encoded to obtain a code stream.
S505, writing the description information of the graphic element into the code stream to obtain first target data.
For example, S504 to S505 may refer to the descriptions of S303 to S304, which are not described herein.
The manner of implementing S504 in fig. 5B may refer to the descriptions of S3031 to S3034, which are not described herein.
It should be noted that, the data generating process 300, the data generating process 400, and the manner of acquiring the description information of the graphic element in the data generating process 500 may be combined.
Fig. 6 is a schematic diagram of an image processing process 600 according to an embodiment of the application. The image processing procedure 600 may be performed by any one of the terminal apparatuses 21, 22, 2M in fig. 1F described above.
S601, the first image and description information of graphic elements in the first image are acquired.
The first image may be an image decoded from a code stream, or a screen shot image obtained by screen shot, or a photograph obtained by photographing, for example.
In one possible approach, descriptive information of the graphical element in the first image may be received from other electronic devices.
In one possible manner, the description information of the graphic element in the first image may be generated based on the first image.
S602, determining the region where the graphic element is located in the first image and tone mapping information corresponding to the graphic element based on the description information of the graphic element.
Illustratively, the description information of the graphic element includes at least one of type information of the graphic element, region information of the region in which the graphic element is located, or tone mapping information corresponding to the graphic element.
When the description information of the graphic element is the type information of the graphic element (or the tone mapping information corresponding to the graphic element), the area where the graphic element is located in the first image may be determined according to a preset mapping relationship and the type information of the graphic element (or the tone mapping information corresponding to the graphic element). When the description information of the graphic element is the region information of the region where the graphic element is located, the region where the graphic element is located in the first image may be determined directly according to that region information.
When the description information of the graphic element is the type information of the graphic element or the region information of the region where the graphic element is located, the tone mapping information corresponding to the graphic element in the first image may be determined according to preset tone mapping information. The preset tone mapping information may be a preset tone mapping curve, such as the curve corresponding to the function y=x.
When the description information of the graphic element is type information of the graphic element or region information of a region where the graphic element is located, tone mapping information corresponding to the graphic element may be determined according to a preset mapping relationship and type information of the graphic element (or region information of the region where the graphic element is located).
When the description information of the graphic element is tone mapping information corresponding to the graphic element, the description information of the graphic element may be directly determined as tone mapping information corresponding to the graphic element.
S603, tone mapping is conducted on the region where the graphic element is located in the first image based on tone mapping information corresponding to the graphic element, so that a target image is obtained.
In a possible manner, tone mapping information corresponding to the graphic element may be used to tone-map an area where the graphic element is located in the first image, and tone mapping information corresponding to the first image may be used to tone-map other areas in the first image, so as to obtain the target image. The tone mapping information corresponding to the graphic element is different from the tone mapping information corresponding to the first image, and it is also understood that the tone mapping manner of the region where the graphic element is located in the first image is different from the tone mapping manner of other regions in the first image.
The tone mapping information corresponding to the first image may be received from other electronic devices, or may be generated based on pixel values of pixel points in other areas in the first image.
For example, tone mapping information corresponding to the graphic element may be used to tone-map the pixel values of all the pixel points in the region where the graphic element is located in the first image, so as to obtain the first pixel values of all the pixel points in that region. In a possible manner, the first pixel values of all the pixel points in the region where the graphic element is located may be used as the pixel values of all the pixel points in that region in the target image. In another possible manner, tone mapping information corresponding to the first image may additionally be used to tone-map the pixel values of all the pixel points in the region where the graphic element is located in the first image, so as to obtain the second pixel values of all the pixel points in that region. Then, a weighted calculation is performed on the first pixel values and the second pixel values of all the pixel points in the region where the graphic element is located to obtain the pixel values of all the pixel points in that region in the target image. In the second way, the transition between the region where the graphic element is located and the other regions in the target image can be smoother.
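The weighted calculation described above can be sketched as follows (Python; the pixel values and the weight 0.8 are hypothetical — the description does not fix a particular weight):

```python
# First pixel values: the region mapped with the graphic element's tone mapping.
first_vals = [0.50, 0.52, 0.54]
# Second pixel values: the same region mapped with the first image's tone mapping.
second_vals = [0.70, 0.72, 0.74]

weight = 0.8  # assumed weight of the graphic element's own tone mapping
target_vals = [weight * a + (1 - weight) * b
               for a, b in zip(first_vals, second_vals)]
print([round(v, 3) for v in target_vals])  # [0.54, 0.56, 0.58]
```

A weight closer to 1.0 keeps the graphic element's own mapping dominant; lowering the weight near the region boundary would blend toward the surrounding tone mapping for a smoother transition.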
In a possible manner, tone mapping information corresponding to the graphic elements may be used to tone map the region of the first image where the graphic elements are located, while tone mapping is not performed on other regions of the first image, so as to obtain the target image. That is, the region in which the graphic element is located in the target image is obtained by tone mapping the region in which the graphic element is located in the first image, and other regions in the target image are the same as other regions in the first image.
Fig. 7 is a schematic diagram of another image processing procedure 700 according to an embodiment of the application. The image processing procedure 700 describes a procedure of determining an area where a graphic element is located in a first image, and a procedure of determining tone mapping information corresponding to the graphic element, on the basis of the image processing procedure 600. The image processing procedure 700 may be performed by any one of the terminal apparatuses 21, 22, 2M of fig. 1F described above.
S701, receiving target data, where the target data includes a first image and description information of graphic elements in the first image.
S702, acquiring a first image and description information of graphic elements in the first image from target data.
For example, S701 to S702 may refer to the descriptions of S501 to S502, which are not described herein.
Illustratively, the description information of the graphic element includes at least one of type information of the graphic element, region information of the region in which the graphic element is located, or tone mapping information corresponding to the graphic element.
S703, determining the region where the graphic element is located in the first image based on the description information of the graphic element.
For example, when the description information of the graphic element is the type information of the graphic element, S703 may include the following steps S7031 to S7033:
S7031, a third mapping relationship is acquired, where the third mapping relationship includes correspondences between a plurality of pieces of type information and a plurality of pieces of region information, one piece of type information corresponds to one type of graphic element, and one piece of type information corresponds to one or more pieces of region information.
For example, the third mapping relationship may be constructed in advance. Specifically, for each preset graphic element in a plurality of preset graphic elements, the region information corresponding to the type of the preset graphic element may be determined, where one type of graphic element corresponds to one or more pieces of region information. For example, the region information corresponding to a watermark is at least one of the region information of the upper left corner region, the region information of the lower right corner region, and the region information of the center region of the image. For another example, the region information corresponding to a UI is the region information of an edge region. Then, the third mapping relationship is generated based on the mapping relationship between the type information of each preset graphic element and the region information corresponding to that preset graphic element.
S7032, region information corresponding to the type information of the graphic element is selected from the plurality of region information based on the third mapping relation.
S7033, the area where the graphic element is located in the first image is determined based on the area information corresponding to the type information of the graphic element.
When there is one piece of area information corresponding to the type information of the graphic element, the area where the graphic element is located in the first image is determined directly based on that piece of area information.
When there are a plurality of pieces of area information corresponding to the type information of the graphic element, one piece of area information may be selected (e.g., randomly selected) from them, and the area where the graphic element is located in the first image is then determined based on the selected piece of area information.
Alternatively, when there are a plurality of pieces of area information corresponding to the type information of the graphic element, an area where the graphic element is located in the first image may be determined based on each piece of area information, so as to obtain a plurality of areas where the graphic element is located in the first image, or in other words, the area where each of a plurality of graphic elements is located in the first image.
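Steps S7031 to S7033 can be sketched as a simple lookup (Python; the type names and the rectangular region descriptions are hypothetical placeholders, not values fixed by the description above):

```python
# Hypothetical third mapping relationship: type information -> one or more
# pieces of region information (here a shape description: x, y, width, height).
third_mapping = {
    "watermark": [{"corner": "top-left", "rect": (0, 0, 64, 32)},
                  {"corner": "bottom-right", "rect": (960, 480, 64, 32)}],
    "ui": [{"edge": "bottom", "rect": (0, 500, 1024, 40)}],
}

def regions_for(type_info):
    # S7032: select the region information corresponding to the type information.
    return third_mapping.get(type_info, [])

# S7033: the type "watermark" yields several candidate regions; each may be
# taken as an area where a graphic element is located in the first image.
print(len(regions_for("watermark")))  # 2
```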
For example, when the description information of the graphic element is tone mapping information corresponding to the graphic element, S703 may include the following steps S7034 to S7036:
S7034, a fourth mapping relationship is acquired, where the fourth mapping relationship includes correspondences between a plurality of pieces of tone mapping information and a plurality of pieces of region information, and one piece of tone mapping information corresponds to one or more pieces of region information.
For example, the fourth mapping relationship may be constructed in advance. Specifically, for each preset graphic element among a plurality of preset graphic elements, the tone mapping information and the region information corresponding to the type of the preset graphic element can be determined, where the one or more pieces of region information corresponding to one preset graphic element correspond to one piece of tone mapping information, and one piece of tone mapping information corresponds to one or more pieces of region information. For example, the region information corresponding to a watermark is at least one of the region information of the upper left corner region, the region information of the lower right corner region, and the region information of the image center region, and the tone mapping information corresponding to the watermark is tone mapping information A. For another example, the region information corresponding to a UI is the region information of an edge region, and the tone mapping information corresponding to the UI is tone mapping information B. Then, the fourth mapping relationship is generated based on the correspondence between the region information and the tone mapping information corresponding to each of the plurality of preset graphic elements.
S7035, based on the fourth mapping relationship, the region information corresponding to the tone mapping information corresponding to the graphic element is selected from the plurality of pieces of region information.
S7036, a region in which the graphic element is located in the first image is determined based on the region information corresponding to the tone mapping information corresponding to the graphic element.
For example, S7036 may refer to the description of S7033, which is not repeated herein.
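Steps S7034 to S7036 can likewise be sketched as a reverse lookup, keyed by the tone mapping information carried in the description information. The curve identifiers and region descriptors below are illustrative assumptions, not values from the embodiment.

```python
# Hypothetical sketch of S7034-S7036: the fourth mapping relationship keys
# candidate regions by tone-mapping information. "tone_mapping_A"/"B" echo
# the tone mapping information A/B named in the text, as an assumption.
FOURTH_MAPPING = {
    "tone_mapping_A": [{"corner": "upper-left"}, {"corner": "lower-right"}],
    "tone_mapping_B": [{"edge": "all"}],
}

def regions_for_tone_mapping(tm_info):
    """S7035/S7036: select the region(s) matching the graphic element's
    tone mapping information; one tm_info may map to several regions."""
    return FOURTH_MAPPING[tm_info]

assert regions_for_tone_mapping("tone_mapping_B") == [{"edge": "all"}]
```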
S704, based on the description information of the graphic element, tone mapping information corresponding to the graphic element is determined.
For example, when the description information of the graphic element is the category information of the graphic element, S704 may include the following steps S7041 to S7043:
S7041, a first mapping relationship is obtained, where the first mapping relationship includes correspondences between a plurality of pieces of type information and a plurality of pieces of tone mapping information, and one piece of type information corresponds to one type of graphic element and one piece of tone mapping information.
For example, the first mapping relationship may be constructed in advance. Specifically, for each preset graphic element among a plurality of preset graphic elements, the tone mapping information corresponding to the type of the preset graphic element can be determined, where one preset graphic element corresponds to one piece of tone mapping information. For example, the tone mapping information corresponding to a watermark is tone mapping information A. For another example, the tone mapping information corresponding to a UI is tone mapping information B. Then, the first mapping relationship may be generated based on the type information of each of the plurality of preset graphic elements and the tone mapping information corresponding to each preset graphic element.
S7042, tone mapping information corresponding to the type information of the graphic element is selected from the plurality of tone mapping information based on the first mapping relation.
S7043, tone mapping information corresponding to the type information of the graphic element is determined as tone mapping information corresponding to the graphic element.
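Steps S7041 to S7043 can be sketched as a type-to-curve lookup. The gamma exponents below stand in for tone mapping information A and B; they are illustrative assumptions, not curves defined by the embodiment.

```python
# Hypothetical sketch of S7041-S7043: the first mapping relationship pairs
# each element type with exactly one tone-mapping curve. The power-law
# curves (gamma 0.8 / 1.2) are assumptions standing in for tone mapping
# information A and B from the text.
FIRST_MAPPING = {
    "watermark": lambda x: x ** 0.8,   # stands in for tone mapping info A
    "ui":        lambda x: x ** 1.2,   # stands in for tone mapping info B
}

def tone_mapping_for_type(element_type):
    """S7042/S7043: look up the curve for the element's type and treat it
    as the tone mapping information of the graphic element itself."""
    return FIRST_MAPPING[element_type]

curve = tone_mapping_for_type("watermark")
assert abs(curve(0.5) - 0.5 ** 0.8) < 1e-9
```

In practice the mapped values would be tone-mapping parameter sets or lookup tables rather than closed-form curves.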
For example, when the description information of the graphic element is the region information of the region where the graphic element is located, S704 may include the following steps S7044 to S7046:
S7044, a second mapping relationship is obtained, where the second mapping relationship includes correspondence between a plurality of region information and a plurality of tone mapping information, and one tone mapping information corresponds to one or more region information.
For example, the second mapping relationship may be constructed in advance. The process of generating the second mapping relationship is similar to that of the fourth mapping relationship, and will not be described herein.
Illustratively, the second mapping relationship may be the same as the fourth mapping relationship.
S7045, tone mapping information corresponding to the region information of the region in which the graphic element is located is selected from the plurality of tone mapping information based on the second mapping relationship.
S7046, tone mapping information corresponding to the region information of the region where the graphic element is located is determined as tone mapping information corresponding to the graphic element.
S705, tone mapping is carried out on the region where the graphic element is located in the first image based on tone mapping information corresponding to the graphic element, and tone mapping is carried out on other regions in the first image based on tone mapping information corresponding to the first image, so as to obtain a second image.
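S705 can be sketched as applying one curve inside the element's region and another curve everywhere else. This is a minimal sketch under assumptions: the image is a list of rows of normalized luminances, the region format is a half-open `(top, left, bottom, right)` box, and both curves are placeholders.

```python
# Hedged sketch of S705: tone-map the element's region with its own curve
# and the other regions of the first image with the image-level curve.
def tone_map_subregions(image, region, element_curve, image_curve):
    """region is (top, left, bottom, right), half-open; an assumed format."""
    top, left, bottom, right = region
    return [
        [
            element_curve(v) if top <= y < bottom and left <= x < right
            else image_curve(v)
            for x, v in enumerate(row)
        ]
        for y, row in enumerate(image)
    ]

first_image = [[0.25, 0.25], [0.25, 0.25]]
second_image = tone_map_subregions(
    first_image, (0, 0, 1, 1),
    element_curve=lambda v: v,       # leave the graphic element unchanged
    image_curve=lambda v: v ** 0.5,  # brighten the rest of the image
)
assert second_image[0][0] == 0.25                 # inside the element region
assert abs(second_image[1][1] - 0.5) < 1e-9       # outside: 0.25 ** 0.5
```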
In the above image processing methods 600 and 700, the first image is divided into the region where the graphic element is located and the other regions for tone mapping; that is, tone mapping is performed sub-region by sub-region. In a possible manner, the first image is a high dynamic range (HDR) image, the first image includes an HDR layer and a standard dynamic range (SDR) layer, the SDR layer includes the graphic element, the HDR layer includes a second image, the second image is an HDR image, and the dynamic range of the pixel values of the pixels in the region where the graphic element is located is smaller than the dynamic range of the pixel values of the pixels in the second image. Thus, the SDR layer and the HDR layer of the first image may be tone mapped separately, that is, tone mapping may be performed layer by layer.
In a possible manner, the SDR layer may be tone-mapped based on first tone mapping information corresponding to the graphic element, and the HDR layer may be tone-mapped based on second tone mapping information corresponding to the second image, so as to obtain the target image, where the first tone mapping information is different from the second tone mapping information. That is, the tone mapping manner of the graphic element is different from that of the second image. In this way, the transition between the region where the graphic element is located and the other regions in the target image can be smoother.
In one possible manner, the first tone mapping information is tone mapping information corresponding to an SDR layer. The tone mapping information corresponding to the SDR layer may be regarded as tone mapping information corresponding to the graphic element.
In a possible manner, the first tone mapping information is generated based on the second tone mapping information and third tone mapping information, the third tone mapping information being tone mapping information corresponding to the SDR layer.
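The layer-wise alternative above can be sketched as follows. The blending rule used to generate the first tone mapping information from the second and third is an illustrative assumption; the embodiment does not specify the generation rule.

```python
# Hedged sketch of layer-wise tone mapping: the SDR layer uses first tone
# mapping information, the HDR layer uses second tone mapping information,
# and (per the second possible manner) the first is generated from the
# second and third. The 50/50 blend is an assumed generation rule only.
def make_first_tm(second_tm, third_tm):
    """Assumed rule: blend the SDR-layer curve (third) toward the HDR
    curve (second) so the two layers join more smoothly."""
    return lambda v: 0.5 * (second_tm(v) + third_tm(v))

def tone_map_layers(sdr_layer, hdr_layer, first_tm, second_tm):
    mapped_sdr = [[first_tm(v) for v in row] for row in sdr_layer]
    mapped_hdr = [[second_tm(v) for v in row] for row in hdr_layer]
    return mapped_sdr, mapped_hdr

second_tm = lambda v: v ** 0.5   # HDR-layer curve (placeholder)
third_tm = lambda v: v           # SDR-layer curve (placeholder)
first_tm = make_first_tm(second_tm, third_tm)
sdr, hdr = tone_map_layers([[0.25]], [[0.25]], first_tm, second_tm)
assert abs(sdr[0][0] - 0.375) < 1e-9   # blend of 0.5 and 0.25
assert abs(hdr[0][0] - 0.5) < 1e-9
```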
Fig. 8 is a schematic diagram of a data generating apparatus 800 according to an embodiment of the application. The data generating apparatus 800 may be used to perform the methods of the foregoing embodiments, so for the advantages it achieves, reference may be made to the advantages of the corresponding methods provided above, which are not repeated herein. The data generating apparatus 800 may be a terminal device or a server, or may be an apparatus in a terminal device or a server (as shown in fig. 8).
The data generating apparatus 800 may include:
an image acquisition module 801, configured to acquire a first image;
a description information obtaining module 802, configured to obtain description information of a graphic element in the first image, where the description information of the graphic element is used to tone-map an area where the graphic element in the first image is located;
the data generating module 803 is configured to generate first target data based on the first image and the description information of the graphic element.
Illustratively, the first image is a high dynamic range HDR image, the first image comprises an HDR layer and a standard dynamic range SDR layer, the SDR layer comprises a graphic element, the HDR layer comprises a second image, the second image is an HDR image, and the dynamic range of the pixel values of the pixel points in the region where the graphic element is located is smaller than the dynamic range of the pixel values of the pixel points in the second image.
Illustratively, the tone mapping of the second image is different from the tone mapping of the region in which the graphic element is located.
Illustratively, the graphical element includes at least one of a watermark, a logo, a user interface, a map.
Illustratively, the description information of the graphic element includes at least one of category information of the graphic element, region information of a region in which the graphic element is located, or tone mapping information corresponding to the graphic element.
The region information of the region in which the graphic element is located includes at least one of a shape of the region in which the graphic element is located, a size of the region in which the graphic element is located, a position of the region in which the graphic element is located, and pixel position mask information of the region in which the graphic element is located, by way of example.
The data generating module 803 is specifically configured to encode the first image to obtain a code stream, and write the description information of the graphic element into the code stream to obtain the first target data.
The data generating module 803 is specifically configured to: tone-map the region where the graphic element is located in the first image based on the description information of the graphic element to obtain a third image; generate enhancement data based on the first image and the third image, where the enhancement data includes enhancement values of all pixels in the first image; adjust the enhancement values of the pixels in the region where the graphic element is located in the enhancement data to a preset value to obtain adjusted enhancement data; and encode the third image and the adjusted enhancement data to obtain a code stream.
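The enhancement-data path can be sketched as below. This is a hedged illustration: the assumption is that an enhancement value is the per-pixel difference between the first image and the tone-mapped third image, the region format is a `(top, left, bottom, right)` box, and the preset value is 0; none of these specifics are fixed by the embodiment.

```python
# Hedged sketch of the code-stream path: compute per-pixel enhancement
# values (first image minus third image, an assumed definition) and force
# the values inside the element's region to a preset value before encoding.
def build_adjusted_enhancement(first_image, third_image, region, preset=0.0):
    top, left, bottom, right = region  # assumed half-open box format
    return [
        [
            preset if top <= y < bottom and left <= x < right
            else first_image[y][x] - third_image[y][x]
            for x in range(len(first_image[0]))
        ]
        for y in range(len(first_image))
    ]

first = [[0.8, 0.8], [0.8, 0.8]]
third = [[0.6, 0.6], [0.6, 0.6]]
enh = build_adjusted_enhancement(first, third, (0, 0, 1, 1))
assert enh[0][0] == 0.0                 # inside the region: preset value
assert abs(enh[1][1] - 0.2) < 1e-9      # outside: first - third
```

Zeroing the element's enhancement values means a decoder applying the enhancement data leaves the graphic element's region as it appears in the third image.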
The data generating module 803 is specifically configured to determine tone mapping information corresponding to the graphic element based on the description information of the graphic element when the description information of the graphic element is category information of the graphic element or region information of a region where the graphic element is located, and tone map the region where the graphic element is located in the first image based on the tone mapping information corresponding to the graphic element to obtain a third image.
The data generating module 803 is specifically configured to determine preset tone mapping information as tone mapping information corresponding to the graphic element.
When the description information of the graphic element is the type information of the graphic element, the data generating module 803 is specifically configured to determine the tone mapping information corresponding to the graphic element based on the description information of the graphic element by: obtaining a first mapping relationship, where the first mapping relationship includes correspondences between a plurality of pieces of type information and a plurality of pieces of tone mapping information, and one piece of type information corresponds to one type of graphic element and one piece of tone mapping information; selecting the tone mapping information corresponding to the type information of the graphic element from the plurality of pieces of tone mapping information based on the first mapping relationship; and determining the tone mapping information corresponding to the type information of the graphic element as the tone mapping information corresponding to the graphic element.
The data generating module 803 is specifically configured to: obtain a second mapping relationship, where the second mapping relationship includes correspondences between a plurality of pieces of region information and a plurality of pieces of tone mapping information, and one piece of tone mapping information corresponds to one or more pieces of region information; select the tone mapping information corresponding to the region information of the region where the graphic element is located from the plurality of pieces of tone mapping information based on the second mapping relationship; and determine the tone mapping information corresponding to the region information of the region where the graphic element is located as the tone mapping information corresponding to the graphic element.
Illustratively, the description information obtaining module 802 is specifically configured to generate description information of the graphic element based on the first image.
The apparatus also includes an interaction module for receiving an editing operation for the first image, the editing operation for adding the graphical element to the first image.
Illustratively, the apparatus further comprises a communication module for receiving second target data, the second target data comprising descriptive information of the first image and the graphical element;
the image obtaining module 801 is specifically configured to obtain a first image from the second target data;
The description information obtaining module 802 is specifically configured to obtain description information of the graphic element from the second target data.
Fig. 9 is a schematic diagram of an image processing apparatus 900 according to an embodiment of the application. The image processing apparatus 900 may be used to perform the method of the foregoing embodiment, so the advantages achieved by the method may refer to the advantages of the corresponding method provided above, and will not be described herein. The image processing apparatus 900 may be a terminal device or may be an apparatus in a terminal device (as shown in fig. 9).
The image processing apparatus 900 may include:
An acquisition module 901, configured to acquire a first image and description information of graphic elements in the first image;
an information determining module 902, configured to determine, based on the description information of the graphic element, an area where the graphic element is located in the first image and tone mapping information corresponding to the graphic element;
The tone mapping module 903 is configured to tone-map an area where the graphic element is located in the first image based on tone mapping information corresponding to the graphic element, so as to obtain a target image.
Illustratively, the first image is a high dynamic range HDR image, the first image comprises an HDR layer and a standard dynamic range SDR layer, the SDR layer comprises a graphic element, the HDR layer comprises a second image, the second image is an HDR image, and the dynamic range of the pixel values of the pixel points in the region where the graphic element is located is smaller than the dynamic range of the pixel values of the pixel points in the second image.
Illustratively, the tone mapping module 903 is specifically configured to tone-map the SDR layer based on first tone mapping information corresponding to the graphic element, and tone-map the HDR layer based on second tone mapping information corresponding to the second image, so as to obtain the target image, where the first tone mapping information is different from the second tone mapping information.
The first tone mapping information is tone mapping information corresponding to an SDR layer, or the first tone mapping information is generated based on the second tone mapping information and third tone mapping information, which is tone mapping information corresponding to an SDR layer.
The obtaining module 901 is specifically configured to receive target data, where the target data includes a first image and description information of graphic elements in the first image, and obtain the first image and the description information of the graphic elements in the first image from the target data.
Illustratively, the description information of the graphic element includes at least one of category information of the graphic element, region information of a region in which the graphic element is located, or tone mapping information corresponding to the graphic element.
The tone mapping module 903 is specifically configured to obtain a first mapping relationship when the description information of the graphic element is type information of the graphic element, where the first mapping relationship includes correspondence between a plurality of types of information and a plurality of tone mapping information, one type of information corresponds to one type of graphic element and one tone mapping information, select tone mapping information corresponding to the type information of the graphic element from the plurality of tone mapping information based on the first mapping relationship, and determine tone mapping information corresponding to the type information of the graphic element as tone mapping information corresponding to the graphic element.
The tone mapping module 903 is specifically configured to obtain a second mapping relationship when the description information of the graphic element is the region information of the region where the graphic element is located, where the second mapping relationship includes a correspondence between a plurality of region information and a plurality of tone mapping information, one tone mapping information corresponds to one or more region information, select tone mapping information corresponding to the region information of the region where the graphic element is located from the plurality of tone mapping information based on the second mapping relationship, and determine tone mapping information corresponding to the region information of the region where the graphic element is located as tone mapping information corresponding to the graphic element.
The information determining module 902 is specifically configured to: obtain a third mapping relationship when the description information of the graphic element is the type information of the graphic element, where the third mapping relationship includes correspondences between a plurality of pieces of type information and a plurality of pieces of region information, one piece of type information corresponds to one type of graphic element, and one piece of type information corresponds to one or more pieces of region information; select, based on the third mapping relationship, the region information corresponding to the type information of the graphic element from the plurality of pieces of region information; and determine, based on the region information corresponding to the type information of the graphic element, the region where the graphic element is located in the first image.

The information determining module 902 is specifically configured to: obtain a fourth mapping relationship when the description information of the graphic element is the tone mapping information corresponding to the graphic element, where the fourth mapping relationship includes correspondences between a plurality of pieces of tone mapping information and a plurality of pieces of region information, and one piece of tone mapping information corresponds to one or more pieces of region information; select, based on the fourth mapping relationship, the region information corresponding to the tone mapping information corresponding to the graphic element from the plurality of pieces of region information; and determine, based on the region information corresponding to the tone mapping information corresponding to the graphic element, the region where the graphic element is located in the first image.
In one example, FIG. 10 shows a schematic block diagram of an apparatus 1000 according to an embodiment of the application. The apparatus 1000 may include a processor 1001, a transceiver 1002, and optionally a memory 1003.
The various components of device 1000 are coupled together by bus 1004, where bus 1004 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are referred to in the figures as bus 1004.
Optionally, the memory 1003 may be used to store instructions in the foregoing method embodiments. The processor 1001 may be configured to execute instructions in the memory 1003 and to control the transceiver 1002 to receive signals and to control the transceiver 1002 to transmit signals.
The apparatus 1000 may be a terminal device (or server) or a chip of a terminal device (or server) in the above-described method embodiment.
For all relevant content of the steps in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules, which are not repeated herein.
The embodiment of the application also provides a chip including one or more interface circuits and one or more processors. The one or more processors receive or transmit data through the one or more interface circuits, and when the one or more processors execute computer instructions, the steps of the methods in the foregoing embodiments are performed. For example, the interface circuit may be the transceiver 1002.
The present embodiment also provides a computer-readable storage medium having stored therein computer instructions which, when executed on an electronic device, cause the electronic device to perform the related method steps described above to implement the method in the above embodiments. By way of example, computer readable storage media includes USB flash drives, removable hard disks, read-only memory (ROM), random access memory (random access memory, RAM), magnetic or optical disks, and the like.
The present embodiment also provides a computer program product comprising computer instructions which, when executed by a computer or processor, cause the computer to perform the above-described related steps to implement the method of the above-described embodiments. By way of example, a computer program product may be stored in random access memory (random access memory, RAM), flash memory, ROM, erasable programmable read-only memory (erasable programmable ROM, EPROM), electrically erasable programmable read-only memory (EEPROM), registers, hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
The electronic device, the computer readable storage medium, the computer program product or the chip provided in this embodiment are used to execute the corresponding method provided above, so that the beneficial effects thereof can be referred to the beneficial effects in the corresponding method provided above, and will not be described herein.
It will be appreciated by those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
Any features of the various embodiments of the application, as well as any features within the same embodiment, may be freely combined. Any such combination is within the scope of the application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (32)

1.一种数据生成方法,其特征在于,所述方法包括:1. A data generation method, characterized in that the method comprises: 获取第一图像;Get the first image; 获取所述第一图像中图形元素的描述信息,所述图形元素的描述信息用于对所述第一图像中所述图形元素所在的区域进行色调映射;Obtain description information of graphic elements in the first image, and use the description information of graphic elements to perform tone mapping on the region where the graphic elements are located in the first image; 基于所述第一图像和所述图形元素的描述信息,生成第一目标数据。First target data is generated based on the first image and the description information of the graphic elements. 2.根据权利要求1所述的方法,其特征在于,所述第一图像为高动态范围HDR图像,所述第一图像包括HDR图层和标准动态范围SDR图层,所述SDR图层包括所述图形元素,所述HDR图层包括第二图像,所述第二图像为HDR图像,所述图形元素所在的区域中像素点的像素值的动态范围小于所述第二图像中像素点的像素值的动态范围。2. The method according to claim 1, wherein the first image is a high dynamic range (HDR) image, the first image includes an HDR layer and a standard dynamic range (SDR) layer, the SDR layer includes the graphic element, the HDR layer includes a second image, the second image is an HDR image, and the dynamic range of the pixel values of the pixels in the region where the graphic element is located is smaller than the dynamic range of the pixel values of the pixels in the second image. 3.根据权利要求2所述的方法,其特征在于,所述第二图像的色调映射方式与所述图形元素所在的区域的色调映射方式不同。3. The method according to claim 2, wherein the tone mapping method of the second image is different from the tone mapping method of the region where the graphic element is located. 4.根据权利要求1至3任一项所述的方法,其特征在于,所述图形元素包括以下至少一种:水印、标志、用户界面、贴图。4. The method according to any one of claims 1 to 3, wherein the graphic element includes at least one of the following: watermark, logo, user interface, and sticker. 5.根据权利要求1至4任一项所述的方法,其特征在于,所述图形元素的描述信息包括以下至少一项:所述图形元素的种类信息、所述图形元素所在的区域的区域信息或所述图形元素对应的色调映射信息。5. 
The method according to any one of claims 1 to 4, wherein the description information of the graphic element includes at least one of the following: the type information of the graphic element, the area information of the region where the graphic element is located, or the tone mapping information corresponding to the graphic element. 6.根据权利要求5所述的方法,其特征在于,所述图形元素所在的区域的区域信息包括以下至少一项:所述图形元素所在的区域的形状、所述图形元素所在的区域的尺寸、所述图形元素所在的区域的位置、所述图形元素所在的区域的像素位置掩码信息。6. The method according to claim 5, wherein the region information of the region where the graphic element is located includes at least one of the following: the shape of the region where the graphic element is located, the size of the region where the graphic element is located, the position of the region where the graphic element is located, and the pixel position mask information of the region where the graphic element is located. 7.根据权利要求1至6任一项所述的方法,其特征在于,所述基于所述第一图像和所述图形元素的描述信息,生成第一目标数据,包括:7. The method according to any one of claims 1 to 6, characterized in that generating the first target data based on the first image and the description information of the graphic elements includes: 编码所述第一图像,以得到码流;Encode the first image to obtain a bitstream; 将所述图形元素的描述信息写入所述码流,以得到所述第一目标数据。The description information of the graphic element is written into the code stream to obtain the first target data. 8.根据权利要求7所述的方法,其特征在于,所述编码所述第一图像,以得到码流,包括:8. 
The method according to claim 7, wherein encoding the first image to obtain a bitstream comprises: 基于所述图形元素的描述信息对所述第一图像中图形元素所在的区域进行色调映射,以得到第三图像;Based on the description information of the graphic elements, the region where the graphic elements are located in the first image is tone-mapped to obtain the third image; 基于所述第一图像和所述第三图像,生成增强数据,所述增强数据包括所述第一图像中所有像素点的增强值;Based on the first image and the third image, enhancement data is generated, the enhancement data including the enhancement values of all pixels in the first image; 将所述增强数据中所述图形元素所在的区域中像素点的增强值调整为预设值,以得到调整后的增强数据;The enhancement values of the pixels in the region where the graphic element is located in the enhanced data are adjusted to preset values to obtain the adjusted enhanced data; 编码所述第三图像和所述调整后的增强数据,以得到所述码流。The third image and the adjusted enhanced data are encoded to obtain the bitstream. 9.根据权利要求8所述的方法,其特征在于,所述基于所述图形元素的描述信息对所述第一图像中图形元素所在的区域进行色调映射,以得到第三图像,包括:9. The method according to claim 8, characterized in that, the step of performing tone mapping on the region where the graphic element is located in the first image based on the description information of the graphic element to obtain the third image includes: 当所述图形元素的描述信息为所述图形元素的种类信息或所述图形元素所在的区域的区域信息时,基于所述图形元素的描述信息,确定所述图形元素对应的色调映射信息;When the description information of the graphic element is the type information of the graphic element or the area information of the region where the graphic element is located, the tone mapping information corresponding to the graphic element is determined based on the description information of the graphic element. 基于所述图形元素对应的色调映射信息,对所述第一图像中图形元素所在的区域进行色调映射,以得到所述第三图像。Based on the tone mapping information corresponding to the graphic elements, tone mapping is performed on the region where the graphic elements are located in the first image to obtain the third image. 10.根据权利要求9所述的方法,其特征在于,所述基于所述图形元素的描述信息,确定所述图形元素对应的色调映射信息,包括:10. 
The method according to claim 9, wherein determining the tone mapping information corresponding to the graphic element based on the description information of the graphic element includes: 将预设色调映射信息,确定为所述图形元素对应的色调映射信息。The preset tone mapping information is determined as the tone mapping information corresponding to the graphic element. 11.根据权利要求9所述的方法,其特征在于,当所述图形元素的描述信息为所述图形元素的种类信息时,所述基于所述图形元素的描述信息,确定所述图形元素对应的色调映射信息,包括:11. The method according to claim 9, characterized in that, when the description information of the graphic element is the type information of the graphic element, determining the tone mapping information corresponding to the graphic element based on the description information of the graphic element includes: 获取第一映射关系,所述第一映射关系包括多个种类信息与多个色调映射信息的对应关系,一个种类信息对应一种图形元素和一个色调映射信息;Obtain a first mapping relationship, which includes the correspondence between multiple categories of information and multiple tone mapping information, where one category of information corresponds to one graphic element and one tone mapping information. 基于所述第一映射关系,从所述多个色调映射信息中选取所述图形元素的种类信息对应的色调映射信息;Based on the first mapping relationship, select the tone mapping information corresponding to the type information of the graphic element from the plurality of tone mapping information; 将所述图形元素的种类信息对应的色调映射信息,确定为所述图形元素对应的色调映射信息。The tone mapping information corresponding to the type information of the graphic element is determined as the tone mapping information corresponding to the graphic element. 12.根据权利要求9所述的方法,其特征在于,当所述图形元素的描述信息为所述图形元素所在的区域的区域信息时,所述基于所述图形元素的描述信息,确定所述图形元素对应的色调映射信息,包括:12. 
The method according to claim 9, wherein when the description information of the graphic element is the area information of the region where the graphic element is located, determining the tone mapping information corresponding to the graphic element based on the description information of the graphic element includes: 获取第二映射关系,所述第二映射关系包括多个区域信息与多个色调映射信息的对应关系,一个色调映射信息对应一个或多个区域信息;Obtain a second mapping relationship, which includes the correspondence between multiple region information and multiple tone mapping information, where one tone mapping information corresponds to one or more region information; 基于所述第二映射关系,从所述多个色调映射信息中选取所述图形元素所在的区域的区域信息对应的色调映射信息;Based on the second mapping relationship, select the tone mapping information corresponding to the region information of the area where the graphic element is located from the plurality of tone mapping information; 将所述图形元素所在的区域的区域信息对应的色调映射信息,确定为所述图形元素对应的色调映射信息。The tone mapping information corresponding to the region information of the area where the graphic element is located is determined as the tone mapping information corresponding to the graphic element. 13.根据权利要求1至12任一项所述的方法,其特征在于,所述获取所述第一图像中图形元素的描述信息,包括:13. The method according to any one of claims 1 to 12, wherein obtaining the descriptive information of the graphic elements in the first image comprises: 基于所述第一图像,生成所述图形元素的描述信息。Based on the first image, description information of the graphic elements is generated. 14.根据权利要求1至13任一项所述的方法,其特征在于,所述方法还包括:14. The method according to any one of claims 1 to 13, characterized in that the method further comprises: 接收针对所述第一图像的编辑操作,所述编辑操作用于将所述图形元素添加至所述第一图像中。The system receives an editing operation on the first image, the editing operation being used to add the graphic element to the first image. 15.根据权利要求1至14任一项所述的方法,其特征在于,所述方法还包括:15. 
The method according to any one of claims 1 to 14, wherein the method further comprises:
receiving second target data, the second target data comprising the first image and the description information of the graphic element;
wherein obtaining the first image comprises: obtaining the first image from the second target data; and
obtaining the description information of the graphic element in the first image comprises: obtaining the description information of the graphic element from the second target data.

16. An image processing method, wherein the method comprises:
obtaining a first image and description information of a graphic element in the first image;
determining, based on the description information of the graphic element, a region where the graphic element is located in the first image and tone mapping information corresponding to the graphic element; and
performing tone mapping on the region where the graphic element is located in the first image based on the tone mapping information corresponding to the graphic element, to obtain a target image.
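The three steps of claim 16 can be sketched end to end on a toy luminance image. Everything below is an illustrative assumption: the description info is reduced to a rectangle plus a linear gain, standing in for whatever region and curve representation an implementation would actually carry.

```python
# Sketch of the claim-16 flow: obtain an image plus per-element description
# info, resolve each element's region and tone mapping info, then tone-map
# only those regions. A clamped linear gain stands in for a real curve.

def tone_map_region(image, region, gain):
    """Apply a simple gain-based tone mapping inside one rectangular region."""
    x0, y0, x1, y1 = region
    for y in range(y0, y1):
        for x in range(x0, x1):
            image[y][x] = min(1.0, image[y][x] * gain)  # clamp to display range
    return image

def process(first_image, descriptions):
    """descriptions: list of {"region": (x0, y0, x1, y1), "gain": float}."""
    for info in descriptions:
        tone_map_region(first_image, info["region"], info["gain"])
    return first_image

img = [[0.5] * 4 for _ in range(4)]  # toy 4x4 luminance image
target = process(img, [{"region": (0, 0, 2, 2), "gain": 1.5}])
print(target[0][0], target[3][3])    # 0.75 0.5 — only the element region changed
```

The point of the sketch is the selectivity: pixels outside the graphic element's region keep their original values, which is what distinguishes this per-element mapping from a global tone map.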
17. The method according to claim 16, wherein the first image is a high dynamic range (HDR) image, the first image comprises an HDR layer and a standard dynamic range (SDR) layer, the SDR layer comprises the graphic element, the HDR layer comprises a second image, the second image is an HDR image, and a dynamic range of pixel values in the region where the graphic element is located is smaller than a dynamic range of pixel values in the second image.

18. The method according to claim 17, wherein performing tone mapping on the region where the graphic element is located in the first image based on the tone mapping information corresponding to the graphic element, to obtain the target image, comprises:
performing tone mapping on the SDR layer based on first tone mapping information corresponding to the graphic element, and performing tone mapping on the HDR layer based on second tone mapping information corresponding to the second image, to obtain the target image, wherein the first tone mapping information is different from the second tone mapping information.

19. The method according to claim 18, wherein:
the first tone mapping information is tone mapping information corresponding to the SDR layer; or
the first tone mapping information is generated based on the second tone mapping information and third tone mapping information, wherein the third tone mapping information is tone mapping information corresponding to the SDR layer.
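The layer split of claims 17–18 amounts to tone-mapping the SDR graphics layer and the HDR video layer with different curves, then compositing. A minimal sketch, assuming linear gains in place of real tone mapping curves and alpha compositing in place of whatever blending an implementation uses:

```python
# Sketch of the claim-18 split: the SDR layer (graphic elements) and the HDR
# layer (the second image) receive different tone mapping, then the graphic
# is composited over the video by the SDR layer's alpha mask.

def tone_map_layers(hdr_layer, sdr_layer, alpha, first_gain, second_gain):
    h, w = len(hdr_layer), len(hdr_layer[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sdr = min(1.0, sdr_layer[y][x] * first_gain)   # first tone mapping info
            hdr = min(1.0, hdr_layer[y][x] * second_gain)  # second tone mapping info
            a = alpha[y][x]
            out[y][x] = a * sdr + (1 - a) * hdr            # graphic over video
    return out

hdr  = [[0.8, 0.8]]
sdr  = [[0.4, 0.0]]
mask = [[1.0, 0.0]]  # left pixel belongs to a graphic element
result = tone_map_layers(hdr, sdr, mask, first_gain=1.0, second_gain=0.5)
```

Here the graphic pixel keeps its SDR brightness (gain 1.0) while the video pixel is compressed (gain 0.5), which is the practical motivation for claim 18: graphics should not be brightened or dimmed by the curve chosen for the HDR content around them.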
20. The method according to any one of claims 16 to 19, wherein obtaining the first image and the description information of the graphic element in the first image comprises:
receiving target data, the target data comprising the first image and the description information of the graphic element in the first image; and
obtaining, from the target data, the first image and the description information of the graphic element in the first image.

21. The method according to any one of claims 16 to 20, wherein the description information of the graphic element comprises at least one of the following: type information of the graphic element, region information of a region where the graphic element is located, or tone mapping information corresponding to the graphic element.

22. The method according to any one of claims 16 to 21, wherein, when the description information of the graphic element is the type information of the graphic element, determining the tone mapping information corresponding to the graphic element based on the description information of the graphic element comprises:
obtaining a first mapping relationship, the first mapping relationship comprising correspondences between a plurality of pieces of type information and a plurality of pieces of tone mapping information, wherein one piece of type information corresponds to one type of graphic element and one piece of tone mapping information;
selecting, based on the first mapping relationship, the tone mapping information corresponding to the type information of the graphic element from the plurality of pieces of tone mapping information; and
determining the tone mapping information corresponding to the type information of the graphic element as the tone mapping information corresponding to the graphic element.

23. The method according to any one of claims 16 to 22, wherein, when the description information of the graphic element is the region information of the region where the graphic element is located, determining the tone mapping information corresponding to the graphic element based on the description information of the graphic element comprises:
obtaining a second mapping relationship, the second mapping relationship comprising correspondences between a plurality of pieces of region information and a plurality of pieces of tone mapping information, wherein one piece of tone mapping information corresponds to one or more pieces of region information;
selecting, based on the second mapping relationship, the tone mapping information corresponding to the region information of the region where the graphic element is located from the plurality of pieces of tone mapping information; and
determining the tone mapping information corresponding to the region information of the region where the graphic element is located as the tone mapping information corresponding to the graphic element.

24.
The method according to any one of claims 16 to 23, wherein, when the description information of the graphic element is the type information of the graphic element, determining the region where the graphic element is located in the first image based on the description information of the graphic element comprises:
obtaining a third mapping relationship, the third mapping relationship comprising correspondences between a plurality of pieces of type information and a plurality of pieces of region information, wherein one piece of type information corresponds to one type of graphic element and to one or more pieces of region information;
selecting, based on the third mapping relationship, the region information corresponding to the type information of the graphic element from the plurality of pieces of region information; and
determining, based on the region information corresponding to the type information of the graphic element, the region where the graphic element is located in the first image.

25.
The method according to any one of claims 16 to 24, wherein, when the description information of the graphic element is the tone mapping information corresponding to the graphic element, determining the region where the graphic element is located in the first image based on the description information of the graphic element comprises:
obtaining a fourth mapping relationship, the fourth mapping relationship comprising correspondences between a plurality of pieces of tone mapping information and a plurality of pieces of region information, wherein one piece of tone mapping information corresponds to one or more pieces of region information;
selecting, based on the fourth mapping relationship, the region information corresponding to the tone mapping information corresponding to the graphic element from the plurality of pieces of region information; and
determining, based on the region information corresponding to the tone mapping information corresponding to the graphic element, the region where the graphic element is located in the first image.

26. Data, wherein the data comprises a first image and description information of a graphic element in the first image, the description information of the graphic element being used to perform tone mapping on a region where the graphic element is located in the first image.

27. The data according to claim 26, wherein the data comprises a bitstream, the bitstream comprising encoded data of the first image and encoded data of the description information of the graphic element.

28. The data according to claim 26, wherein the data comprises a bitstream, the bitstream comprising encoded data of the first image.
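The "data" of claims 26–27 pairs the encoded first image with encoded description info in one container so a receiver can tone-map the element regions. A sketch of such a container follows; the length-prefixed framing and JSON encoding are illustrative assumptions, not the bitstream syntax the claims define.

```python
# Hypothetical packaging of the claim-26/27 data: [image length][image data]
# [description length][description data], with the description info as JSON.
import json
import struct

def pack(image_bytes, descriptions):
    """Bundle encoded image data with encoded description info."""
    desc_bytes = json.dumps(descriptions).encode("utf-8")
    return (struct.pack(">I", len(image_bytes)) + image_bytes +
            struct.pack(">I", len(desc_bytes)) + desc_bytes)

def unpack(blob):
    """Recover the encoded image and the description info from the container."""
    (n,) = struct.unpack_from(">I", blob, 0)
    image_bytes = blob[4:4 + n]
    (m,) = struct.unpack_from(">I", blob, 4 + n)
    descriptions = json.loads(blob[8 + n:8 + n + m].decode("utf-8"))
    return image_bytes, descriptions

data = pack(b"\x89IMG", [{"type": "subtitle", "region": [0, 0, 64, 16]}])
img, desc = unpack(data)
print(desc[0]["type"])  # subtitle
```

Claim 28's variant, where only the image is in the bitstream, corresponds to carrying the description info out of band instead of in the second field.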
29. An electronic device, comprising:
a memory and a processor, the memory being coupled to the processor;
wherein the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the method according to any one of claims 1 to 25.

30. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program that, when run on a computer or a processor, causes the computer or the processor to perform the method according to any one of claims 1 to 25.

31. A computer program product, wherein the computer program product comprises computer instructions that, when executed by a computer or a processor, cause the steps of the method according to any one of claims 1 to 25 to be performed.

32. A computer-readable storage medium, wherein the computer-readable storage medium stores data, the data comprising a first image and description information of a graphic element in the first image, the description information of the graphic element being used to perform tone mapping on a region where the graphic element is located in the first image.
CN202410741736.8A 2024-06-07 2024-06-07 Data generation method, image processing method, data and electronic equipment Pending CN121098995A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202410741736.8A CN121098995A (en) 2024-06-07 2024-06-07 Data generation method, image processing method, data and electronic equipment
PCT/CN2025/093183 WO2025251827A1 (en) 2024-06-07 2025-05-07 Data generation method, image processing method, data and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410741736.8A CN121098995A (en) 2024-06-07 2024-06-07 Data generation method, image processing method, data and electronic equipment

Publications (1)

Publication Number Publication Date
CN121098995A true CN121098995A (en) 2025-12-09

Family

ID=97894527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410741736.8A Pending CN121098995A (en) 2024-06-07 2024-06-07 Data generation method, image processing method, data and electronic equipment

Country Status (2)

Country Link
CN (1) CN121098995A (en)
WO (1) WO2025251827A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9973723B2 (en) * 2014-02-24 2018-05-15 Apple Inc. User interface and graphics composition with high dynamic range video
EP3451677A1 (en) * 2017-09-05 2019-03-06 Koninklijke Philips N.V. Graphics-safe hdr image luminance re-grading
US11049228B2 (en) * 2019-07-25 2021-06-29 Microsoft Technology Licensing, Llc Controlling display brightness when rendering composed scene-referred and output-referred content
CN115564659B (en) * 2022-02-28 2024-04-05 荣耀终端有限公司 Video processing method and device
CN117376713A (en) * 2022-06-27 2024-01-09 华为技术有限公司 HDR image editing method, device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
WO2025251827A1 (en) 2025-12-11

Similar Documents

Publication Publication Date Title
US10440329B2 (en) Hybrid media viewing application including a region of interest within a wide field of view
WO2017107758A1 (en) Ar display system and method applied to image or video
CN108282612B (en) Video processing method, computer storage medium and terminal
JP7511026B2 (en) Image data encoding method and device, display method and device, and electronic device
US20090262136A1 (en) Methods, Systems, and Products for Transforming and Rendering Media Data
US9165605B1 (en) System and method for personal floating video
CN116962743A (en) Video image coding and matting method and device and live broadcast system
CN103548343A (en) Device and method for converting 2D content into 3D content and computer-readable storage medium thereof
WO2025152573A1 (en) Information display method based on dynamic digital human avatar, and electronic device
CN112019906A (en) Live broadcast method, computer equipment and readable storage medium
CN117830077A (en) Image processing method, device and electronic equipment
WO2024067461A1 (en) Image processing method and apparatus, and computer device and storage medium
US9460544B2 (en) Device, method and computer program for generating a synthesized image from input images representing differing views
JP7672835B2 (en) Information processing device, information processing method, and program
CN113538601B (en) Image processing method, device, computer equipment and storage medium
CN121098995A (en) Data generation method, image processing method, data and electronic equipment
CN117097983A (en) Image processing methods and devices
CN109151568B (en) Video processing method and related product
CN113784169A (en) Video recording method and device with bullet screen
US12462320B2 (en) Systems and methods for extending selectable object capability to a captured image
US20260038075A1 (en) Systems and methods for extending selectable object capability to a captured image
CN118537418A (en) Image editing rollback method and device, electronic equipment and medium
JP2020187528A (en) Image processing apparatus, image processing system, image generation method, and program
JP2001008231A (en) Method and system for transmitting multi-viewpoint image of object in three-dimensional space
HK40076039B (en) Image processing method and apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication