CN118266009A - Coloring method and image processor - Google Patents
- Publication number
- CN118266009A (application CN202280076969.XA)
- Authority
- CN
- China
- Prior art keywords
- illumination
- pixel
- point
- reference point
- position information
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T19/00—Manipulating 3D models or images for computer graphics
Abstract
The application provides a coloring method and an image processor. The coloring method is applied to an image processor that processes a first image, the first image comprising a plurality of preset illumination areas. The method comprises the following steps: acquiring position information of a preset reference point in a first illumination area and position information of a plurality of pixel points in the first illumination area, wherein the first illumination area is any one of the plurality of illumination areas; acquiring illumination data of the reference point according to the position information of the reference point; and acquiring illumination data of the plurality of pixel points according to the position information of the reference point, the illumination data of the reference point, and the position information of the plurality of pixel points. The application can save a large amount of computing power (about 50 percent), reduce power consumption, and keep errors under control.
Description
The present application relates to image processor technology, and more particularly, to a shading method and an image processor.
Since 2001, the Pixel Shader (PS) has been a standard component of image processors (graphics processing unit, GPU). To support parallel processing, PS instances are designed to be independent of one another, with no data exchange or dependency between pixels.
With the advent of variable rate shading (VRS) technology, GPUs gained the ability to employ different shading resolutions in different areas. However, the rendering effect and rendering efficiency of VRS technology still need improvement.
Disclosure of Invention
The application provides a coloring method and an image processor, which can save a large amount of computing power (about 50 percent), reduce power consumption, and keep errors under control.
In a first aspect, the present application provides a shading method applied to an image processor for processing a first image, the first image comprising a plurality of illumination areas for indicating areas where light acts on the first image; the method comprises the following steps: acquiring position information of a preset reference point in a first illumination area and position information of a plurality of pixel points in the first illumination area, wherein the first illumination area is any one of the plurality of illumination areas; acquiring illumination data of the reference point according to the position information of the reference point; and acquiring illumination data of the plurality of pixel points according to the position information of the reference point, the illumination data of the reference point, and the position information of the plurality of pixel points.
In the embodiment of the application, the illumination data of each pixel point in the first illumination area is obtained by calculating the illumination data of the reference point and then exploiting the positional correlation between the reference point and each pixel point in the first illumination area. Compared with the prior art, in which the illumination data of each pixel point in the first illumination area is calculated and acquired independently, this processing mode can save a large amount of computing power (about 50 percent) and reduce power consumption; and because the illumination data of the plurality of pixel points are acquired jointly within a small range (the illumination area is chosen to be small in size), the error can be controlled. The illumination data is used to reflect the change of the color data of the image under illumination, and can be represented as pixel values, mainly RGB values.
In the embodiment of the present application, the coloring modes may include a region mode (quad mode) and a pixel mode (pixel mode); the coloring mode may be selected according to the characteristics of the image, for which reference may be made to the embodiment shown in fig. 2, not repeated here.
A reference point may be selected in the first illumination area according to a preset rule. Alternatively, the reference point may be a center point of the first illumination area, or the reference point may be any one pixel point in the first illumination area. For example, the size of the first illumination area is 2×2, and includes 4 pixels, where the pixel at the top left, bottom left, top right, or bottom right may be selected as the reference point, and the intersection point of the 4 pixels (i.e., the center point of the first illumination area) may be selected as the reference point.
Optionally, position information of a plurality of vertices of a preset first rendering area may be acquired, where the first rendering area includes a first illumination area; and acquiring the position information of the reference point and the position information of the plurality of pixel points according to the position information of the plurality of vertexes and the reference point.
The first rendering area comprises a first illumination area, i.e. the first illumination area is a sub-area of the first rendering area. Alternatively, the first rendering area may be a triangle area.
In an embodiment of the application, the location information comprises normals and/or coordinates. Wherein both the normal and the coordinates can be referred to as attribute data of the point, and the normal (normal) is a line perpendicular to the tangent of the curve at the point, and can be calculated from the slope of the curve at the point. The coordinates may be represented by coordinates of the horizontal and vertical axes for marking the position of the point.
The position information of the reference point and the position information of the plurality of pixel points can be obtained by interpolation.
The first rendering area comprises an illumination area, i.e. the illumination area is a sub-area of the first rendering area. Typically, the attribute data may include normals, positions (e.g., abscissa and ordinate), texture information, color data (e.g., RGB values, which refers to initial color data, for which illumination factors have not been considered), and the like.
Any point in the first rendering area can be interpolated from the position information of a plurality of vertices of the first rendering area according to the position of the point.
For example, setting weights of corresponding vertexes according to distances between the point and the vertexes of the first rendering area respectively, and then carrying out weighted summation on normals of the vertexes to obtain normals of the point; or setting the weight of the corresponding vertex according to the area of the triangle formed by the point and any two adjacent vertexes, and then carrying out weighted summation on the normals of the vertexes to obtain the normals of the point.
For another example, the first rendering area or the first image is rasterized, so that coordinates of any point (including a reference point and a plurality of pixel points) in the first illumination area are obtained.
It should be noted that, other setting methods may be adopted for the weights of the vertices of the first rendering area, which is not specifically limited in the embodiment of the present application. In addition, the embodiment of the application can acquire the normal line of any point by adopting other methods besides interpolation, and the embodiment of the application is not particularly limited.
In the present application, other data of any point in the rendering area may be obtained by the above method, and the data may include texture information, initial color data (for example, RGB values, which refers to initial color data, and illumination factors have not been considered) and the like in addition to the normal line and the coordinates. For example, initial color data of a plurality of vertices may be acquired, and interpolation may be performed based on the initial color data of the plurality of vertices to acquire initial color data of a plurality of pixel points. The method for obtaining the attribute data may be described above, and will not be described herein.
In the above example, the reference point is a center point of the first illumination area, and the embodiment of the present application may also select one pixel point in the first illumination area as the reference point, for example, the upper left corner pixel point of the square area of 2×2. In this case, the attribute data of the reference point may still be obtained by the above method, which is not described in detail.
In one possible implementation, the GPU may obtain the first illumination area according to a preset area map of the first image. The first image may be divided into a plurality of image blocks, for example, into a plurality of 32×32 image blocks, and a region size (quad size) may be set for each image block, for example, the region size may be set according to the content of the image block, the contained object, the texture, or the like. During the shading process, the first image may be traversed according to a predetermined order, and the image location (or image pixel) currently being shaded may be referred to as the current shading location. The region size corresponding to the current coloring position can be obtained from the region map according to the image block to which the current coloring position belongs, and then the region with the same size as the region size is obtained at the current coloring position as the first illumination region. The size of the first illumination area is 2×2 or 4×4.
Typically, the pixels of an image are traversed in a certain order to carry out the coloring, so the coloring position varies in real time across the image. The image position to be processed denotes the image position being processed at the current moment. The region map (quad map) may come from the pipeline (pipeline) and is preset; the quad map includes a plurality of regions and the region size (quad size) corresponding to each region. The GPU may find the region to which the image position to be processed belongs in the quad map, and then find the region size corresponding to that region; for example, the region size corresponding to the image position to be processed is 2×2. Based on this, the GPU obtains a 2×2 region at the image position to be processed as the illumination area to be processed, the illumination area including 4 pixel points.
The GPU may generate a region list (quad list) that includes image locations and region sizes corresponding to the image locations. The GPU thereafter groups all pixels included in the illumination area into a group (group), processes a plurality of pixels in units of the group, and outputs a plurality of color data. The illumination area may be a square area or a rectangular area, which is not particularly limited. Alternatively, the size of the illumination area to be treated is 2×2 or 4×4.
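As an illustration of the quad-map lookup and grouping described above, the following C++ sketch simulates the behavior on the CPU. All names, types, and the tile layout are assumptions made for the example, not the patent's actual implementation.

```cpp
// Minimal sketch of the quad-map lookup described above; hypothetical names.
#include <vector>

struct QuadMap {
    int tileSize = 32;                 // the first image is split into 32x32 blocks
    int tilesPerRow = 0;               // number of blocks per image row
    std::vector<int> quadSizePerTile;  // preset quad size (e.g. 2 or 4) per block

    // Region size (quad size) for the pixel position currently being shaded.
    int quadSizeAt(int x, int y) const {
        int tile = (y / tileSize) * tilesPerRow + (x / tileSize);
        return quadSizePerTile[tile];
    }
};

struct QuadGroup {
    int originX, originY;  // top-left pixel of the illumination area
    int size;              // 2 -> 2x2 area, 4 -> 4x4 area
};

// Gather the pixels of one illumination area into one group, mirroring a
// single entry of the "quad list" generated during rasterization.
QuadGroup makeGroup(const QuadMap& map, int shadeX, int shadeY) {
    int s = map.quadSizeAt(shadeX, shadeY);
    return QuadGroup{shadeX - shadeX % s, shadeY - shadeY % s, s};
}
```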
In the embodiment of the application, the illumination data of the reference point can be obtained by adopting any illumination formula, for example, a Phong illumination model is adopted. The input of the illumination formula may comprise attribute data of the reference point, which illumination formula is programmable, typically when processing the same region in the first image, the same illumination formula may be used, whereas for different regions in the first image, e.g. representing different objects in the first image, different illumination formulas may be used, based on which illumination formulas may be preset. The application does not limit the illumination formula in detail.
In one possible implementation, the GPU obtains a first parameter corresponding to location information of a reference point; acquiring illumination data of a first pixel point according to the first parameter, the relative position relation between the first pixel point and the reference point and the illumination data of the reference point, wherein the first pixel point is any one of a plurality of pixel points; the relative position relation between the first pixel point and the reference point is determined through the position information of the first pixel point and the position information of the reference point.
In one possible implementation, the location information refers to a normal. The normal line is one of the attribute data, and can be obtained by the interpolation unit. The normal (normal) is a line perpendicular to the tangent of the curve at a point and can be calculated from the slope of the curve at the point.
In one possible implementation, the location information refers to coordinates. The coordinates are one of the attribute data, and can be obtained by the interpolation unit described above.
In one possible implementation manner, after the illumination data of a plurality of pixel points are acquired, when the second pixel point meets a preset condition, setting a mask of the second pixel point to be 1, and indicating that the illumination data of the second pixel point is valid; after the illumination data of the plurality of pixel points are obtained, when the second pixel point does not meet the preset condition, setting a mask of the second pixel point to 0 for indicating that the illumination data of the second pixel point is invalid; the second pixel point is any one of the plurality of pixel points, and the second pixel point may be the first pixel point above or may not be the first pixel point above.
The preset condition may be, for example: whether the second pixel is within the range of the currently drawn object. If yes, the color data representing the second pixel point is valid, the mask is set to 1, and if not, the color data representing the second pixel point is invalid, and the mask is set to 0. It should be noted that, the preset conditions are not specifically limited in the embodiment of the present application.
In addition, the GPU may further obtain target color data of the plurality of pixels according to the initial color data of the plurality of pixels obtained by the interpolation unit and the illumination data of the plurality of pixels obtained by the operation unit. For example, the initial color data and the illumination data of a pixel are added to obtain the target color data of the pixel.
In one possible implementation, the GPU may further obtain illumination data of the pixel point to be processed when the coloring mode is a pixel mode. The above related techniques for coloring the single pixel point may be adopted at this time, and will not be described here.
In a second aspect, the present application provides an image processor for processing a first image, the first image comprising a preset plurality of illumination areas; the image processor includes: an interpolation unit and an operation unit; the interpolation unit is used for acquiring position information of a preset reference point in a first illumination area and position information of a plurality of pixel points in the first illumination area, wherein the first illumination area is any one of the plurality of illumination areas; the operation unit is used for acquiring illumination data of the reference point according to the position information of the reference point; and acquiring illumination data of the plurality of pixel points according to the position information of the reference point, the illumination data of the reference point and the position information of the plurality of pixel points.
In a possible implementation manner, the operation unit is specifically configured to: acquiring a first parameter corresponding to the position information of the reference point; acquiring illumination data of a first pixel point according to the first parameter, the relative position relation between the first pixel point and the reference point and the illumination data of the reference point, wherein the first pixel point is any one of the plurality of pixel points; the relative position relation between the first pixel point and the reference point is determined through the position information of the first pixel point and the position information of the reference point.
In a possible implementation manner, the interpolation unit is specifically configured to: acquiring the position information of a plurality of vertexes of a first rendering area, wherein the first rendering area comprises the first illumination area; and acquiring the position information of the reference point and the position information of the plurality of pixel points according to the position information of the plurality of vertexes and the preset reference point.
In a possible implementation manner, the interpolation unit is further configured to: acquiring initial color data of the plurality of vertexes; acquiring initial color data of the plurality of pixel points according to the initial color data of the plurality of vertexes; the operation unit is further configured to: and after the illumination data of the plurality of pixel points are obtained, obtaining target color data of the plurality of pixel points according to the initial color data of the plurality of pixel points and the illumination data of the plurality of pixel points.
In one possible implementation, the first rendering area is a triangle area.
In one possible implementation, the location information includes normals and/or coordinates.
In a possible implementation manner, the operation unit is further configured to: after the illumination data of the plurality of pixel points are obtained, when a second pixel point meets a preset condition, setting a mask of the second pixel point to be 1 for indicating that the illumination data of the second pixel point is effective; after the illumination data of the plurality of pixel points are obtained, when the second pixel point does not meet the preset condition, setting a mask of the second pixel point to 0 for indicating that the illumination data of the second pixel point is invalid; wherein the second pixel point is any one of the plurality of pixel points.
In one possible implementation, the size of the first illumination area is 2×2 or 4×4.
In a possible implementation manner, the reference point is a center point of the first illumination area; or the reference point is any pixel point in the first illumination area.
In a third aspect, the present application provides an electronic device comprising: an image processor; a memory for storing one or more programs; the one or more programs, when executed by the image processor, cause the image processor to implement the method of any of the first aspects described above.
In a fourth aspect, the present application provides a computer readable storage medium comprising a computer program which, when executed on a computer, causes the computer to perform the method of any of the first aspects above.
In a fifth aspect, the present application provides a computer program for performing the method of any one of the first aspects above, when the computer program is executed by a computer.
FIG. 1 is a schematic diagram of a conventional rendering pipeline;
FIG. 2 is a schematic diagram of a rendering pipeline supporting a region shader (quad shader);
FIG. 3 is a block diagram of an image processor including a quad shader according to an embodiment of the present application;
FIG. 4 is a schematic view of an illumination area;
FIG. 5 is a flow chart of a process 500 of a shading method according to an embodiment of the present application;
FIGS. 6a and 6b are schematic views of rendering effects;
Fig. 7a and 7b are schematic views of rendering effects.
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first," "second," and the like in the description and in the claims and drawings are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion, such as a series of steps or elements. The method, system, article, or apparatus is not necessarily limited to those explicitly listed but may include other steps or elements not explicitly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of" the following items or similar expressions refers to any combination of these items, including any combination of a single item or plural items. For example, at least one of a, b, or c may mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may be single or plural.
Since 2001, the Pixel Shader (PS) has been a standard component of image processors (graphics processing unit, GPU). To support parallel processing, PS instances are designed to be independent of one another, with no data exchange or dependency between pixels. Fig. 1 is a schematic diagram of a conventional rendering pipeline. As shown in fig. 1, the pixel shader stage performs illumination and texture-mapping processing on each rasterized pixel (pixel), thereby calculating the color data of the pixel under illumination. The core idea of the pixel shader is that each pixel point is independent, with no data exchange or dependency between pixel points.
Accordingly, if the pixel shader is very complex, a large amount of illumination and texture mapping occurs in the pixel shader stage and each pixel calculates its color data separately, so computation becomes the bottleneck. If the pixel shader is very simple, as in some render passes (render pass) whose pixel shaders are very simple, the starting and destruction of pixel shader threads becomes the dominant cost. Given that there is a large amount of repetition or shared reference in the calculations of adjacent pixels, sharing calculations among pixels within a small range can greatly improve performance, reduce power consumption, and reduce the overhead of thread maintenance.
The advent of variable rate shading (VRS) technology provides GPUs with the ability to employ different shading resolutions in different areas. However, VRS technology outputs only one color data for the pixel points within one image block (block), resulting in insufficient smoothness of the rendering effect.
In order to solve the above technical problems, embodiments of the present application provide an image processor and a coloring method, which are described below by way of embodiments.
FIG. 2 is a schematic diagram of a rendering pipeline supporting a region shader (quad shader). As shown in FIG. 2, the pipeline has two branches after the image rasterization process: one is the quad shader and the other is the pixel shader.
The quad shader can form adjacent pixels into a group (group), process the plurality of pixels in one shader invocation with the group as the unit, and output a plurality of color data; the pixels in the group constitute the current illumination area. The output of the quad shader may be represented as a color data set out vec4 my_FragColor[QUAD_SIZE_X][QUAD_SIZE_Y], where QUAD_SIZE_X represents the width of the current illumination area and QUAD_SIZE_Y represents its height. Illustratively, QUAD_SIZE_X may be 2 or 4, and QUAD_SIZE_Y may be 2 or 4. It should be noted that the embodiments of the present application do not limit the values of QUAD_SIZE_X and QUAD_SIZE_Y.
The pixel shader adopts the related technology: color data is calculated for each pixel point independently, the pixels are mutually independent, and no data exchange or dependency is needed between pixels. The output of the pixel shader may be represented as the color data out vec4 my_FragColor of the current pixel point.
In the embodiment of the application, whether the quad shader or the pixel shader is selected can be determined according to the characteristics of the image position currently being processed; for example, the pixel shader can be used for image positions with more detail and more complicated textures, and the quad shader can be used for image positions with simpler textures and backgrounds. For image positions where the quad shader is selected, a coarse-grained or fine-grained quad shader can further be chosen according to the image characteristics. The user may preset the rules for selecting the shader, which is not specifically limited in the embodiment of the present application.
Alternatively, a rendering area may be divided in advance in the image, and the rendering area may be a regular polygon (e.g., square, rectangle, etc.), or may be an irregular shape (e.g., region of interest, person region, background region, etc.), which is not particularly limited. Further, the illumination area is divided in the rendering area, and the illumination area may be used to indicate an area where light acts on the image, and is a processing object of the coloring method according to the embodiment of the present application, that is, each coloring is to group a plurality of pixels included in the illumination area, perform coloring processing in units of groups, and output color data of the plurality of pixels. The different illumination areas may have the same area size (quad size), or may have different area sizes, which is not particularly limited. A region map (quad map) may be preset, and the quad map includes a plurality of regions and quad size corresponding to each region. It should be noted that the quad map may be represented by a list, a record, or the like, which is not particularly limited in the embodiment of the present application.
The color data output by the quad shader or the pixel shader is input to a blending unit (blending) for subsequent rendering processing.
Therefore, the rendering pipeline of the embodiment of the application can support the quad shader provided by the embodiment of the application while remaining compatible with the existing pixel shader; the two complement each other's shortcomings to achieve the best shading effect and realize a variety of flexible shading schemes.
It should be noted that fig. 2 is only an example of a rendering pipeline, and the structure of the rendering pipeline is not specifically limited in the embodiment of the present application.
In the embodiment of the present application, the quad shader may be a capability perceived by application developers, so interfaces may be exposed to application developers through an application programming interface (application programming interface, API) protocol (e.g., Vulkan, OpenGL ES, etc.), which requires extending the API. It should be understood that embodiments of the present application are not limited to particular APIs. Application developers use the capability of the quad shader through these interfaces. The following takes Vulkan as an example of the main modifications to the API:
A new shader stage is added to the API, and the new stage has the ability to process multiple pixels.
Several ways to specify quad size in the API:
1. The quad size of the different regions is specified by the quad map.
2. The quad size is passed through the pipeline.
3. The quad size is set via vkcmdSetQuadSize.
When multiple rules take effect simultaneously, that is, when the API uses more than one of the above modes (any two, or all three) to specify the quad size, the priority of the modes needs to be defined, and only the quad size determined by the highest-priority mode is used, so that the same area is not given multiple quad sizes by different modes, which would cause confusion.
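The priority resolution can be sketched in C++ as follows. The specific priority order (command over pipeline over quad map) is an assumption made for the sketch; the text above only requires that some fixed order be defined.

```cpp
// Illustrative resolution of one quad size from the three mechanisms above.
#include <optional>

struct QuadSizeSources {
    std::optional<int> fromSetQuadSizeCmd;  // set via vkcmdSetQuadSize
    std::optional<int> fromPipeline;        // specified at pipeline creation
    std::optional<int> fromQuadMap;         // per-region value from the quad map
};

// Highest-priority source wins, so one area never ends up with two quad sizes.
int resolveQuadSize(const QuadSizeSources& s, int fallback = 1) {
    if (s.fromSetQuadSizeCmd) return *s.fromSetQuadSizeCmd;  // assumed highest
    if (s.fromPipeline)       return *s.fromPipeline;
    if (s.fromQuadMap)        return *s.fromQuadMap;
    return fallback;  // 1x1 degenerates to ordinary per-pixel shading
}
```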
Defining special inputs and outputs of the quad shader:
Input: attribute data quad in vec2 TexCoord of the center point of the illumination area;
Or attribute data pixel vec2 texCoord[QUAD_SIZE_X][QUAD_SIZE_Y] of the pixels in the illumination area;
Output: color data out vec4 my_FragColor[QUAD_SIZE_X][QUAD_SIZE_Y] of the plurality of pixel points contained in the illumination area.
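To make the contract concrete, the following C++ sketch models one quad-shader invocation on the CPU: a single invocation receives the reference point's attributes and fills a QUAD_SIZE_X × QUAD_SIZE_Y block of colors. The types and the placeholder body are assumptions for illustration.

```cpp
// CPU-side model of the quad shader interface defined above (hypothetical).
#include <array>

constexpr int QUAD_SIZE_X = 2;
constexpr int QUAD_SIZE_Y = 2;

struct Vec2 { float x, y; };
struct Vec4 { float r, g, b, a; };

using QuadOutput = std::array<std::array<Vec4, QUAD_SIZE_Y>, QUAD_SIZE_X>;

// One invocation shades the whole illumination area: the reference point's
// attributes come in, one color per contained pixel goes out.
QuadOutput quadShaderMain(Vec2 centerTexCoord) {
    QuadOutput myFragColor{};
    for (int i = 0; i < QUAD_SIZE_X; ++i)
        for (int j = 0; j < QUAD_SIZE_Y; ++j)
            // Placeholder: a real body shades the reference point once and
            // derives the neighbors from it (see the gradient scheme below).
            myFragColor[i][j] = Vec4{0.0f, 0.0f, 0.0f, 1.0f};
    return myFragColor;
}
```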
Fig. 3 is a block diagram of an image processor including a quad shader according to an embodiment of the present application. The image processor is configured to process a first image, the first image includes a plurality of illumination areas, and any one of the illumination areas includes a plurality of pixels. As shown in fig. 3, an image processor including a quad shader may include a rasterizing unit, an interpolation unit, an operation unit, and a blending unit, wherein:
A rasterization unit (raster), configured to acquire a first illumination area, where the first illumination area is any one of a plurality of preset illumination areas.
The rasterizing unit may acquire the first illumination area according to a preset area map of the first image. The first image may be divided into a plurality of image blocks, for example, into a plurality of 32×32 image blocks, and a region size (quad size) may be set for each image block, for example, the region size may be set according to the content of the image block, the contained object, the texture, or the like. During the shading process, the first image may be traversed according to a predetermined order, and the image location (or image pixel) currently being shaded may be referred to as the current shading location. The region size corresponding to the current coloring position can be obtained from the region map according to the image block to which the current coloring position belongs, and then the region with the same size as the region size is obtained at the current coloring position as the first illumination region.
Typically, the pixels of an image are traversed in a certain order to carry out the coloring, so the coloring position varies in real time across the image. The image position to be processed denotes the image position being processed at the current moment. The region map (quad map) may come from the pipeline (pipeline) and is preset; the quad map includes the correspondence between the positions of a plurality of regions and the region size (quad size) of each region. The rasterizing unit may find the region to which the image position to be processed belongs in the quad map, and then find the region size corresponding to that region; for example, the region size corresponding to the image position to be processed is 2×2. Based on this, the rasterizing unit acquires a 2×2 region at the image position to be processed as the illumination area to be processed, the illumination area including 4 pixel points.
The rasterizing unit may generate a region list (quad list) including the image position and the region size corresponding to the image position. The unit thereafter groups all the pixels included in the illumination area into one group (group), processes a plurality of pixels in units of groups, and outputs a plurality of illumination data. The illumination area may be a square area or a rectangular area, which is not particularly limited. Optionally, the size of the first illumination area is 2×2 or 4×4.
An interpolation unit (Interpolator) for acquiring position information of a preset reference point in the first illumination area and position information of a plurality of pixel points in the first illumination area.
A reference point may be selected in the first illumination area according to a preset rule. Alternatively, the reference point may be a center point of the first illumination area, or the reference point may be any one pixel point in the first illumination area. For example, the size of the first illumination area is 2×2, and includes 4 pixels, where the pixel at the top left, bottom left, top right, or bottom right may be selected as the reference point, and the intersection point of the 4 pixels (i.e., the center point of the first illumination area) may be selected as the reference point.
In one possible implementation manner, the interpolation unit may acquire position information of a plurality of vertices of a first rendering area, where the first rendering area includes a first illumination area; and acquiring the position information of the reference point and the position information of the plurality of pixel points according to the position information of the plurality of vertexes and the reference point.
The first rendering area comprises a first illumination area, i.e. the first illumination area is a sub-area of the rendering area. Alternatively, the first rendering area may be a triangle area.
In an embodiment of the application, the location information comprises normals and/or coordinates. Wherein both the normal and the coordinates can be referred to as attribute data of the point, and the normal (normal) is a line perpendicular to the tangent of the curve at the point, and can be calculated from the slope of the curve at the point. The coordinates may be represented by coordinates of the horizontal and vertical axes for marking the position of the point.
The position information of the reference point and the position information of the plurality of pixel points can be obtained by interpolation.
Any point in the first rendering area can be interpolated from the position information of a plurality of vertices of the first rendering area according to the position of the point.
For example, setting weights of corresponding vertexes according to distances between the point and the vertexes of the first rendering area respectively, and then carrying out weighted summation on normals of the vertexes to obtain normals of the point; or setting the weight of the corresponding vertex according to the area of the triangle formed by the point and any two adjacent vertexes, and then carrying out weighted summation on the normals of the vertexes to obtain the normals of the point.
For another example, the first rendering area or the first image is rasterized, so that coordinates of any point (including a reference point and a plurality of pixel points) in the first illumination area are obtained.
It should be noted that, other setting methods may be adopted for the weights of the vertices of the first rendering area, which is not specifically limited in the embodiment of the present application. In addition, the embodiment of the application can acquire the position information of any point by adopting other methods besides interpolation, and the embodiment of the application is not particularly limited.
Fig. 4 is a schematic view of an illumination area. As shown in fig. 4, in one image, the first rendering area is a triangle area, and the position information of the 3 vertices (a, b, and c) of the first rendering area can be obtained. The first illumination area is a 2×2 square area in the first rendering area, which includes 4 pixel points (one square represents one pixel point in fig. 4), and the reference point is the center point of the first illumination area.
In a possible implementation, the interpolation unit may interpolate the normal of the reference point from the normals of the vertices (a, b, and c). For example, if the reference point is at distance D1 from vertex a, distance D2 from vertex b, and distance D3 from vertex c, the weight of vertex a can be obtained as (1/D1)/(1/D1+1/D2+1/D3), the weight of vertex b as (1/D2)/(1/D1+1/D2+1/D3), and the weight of vertex c as (1/D3)/(1/D1+1/D2+1/D3); the normal of the reference point is then calculated as the weighted sum of the three vertex normals.
For another example, if the area of the triangle formed by the reference point, vertex a, and vertex b is S1, the area of the triangle formed by the reference point, vertex b, and vertex c is S2, and the area of the triangle formed by the reference point, vertex a, and vertex c is S3, the weight of vertex a is obtained as S2/(S1+S2+S3), the weight of vertex b as S3/(S1+S2+S3), and the weight of vertex c as S1/(S1+S2+S3); the normal of the reference point is then calculated as the corresponding weighted sum of the vertex normals.
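The two weighting schemes can be sketched in C++ as follows; the formulas follow the reconstruction above (inverse-distance weights and sub-triangle-area, i.e., barycentric, weights), and the vector types are assumptions for the example.

```cpp
// Sketch of the two vertex-weighting schemes for interpolating a normal.
struct Vec3 { float x, y, z; };

static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static Vec3 add3(Vec3 a, Vec3 b, Vec3 c) {
    return {a.x + b.x + c.x, a.y + b.y + c.y, a.z + b.z + c.z};
}

// Distance-based: a vertex's weight falls off with its distance to the point.
Vec3 normalByDistance(Vec3 na, Vec3 nb, Vec3 nc, float D1, float D2, float D3) {
    float wa = 1.0f / D1, wb = 1.0f / D2, wc = 1.0f / D3;
    float sum = wa + wb + wc;
    return add3(scale(na, wa / sum), scale(nb, wb / sum), scale(nc, wc / sum));
}

// Area-based (barycentric): each vertex is weighted by the area of the
// sub-triangle opposite it, e.g. S2 = area(point, b, c) weights vertex a.
Vec3 normalByArea(Vec3 na, Vec3 nb, Vec3 nc, float S1, float S2, float S3) {
    float sum = S1 + S2 + S3;
    return add3(scale(na, S2 / sum), scale(nb, S3 / sum), scale(nc, S1 / sum));
}
```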
The method can also be used for obtaining the normals of 4 pixel points in the illumination area, and the principle is the same and is not repeated here.
In the present application, the interpolation unit may further obtain other data of any point in the first rendering area by the above method, where the data may include texture information, initial color data (for example, RGB values, which refers to initial color data, and illumination factors have not been considered) and the like in addition to the normal and coordinates. For example, the interpolation unit may acquire initial color data of a plurality of vertices, interpolate according to the initial color data of the plurality of vertices to acquire initial color data of a plurality of pixel points. The method for obtaining the attribute data may be described above, and will not be described herein.
In the above example, the reference point is the center point of the illumination area, and the embodiment of the present application may also select one pixel point in the illumination area as the reference point, for example, the upper left corner pixel point of the square area of 2×2. In this case, the attribute data of the reference point may still be obtained by the above method, which is not described in detail.
An operation unit, configured to acquire illumination data of the reference point according to the position information of the reference point, and to acquire illumination data of the plurality of pixel points according to the position information of the reference point, the illumination data of the reference point, and the position information of the plurality of pixel points.
The operation unit may receive as input an instruction set (instruction set architecture, ISA) for execution-unit (EU) operations; the instructions in the instruction set may include the position information of the reference point, the illumination data of the reference point, and the position information of the plurality of pixel points. The operation unit may output an array of illumination data, where the array includes the illumination data of the plurality of pixels in the first illumination area; the illumination data refers to the color data of a pixel under illumination. Usually, the initial color data represents the original color of the pixel, without considering illumination; to account for the influence of ambient illumination on the image, an illumination factor can be added on the basis of the initial color data of the pixel to obtain the pixel's color data under illumination. This color data can be called illumination data and can be used for subsequent rendering processing to better match the rendering effect under actual illumination.
The operation of the operation unit is divided into two steps. First, the illumination data of the reference point is acquired through an illumination formula according to the position information. In the embodiment of the application, any illumination formula may be used to obtain the illumination data of the reference point, for example, the Phong illumination model. The input of the illumination formula may comprise attribute data of the reference point, which may include, in addition to the normals and coordinates mentioned above, texture information, initial color data (for example, RGB values; this refers to the initial color data, for which illumination factors have not yet been considered), and the like. The illumination formula is programmable; generally, the same illumination formula may be used when processing the same region in the first image, whereas different illumination formulas may be used for different regions in the first image, for example, regions representing different objects. Therefore, the illumination formula is not specifically limited in the present application.
And secondly, acquiring illumination data of a plurality of pixel points according to the position information of the reference point, the illumination data of the reference point and the position information of the plurality of pixel points. The embodiment of the application can respectively acquire the illumination data of a plurality of pixels contained in the illumination area by adopting the method described below, and the target pixel point represents any one of the plurality of pixels contained in the illumination area.
Therefore, the operation unit first acquires the illumination data of the reference point, and then, using the correlation between adjacent pixel points, derives the illumination data of the pixel points adjacent to the reference point from the illumination data of the reference point, so that the illumination data of all pixel points in the illumination area can be acquired. Compared with directly acquiring the illumination data of each pixel point with the illumination formula of the first step, this can save a large amount of computing power (about 50 percent) and reduce power consumption; and because the illumination data of the plurality of pixel points are acquired jointly within a small range (the illumination area is chosen to be small in size), the error can be controlled.
The embodiment of the application can adopt a processing mode based on gradient.
In one possible implementation manner, the operation unit is specifically configured to obtain a first parameter corresponding to the location information of the reference point; acquiring illumination data of a first pixel point according to the first parameter, the relative position relation between the first pixel point and the reference point and the illumination data of the reference point, wherein the first pixel point is any one of a plurality of pixel points; the relative position relation between the first pixel point and the reference point is determined through the position information of the first pixel point and the position information of the reference point.
In one possible implementation, the location information refers to a normal. The normal line is one of the attribute data, and can be obtained by the interpolation unit. The normal (normal) is a line perpendicular to the tangent of the curve at a point and can be calculated from the slope of the curve at the point.
Obtaining illumination data of a target pixel point according to the following formula, wherein the target pixel point is one of the pixel points:
S(n1) = S(n0) + S′(n0)·(n1 − n0)
wherein S(n) is a function that calculates illumination data from the normal (normal); S(n1) represents the illumination data of the target pixel point; S(n0) represents the illumination data of the reference point; S′(n0) represents the gradient value of the illumination data of the reference point with respect to the normal of the reference point, i.e., the first parameter; n0 represents the normal of the reference point; and n1 represents the normal of the target pixel point.
In one possible implementation, the location information refers to coordinates. The coordinates are one of the attribute data, and can be obtained by the interpolation unit described above.
Obtaining illumination data of a target pixel point according to the following formula, wherein the target pixel point is one of the pixel points:
S(p1) = S(p0) + S′(p0)·(x1 − x0)
wherein S(p) is a function that calculates illumination data from coordinates; S(p1) represents the illumination data of the target pixel point; S(p0) represents the illumination data of the reference point; S′(p0) represents the gradient value of the illumination data of the reference point with respect to the coordinates of the reference point, i.e., the first parameter; p0 represents the coordinates of the reference point; p1 represents the coordinates of the target pixel point; x0 represents the x-axis coordinate of the reference point; and x1 represents the x-axis coordinate of the target pixel point.
Alternatively, S () in the above two formulas may be replaced with the following formula:
B(x, θ) = ∫ L(x, ω) · ρ(x, θ, ω) · V(x, ω) · cos ω dω
wherein x represents the coordinates of the point; θ represents the direction of the observer; the normal n is used as an implicit parameter and is obtained from the slope of the curve at the point; L represents the incident light; ρ() represents the bidirectional reflectance distribution function; V represents visibility, which determines whether the incident light is blocked; and B() is a function that calculates the radiance (radiance), i.e., the light intensity seen from the viewpoint direction.
The gradient values may be calculated as the partial derivatives S′(n0) = ∂S/∂n evaluated at n0 and S′(p0) = ∂S/∂x evaluated at x0, wherein n represents the normal and x represents the abscissa.
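A minimal C++ sketch of this first-order extrapolation, under the assumption of a scalar attribute (one normal component or one coordinate) and one color channel:

```cpp
// Shade the reference point once, then derive each neighbor from the stored
// gradient: S(a1) = S(a0) + S'(a0) * (a1 - a0). Names are illustrative.
struct ShadedRef {
    float value;     // S at the reference point (one channel of illumination data)
    float gradient;  // S' at the reference point, w.r.t. the chosen attribute
};

// attrRef: attribute value at the reference point (n0 or x0).
// attrPix: the same attribute at the target pixel (n1 or x1).
inline float extrapolate(const ShadedRef& ref, float attrRef, float attrPix) {
    return ref.value + ref.gradient * (attrPix - attrRef);
}
```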
For example, gradient rendering may be applied to the Phong model. The Phong model is one of the simplest illumination models, and this embodiment uses the Phong model to describe the effect of applying gradient rendering in the quad shader.
Specular component calculation of Phong model:
Specular = (dot(Reflect(L, N), V))^α
wherein L represents the direction of the incident light; N represents the current normal vector; V represents the viewpoint direction; and α represents the highlight factor, which determines the degree of highlight concentration.
The gradient of Specular with respect to normal.x can be calculated by the chain rule as ∂Specular/∂normal.x = α · (dot(Reflect(L, N), V))^(α−1) · ∂dot(Reflect(L, N), V)/∂normal.x, where ∂Specular/∂normal.x is the gradient of Specular with respect to normal.x; the gradient calculations with respect to normal.y and normal.z are similar.
It should be noted that, the above formula exemplarily describes the calculation performed by the calculation unit, but the embodiment of the present application is not limited to this, that is, the calculation method is not specifically limited thereto.
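As an illustration of the above, the following C++ sketch computes the Phong specular term and its analytic derivative with respect to normal.x once at the reference point; the helper types and the facing-away early-out are assumptions of the example, not the patent's implementation.

```cpp
// Phong specular term and its gradient w.r.t. normal.x at the reference point.
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct SpecularAtRef {
    float value;       // Specular at the reference normal
    float dValue_dNx;  // derivative w.r.t. normal.x (y and z are analogous)
};

// Specular = (dot(Reflect(L, N), V))^alpha with Reflect(L, N) = L - 2(L.N)N,
// hence dot(Reflect(L, N), V) = L.V - 2 (L.N)(N.V).
SpecularAtRef phongSpecularWithGradient(Vec3 L, Vec3 N, Vec3 V, float alpha) {
    float d = dot(L, V) - 2.0f * dot(L, N) * dot(N, V);
    if (d <= 0.0f) return {0.0f, 0.0f};  // facing away: no highlight (assumed)
    // Chain rule: dd/dN.x = -2 * (L.x * (N.V) + (L.N) * V.x).
    float dd_dNx = -2.0f * (L.x * dot(N, V) + dot(L, N) * V.x);
    return {std::pow(d, alpha), alpha * std::pow(d, alpha - 1.0f) * dd_dNx};
}

// A neighboring pixel whose interpolated normal differs by deltaNx can then
// be approximated as ref.value + ref.dValue_dNx * deltaNx.
```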
In addition, the operation unit is further configured to: after the illumination data of the plurality of pixel points are obtained, when a second pixel point meets a preset condition, set the mask of the second pixel point to 1, indicating that the illumination data of the second pixel point is valid; and when the second pixel point does not meet the preset condition, set the mask of the second pixel point to 0, indicating that the illumination data of the second pixel point is invalid. The second pixel point is any one of the plurality of pixel points, and it may or may not be the first pixel point above.
The preset condition may be, for example: whether the second pixel is within the range of the currently drawn object. If yes, the illumination data representing the second pixel point is valid, the mask is set to 1, and if not, the illumination data representing the second pixel point is invalid, and the mask is set to 0. It should be noted that, the preset conditions are not specifically limited in the embodiment of the present application.
In addition, the operation unit may further obtain the target color data of the plurality of pixels according to the initial color data of the plurality of pixels obtained by the interpolation unit and the illumination data of the plurality of pixels obtained by the operation unit. For example, the initial color data and the illumination data of a pixel are added to obtain the target color data of the pixel.
A blending unit (blending) for rendering according to the target color data of the second pixel point when the mask of the second pixel point is 1; and discarding the target color data of the second pixel point when the mask of the second pixel point is 0.
Based on the determination of the operation unit, the blending unit may discard the invalid target color data, which can improve rendering efficiency and optimize the rendering effect.
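For illustration, the mask-driven behavior described above can be sketched as follows; the structures and the stand-in blend step are assumptions made for the example.

```cpp
// Mask check from the operation unit, then blend-or-discard in the blending unit.
#include <vector>

struct ShadedPixel {
    float r, g, b, a;
    int mask;  // 1: illumination data valid, 0: invalid
};

// Stand-in for the blending stage: only pixels with mask == 1 are written;
// target color data with mask == 0 is discarded.
void blend(std::vector<ShadedPixel>& framebuffer, const ShadedPixel& p, int index) {
    if (p.mask == 1) {
        framebuffer[index] = p;  // placeholder for the real blend equation
    }
}
```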
In the embodiment of the application, the illumination data of each pixel point in the first illumination area is obtained by calculating the illumination data of the reference point and then exploiting the positional correlation between the reference point and each pixel point in the first illumination area. Compared with the prior art, in which the illumination data of each pixel point in the first illumination area is calculated and acquired independently, this processing mode can save a large amount of computing power (about 50 percent) and reduce power consumption; and because the illumination data of the plurality of pixel points are acquired jointly within a small range (the illumination area is chosen to be small in size), the error can be controlled. The illumination data is used to reflect the change of the color data of the image under illumination, and can be represented as pixel values, mainly RGB values.
FIG. 5 is a flow chart of a process 500 of a coloring method according to an embodiment of the present application. Process 500 may be performed by the GPU shown in fig. 3. Process 500 is described as a series of steps or operations; it should be understood that process 500 may be performed in various orders and/or concurrently and is not limited to the order of execution depicted in fig. 5. The GPU with the quad shader is configured to process a first image, the first image includes a plurality of illumination areas, and any one of the illumination areas includes a plurality of pixels. Process 500 may include:
Step 501, acquiring position information of a preset reference point in a first illumination area and position information of a plurality of pixel points in the first illumination area, wherein the first illumination area is any one of a plurality of preset illumination areas.
In the embodiment of the present application, the coloring modes may include a region mode (quad mode) and a pixel mode (pixel mode), and the selection of the coloring modes may be according to the characteristics of the image, and reference may be made to the embodiment shown in fig. 2, which is not described herein.
A reference point may be selected in the first illumination area according to a preset rule. Alternatively, the reference point may be a center point of the first illumination area, or the reference point may be any one pixel point in the first illumination area. For example, the size of the first illumination area is 2×2, and includes 4 pixels, where the pixel at the top left, bottom left, top right, or bottom right may be selected as the reference point, and the intersection point of the 4 pixels (i.e., the center point of the first illumination area) may be selected as the reference point.
Optionally, position information of a plurality of vertices of a preset first rendering area may be acquired, where the first rendering area includes a first illumination area; and acquiring the position information of the reference point and the position information of the plurality of pixel points according to the position information of the plurality of vertexes and the reference point.
The first rendering area comprises a first illumination area, i.e. the first illumination area is a sub-area of the first rendering area. Alternatively, the first rendering area may be a triangle area.
In an embodiment of the application, the location information comprises normals and/or coordinates. Wherein both the normal and the coordinates can be referred to as attribute data of the point, and the normal (normal) is a line perpendicular to the tangent of the curve at the point, and can be calculated from the slope of the curve at the point. The coordinates may be represented by coordinates of the horizontal and vertical axes for marking the position of the point.
The position information of the reference point and the position information of the plurality of pixel points can be obtained by interpolation.
Any point in the first rendering area can be interpolated from the position information of a plurality of vertices of the first rendering area according to the position of the point.
For example, setting weights of corresponding vertexes according to distances between the point and the vertexes of the first rendering area respectively, and then carrying out weighted summation on normals of the vertexes to obtain normals of the point; or setting the weight of the corresponding vertex according to the area of the triangle formed by the point and any two adjacent vertexes, and then carrying out weighted summation on the normals of the vertexes to obtain the normals of the point.
For another example, the first rendering area or the first image may be rasterized to obtain the coordinates of any point (including the reference point and the plurality of pixel points) in the first illumination area.
It should be noted that other setting methods may be adopted for the weights of the vertices of the rendering area, which is not specifically limited in the embodiment of the present application. In addition, the position information of any point may be acquired by methods other than interpolation, which is likewise not specifically limited.
FIG. 4 is a schematic view of an illumination area. As shown in FIG. 4, in one image the first rendering area is a triangle area, and the position information of its 3 vertices (a, b, and c) can be obtained. The first illumination area is a 2×2 square area that includes 4 pixel points (in FIG. 4, one small square represents one pixel point), and the reference point is the center point of the first illumination area.
In one possible implementation, the GPU may interpolate the normal of the reference point from the normals of the vertices (a, b, and c). For example, if the distance from the reference point to vertex a is D1, to vertex b is D2, and to vertex c is D3, the weight of vertex a may be obtained as w_a = (1/D1)/(1/D1 + 1/D2 + 1/D3), the weight of vertex b as w_b = (1/D2)/(1/D1 + 1/D2 + 1/D3), and the weight of vertex c as w_c = (1/D3)/(1/D1 + 1/D2 + 1/D3); the normal of the reference point is then calculated as n0 = w_a·n_a + w_b·n_b + w_c·n_c, where n_a, n_b, and n_c are the normals of vertices a, b, and c.
For another example, if the area of the triangle formed by the reference point, vertex a, and vertex b is S1, the area of the triangle formed by the reference point, vertex b, and vertex c is S2, and the area of the triangle formed by the reference point, vertex a, and vertex c is S3, the weight of vertex a may be obtained as w_a = S2/(S1 + S2 + S3), the weight of vertex b as w_b = S3/(S1 + S2 + S3), and the weight of vertex c as w_c = S1/(S1 + S2 + S3) (each vertex is weighted by the area of the sub-triangle opposite it); the normal of the reference point is then calculated as n0 = w_a·n_a + w_b·n_b + w_c·n_c.
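As an illustration of the two weighting schemes above, the following Python sketch interpolates a vertex normal in 2-D. It is a minimal sketch only: the inverse-distance form of the first scheme is an assumption (the original formula is not reproduced in this text), and all vertex positions, normals, and helper names are hypothetical.

```python
import numpy as np

def tri_area(u, v, w):
    # Area of triangle (u, v, w) from the 2-D cross-product magnitude.
    return 0.5 * abs((v[0] - u[0]) * (w[1] - u[1]) - (v[1] - u[1]) * (w[0] - u[0]))

def barycentric_weights(p, a, b, c):
    # Second scheme: each vertex is weighted by the area of the sub-triangle
    # opposite it: S1 = (p, a, b), S2 = (p, b, c), S3 = (p, a, c) as in the text.
    s1, s2, s3 = tri_area(p, a, b), tri_area(p, b, c), tri_area(p, a, c)
    total = s1 + s2 + s3
    return s2 / total, s3 / total, s1 / total  # weights for a, b, c

def inverse_distance_weights(p, a, b, c):
    # First scheme (assumed form): weights inversely proportional to D1, D2, D3.
    inv = np.array([1.0 / np.linalg.norm(p - v) for v in (a, b, c)])
    inv /= inv.sum()
    return inv[0], inv[1], inv[2]

def interpolate_normal(p, verts, normals, weight_fn=barycentric_weights):
    # Weighted summation of the vertex normals, renormalized to unit length.
    wa, wb, wc = weight_fn(p, *verts)
    n = wa * normals[0] + wb * normals[1] + wc * normals[2]
    return n / np.linalg.norm(n)
```

The same call can be reused for the reference point and for each of the 4 pixel points of the quad, which is the reuse noted in the next paragraph.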
The normals of the 4 pixel points in the illumination area can be obtained in the same way; the principle is identical and is not repeated here.
In the present application, other data of any point in the first rendering area may be obtained by the above method. In addition to the normal and the coordinates, such data may include texture information, initial color data (for example, RGB values before illumination factors are considered), and the like. For example, initial color data of the plurality of vertices may be acquired, and interpolation may be performed on the initial color data of the plurality of vertices to acquire initial color data of the plurality of pixel points. The method for obtaining such attribute data is described above and is not repeated here.
In the above example, the reference point is a center point of the first illumination area, and the embodiment of the present application may also select one pixel point in the first illumination area as the reference point, for example, the upper left corner pixel point of the square area of 2×2. In this case, the attribute data of the reference point may still be obtained by the above method, which is not described in detail.
In one possible implementation, the GPU may obtain the first illumination area according to a preset region map of the first image. The first image may be divided into a plurality of image blocks, for example a plurality of 32×32 image blocks, and a region size (quad size) may be set for each image block, for example according to the content, objects, or texture of the image block. During the shading process, the first image may be traversed in a predetermined order, and the image position (or image pixel) currently being shaded is referred to as the current coloring position. The region size corresponding to the current coloring position can be obtained from the region map according to the image block to which the current coloring position belongs, and a region of that size is then obtained at the current coloring position as the first illumination area. Optionally, the size of the first illumination area is 2×2 or 4×4.
For one image, the pixels are typically traversed in a certain order to achieve the coloring, so the coloring position varies in real time across the image; the image position to be processed is the image position being processed at the current moment. The region map (quad map) may be derived from the pipeline (pipeline) and is preset; it includes a plurality of regions and a region size (quad size) corresponding to each region. The GPU may find the region to which the image position to be processed belongs in the quad map and then find the corresponding region size, for example 2×2. Based on this, the GPU obtains a 2×2 region at the image position to be processed as the illumination region to be processed, the illumination region including 4 pixels.
The GPU may generate a region list (quad list) that includes image locations and the region sizes corresponding to those locations. The GPU then groups all pixels included in an illumination area, processes the pixels in units of groups, and outputs a plurality of illumination data. The illumination area may be a square area or a rectangular area, which is not particularly limited. Optionally, the size of the illumination area to be processed is 2×2 or 4×4.
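A minimal sketch of this lookup follows, assuming the first image is divided into 32×32 image blocks and the quad map is a simple dictionary keyed by block index; all names and the traversal order are illustrative assumptions, not the embodiment's actual data structures.

```python
BLOCK = 32  # assumed image-block size from the example above

def quad_size_at(quad_map, x, y):
    # Find the image block containing position (x, y), then return the
    # region size (quad size) configured for that block.
    return quad_map.get((x // BLOCK, y // BLOCK), 1)  # default: per-pixel

def build_quad_list(quad_map, width, height):
    # Simplified traversal emitting (image position, quad size) pairs;
    # assumes the quad size is uniform within each row of blocks.
    quad_list, y = [], 0
    while y < height:
        x, row_step = 0, 1
        while x < width:
            size = quad_size_at(quad_map, x, y)
            quad_list.append(((x, y), size))
            row_step = max(row_step, size)
            x += size
        y += row_step
    return quad_list
```

Each entry of the resulting quad list then yields a size×size group of pixels that the quad shader processes together.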
Step 502, obtaining illumination data of the reference point according to the position information of the reference point.
In the embodiment of the application, the illumination data of the reference point may be obtained using any illumination formula, for example the Phong illumination model. The input of the illumination formula may include the attribute data of the reference point, and the illumination formula is programmable: typically the same illumination formula is used when processing the same region of the first image, whereas different regions of the first image (for example, regions representing different objects) may use different illumination formulas. The application therefore does not specifically limit the illumination formula.
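By way of illustration only, the reference point's illumination data might be computed with a simple Phong-style evaluation such as the sketch below; the light direction, view direction, and material coefficients (kd, ks, alpha) are hypothetical placeholders, not values mandated by the embodiment.

```python
import numpy as np

def phong_illumination(n, light_dir, view_dir, kd=0.7, ks=0.3, alpha=16.0):
    # Diffuse plus specular Phong terms (ambient omitted for brevity).
    n = n / np.linalg.norm(n)
    l = light_dir / np.linalg.norm(light_dir)   # surface-to-light direction
    v = view_dir / np.linalg.norm(view_dir)     # surface-to-viewer direction
    r = 2.0 * np.dot(n, l) * n - l              # reflected light direction
    diffuse = kd * max(np.dot(n, l), 0.0)
    specular = ks * max(np.dot(r, v), 0.0) ** alpha
    return diffuse + specular
```

In quad mode, this full evaluation is performed only once per illumination area, at the reference point; step 503 below derives the remaining pixels from it.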
Step 503, obtaining illumination data of a plurality of pixel points according to the position information of the reference point, the illumination data of the reference point and the position information of the plurality of pixel points.
In one possible implementation, the GPU acquires a first parameter corresponding to the position information of the reference point, and acquires illumination data of a first pixel point according to the first parameter, the relative position relation between the first pixel point and the reference point, and the illumination data of the reference point, where the first pixel point is any one of the plurality of pixel points. The relative position relation between the first pixel point and the reference point is determined by the position information of the first pixel point and the position information of the reference point.
In one possible implementation, the location information refers to a normal. The normal is one of the attribute data and can be obtained by the interpolation unit described above. The normal is a line perpendicular to the tangent of the curve at a point; it can be calculated from the slope of the curve at that point and can be used to characterize the point.
The illumination data of a target pixel point is obtained according to the following formula, where the target pixel point is one of the plurality of pixel points:

S(n1) = S(n0) + S′(n0)(n1 − n0)

wherein S(n) is the function that calculates illumination data from a normal; S(n1) represents the illumination data of the target pixel point; S(n0) represents the illumination data of the reference point; S′(n0) represents the gradient value of the illumination data of the reference point with respect to the normal of the reference point, i.e., the first parameter; n0 represents the normal of the reference point; and n1 represents the normal of the target pixel point.
In one possible implementation, the location information refers to coordinates. The coordinates are one of the attribute data, and can be obtained by the interpolation unit described above.
The illumination data of a target pixel point is obtained according to the following formula, where the target pixel point is one of the plurality of pixel points:

S(p1) = S(p0) + S′(p0)(x1 − x0)

wherein S(p) is the function that calculates illumination data from coordinates; S(p1) represents the illumination data of the target pixel point; S(p0) represents the illumination data of the reference point; S′(p0) represents the gradient value of the illumination data of the reference point with respect to the coordinates of the reference point, i.e., the first parameter; p0 represents the coordinates of the reference point; p1 represents the coordinates of the target pixel point; x0 represents the x-axis coordinate of the reference point; and x1 represents the x-axis coordinate of the target pixel point.
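The two expansions above share the same structure: evaluate the illumination function once at the reference point, then extrapolate linearly to the other pixels. The sketch below implements that structure for a scalar argument (a normal component or an x coordinate); the central finite difference for the gradient is one possible way to obtain the first parameter and is an assumption, not the embodiment's prescribed method.

```python
def central_gradient(S, x0, eps=1e-4):
    # First parameter S'(x0), approximated by a central finite difference.
    return (S(x0 + eps) - S(x0 - eps)) / (2.0 * eps)

def extrapolate_quad(S, ref, others):
    # Evaluate S fully only at the reference point; every other pixel in the
    # quad reuses S(ref) and S'(ref):  S(t) ~ S(ref) + S'(ref) * (t - ref).
    s_ref, g_ref = S(ref), central_gradient(S, ref)
    return [s_ref + g_ref * (t - ref) for t in others]
```

For a 2×2 quad this replaces three full evaluations of S with one multiply-add each, which is the source of the computing-power saving discussed later.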
Alternatively, S() in the above two formulas may be replaced with the following formula:

B(x, θ) = ∫ L(x, ω) ρ(x, θ, ω) V(x, ω) cos ω dω

wherein x represents the coordinates of the point; θ represents the direction of the observer; the normal n is a hidden parameter, calculated from the slope of the curve at the point; L represents the incident light; ρ() represents the bidirectional reflectance distribution function (BRDF); V represents visibility, which determines whether the incident light is blocked; and B() is the function that calculates the radiance, i.e., the light intensity seen from the viewpoint direction.
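Purely as an intuition aid, B() can be approximated numerically; the toy one-dimensional Monte Carlo estimate below stands in for the hemispherical integral and is not part of the embodiment — L, rho, and V are assumed to be user-supplied callables with the signatures shown in the formula.

```python
import math
import random

def radiance_estimate(L, rho, V, x, theta, samples=256):
    # Toy 1-D Monte Carlo estimate of
    #   B(x, theta) = ∫ L(x, w) * rho(x, theta, w) * V(x, w) * cos(w) dw
    # over w in [-pi/2, pi/2] (interval length pi).
    total = 0.0
    for _ in range(samples):
        w = random.uniform(-math.pi / 2.0, math.pi / 2.0)
        total += L(x, w) * rho(x, theta, w) * V(x, w) * math.cos(w)
    return math.pi * total / samples  # interval length times the sample mean
```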
The gradient value may be calculated as follows:

S′(n0) = ∂S/∂n evaluated at n = n0, and S′(p0) = ∂S/∂x evaluated at x = x0

wherein n represents the normal direction and x represents the abscissa of the position.
For example, gradient rendering may be applied to the Phong model. The Phong model is one of the simplest illumination models, and this embodiment uses it to describe the effect of applying gradient rendering in the quad shader.
The specular component of the Phong model is calculated as:

Specular = (dot(Reflect(L, N), V))^α

wherein L represents the direction of the incident light; N represents the current normal vector; V represents the direction of the viewpoint; and α represents the highlight factor, which determines the degree of highlight aggregation.
The gradient of Specular is calculated as:

∂Specular/∂normal.x = α · (dot(Reflect(L, N), V))^(α−1) · ∂(dot(Reflect(L, N), V))/∂normal.x

wherein ∂Specular/∂normal.x is the gradient of Specular with respect to normal.x; the gradient calculations with respect to normal.y and normal.z are similar.
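The chain rule above can be checked in a few lines. The analytical gradient below uses dot(Reflect(L, N), V) = 2(N·L)(N·V) − L·V; the reflect helper, the finite-difference validator, and the default parameter values are illustrative assumptions.

```python
import numpy as np

def reflect(l, n):
    # Reflection of the incident-light direction l about the unit normal n.
    return 2.0 * np.dot(n, l) * n - l

def specular(l, n, v, alpha=16.0):
    # Phong specular term: (dot(Reflect(L, N), V))^alpha, clamped at zero.
    return max(np.dot(reflect(l, n), v), 0.0) ** alpha

def grad_specular_wrt_normal(l, n, v, alpha=16.0):
    # Chain rule: with d = dot(reflect(l, n), v) = 2(n.l)(n.v) - l.v,
    # grad_n d = 2 * ((n.v) * l + (n.l) * v), so
    # grad_n Specular = alpha * d**(alpha - 1) * grad_n d.
    d = np.dot(reflect(l, n), v)
    if d <= 0.0:
        return np.zeros_like(n)  # clamped region: zero gradient
    return alpha * d ** (alpha - 1) * 2.0 * (np.dot(n, v) * l + np.dot(n, l) * v)

def grad_specular_numeric(l, n, v, alpha=16.0, eps=1e-5):
    # Central finite difference per normal component (normal.x, .y, .z).
    g = np.zeros_like(n)
    for k in range(n.size):
        dn = np.zeros_like(n)
        dn[k] = eps
        g[k] = (specular(l, n + dn, v, alpha) - specular(l, n - dn, v, alpha)) / (2 * eps)
    return g
```

The component g[0] of the numeric version corresponds to ∂Specular/∂normal.x in the formula above, and should agree with the first component of the analytical gradient.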
In one possible implementation, after the illumination data of the plurality of pixel points is acquired, if a second pixel point meets a preset condition, the mask of the second pixel point is set to 1, indicating that the illumination data of the second pixel point is valid; if the second pixel point does not meet the preset condition, the mask of the second pixel point is set to 0, indicating that the illumination data of the second pixel point is invalid. The second pixel point is any one of the plurality of pixel points and may or may not be the first pixel point above.
The preset condition may be, for example, whether the second pixel point is within the range of the currently drawn object. If so, the illumination data of the second pixel point is valid and the mask is set to 1; if not, the illumination data of the second pixel point is invalid and the mask is set to 0. It should be noted that the preset condition is not specifically limited in the embodiment of the present application.
In addition, the GPU may further obtain target color data of the plurality of pixel points according to the initial color data of the plurality of pixel points and the illumination data of the plurality of pixel points. For example, the initial color data and the illumination data of a pixel point are added to obtain the target color data of that pixel point.
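The mask test and the color combination might be sketched together as follows; in_draw_range is a hypothetical predicate standing in for the preset condition, and the additive blend follows the example above.

```python
def shade_quad(pixels, illumination, initial_color, in_draw_range):
    # For each pixel in the quad: set mask = 1 when the pixel lies inside the
    # currently drawn object (illumination data valid), else mask = 0; then
    # add the initial color data and the illumination data to get the target color.
    results = {}
    for p in pixels:
        mask = 1 if in_draw_range(p) else 0
        target = initial_color[p] + illumination[p] if mask else initial_color[p]
        results[p] = (mask, target)
    return results
```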
In one possible implementation, when the coloring mode is the pixel mode, the GPU may obtain the illumination data of the pixel point to be processed. The related techniques for coloring a single pixel point may be adopted in this case and are not described here.
In the embodiment of the application, the illumination data of the reference point is calculated, and the illumination data of each pixel point in the first illumination area is then obtained according to the correlation between the position of the reference point and the position of each pixel point. Compared with the prior art, in which the illumination data of each pixel point in the first illumination area is calculated independently, this processing mode can save a large amount of computing power (about 50 percent) and reduce power consumption, and the error can be controlled because the correlated acquisition of the illumination data of the plurality of pixel points is confined to a small range (a small illumination area size is selected). The illumination data reflects the change of the color data of the image under illumination and can be represented as pixel values, mainly RGB values.
FIG. 6a and FIG. 6b are schematic views of rendering effects. The rendering effect shown in FIG. 6a is obtained by performing the coloring process using the quad shader scheme according to the embodiment of the present application, and the rendering effect shown in FIG. 6b is obtained by performing the coloring process using the VRS scheme; the region size is 2×2 in both cases.
It can be seen that the rendering effect of FIG. 6a is smoother than that of FIG. 6b, showing the advantage of rendering based on gradient values.
FIG. 7a and FIG. 7b are schematic views of rendering effects. The rendering effect shown in FIG. 7a is obtained by performing the coloring process using the quad shader scheme according to the embodiment of the present application, and the rendering effect shown in FIG. 7b is obtained by performing the coloring process using the pixel shader scheme.
It can be seen that the rendering effect of the 2×2 quad shader is very similar to that of the pixel shader, while the computing power required by the quad shader is only 1/2 of that of the pixel shader, which greatly saves computing power and reduces power consumption.
In implementation, the steps of the above method embodiments may be implemented by integrated logic circuits of hardware in a processor or by instructions in software form. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in the embodiment of the application may be directly executed by a hardware encoding processor or executed by a combination of hardware and software modules in the encoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and performs the steps of the above method in combination with its hardware.
The memory mentioned in the above embodiments may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely a specific implementation of the present application, and the protection scope of the present application is not limited thereto; any variation or substitution that a person skilled in the art can readily conceive within the technical scope disclosed by the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (21)
- A coloring method, characterized by being applied to an image processor, wherein the image processor is configured to process a first image, and the first image comprises a plurality of preset illumination areas; the method comprises the following steps: acquiring position information of a preset reference point in a first illumination area and position information of a plurality of pixel points in the first illumination area, wherein the first illumination area is any one of the plurality of illumination areas; acquiring illumination data of the reference point according to the position information of the reference point; and acquiring illumination data of the plurality of pixel points according to the position information of the reference point, the illumination data of the reference point, and the position information of the plurality of pixel points.
- The method according to claim 1, wherein the acquiring of the illumination data of the plurality of pixel points according to the position information of the reference point, the illumination data of the reference point, and the position information of the plurality of pixel points comprises: acquiring a first parameter corresponding to the position information of the reference point; and acquiring illumination data of a first pixel point according to the first parameter, the relative position relation between the first pixel point and the reference point, and the illumination data of the reference point, wherein the first pixel point is any one of the plurality of pixel points; wherein the relative position relation between the first pixel point and the reference point is determined by the position information of the first pixel point and the position information of the reference point.
- The method according to claim 1 or 2, wherein the image processor is configured to process the first image according to a preset rendering area, and the acquiring of the position information of the preset reference point in the first illumination area and the position information of the plurality of pixel points in the first illumination area specifically comprises: acquiring position information of a plurality of vertices of a first rendering area, wherein the first rendering area comprises the first illumination area; and acquiring the position information of the reference point and the position information of the plurality of pixel points according to the position information of the plurality of vertices and the preset reference point.
- The method according to claim 3, further comprising: acquiring initial color data of the plurality of vertices; acquiring initial color data of the plurality of pixel points according to the initial color data of the plurality of vertices; and after the illumination data of the plurality of pixel points is acquired, acquiring target color data of the plurality of pixel points according to the initial color data of the plurality of pixel points and the illumination data of the plurality of pixel points.
- The method of claim 3 or 4, wherein the first rendering area is triangular.
- The method according to any of claims 1-5, wherein the position information comprises normals and/or coordinates.
- The method according to any one of claims 1-6, further comprising: after the illumination data of the plurality of pixel points is acquired, when a second pixel point meets a preset condition, setting a mask of the second pixel point to 1 to indicate that the illumination data of the second pixel point is valid; and after the illumination data of the plurality of pixel points is acquired, when the second pixel point does not meet the preset condition, setting the mask of the second pixel point to 0 to indicate that the illumination data of the second pixel point is invalid; wherein the second pixel point is any one of the plurality of pixel points.
- The method of any one of claims 1-7, wherein the first illumination region has a size of 2 x 2 or 4 x 4.
- The method according to any one of claims 1-8, wherein the reference point is a center point of the first illumination area; or the reference point is any pixel point in the first illumination area.
- An image processor, wherein the image processor is configured to process a first image, the first image comprising a plurality of preset illumination areas; the image processor comprises an interpolation unit and an operation unit; wherein the interpolation unit is configured to acquire position information of a preset reference point in a first illumination area and position information of a plurality of pixel points in the first illumination area, wherein the first illumination area is any one of the plurality of illumination areas; and the operation unit is configured to acquire illumination data of the reference point according to the position information of the reference point, and acquire illumination data of the plurality of pixel points according to the position information of the reference point, the illumination data of the reference point, and the position information of the plurality of pixel points.
- The image processor according to claim 10, wherein the operation unit is specifically configured to: acquire a first parameter corresponding to the position information of the reference point; and acquire illumination data of a first pixel point according to the first parameter, the relative position relation between the first pixel point and the reference point, and the illumination data of the reference point, wherein the first pixel point is any one of the plurality of pixel points; wherein the relative position relation between the first pixel point and the reference point is determined by the position information of the first pixel point and the position information of the reference point.
- The image processor according to claim 10 or 11, wherein the interpolation unit is specifically configured to: acquire position information of a plurality of vertices of a first rendering area, wherein the first rendering area comprises the first illumination area; and acquire the position information of the reference point and the position information of the plurality of pixel points according to the position information of the plurality of vertices and the preset reference point.
- The image processor according to claim 12, wherein the interpolation unit is further configured to: acquire initial color data of the plurality of vertices; and acquire initial color data of the plurality of pixel points according to the initial color data of the plurality of vertices; and the operation unit is further configured to: after the illumination data of the plurality of pixel points is acquired, acquire target color data of the plurality of pixel points according to the initial color data of the plurality of pixel points and the illumination data of the plurality of pixel points.
- The image processor of claim 12 or 13, wherein the first rendering region is triangular.
- The image processor according to any of claims 10-14, wherein the position information comprises normals and/or coordinates.
- The image processor according to any one of claims 10-15, wherein the operation unit is further configured to: after the illumination data of the plurality of pixel points is acquired, when a second pixel point meets a preset condition, set a mask of the second pixel point to 1 to indicate that the illumination data of the second pixel point is valid; and after the illumination data of the plurality of pixel points is acquired, when the second pixel point does not meet the preset condition, set the mask of the second pixel point to 0 to indicate that the illumination data of the second pixel point is invalid; wherein the second pixel point is any one of the plurality of pixel points.
- The image processor of any of claims 10-16, wherein the first illumination region has a size of 2 x 2 or 4 x 4.
- The image processor of any one of claims 10-17, wherein the reference point is a center point of the first illumination region; or the reference point is any pixel point in the first illumination area.
- An electronic device, comprising: an image processor; and a memory for storing one or more programs; wherein the one or more programs, when executed by the image processor, cause the image processor to implement the method of any one of claims 1-9.
- A computer readable storage medium comprising a computer program which, when executed on a computer, causes the computer to perform the method of any of claims 1-9.
- A computer program for performing the method of any one of claims 1-9 when the computer program is executed by a computer.