
CN116129816A - Pixel rendering method, device, computer equipment and storage medium - Google Patents

Pixel rendering method, device, computer equipment and storage medium

Info

Publication number
CN116129816A
Authority
CN
China
Prior art keywords
pixel
information
pixel point
neighborhood
image
Prior art date
Legal status
Granted
Application number
CN202310066908.1A
Other languages
Chinese (zh)
Other versions
CN116129816B (en)
Inventor
黄龙
徐一丁
彭怀宇
杨锟
Current Assignee
Glenfly Tech Co Ltd
Original Assignee
Glenfly Tech Co Ltd
Priority date
Filing date
Publication date
Application filed by Glenfly Tech Co Ltd
Priority to CN202310066908.1A
Publication of CN116129816A
Application granted
Publication of CN116129816B
Legal status: Active

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/20 - Control arrangements or circuits for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G 3/22 - Such control arrangements or circuits using controlled light sources
    • G09G 3/30 - Such control arrangements or circuits using controlled light sources using electroluminescent panels
    • G09G 3/32 - Such control arrangements or circuits using electroluminescent panels, semiconductive, e.g. using light-emitting diodes [LED]
    • G09G 3/3208 - Such control arrangements or circuits, organic, e.g. using organic light-emitting diodes [OLED]
    • G09G 3/3225 - Such control arrangements or circuits using an active matrix
    • G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/10 - Intensity circuits
    • G09G 2330/00 - Aspects of power supply; aspects of display protection and defect management
    • G09G 2330/10 - Dealing with defective pixels

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a pixel rendering method, apparatus, computer device, storage medium and computer program product. The method comprises the following steps: acquiring an initial image, and performing a first conversion process on the initial image to obtain a first image, the first conversion process converting the gray-scale domain of the initial image into the luminance domain; acquiring the pixel neighborhood corresponding to each pixel point in the first image based on a preset neighborhood size; determining, based on the pixel neighborhood corresponding to each pixel point, the rendering mode corresponding to that pixel point, and rendering the pixel points according to their rendering modes to obtain a second image; and performing a second conversion process on the second image, converting the luminance domain of the second image back into the gray-scale domain, to obtain and output the rendered image. The method can resolve the color-edge phenomenon in local transition regions and boundary regions of an image.

Description

Pixel rendering method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of display technology, and in particular, to a pixel rendering method, apparatus, computer device, storage medium, and computer program product.
Background
An AMOLED (Active-Matrix Organic Light-Emitting Diode) display screen emits light through its sub-pixels, each driven by a TFT (thin-film transistor), and different manufacturers adopt different sub-pixel arrangements. This differs from an LCD (Liquid Crystal Display), whose pixel arrangement is uniform: an LCD panel uses a backlight to emit light, its R (Red), G (Green) and B (Blue) channels have a pixel ratio of 1:1:1, and adjacent pixels do not affect each other's light emission. In AMOLED panels, owing to the manufacturing process and power-consumption requirements, the delta-type arrangement has an R:G:B pixel ratio of 1:1:1 with complete pixel units, whereas the RGBG-type arrangement has an R:G:B pixel ratio of 1:2:1, with twice as many green sub-pixels as red or blue ones, so that displaying a single pixel incurs sub-pixel loss.
In conventional technology, interpolation in the linear domain is generally adopted: a neighborhood weighting operation is performed directly on the pixel values, and a sub-pixel rendering method simulates the effect of uniform, normal pixel display.
However, because sub-pixels are missing in the RGBG arrangement, color cast can occur when displaying special patterns including horizontal lines, vertical lines, and graphic boundaries, and the conventional approach of directly weighting adjacent sub-pixels in the gray-scale domain cannot solve the display problem for such patterns.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a pixel rendering method, apparatus, computer device, computer-readable storage medium, and computer program product that can solve the problem of sub-pixel loss when an image is displayed on an RGBG screen.
In a first aspect, the present application provides a pixel rendering method. The method comprises the following steps:
acquiring an initial image, and performing first conversion processing on the initial image to obtain a first image; the first conversion process is to convert the gray scale domain of the initial image into the brightness domain;
acquiring a pixel neighborhood corresponding to each pixel point in the first image based on a preset neighborhood size;
determining a rendering mode corresponding to the pixel points based on the pixel neighborhood corresponding to each pixel point, and rendering the pixel points based on the rendering mode to obtain a second image;
performing second conversion processing on the second image to obtain a rendered image and outputting the rendered image; the second conversion process converts the brightness domain of the second image into a gray scale domain.
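Taken together, the steps of the first aspect form a gray-to-luminance, render, luminance-to-gray pipeline. A minimal sketch follows (our own illustration, not the patent's implementation: a plain 2.2 power law stands in for the per-channel LUT conversion described later, and the neighborhood-based rendering step is left as an identity placeholder):

```python
import numpy as np

def to_luminance(gray_img):
    """First conversion: gray-scale domain -> luminance domain.
    Illustrative 2.2 power law; the patent uses per-channel LUTs."""
    return np.power(gray_img / 255.0, 2.2)

def render_pixels(lum_img):
    """Placeholder for the neighborhood-based rendering-mode step;
    the real logic inspects each pixel's 3x3 neighborhood."""
    return lum_img

def to_gray(lum_img):
    """Second conversion: luminance domain -> gray-scale domain
    (inverse of the first conversion)."""
    return np.round(255.0 * np.power(lum_img, 1.0 / 2.2)).astype(np.uint8)

def pixel_rendering(initial_image):
    first_image = to_luminance(initial_image)   # first conversion
    second_image = render_pixels(first_image)   # rendering in luminance domain
    return to_gray(second_image)                # second conversion
```

With the identity rendering step, the pipeline round-trips a gray image unchanged, which is a useful sanity check for the two conversions.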
In one embodiment, determining a rendering mode corresponding to each pixel point based on a pixel neighborhood corresponding to each pixel point includes:
for each pixel point, coding a pixel neighborhood corresponding to the pixel point to obtain pixel neighborhood coding information corresponding to the pixel point;
And determining a rendering mode corresponding to the pixel point based on the pixel neighborhood coding information.
In one embodiment, determining a rendering mode corresponding to a pixel point based on pixel neighborhood encoding information includes:
performing coding matching on the pixel neighborhood coding information and preset coding information; when the pixel neighborhood coding information is inconsistent with the preset coding information, the coding matching result is that the coding information is inconsistent;
when the coding matching result is that the coding information is inconsistent, taking the pixel point corresponding to the pixel neighborhood coding information as a target pixel point, and obtaining the pixel information of the target pixel point; and determining a rendering mode corresponding to the target pixel point based on the pixel information of the target pixel point.
In one embodiment, when the pixel neighborhood coding information is consistent with the preset coding information, the coding matching result is that the coding information is consistent; determining a rendering mode corresponding to the pixel point based on the pixel neighborhood coding information comprises the following steps:
when the coding matching result is that the coding information is consistent, the pixel point corresponding to the pixel neighborhood coding information is used as a target pixel point, and the rendering mode of the target pixel point is determined to be structure correction processing.
In one embodiment, the pixel information of the target pixel point includes three channel RGB pixel values corresponding to the target pixel point; determining a rendering mode corresponding to the target pixel point based on the pixel information of the target pixel point, including:
Performing information matching on the pixel information of the target pixel point and preset pixel information to obtain an information matching result; when the pixel information of the target pixel point is inconsistent with the preset pixel information, the information matching result is that the pixel information is inconsistent;
when the information matching result is that the pixel information is inconsistent, the rendering mode of the target pixel point is modified from RGB arrangement to RGBG arrangement.
In one embodiment, when the pixel information of the target pixel point is consistent with the preset pixel information, the information matching result is that the pixel information is consistent; determining a rendering mode corresponding to the target pixel point based on the pixel information of the target pixel point, and further comprising:
when the information matching result is that the pixel information is consistent, a neighborhood weighting coefficient corresponding to the target pixel point is determined based on the pixel information of the target pixel point, and the neighborhood weighting coefficient is used for rendering the target pixel point.
In a second aspect, the present application further provides a pixel rendering apparatus. The device comprises:
the first conversion processing module is used for acquiring an initial image, and performing first conversion processing on the initial image to obtain a first image; the first conversion process is to convert the gray scale domain of the initial image into the brightness domain;
The pixel neighborhood acquisition module is used for acquiring a pixel neighborhood corresponding to each pixel point in the first image based on the preset neighborhood size;
the rendering module is used for determining a rendering mode corresponding to the pixel points based on the pixel neighborhood corresponding to each pixel point, and rendering the pixel points based on the rendering mode to obtain a second image;
the second conversion processing module is used for carrying out second conversion processing on the second image to obtain a rendered image and outputting the rendered image; the second conversion process converts the brightness domain of the second image into a gray scale domain.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the method of any of the embodiments described above when the processor executes the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the method of any of the embodiments described above.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the method according to any of the embodiments described above.
With the pixel rendering method, apparatus, computer device, storage medium and computer program product above, an initial image is first acquired and subjected to a first conversion process to obtain a first image, the first conversion process converting the gray-scale domain of the initial image into the luminance domain. Then, based on the preset neighborhood size, the pixel neighborhood corresponding to each pixel point in the first image is acquired. Further, based on the pixel neighborhood corresponding to each pixel point, the rendering mode corresponding to that pixel point is determined, and the pixel points are rendered according to their rendering modes to obtain a second image. Finally, a second conversion process is performed on the second image, converting its luminance domain back into the gray-scale domain, and the resulting rendered image is output. Because the image is first converted from the gray-scale domain to the luminance domain, the rendering mode is determined from the pixel neighborhood, and the pixel rendering is carried out in the luminance domain, the method can resolve the color-edge phenomenon in local transition regions and boundary regions of the image.
Drawings
FIG. 1 is an application environment diagram of a pixel rendering method in one embodiment;
FIG. 2 is a flow chart of a pixel rendering method in one embodiment;
FIG. 3 is a flow chart of a pixel rendering method according to another embodiment;
FIG. 4 is a graph of gamma curve in one embodiment;
FIG. 5 is a schematic diagram of a 3×3 pixel area according to one embodiment;
FIG. 6 is a schematic diagram of mirroring pixel areas with pixel values adjacent to a pixel to be processed in one embodiment;
FIG. 7 is a schematic diagram of pixel locations in one embodiment;
FIG. 8 is a schematic diagram of encoding corresponding to a pixel region with a size of 3×3 according to one embodiment;
FIG. 9 is a diagram of encoding a plurality of preset codes according to one embodiment;
FIG. 10 is a schematic illustration of a display of edge aliasing of an image in one embodiment;
FIG. 11 is a schematic illustration of a display in which color fringing appears along a dividing line of the image in one embodiment;
FIG. 12 is a schematic illustration of a loop structure in one embodiment;
FIG. 13 is a schematic diagram of pixel arrangement of pixels according to an RGBG arrangement in one embodiment;
FIG. 14 is a schematic diagram of a display with horizontal-line pixel loss in one embodiment;
FIG. 15 is a schematic diagram of a display with diagonal-line pixel loss in one embodiment;
FIG. 16 is a schematic diagram of a display with vertical-line pixel loss in one embodiment;
FIG. 17 is a block diagram of a pixel rendering device in one embodiment;
Fig. 18 is an internal structural view of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The pixel rendering method provided by the embodiments of the application can be applied in the environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. A data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104 or located on a cloud or other network server. The server 104 communicates with the terminal 102 to enter the pixel rendering environment. First, the server 104 may acquire an initial image from the terminal 102 and perform a first conversion process on it to obtain a first image, the first conversion process converting the gray-scale domain of the initial image into the luminance domain. Then, based on the preset neighborhood size, the server 104 may obtain the pixel neighborhood corresponding to each pixel point in the first image. Further, the server 104 may determine the rendering mode corresponding to each pixel point based on its pixel neighborhood and render the pixel points accordingly to obtain a second image. Finally, the server 104 may perform a second conversion process on the second image, converting its luminance domain back into the gray-scale domain, to obtain a rendered image, and output it to the terminal 102 for display. The terminal may include, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, Internet-of-Things devices and portable wearable devices; the Internet-of-Things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle-mounted devices and the like, and the portable wearable devices may be smart watches, smart bracelets, headsets and the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
The pixel rendering method provided by the embodiments of the application can be executed by a terminal alone or by a server alone, and can also be applied in a system comprising a terminal and a server, the method being realized through interaction between the terminal and the server.
In one embodiment, as shown in fig. 2, a pixel rendering method is provided. Taking execution by the server alone as an example, the method includes the following steps 202 to 208.
Step 202, acquiring an initial image, and performing first conversion processing on the initial image to obtain a first image; as shown in fig. 3, the first conversion process is to convert the gray-scale domain of the initial image into the luminance domain.
In this embodiment, the server may acquire a plurality of initial images from the terminal. Wherein the pixel arrangement of the plurality of pixels in the initial image is an RGB arrangement.
In this embodiment, the server may perform linear interpolation on the R channel, the G channel, and the B channel of the pixel, respectively, to convert the gray scale domain of the initial image into the luminance domain.
In this embodiment, for each pixel, the server may perform linear interpolation on the R, G and B channels of the pixel through LUTs (Look-Up Tables) to convert the linear RGB signal into a luminance-domain digital R'G'B' signal. The LUTs for the R, G and B channels differ from one another.
In this embodiment, 26 data points are included in each LUT.
In this embodiment, the server may visually display the LUT through a graph (e.g., gamma graph). As shown in fig. 4, the abscissa gamma_in of the Gamma graph may represent the gray domain value corresponding to the pixel, and the ordinate gamma_out of the Gamma graph may represent the brightness domain value corresponding to the pixel. The gamma_in input precision is 8 bits, corresponding to gray domain values of 0 to 255, and the gamma_out output precision is 12 bits, corresponding to the precision of the data stream in the chip.
In this embodiment, for each LUT, the data between every two adjacent data points in the graph can be obtained by linear interpolation. For example, suppose Gamma_in = {x1, x2, x3, x4, x5} and the corresponding Gamma_out = {y1, y2, y3, y4, y5}. Taking the adjacent elements x2 and x3 in Gamma_in, the Gamma_out value corresponding to x2 is y2 and the Gamma_out value corresponding to x3 is y3; therefore there are data points A(x2, y2) and B(x3, y3) in the Gamma coordinates. The data points A and B form a line segment AB, and the slope k of segment AB is obtained from the coordinates of A and B, as shown in formula (1):

k = (y3 - y2) / (x3 - x2)    (1)
Based on the slope k and the horizontal and vertical coordinates of data points A and B, the expression of line segment AB can be determined; the coordinates of the points on segment AB can then be calculated from this expression, completing the curve data between A and B and yielding the Gamma curve corresponding to the initial image.
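The LUT interpolation above can be sketched as follows. The 26-point table here is an illustrative 2.2 power-law curve (8-bit gray in, 12-bit luminance out), since the text does not give the actual per-channel tables; the slope computation mirrors formula (1):

```python
import numpy as np

# Illustrative 26-point LUT: 8-bit gray (0..255) -> 12-bit luminance (0..4095).
gamma_in = np.linspace(0.0, 255.0, 26)
gamma_out = np.round(4095.0 * (gamma_in / 255.0) ** 2.2)

def lut_interp(x):
    """Piecewise-linear lookup between adjacent LUT data points.
    For a segment AB the slope is k = (yB - yA) / (xB - xA), as in
    formula (1)."""
    x = np.asarray(x, dtype=float)
    # Index of the segment's left endpoint for each query value.
    i = np.clip(np.searchsorted(gamma_in, x, side="right") - 1,
                0, len(gamma_in) - 2)
    xa, xb = gamma_in[i], gamma_in[i + 1]
    ya, yb = gamma_out[i], gamma_out[i + 1]
    k = (yb - ya) / (xb - xa)  # slope of segment AB
    return ya + k * (x - xa)
```

Note that `np.interp(x, gamma_in, gamma_out)` computes the same result; the explicit form is shown to match the segment-slope derivation.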
Step 204, based on the preset neighborhood size, a pixel neighborhood corresponding to each pixel point in the first image is obtained.
In this embodiment, as shown in fig. 5, the preset neighborhood size may be a pixel area of 3×3 size. The server may obtain a 3 x 3 size pixel area centered on the pixel to be processed as a pixel neighborhood.
In this embodiment, as shown in fig. 6, when the pixel to be processed lies at the image boundary (i.e., it has no neighbor on one side), the pixel neighborhood can be completed by mirror processing: the neighbors on the side where pixels do exist are mirror-flipped to the missing side, yielding the pixel neighborhood corresponding to the pixel to be processed.
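The boundary mirroring can be sketched with numpy padding. numpy's "reflect" mode (which does not repeat the edge pixel) is one plausible reading of the mirror inversion described, "symmetric" being the other; the function name is ours:

```python
import numpy as np

def pixel_neighborhood(img, i, j, size=3):
    """Return the size x size neighborhood centred on pixel (i, j),
    mirror-padding across the image boundary where neighbours are
    missing."""
    r = size // 2
    padded = np.pad(img, r, mode="reflect")
    # (i, j) in the original image maps to (i + r, j + r) in the
    # padded image; slice a window of side `size` around it.
    return padded[i:i + size, j:j + size]
```

For an interior pixel the window is taken directly from the image; only boundary pixels see mirrored values.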
Step 206, determining a rendering mode corresponding to the pixel point based on the pixel neighborhood corresponding to each pixel point, and rendering the pixel point based on the rendering mode to obtain a second image.
In this embodiment, for each pixel, the server may obtain neighborhood information corresponding to a pixel neighborhood, generate neighborhood coding information corresponding to the pixel based on the neighborhood information, and determine a rendering mode corresponding to the pixel based on different neighborhood coding information.
Step 208, performing a second conversion process on the second image to obtain a rendered image and outputting the rendered image; the second conversion process converts the brightness domain of the second image into a gray scale domain.
In this embodiment, the server may obtain a gamma curve corresponding to the initial image, obtain a gamma function corresponding to the gamma curve based on the gamma curve, and calculate an inverse function of the gamma function. Further, the server may convert the luminance domain of the second image to a grayscale domain based on an inverse of the gamma function.
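Because the Gamma curve is monotonically increasing, its inverse can be evaluated numerically by interpolating the forward curve with the axes swapped, which is one way to realize the second conversion (the 2.2 power-law forward curve below is illustrative, standing in for the patent's actual gamma function):

```python
import numpy as np

# Illustrative forward gamma curve: 8-bit gray -> 12-bit luminance.
xs = np.arange(256)
ys = 4095.0 * (xs / 255.0) ** 2.2

def luminance_to_gray(lum):
    """Second conversion: evaluate the inverse gamma function by
    interpolating the forward curve with its axes swapped (valid
    because ys is strictly increasing)."""
    return np.round(np.interp(lum, ys, xs)).astype(np.uint8)
```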
In the pixel rendering method, an initial image is firstly obtained, and a first conversion treatment is carried out on the initial image to obtain a first image; the first conversion process converts the gray scale domain of the initial image into the brightness domain. And then, based on the preset neighborhood size, acquiring a pixel neighborhood corresponding to each pixel point in the first image. Further, based on the pixel neighborhood corresponding to each pixel point, a rendering mode corresponding to the pixel point is determined, and the pixel points are rendered based on the rendering mode, so that a second image is obtained. Finally, performing second conversion processing on the second image to obtain a rendered image and outputting the rendered image; the second conversion process converts the brightness domain of the second image into a gray scale domain. According to the method, the image is firstly converted from the gray scale domain to the brightness domain, the rendering mode is determined based on the pixel neighborhood, and the pixel rendering is carried out in the brightness domain, so that the problem of color edge phenomena of a local transition region and a boundary region of the image can be solved.
In some embodiments, determining a rendering mode corresponding to each pixel point based on a pixel neighborhood corresponding to each pixel point may include: for each pixel point, coding a pixel neighborhood corresponding to the pixel point to obtain pixel neighborhood coding information corresponding to the pixel point; and determining a rendering mode corresponding to the pixel point based on the pixel neighborhood coding information.
In this embodiment, the server may store the pixel neighborhood coding information corresponding to the pixel point to the database. Specifically, the server may store the pixel neighborhood coding information corresponding to the pixel point into a module or a folder corresponding to an encoding parameter in the database.
In this embodiment, for the pixel neighborhood corresponding to each pixel point, the server may encode the pixel neighborhood based on the preset encoding directions to obtain the corresponding pixel neighborhood coding information. For example, when the pixel neighborhood corresponding to the pixel point is a 3×3 pixel area as shown in fig. 5, the center pixel is b; the horizontal neighborhood is a, b, c; the left-diagonal neighborhood is d, b, j; the vertical neighborhood is e, b, h; and the right-diagonal neighborhood is f, b, g. The preset encoding directions comprise: a first direction (a-b-c), a second direction (d-b-j), a third direction (e-b-h) and a fourth direction (f-b-g). In the 3×3 pixel region shown in fig. 5, when the coordinates of the center pixel b are (i, j), as shown in fig. 7, (u_i, u_j) are the coordinates of the pixel above and (d_i, d_j) are the coordinates of the pixel below.
In this embodiment, the server encodes the pixel neighborhood corresponding to the pixel point along each of the preset encoding directions, obtaining the code corresponding to each direction. For example, when the pixel neighborhood corresponding to the pixel point is a 3×3 pixel area as shown in fig. 5 and the preset encoding directions comprise the first direction (a-b-c), the second direction (d-b-j), the third direction (e-b-h) and the fourth direction (f-b-g), the first direction corresponds to the code horizontal, the second direction to the code left, the third direction to the code vertical, and the fourth direction to the code right, where:
horizontal=int(int(a-b)>th)<<3+int(int(c-b)>th)<<2+int(int(b-a)>th)<<1+int(int(b-c)>th);
left=int(int(d-b)>th)<<3+int(int(j-b)>th)<<2+int(int(b-d)>th)<<1+int(int(b-j)>th);
vertical=int(int(e-b)>th)<<3+int(int(h-b)>th)<<2+int(int(b-e)>th)<<1+int(int(b-h)>th);
right=int(int(f-b)>th)<<3+int(int(g-b)>th)<<2+int(int(b-f)>th)<<1+int(int(b-g)>th)。
where th is the adjacent pixel threshold difference.
In this embodiment, the server may obtain the pixel neighborhood coding information encode corresponding to the pixel point from the codes of the four preset encoding directions, as shown in formula (2):
encode=horizontal<<3+left<<2+vertical<<1+right (2)
in this embodiment, the server may perform calculation of pixel neighborhood coding information on three channels R, G, B of the pixel point, respectively.
In this embodiment, take the coding of the 3×3 pixel area shown in fig. 8 as an example, where white represents a gray value of 255, black represents a gray value of 0, and th for all four preset encoding directions is 40. Taking the horizontal direction, the code for the pixel area is calculated as follows: first, subtract the middle gray value from the left gray value and test whether the result exceeds th: 255 - 0 > 40 is true, i.e. 1; then subtract the middle gray value from the right gray value: 255 - 0 > 40 is true, i.e. 1; then subtract the left gray value from the middle gray value: 0 - 255 > 40 is false, i.e. 0; finally subtract the right gray value from the middle gray value: 0 - 255 > 40 is false, i.e. 0. The horizontal code of the pixel area is therefore horizontal = 1100 in binary, which is C in hexadecimal. The coding principle for the other three directions is the same. The codes of the pixel region for the four preset encoding directions work out to: C (horizontal), 0 (left), C (vertical), 0 (right); in the order horizontal-left-vertical-right, the pixel neighborhood code corresponding to the pixel region is C0C0.
In this embodiment, the R-channel horizontal code may be denoted horizontal_R, the G-channel horizontal code horizontal_G, and the B-channel horizontal code horizontal_B; the R-channel left code left_R, the G-channel left code left_G, and the B-channel left code left_B; the R-channel vertical code vertical_R, the G-channel vertical code vertical_G, and the B-channel vertical code vertical_B; and the R-channel right code right_R, the G-channel right code right_G, and the B-channel right code right_B.
In some embodiments, determining a rendering mode corresponding to the pixel point based on the pixel neighborhood encoding information may include: performing coding matching on the pixel neighborhood coding information and preset coding information; when the pixel neighborhood coding information is inconsistent with the preset coding information, the coding matching result is that the coding information is inconsistent; when the coding matching result is that the coding information is inconsistent, taking the pixel point corresponding to the pixel neighborhood coding information as a target pixel point, and obtaining the pixel information of the target pixel point; and determining a rendering mode corresponding to the target pixel point based on the pixel information of the target pixel point.
In the present embodiment, the pixel information of the target pixel point may include, but is not limited to: pixel point coordinates, pixel neighborhood coding information corresponding to an R channel, pixel neighborhood coding information corresponding to a G channel, pixel neighborhood coding information corresponding to a B channel, and the like.
In this embodiment, the server may implement code matching of the pixel neighborhood coding information and the preset coding information by matching the coding structure.
In this embodiment, based on the pixel information of the target pixel point, the server may determine whether the target pixel point has a missing pixel. Specifically, when the pixel neighborhood coding information is inconsistent with the preset coding information, the code matching result is that the coding information is inconsistent and no pixel is missing; when the pixel neighborhood coding information is consistent with the preset coding information, the code matching result is that the coding information is consistent, and the server can determine, based on the pixel neighborhood coding information, that a pixel is missing.
In this embodiment, the forms of pixel missing may include, but are not limited to: missing pixels on a displayed horizontal line, missing pixels on a displayed vertical line, missing pixels on a displayed diagonal line, and the like.
In this embodiment, the types of pixel missing may include, but are not limited to: a missing pixel in the R channel, a missing pixel in the G channel, and a missing pixel in the B channel.
In some embodiments, when the pixel neighborhood coding information is consistent with the preset coding information, the coding matching result is that the coding information is consistent; determining a rendering mode corresponding to the pixel point based on the pixel neighborhood coding information may include: when the coding matching result is that the coding information is consistent, the pixel point corresponding to the pixel neighborhood coding information is used as a target pixel point, and the rendering mode of the target pixel point is determined to be structure correction processing.
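The two-level dispatch described above (code matching first, then pixel-information matching for target pixels) can be sketched as follows. Everything here is hypothetical scaffolding: the preset code set, the mode names, and the representation of pixel information are illustrative, not from the patent.

```python
# Hypothetical dispatch of the rendering mode described in the text.
# PRESET_CODES is a stand-in for the preset coding information of fig. 9.
PRESET_CODES = {0x888, 0x333, 0x8804, 0x3303}

def choose_rendering_mode(neigh_code, pixel_info, preset_pixel_info):
    if neigh_code in PRESET_CODES:
        # Coding information consistent -> structure correction processing.
        return "structure_correction"
    # Coding information inconsistent -> treat as a target pixel point.
    if pixel_info != preset_pixel_info:
        # Pixel information inconsistent -> RGB arrangement changed to RGBG.
        return "rgbg_rearrangement"
    # Pixel information consistent -> render with a neighborhood weighting coefficient.
    return "neighborhood_weighting"
```

For example, a neighborhood code of 0x888 would be routed to structure correction, while an unmatched code falls through to the pixel-information comparison.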
In this embodiment, the preset coding information may include a plurality of preset codes as shown in fig. 9. When the code matching result is that the coding information is consistent, the image display of the pixel neighborhood is as shown in fig. 10 and fig. 11, and display problems such as jagged edges (for example, color edges and text edges) and color cast occur.
In this embodiment, when the code matching result is that the coding information is consistent, the server may complement the 2 R and 2 B sub-pixels around each G sub-pixel by using the surrounding structure shown in fig. 12. As shown by the rectangle in fig. 12, the rectangle is composed of 4 surrounding structures p1, p2, p3 and p4, and one surrounding structure is equivalent to an original RGB pixel. The center-point sub-pixel R′_{i,j} (B′_{i,j}) is shared by the surrounding pixels; that is, the sub-pixel R′ (B′) at the center point is affected by a total of 4 pixels: the p1, p2, p3 and p4 virtual pixels (the per-virtual-pixel weighting expressions appear only as formula images in the original).
In this embodiment, the mean square error between the surrounding-structure equivalent pixels and the original RGB pixels can be minimized by averaging. The RGBG equivalent structure is calculated as shown in formulas (3) and (4) (formula images in the original).
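The exact formulas (3) and (4) survive only as images in the source, so the sketch below illustrates only the stated idea: a sub-pixel shared by the four surrounding virtual pixels p1..p4 takes their average, which is the value minimizing the squared error to the four samples. The function name and the equal weighting are assumptions.

```python
# Illustrative only: the shared center sub-pixel as the mean of the four
# surrounding virtual-pixel values p1..p4. The arithmetic mean is the
# value that minimizes the sum of squared errors to the four samples.
def shared_subpixel(p1, p2, p3, p4):
    return (p1 + p2 + p3 + p4) / 4.0

print(shared_subpixel(0, 255, 0, 255))  # 127.5
```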
In some embodiments, the pixel information of the target pixel point includes the RGB three-channel pixel values corresponding to the target pixel point; determining the rendering mode corresponding to the target pixel point based on the pixel information of the target pixel point may include: performing information matching between the pixel information of the target pixel point and preset pixel information to obtain an information matching result; when the pixel information of the target pixel point is inconsistent with the preset pixel information, the information matching result is that the pixel information is inconsistent; and when the information matching result is that the pixel information is inconsistent, modifying the rendering mode of the target pixel point from RGB arrangement to RGBG arrangement.
In this embodiment, as shown in fig. 13, when the input image pixel values are R, G, B, the output image pixel values after conversion are R′, G′, B′, where the server may implement the conversion based on conversion formula (5) or formula (6) (formula images in the original).
In some embodiments, when the pixel information of the target pixel point is consistent with the preset pixel information, the information matching result is that the pixel information is consistent; determining the rendering mode corresponding to the target pixel point based on the pixel information of the target pixel point may further include: when the information matching result is that the pixel information is consistent, determining a neighborhood weighting coefficient corresponding to the target pixel point based on the pixel information of the target pixel point, the neighborhood weighting coefficient being used for rendering the target pixel point.
In this embodiment, when the code matching result is that the coding information is consistent, the server may determine that a pixel is missing based on the pixel neighborhood coding information. When a pixel is missing and the information matching result is that the pixel information is consistent, the server may patch the missing pixel based on a compensation coefficient z. In the gamma curve, the relationship between the luminance-domain value y and the gray-domain value x of a pixel is y = x^2.2; the inverse of the gamma function is therefore x = y^(1/2.2). The compensation coefficient z = 138 can be calculated based on this inverse gamma function.
Specifically, taking 0 as the minimum gray value in the gray domain and 255 as the maximum gray value in the gray domain, the compensation coefficient z is calculated as shown in formula (7) (formula image in the original), from which z = 138 is obtained.
In this embodiment, as shown in fig. 14, when the codes are horizontal_R == 0x888, horizontal_G == 0x888, horizontal_B == 0x888, the pixel below, at (d_i, d_j), lies on a straight line; when the codes are horizontal_R == 0x333, horizontal_G == 0x333, horizontal_B == 0x333, the pixel at (i, j) lies on the straight line. In this case a horizontal-line pixel is missing from the display: the missing G element needs to be complemented in the line below the straight line, and at the same time the R and B values where the straight line exists need to be readjusted. The missing pixels are complemented as shown in formulas (8) and (9) (formula images in the original).
In another embodiment, as shown in fig. 15, the missing pixel B needs to be complemented. When the codes are horizontal_R == 0x3033, horizontal_G == 0x3033, horizontal_B == 0x3033, or horizontal_R == 0x3330, horizontal_G == 0x3330, horizontal_B == 0x3330, the center pixel coordinate in the black frame of the figure is (i, j); the pixel value at (i, j+1) can be obtained by linear interpolation, but the B-channel pixel value cannot be calculated there, so the missing pixel is complemented by the following formulas (10) and (11):

R′_{i,j-1} = R_{i,j} × 138/255 (10)

B′_{i,j-1} = B_{i,j} × 138/255 (11)
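Formulas (10) and (11) can be sketched directly, with the compensation coefficient z = 138 derived earlier. The function name is illustrative, and integer arithmetic (floor division) is an implementation assumption.

```python
# Formulas (10)-(11): complement the missing R and B values at (i, j-1)
# by scaling the values at (i, j) with the compensation coefficient z = 138.
Z = 138

def complement_missing(r_ij, b_ij, z=Z):
    r_prime = r_ij * z // 255   # R'_{i,j-1} = R_{i,j} * 138 / 255
    b_prime = b_ij * z // 255   # B'_{i,j-1} = B_{i,j} * 138 / 255
    return r_prime, b_prime

print(complement_missing(255, 255))  # (138, 138)
```

A fully saturated channel value of 255 is thus scaled down to exactly the compensation coefficient 138.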
In another embodiment, as shown in fig. 16, when the codes are horizontal_R == 0x8804, horizontal_G == 0x8804, horizontal_B == 0x8804, the pixel below, at (d_i, d_j), lies on a straight line, and when the codes are horizontal_R == 0x3303, horizontal_G == 0x3303, horizontal_B == 0x3303, the pixel at (i, j) lies on the straight line; a vertical-line pixel is missing from the display, and the missing pixels are complemented as shown in formulas (12) and (13) (formula images in the original).
It should be understood that, although the steps in the flowcharts involved in the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited to that order, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts involved in the above embodiments may include a plurality of steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; the order of execution of these steps or stages is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or with steps or stages within the other steps.
Based on the same inventive concept, the embodiment of the application also provides a pixel rendering device for implementing the above-mentioned pixel rendering method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in the embodiments of the pixel rendering device or devices provided below may refer to the limitation of the pixel rendering method hereinabove, and will not be repeated here.
In one embodiment, as shown in fig. 17, there is provided a pixel rendering apparatus including: a first conversion processing module 1702, a pixel neighborhood acquisition module 1704, a rendering module 1706, and a second conversion processing module 1708, wherein:
the first conversion processing module 1702 is configured to obtain an initial image, perform a first conversion process on the initial image, and obtain a first image; the first conversion process converts the gray scale domain of the initial image into the brightness domain.
The pixel neighborhood obtaining module 1704 is configured to obtain a pixel neighborhood corresponding to each pixel point in the first image based on a preset neighborhood size.
The rendering module 1706 is configured to determine a rendering mode corresponding to each pixel based on the pixel neighborhood corresponding to each pixel, and render the pixel based on the rendering mode to obtain a second image.
A second conversion processing module 1708, configured to perform a second conversion process on the second image, obtain a rendered image, and output the rendered image; the second conversion process converts the brightness domain of the second image into a gray scale domain.
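The four modules above can be sketched as a minimal end-to-end pipeline. This is a sketch under assumptions: the rendering step is a pass-through placeholder for the mode-dependent rendering, gamma = 2.2 follows the relationship y = x^2.2 given earlier, and normalization to [0, 1] is an implementation choice; the function names are illustrative.

```python
# Minimal sketch of the four-module pipeline: gray -> luminance domain,
# per-pixel rendering (placeholder), luminance -> gray domain.
GAMMA = 2.2

def to_luminance(gray):          # first conversion: gray domain -> luminance domain
    return (gray / 255.0) ** GAMMA

def to_gray(lum):                # second conversion: luminance domain -> gray domain
    return round(255.0 * lum ** (1.0 / GAMMA))

def render_pixel(lum, neighborhood):
    return lum                   # placeholder for the mode-dependent rendering

def pipeline(gray_pixels):
    first = [to_luminance(g) for g in gray_pixels]         # first image
    second = [render_pixel(v, first) for v in first]       # second image
    return [to_gray(v) for v in second]                    # rendered output

print(pipeline([0, 128, 255]))
```

With the pass-through render step, the two conversions invert each other, so the input gray values round-trip unchanged.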
In one embodiment, the rendering module 1706 may comprise:
the coding submodule is used for coding the pixel neighborhood corresponding to the pixel point aiming at each pixel point to obtain pixel neighborhood coding information corresponding to the pixel point.
And the rendering mode determining submodule is used for determining the rendering mode corresponding to the pixel point based on the pixel neighborhood coding information.
In one embodiment, the rendering mode determining sub-module may include:
the coding information matching unit is used for carrying out coding matching on the pixel neighborhood coding information and preset coding information; when the pixel neighborhood coding information is inconsistent with the preset coding information, the coding matching result is that the coding information is inconsistent.
The first code matching unit is used for taking a pixel point corresponding to the pixel neighborhood code information as a target pixel point when the code matching result is that the code information is inconsistent, and acquiring the pixel information of the target pixel point; and determining a rendering mode corresponding to the target pixel point based on the pixel information of the target pixel point.
In one embodiment, when the pixel neighborhood coding information is consistent with the preset coding information, the coding matching result is that the coding information is consistent; the rendering mode determination submodule may further include:
and the second code matching unit is used for taking the pixel point corresponding to the pixel neighborhood coding information as a target pixel point when the code matching result is that the code information is consistent, and determining the rendering mode of the target pixel point as structure correction processing.
In one embodiment, the pixel information of the target pixel point includes three channel RGB pixel values corresponding to the target pixel point; the first code matching unit may include:
the pixel information matching subunit is used for carrying out information matching on the pixel information of the target pixel point and preset pixel information to obtain an information matching result; when the pixel information of the target pixel point is inconsistent with the preset pixel information, the information matching result is that the pixel information is inconsistent.
And the pixel arrangement modification subunit is used for modifying the rendering mode of the target pixel point from RGB arrangement to RGBG arrangement when the information matching result is that the pixel information is inconsistent.
In one embodiment, when the pixel information of the target pixel point is consistent with the preset pixel information, the information matching result is that the pixel information is consistent; the rendering mode determination submodule may further include:
And the pixel information matching success unit is used for determining a neighborhood weighting coefficient corresponding to the target pixel point based on the pixel information of the target pixel point when the information matching result is that the pixel information is consistent, the neighborhood weighting coefficient being used for rendering the target pixel point.
The respective modules in the above-described pixel rendering device may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 18. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer equipment is used for storing gamma curve graphs, coordinates corresponding to pixel points, RGB three-channel pixel values corresponding to the pixel points and other data. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a pixel rendering method.
It will be appreciated by those skilled in the art that the structure shown in fig. 18 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application is applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of: acquiring an initial image, and performing first conversion processing on the initial image to obtain a first image; the first conversion process is to convert the gray scale domain of the initial image into the brightness domain;
acquiring a pixel neighborhood corresponding to each pixel point in the first image based on a preset neighborhood size;
determining a rendering mode corresponding to the pixel points based on the pixel neighborhood corresponding to each pixel point, and rendering the pixel points based on the rendering mode to obtain a second image;
performing second conversion processing on the second image to obtain a rendered image and outputting the rendered image; the second conversion process converts the brightness domain of the second image into a gray scale domain.
In one embodiment, the determining, by the processor executing the computer program, a rendering mode corresponding to each pixel point based on a pixel neighborhood corresponding to each pixel point may include: for each pixel point, coding a pixel neighborhood corresponding to the pixel point to obtain pixel neighborhood coding information corresponding to the pixel point; and determining a rendering mode corresponding to the pixel point based on the pixel neighborhood coding information.
In one embodiment, the processor, when executing the computer program, further implements determining a rendering mode corresponding to the pixel point based on the pixel neighborhood encoding information, and may include: performing coding matching on the pixel neighborhood coding information and preset coding information; when the pixel neighborhood coding information is inconsistent with the preset coding information, the coding matching result is that the coding information is inconsistent; when the coding matching result is that the coding information is inconsistent, taking the pixel point corresponding to the pixel neighborhood coding information as a target pixel point, and obtaining the pixel information of the target pixel point; and determining a rendering mode corresponding to the target pixel point based on the pixel information of the target pixel point.
In one embodiment, when the pixel neighborhood coding information is consistent with the preset coding information, the coding matching result is that the coding information is consistent; the processor, when executing the computer program, further implements determining a rendering mode corresponding to the pixel point based on the pixel neighborhood coding information, and may include: when the coding matching result is that the coding information is consistent, the pixel point corresponding to the pixel neighborhood coding information is used as a target pixel point, and the rendering mode of the target pixel point is determined to be structure correction processing.
In one embodiment, the pixel information of the target pixel point includes three channel RGB pixel values corresponding to the target pixel point; the processor further realizes that the rendering mode corresponding to the target pixel point is determined based on the pixel information of the target pixel point when executing the computer program, and may include: performing information matching on the pixel information of the target pixel point and preset pixel information to obtain an information matching result; when the pixel information of the target pixel point is inconsistent with the preset pixel information, the information matching result is that the pixel information is inconsistent; when the information matching result is that the pixel information is inconsistent, the rendering mode of the target pixel point is modified from RGB arrangement to RGBG arrangement.
In one embodiment, when the pixel information of the target pixel point is consistent with the preset pixel information, the information matching result is that the pixel information is consistent; the processor further realizes, when executing the computer program, determining the rendering mode corresponding to the target pixel point based on the pixel information of the target pixel point, which may further include: when the information matching result is that the pixel information is consistent, determining a neighborhood weighting coefficient corresponding to the target pixel point based on the pixel information of the target pixel point, the neighborhood weighting coefficient being used for rendering the target pixel point.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of: acquiring an initial image, and performing first conversion processing on the initial image to obtain a first image; the first conversion process is to convert the gray scale domain of the initial image into the brightness domain; acquiring a pixel neighborhood corresponding to each pixel point in the first image based on a preset neighborhood size; determining a rendering mode corresponding to the pixel points based on the pixel neighborhood corresponding to each pixel point, and rendering the pixel points based on the rendering mode to obtain a second image; performing second conversion processing on the second image to obtain a rendered image and outputting the rendered image; the second conversion process converts the brightness domain of the second image into a gray scale domain.
In one embodiment, the computer program when executed by the processor further implements determining a rendering mode corresponding to each pixel point based on a pixel neighborhood corresponding to each pixel point, and may include: for each pixel point, coding a pixel neighborhood corresponding to the pixel point to obtain pixel neighborhood coding information corresponding to the pixel point; and determining a rendering mode corresponding to the pixel point based on the pixel neighborhood coding information.
In one embodiment, the computer program when executed by the processor further implements determining a rendering mode corresponding to the pixel point based on the pixel neighborhood encoding information, and may include: performing coding matching on the pixel neighborhood coding information and preset coding information; when the pixel neighborhood coding information is inconsistent with the preset coding information, the coding matching result is that the coding information is inconsistent; when the coding matching result is that the coding information is inconsistent, taking the pixel point corresponding to the pixel neighborhood coding information as a target pixel point, and obtaining the pixel information of the target pixel point; and determining a rendering mode corresponding to the target pixel point based on the pixel information of the target pixel point.
In one embodiment, when the pixel neighborhood coding information is consistent with the preset coding information, the coding matching result is that the coding information is consistent; the computer program, when executed by the processor, further implements determining a rendering mode corresponding to the pixel point based on the pixel neighborhood encoding information, may include: when the coding matching result is that the coding information is consistent, the pixel point corresponding to the pixel neighborhood coding information is used as a target pixel point, and the rendering mode of the target pixel point is determined to be structure correction processing.
In one embodiment, the pixel information of the target pixel point includes three channel RGB pixel values corresponding to the target pixel point; the computer program, when executed by the processor, further implements determining a rendering mode corresponding to the target pixel point based on pixel information of the target pixel point, and may include: performing information matching on the pixel information of the target pixel point and preset pixel information to obtain an information matching result; when the pixel information of the target pixel point is inconsistent with the preset pixel information, the information matching result is that the pixel information is inconsistent; when the information matching result is that the pixel information is inconsistent, the rendering mode of the target pixel point is modified from RGB arrangement to RGBG arrangement.
In one embodiment, when the pixel information of the target pixel point is consistent with the preset pixel information, the information matching result is that the pixel information is consistent; the computer program, when executed by the processor, further realizes determining the rendering mode corresponding to the target pixel point based on the pixel information of the target pixel point, which may further include: when the information matching result is that the pixel information is consistent, determining a neighborhood weighting coefficient corresponding to the target pixel point based on the pixel information of the target pixel point, the neighborhood weighting coefficient being used for rendering the target pixel point.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of: acquiring an initial image, and performing first conversion processing on the initial image to obtain a first image; the first conversion process is to convert the gray scale domain of the initial image into the brightness domain; acquiring a pixel neighborhood corresponding to each pixel point in the first image based on a preset neighborhood size; determining a rendering mode corresponding to the pixel points based on the pixel neighborhood corresponding to each pixel point, and rendering the pixel points based on the rendering mode to obtain a second image; performing second conversion processing on the second image to obtain a rendered image and outputting the rendered image; the second conversion process converts the brightness domain of the second image into a gray scale domain.
In one embodiment, the computer program when executed by the processor further implements determining a rendering mode corresponding to each pixel point based on a pixel neighborhood corresponding to each pixel point, and may include: for each pixel point, coding a pixel neighborhood corresponding to the pixel point to obtain pixel neighborhood coding information corresponding to the pixel point; and determining a rendering mode corresponding to the pixel point based on the pixel neighborhood coding information.
In one embodiment, the computer program when executed by the processor further implements determining a rendering mode corresponding to the pixel point based on the pixel neighborhood encoding information, and may include: performing coding matching on the pixel neighborhood coding information and preset coding information; when the pixel neighborhood coding information is inconsistent with the preset coding information, the coding matching result is that the coding information is inconsistent; when the coding matching result is that the coding information is inconsistent, taking the pixel point corresponding to the pixel neighborhood coding information as a target pixel point, and obtaining the pixel information of the target pixel point; and determining a rendering mode corresponding to the target pixel point based on the pixel information of the target pixel point.
In one embodiment, when the pixel neighborhood coding information is consistent with the preset coding information, the coding matching result is that the coding information is consistent; the computer program, when executed by the processor, further implements determining a rendering mode corresponding to the pixel point based on the pixel neighborhood encoding information, may include: when the coding matching result is that the coding information is consistent, the pixel point corresponding to the pixel neighborhood coding information is used as a target pixel point, and the rendering mode of the target pixel point is determined to be structure correction processing.
In one embodiment, the pixel information of the target pixel point includes three channel RGB pixel values corresponding to the target pixel point; the computer program, when executed by the processor, further implements determining a rendering mode corresponding to the target pixel point based on pixel information of the target pixel point, and may include: performing information matching on the pixel information of the target pixel point and preset pixel information to obtain an information matching result; when the pixel information of the target pixel point is inconsistent with the preset pixel information, the information matching result is that the pixel information is inconsistent; when the information matching result is that the pixel information is inconsistent, the rendering mode of the target pixel point is modified from RGB arrangement to RGBG arrangement.
In one embodiment, when the pixel information of the target pixel point is consistent with the preset pixel information, the information matching result is that the pixel information is consistent; the computer program, when executed by the processor, further realizes determining the rendering mode corresponding to the target pixel point based on the pixel information of the target pixel point, which may further include: when the information matching result is that the pixel information is consistent, determining a neighborhood weighting coefficient corresponding to the target pixel point based on the pixel information of the target pixel point, the neighborhood weighting coefficient being used for rendering the target pixel point.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that all or part of the flows of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may carry out the flows of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above embodiments represent only several implementations of the present application, and their descriptions are specific and detailed, but they are not to be construed as limiting the scope of the present application. It should be noted that those skilled in the art may make various modifications and improvements without departing from the concept of the present application, and all of these fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A pixel rendering method, the method comprising:
acquiring an initial image, and performing first conversion processing on the initial image to obtain a first image; wherein the first conversion processing is to convert the gray scale domain of the initial image into a luminance domain;
acquiring a pixel neighborhood corresponding to each pixel point in the first image based on a preset neighborhood size;
determining a rendering mode corresponding to each pixel point based on the pixel neighborhood corresponding to the pixel point, and rendering the pixel point based on the rendering mode to obtain a second image;
performing second conversion processing on the second image to obtain a rendered image and outputting the rendered image; wherein the second conversion process is to convert the luminance domain of the second image into a gray scale domain.
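As an illustration of the two conversion steps in claim 1, the sketch below assumes the gray-scale-to-luminance mapping is a simple gamma expansion with γ = 2.2; the claim itself does not fix a particular transfer function, so the exponent and the 8-bit range are assumptions.

```python
GAMMA = 2.2  # assumed transfer-function exponent; not specified by the claim

def gray_to_luminance(g):
    """First conversion: 8-bit gray level -> linear luminance in [0, 1]."""
    return (g / 255.0) ** GAMMA

def luminance_to_gray(y):
    """Second conversion: linear luminance in [0, 1] -> 8-bit gray level."""
    return round((y ** (1.0 / GAMMA)) * 255.0)

# Round-tripping a gray level through both domains recovers it up to rounding,
# so rendering done between the two conversions operates on linear luminance.
g = 128
assert luminance_to_gray(gray_to_luminance(g)) == g
```

Working in the luminance domain matters because neighborhood weighting (averaging) is only physically meaningful on linear values; averaging gamma-encoded gray levels would darken edges.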
2. The method of claim 1, wherein determining the rendering mode corresponding to each pixel point based on the pixel neighborhood corresponding to the pixel point comprises:
for each pixel point, coding a pixel neighborhood corresponding to the pixel point to obtain pixel neighborhood coding information corresponding to the pixel point;
and determining a rendering mode corresponding to the pixel point based on the pixel neighborhood coding information.
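The encoding step of claim 2 can be sketched as follows. The binary-threshold scheme and the 3×3 neighborhood size are assumptions for illustration; the claim only requires that each pixel's neighborhood be encoded into information that later drives the rendering mode.

```python
def encode_neighborhood(neigh, threshold=0.5):
    """Pack a 3x3 luminance neighborhood into a 9-bit code (row-major,
    MSB first) by thresholding each sample; the resulting code can be
    matched against preset codes for known local structures."""
    code = 0
    for row in neigh:
        for v in row:
            code = (code << 1) | (1 if v >= threshold else 0)
    return code

# A bright vertical stripe through the center column encodes to 0b010010010.
stripe = [[0.0, 1.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.0, 1.0, 0.0]]
assert encode_neighborhood(stripe) == 0b010010010
```

A table of such preset codes (thin strokes, diagonals, isolated dots) would then select the rendering mode per pixel, as in claims 3 and 4.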
3. The method according to claim 2, wherein determining the rendering mode corresponding to the pixel point based on the pixel neighborhood coding information includes:
performing coding matching on the pixel neighborhood coding information and preset coding information; wherein when the pixel neighborhood coding information is inconsistent with the preset coding information, the coding matching result is that the coding information is inconsistent; and
when the coding matching result is that the coding information is inconsistent, taking the pixel point corresponding to the pixel neighborhood coding information as a target pixel point, and acquiring pixel information of the target pixel point; and determining a rendering mode corresponding to the target pixel point based on the pixel information of the target pixel point.
4. A method according to claim 3, wherein when the pixel neighborhood coding information is consistent with the preset coding information, the coding matching result is that the coding information is consistent; the determining the rendering mode corresponding to the pixel point based on the pixel neighborhood coding information comprises the following steps:
when the coding matching result is that the coding information is consistent, the pixel point corresponding to the pixel neighborhood coding information is used as a target pixel point, and the rendering mode of the target pixel point is determined to be structure correction processing.
5. A method according to claim 3, wherein the pixel information of the target pixel point includes RGB three-channel pixel values corresponding to the target pixel point; the determining, based on the pixel information of the target pixel point, a rendering mode corresponding to the target pixel point includes:
performing information matching on the pixel information of the target pixel point and preset pixel information to obtain an information matching result; wherein when the pixel information of the target pixel point is inconsistent with the preset pixel information, the information matching result is that the pixel information is inconsistent; and
when the information matching result is that the pixel information is inconsistent, modifying the rendering mode of the target pixel point from RGB arrangement to RGBG arrangement.
6. The method of claim 5, wherein the information matching result is that the pixel information is consistent when the pixel information of the target pixel point is consistent with the preset pixel information; the determining, based on the pixel information of the target pixel point, a rendering mode corresponding to the target pixel point, further includes:
and when the information matching result is that the pixel information is consistent, determining a neighborhood weighting coefficient corresponding to the target pixel point based on the pixel information of the target pixel point, wherein the neighborhood weighting coefficient is used for rendering the target pixel point.
7. A pixel rendering device, the device comprising:
the first conversion processing module is used for acquiring an initial image and performing first conversion processing on the initial image to obtain a first image; wherein the first conversion processing is to convert the gray scale domain of the initial image into a luminance domain;
the pixel neighborhood acquisition module is used for acquiring a pixel neighborhood corresponding to each pixel point in the first image based on a preset neighborhood size;
the rendering module is used for determining a rendering mode corresponding to each pixel point based on a pixel neighborhood corresponding to the pixel point, and rendering the pixel point based on the rendering mode to obtain a second image;
the second conversion processing module is used for carrying out second conversion processing on the second image to obtain a rendered image and outputting the rendered image; wherein the second conversion process is to convert the luminance domain of the second image into a gray scale domain.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202310066908.1A 2023-02-06 2023-02-06 Pixel rendering method, device, computer equipment and storage medium Active CN116129816B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310066908.1A CN116129816B (en) 2023-02-06 2023-02-06 Pixel rendering method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116129816A (en) 2023-05-16
CN116129816B CN116129816B (en) 2024-07-26

Family

ID=86296992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310066908.1A Active CN116129816B (en) 2023-02-06 2023-02-06 Pixel rendering method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116129816B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791755A (en) * 2016-12-27 2017-05-31 武汉华星光电技术有限公司 A kind of RGBW pixel rendering device and method
CN107767327A (en) * 2017-10-10 2018-03-06 武汉天马微电子有限公司 Image rendering method and device, computing equipment and display equipment
US20180122283A1 (en) * 2016-11-02 2018-05-03 Samsung Display Co., Ltd. Method of driving display device and display device for performing the same
CN114040246A (en) * 2021-11-08 2022-02-11 网易(杭州)网络有限公司 Image format conversion method, device, equipment and storage medium of graphic processor
CN114138224A (en) * 2021-12-07 2022-03-04 深圳市华星光电半导体显示技术有限公司 Rendering method and device of sub-pixels in image, computer equipment and storage medium
CN114286172A (en) * 2021-08-23 2022-04-05 腾讯科技(深圳)有限公司 Data processing method and device
CN114420027A (en) * 2021-12-29 2022-04-29 深圳市华星光电半导体显示技术有限公司 Method, device, equipment and medium for improving PPI display
CN115512654A (en) * 2022-09-29 2022-12-23 北京奕斯伟计算技术股份有限公司 Image display method and device, display device, electronic device and storage medium
CN115588409A (en) * 2022-10-11 2023-01-10 北京奕斯伟计算技术股份有限公司 Sub-pixel rendering method and device and display equipment

Similar Documents

Publication Publication Date Title
US8212741B2 (en) Dual display device
US20070257944A1 (en) Color display system with improved apparent resolution
US20160247440A1 (en) Display method and display panel
KR20110020711A (en) Gamut Mapping Considering Pixels in Adjacent Regions of Display Units
US9639920B2 (en) Dither directed LUT output value interpolation
JP5685064B2 (en) Image display device, image display device driving method, image display program, and gradation conversion device
CN1535031A (en) Video processor with reduced gamma corrected memory
CN114138224B (en) Rendering method and device for sub-pixels in image, computer equipment and storage medium
CN109461400B (en) Sub-pixel rendering method and device for converting RGB (red, green and blue) image into RGBW (red, green and blue) image
CN112614457B (en) Display control method, device and system
TWI647683B (en) Electronic device, display driver, and display data generating method of display panel
CN114822397A (en) Data processing method and device and display panel compensation method and device
CN102592545A (en) Image display device, method of driving the same, image display program executed in the same, and gradation converter included in the same
TWI542189B (en) Image display apparatus, method of driving image display apparatus, grayscale conversion conputer program product, and grayscale conversion apparatus
CN116129816B (en) Pixel rendering method, device, computer equipment and storage medium
CN117253453B (en) Display screen Mura elimination method, device, computer equipment and storage medium
WO2025035780A1 (en) Display apparatus and control method for display apparatus
US20110221775A1 (en) Method for transforming displaying images
CN116645428B (en) Image display method, device, computer equipment and storage medium
TW201712637A (en) Displaying method and display panel utilizing the same
CN114420027A (en) Method, device, equipment and medium for improving PPI display
CN115831042B (en) Image display method and system, display driving device, and storage medium
CN118014845B (en) Image processing method, apparatus, computer device, storage medium, and program product
CN114529620B (en) A grayscale image compression method and decompression method
CN119922318A (en) Image data conversion method, device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 200135, 11th Floor, Building 3, No. 889 Bibo Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant after: Granfei Intelligent Technology Co.,Ltd.

Address before: 200135 Room 201, No. 2557, Jinke Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant before: Gryfield Intelligent Technology Co.,Ltd.

Country or region before: China

GR01 Patent grant