CN116529759A - Image processing method, device and terminal
- Publication number: CN116529759A (application CN202380008317.7A)
- Authority: CN (China)
- Legal status: Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
Abstract
The present disclosure provides an image processing method, an image processing device and a terminal. The method comprises the following steps: acquiring the gray scale values of the sub-pixel points in an original image; generating a first target image and a second target image, and obtaining the gray scale values of the sub-pixel points in the first target image and the second target image from the gray scale values of the sub-pixel points in the original image; and sequentially and cyclically displaying the original image, the first target image and the second target image on a display panel based on the gray scale value of each sub-pixel point. The image processing method provided by the disclosure can present, on the display panel, an effect approaching three times the resolution of the original image, moving from pixel-level resolution to sub-pixel-level resolution and improving the viewing experience of the user.
Description
Technical Field
The present disclosure relates to the technical field of image display, and in particular to an image processing method, an image processing device and a terminal.
Background
In the prior art, sub-pixel multiplexing methods are used to improve pixel display definition. However, the existing sub-pixel multiplexing approaches require substantial changes to the production process of the RGB-arrangement display panels that are mainstream today. For example, one approach forms a single pixel from four sub-pixels, two of which have the same color; another adopts micro LED sub-pixel multiplexing. Although these techniques help improve the viewing effect, the four-sub-pixel approach requires an input image whose resolution exceeds the physical resolution of the display, and the micro LED sub-pixel multiplexing approach not only requires a high-resolution image but also suffers from unequal pixel center distances, which blurs the image and causes visual discomfort for the user. There is therefore a strong demand for a method that improves pixel display definition without significantly increasing the physical resolution of the actual panel.
Accordingly, there is a need for improvement and advancement in the art.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing device and a terminal, aiming to solve the problem in the prior art that improving the pixel display definition of an original picture requires a display panel with a higher physical resolution.
In a first aspect of the present disclosure, there is provided an image processing method including:
acquiring gray scale values of sub-pixel points in an original image;
generating a first target image and a second target image, and acquiring gray scale values of sub-pixel points in the first target image and the second target image according to the gray scale values of the sub-pixel points in the original image;
and sequentially and circularly displaying the original image, the first target image and the second target image on a display panel based on the gray scale value of each sub-pixel point.
In the image processing method, each pixel point in the display panel consists of three sub-pixel points, the three sub-pixel points in each pixel point form a square, and the intervals among the sub-pixel points are the same;
the distance between adjacent pixel points in the display panel is the same as the distance between adjacent sub-pixel points;
The arrangement mode of the sub-pixel points of each row in the display panel is that the first sub-pixel points, the second sub-pixel points and the third sub-pixel points are sequentially and circularly arranged.
In the image processing method, each pixel point in the original image is the first sub-pixel point, the second sub-pixel point and the third sub-pixel point which are sequentially arranged;
each pixel point in the first target image is the second sub-pixel point, the third sub-pixel point and the first sub-pixel point which are sequentially arranged;
each pixel point in the second target image is the third sub-pixel point, the first sub-pixel point and the second sub-pixel point which are sequentially arranged.
The image processing method, wherein the generating the first target image and the second target image includes:
extracting a first target pixel point from the original image to generate the first target image, wherein the first target pixel point consists of the second sub-pixel point, the third sub-pixel point and the first sub-pixel point in the right adjacent pixel point of the original pixel point;
and extracting a second target pixel point from the original image to generate the second target image, wherein the second target pixel point is composed of the third sub-pixel point corresponding to the original pixel point and the first sub-pixel point and the second sub-pixel point in the right adjacent pixel point of the original pixel point.
The image processing method, wherein the obtaining the gray scale value of each sub-pixel point in the first target image according to the gray scale value of the sub-pixel point in the original image includes:
and acquiring the gray scale value of each sub-pixel in the first target pixel according to the gray scale values of the sub-pixels in the original pixel and the sub-pixel in the right neighbor pixel corresponding to the first target pixel.
The image processing method, wherein the obtaining the gray scale value of each sub-pixel point in the second target image according to the gray scale value of the sub-pixel point in the original image includes:
and acquiring the gray scale value of each sub-pixel in the second target pixel according to the gray scale values of the sub-pixels in the original pixel and the sub-pixel in the right neighbor pixel corresponding to the second target pixel.
The image processing method, wherein the obtaining the gray-scale value of each sub-pixel in the first target pixel according to the gray-scale values of the sub-pixels in the original pixel and the sub-pixel in the right neighboring pixel of the original pixel corresponding to the first target pixel includes:
calculating the gray scale value of each sub-pixel point in the first target pixel point according to a first formula;
The first formula is:

$$G_{r1}(x_i) = \tfrac{2}{3}G_r(x_i) + \tfrac{1}{3}G_r(x_{i+1}),\qquad G_{g1}(x_i) = \tfrac{2}{3}G_g(x_i) + \tfrac{1}{3}G_g(x_{i+1}),\qquad G_{b1}(x_i) = \tfrac{2}{3}G_b(x_i) + \tfrac{1}{3}G_b(x_{i+1})$$

wherein $G_{r1}(x_i)$, $G_{g1}(x_i)$ and $G_{b1}(x_i)$ are respectively the gray scale values of the first, second and third sub-pixel points in the i-th first target pixel point of the x-th row; $G_r(x_i)$ and $G_r(x_{i+1})$ are respectively the gray scale values of the first sub-pixel point in the original pixel point corresponding to the first target pixel point and in the right-adjacent pixel point of that original pixel point; $G_g(x_i)$ and $G_g(x_{i+1})$ are respectively the gray scale values of the second sub-pixel point in those two pixel points; and $G_b(x_i)$ and $G_b(x_{i+1})$ are respectively the gray scale values of the third sub-pixel point in those two pixel points.
The image processing method, wherein the obtaining the gray-scale value of each sub-pixel in the second target pixel according to the gray-scale value of the sub-pixel in the original pixel and the sub-pixel in the right neighbor of the original pixel corresponding to the second target pixel includes:
calculating the gray scale value of each sub-pixel point in the second target pixel point according to a second formula;
the second formula is:

$$G_{r2}(x_i) = \tfrac{1}{3}G_r(x_i) + \tfrac{2}{3}G_r(x_{i+1}),\qquad G_{g2}(x_i) = \tfrac{1}{3}G_g(x_i) + \tfrac{2}{3}G_g(x_{i+1}),\qquad G_{b2}(x_i) = \tfrac{1}{3}G_b(x_i) + \tfrac{2}{3}G_b(x_{i+1})$$

wherein $G_{r2}(x_i)$, $G_{g2}(x_i)$ and $G_{b2}(x_i)$ are respectively the gray scale values of the first, second and third sub-pixel points in the i-th second target pixel point of the x-th row; $G_r(x_i)$ and $G_r(x_{i+1})$ are respectively the gray scale values of the first sub-pixel point in the original pixel point corresponding to the second target pixel point and in the right-adjacent pixel point of that original pixel point; $G_g(x_i)$ and $G_g(x_{i+1})$ are respectively the gray scale values of the second sub-pixel point in those two pixel points; and $G_b(x_i)$ and $G_b(x_{i+1})$ are respectively the gray scale values of the third sub-pixel point in those two pixel points.
In the image processing method, each row of the first target image has one fewer real pixel point than the original image;
the leftmost side of each row in the first target image comprises a first virtual pixel point;
the rightmost side of each row in the first target image comprises a second virtual pixel point.
The image processing method, wherein the first virtual pixel point comprises a virtual second sub-pixel point, a virtual third sub-pixel point and a real first sub-pixel point;
the second virtual pixel point comprises a real second sub-pixel point, a real third sub-pixel point and a virtual first sub-pixel point.
According to the image processing method, gray scale values of the first sub-pixel points in the first virtual pixel points are calculated according to a third formula;
The third formula is:

$$G_{r1'}(x) = \frac{G_r(x_1) + G_r(x_2)}{2}$$

wherein $G_{r1'}(x)$ is the gray scale value of the first sub-pixel point in the first virtual pixel point of the x-th row, $G_r(x_1)$ is the gray scale value of the first sub-pixel point in the 1st pixel point of the x-th row of the original image, and $G_r(x_2)$ is the gray scale value of the first sub-pixel point in the 2nd pixel point of the x-th row of the original image;
calculating gray scale values of the second sub-pixel point and the third sub-pixel point in the second virtual pixel point according to a fourth formula;
the fourth formula is:

$$G_{g1'}(x) = \frac{G_g(x_n) + G_g(x_{n-1})}{2},\qquad G_{b1'}(x) = \frac{G_b(x_n) + G_b(x_{n-1})}{2}$$

wherein $G_{g1'}(x)$ is the gray scale value of the second sub-pixel point in the second virtual pixel point of the x-th row, $G_g(x_n)$ is the gray scale value of the second sub-pixel point in the n-th pixel point of the x-th row of the original image, and $G_g(x_{n-1})$ is the gray scale value of the second sub-pixel point in the (n-1)-th pixel point of the x-th row of the original image, where each row of the original image has n pixel points;
$G_{b1'}(x)$ is the gray scale value of the third sub-pixel point in the second virtual pixel point of the x-th row, $G_b(x_n)$ is the gray scale value of the third sub-pixel point in the n-th pixel point of the x-th row of the original image, and $G_b(x_{n-1})$ is the gray scale value of the third sub-pixel point in the (n-1)-th pixel point of the x-th row of the original image.
In the image processing method, each row of the second target image has one fewer real pixel point than the original image;
the leftmost side of each row in the second target image comprises a third virtual pixel point;
the rightmost side of each row in the second target image comprises a fourth virtual pixel point.
The image processing method, wherein the third virtual pixel point comprises a virtual third sub-pixel point, a real first sub-pixel point and a real second sub-pixel point;
the fourth virtual pixel point comprises a real third sub-pixel point, a virtual first sub-pixel point and a virtual second sub-pixel point.
According to the image processing method, gray scale values of the first sub-pixel point and the second sub-pixel point in the third virtual pixel point are calculated according to a fifth formula;
the fifth formula is:

$$G_{r2'}(x) = \frac{G_r(x_1) + G_r(x_2)}{2},\qquad G_{g2'}(x) = \frac{G_g(x_1) + G_g(x_2)}{2}$$

wherein $G_{r2'}(x)$ is the gray scale value of the first sub-pixel point in the third virtual pixel point of the x-th row, $G_r(x_1)$ is the gray scale value of the first sub-pixel point in the 1st pixel point of the x-th row of the original image, and $G_r(x_2)$ is the gray scale value of the first sub-pixel point in the 2nd pixel point of the x-th row of the original image;
$G_{g2'}(x)$ is the gray scale value of the second sub-pixel point in the third virtual pixel point of the x-th row, $G_g(x_1)$ is the gray scale value of the second sub-pixel point in the 1st pixel point of the x-th row of the original image, and $G_g(x_2)$ is the gray scale value of the second sub-pixel point in the 2nd pixel point of the x-th row of the original image, where each row of the original image has n pixel points;
calculating the gray scale value of the third sub-pixel point in the fourth virtual pixel point according to a sixth formula;
the sixth formula is:

$$G_{b2'}(x) = \frac{G_b(x_n) + G_b(x_{n-1})}{2}$$

wherein $G_{b2'}(x)$ is the gray scale value of the third sub-pixel point in the fourth virtual pixel point of the x-th row, $G_b(x_n)$ is the gray scale value of the third sub-pixel point in the n-th pixel point of the x-th row of the original image, and $G_b(x_{n-1})$ is the gray scale value of the third sub-pixel point in the (n-1)-th pixel point of the x-th row of the original image.
In a second aspect of the present disclosure, there is provided an image processing apparatus including:
the original image acquisition module is used for acquiring gray scale values of sub-pixel points in the original image;
the target image acquisition module is used for generating a first target image and a second target image and acquiring gray scale values of sub-pixel points in the first target image and the second target image according to the gray scale values of the sub-pixel points in the original image;
And the image display module is used for sequentially and circularly displaying the original image, the first target image and the second target image on a display panel according to a frame rate based on the gray scale value of each sub-pixel point.
In a third aspect of the present disclosure, a terminal is provided, where the terminal includes: a memory, a processor, and an image processing program stored in the memory and executable on the processor, wherein the image processing program, when executed by the processor, implements the steps of the image processing method described above.
The beneficial effects are as follows. Compared with the prior art, the present disclosure provides an image processing method, an image processing device and a terminal. In the image processing method provided by the disclosure, the gray scale values of the sub-pixel points in an original image are first acquired; a first target image and a second target image are then generated, and the gray scale value of each sub-pixel point in the first and second target images is obtained from the gray scale values of the sub-pixel points in the original image; finally, the original image, the first target image and the second target image are displayed sequentially and cyclically on a display panel based on the gray scale value of each sub-pixel point. By multiplexing the sub-pixel points on the display panel, two intermediate interpolated frames are generated through image enlargement and extraction without significantly increasing the physical resolution of the actual panel, and the original image and the interpolated frames are evenly distributed over the multiplexed pixels to achieve a smoother transition between pixels. This moves the display from pixel-level resolution to an effect of sub-pixel-level resolution, noticeably reduces the graininess caused by a large pixel pitch, improves the viewing experience of the user, and brings a brand-new visual experience to the user.
Drawings
FIG. 1 is a flow chart of an embodiment of an image processing method provided by the present disclosure;
fig. 2 is a schematic view of RGB pixels of a display panel in an embodiment of an image processing method provided in the present disclosure;
fig. 3 is a schematic view of a display panel in an embodiment of an image processing method provided in the present disclosure;
fig. 4 is a schematic view of line-direction frame insertion in an embodiment of an image processing method provided in the present disclosure;
fig. 5 is a schematic view of scanning in a row direction in an embodiment of an image processing method provided in the present disclosure;
FIG. 6 is a schematic structural diagram of an embodiment of an image processing apparatus provided by the present disclosure;
fig. 7 is a structural schematic diagram of an embodiment of a terminal provided in the present disclosure.
Detailed Description
In order to make the objects, technical solutions and effects of the present disclosure clearer and more specific, the present disclosure will be further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present disclosure.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in the specification of this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs, unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Example 1
The image processing method provided in this embodiment may be executed by a terminal, which may be, but is not limited to, an intelligent display panel, a computer, or the like; the intelligent display panel is taken as an example in the following description. As shown in fig. 1, the image processing method provided in this embodiment includes the following steps:
s100, acquiring gray scale values of sub-pixel points in an original image.
Specifically, the original image is opened in a display panel, and a gray scale value of each sub-pixel point in the original image is obtained.
Each pixel point in the display panel is composed of three sub-pixel points, the three sub-pixel points in each pixel point form a square, and the intervals among the sub-pixel points are the same;
The distance between adjacent pixel points in the display panel is the same as the distance between adjacent sub-pixel points;
the arrangement mode of the sub-pixel points of each row in the display panel is that the first sub-pixel points, the second sub-pixel points and the third sub-pixel points are sequentially and circularly arranged.
Specifically, referring to fig. 2, fig. 2 is a schematic view of an RGB pixel in the display panel. The pixel in fig. 2 is a square pixel formed by three R, G and B sub-pixel points, D is the side length of the square pixel, and d is the spacing between sub-pixel points within the square pixel; the spacings between the sub-pixel points in the square pixel are all d.
Referring to fig. 3, fig. 3 is a schematic diagram of the structure of the display panel, where the pitch of adjacent pixel points in the display panel is the same as the pitch of adjacent sub-pixel points, which is also d. That is, the spacing between any two adjacent sub-pixel points in the display panel is d.
The arrangement mode of the sub-pixel points of each row in the display panel is that the first sub-pixel points, the second sub-pixel points and the third sub-pixel points are sequentially and circularly arranged.
That is, in each row of the display panel, the first position is a first sub-pixel point, the second position is a second sub-pixel point, the third position is a third sub-pixel point, and this pattern repeats cyclically along the row; the last sub-pixel point of each row is a third sub-pixel point.
In this embodiment, the first sub-pixel is a red sub-pixel, the second sub-pixel is a green sub-pixel, and the third sub-pixel is a blue sub-pixel.
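For illustration only, the following minimal sketch (not part of the patent) shows one way step S100 could be carried out in Python, assuming the original image is available as an H x W x 3 RGB array; here the gray scale value of each sub-pixel point is simply the R, G or B channel value of its pixel point:

```python
import numpy as np

def get_subpixel_grayscales(original: np.ndarray) -> dict:
    """Split an H x W x 3 RGB image into per-sub-pixel gray scale planes.

    original[x, i, 0], original[x, i, 1] and original[x, i, 2] are the gray
    scale values of the first (R), second (G) and third (B) sub-pixel points
    of pixel point i in row x.
    """
    r = original[:, :, 0].astype(np.float64)  # first sub-pixel points
    g = original[:, :, 1].astype(np.float64)  # second sub-pixel points
    b = original[:, :, 2].astype(np.float64)  # third sub-pixel points
    return {"r": r, "g": g, "b": b}
```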
S200, generating a first target image and a second target image, and acquiring gray scale values of sub-pixel points in the first target image and the second target image according to the gray scale values of the sub-pixel points in the original image.
The generating a first target image and a second target image includes:
extracting a first target pixel point from the original image to generate the first target image, wherein the first target pixel point consists of the second sub-pixel point, the third sub-pixel point and the first sub-pixel point in the right adjacent pixel point of the original pixel point;
and extracting a second target pixel point from the original image to generate the second target image, wherein the second target pixel point is composed of the third sub-pixel point corresponding to the original pixel point and the first sub-pixel point and the second sub-pixel point in the right adjacent pixel point of the original pixel point.
Further, the first target image and the second target image are interpolated images.
Specifically, when the original image is displayed in the j-th frame of the display panel, a first interpolation frame image is inserted in the (j+1)-th frame and a second interpolation frame image is inserted in the (j+2)-th frame, wherein the sub-pixel points of each pixel point in the first interpolation frame image are arranged, from left to right, as a second sub-pixel point, a third sub-pixel point and a first sub-pixel point, and the sub-pixel points of each pixel point in the second interpolation frame image are arranged, from left to right, as a third sub-pixel point, a first sub-pixel point and a second sub-pixel point, where j is a positive integer and j+2 is a multiple of 3;
and acquiring the gray scale value of each sub-pixel point in the first interpolation image and the second interpolation image according to the gray scale value of each sub-pixel point in the original image, and generating the first target image and the second target image.
Specifically, referring to fig. 4, which shows the x-th row of the original image, the first target pixel points and the second target pixel points in this embodiment are the interpolated pixel points corresponding to the first target image and the second target image respectively. For example, interpolation frame 1 pixel 1 in fig. 4 corresponds to first target pixel point 1 in the first target image, and interpolation frame 2 pixel 1 corresponds to second target pixel point 1 in the second target image; the first and second target pixel points are interspersed between every two pixel points of the original image. Specifically, the i-th pixel point of the x-th row of the first target image and the i-th pixel point of the x-th row of the second target image are both inserted between the i-th and (i+1)-th pixel points of the x-th row of the original image, so there are two interpolated pixel points between each pair of adjacent original pixel points. With n pixel points in each row of the display panel, the n original pixel points of each row give rise to 2 × (n-1) interpolated pixel points, that is, n-1 interpolated pixel points in each row of the first target image and of the second target image. This interpolation scheme enlarges the number of pixel points in the row direction of the original image by approximately 3 times without adding sub-pixel points to the display panel, thereby realizing the image enlargement.
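The arithmetic behind this near-threefold enlargement can be checked with a short illustrative sketch (not taken from the patent; n = 1920 is only an example value):

```python
def interpolated_layout(n):
    """Return (interpolated pixel count per row, total pixel count per row).

    Each of the n - 1 gaps between adjacent original pixel points receives one
    first-target and one second-target interpolated pixel point, so a row of n
    original pixel points becomes n + 2 * (n - 1) = 3n - 2 pixel points.
    """
    interpolated = 2 * (n - 1)
    total = n + interpolated
    assert total == 3 * n - 2
    return interpolated, total

# Example: a 1920-pixel-wide row yields 3838 interpolated pixel points, 5758 in total.
print(interpolated_layout(1920))
```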
Specifically, there are the following rules for the original image, the first target image, and the second target image:
each pixel point in the original image is a first sub-pixel point, a second sub-pixel point and a third sub-pixel point which are sequentially arranged;
each pixel point in the first target image is a second sub-pixel point, a third sub-pixel point and a first sub-pixel point which are sequentially arranged;
each pixel point in the second target image is a third sub-pixel point, a first sub-pixel point and a second sub-pixel point which are sequentially arranged.
Specifically, referring to fig. 5, it can be seen that each pixel point in the original image is a first sub-pixel point, a second sub-pixel point, and a third sub-pixel point that are sequentially arranged;
The first pixel point of the x-th row in the first target image, namely interpolation frame 1 pixel 1, is shifted one sub-pixel point to the right compared with the first pixel point of the x-th row in the original image, so that each pixel point in the first target image becomes a second sub-pixel point, a third sub-pixel point and a first sub-pixel point arranged in sequence;
meanwhile, the first pixel point of the x-th row in the second target image, namely interpolation frame 2 pixel 1, is shifted two sub-pixel points to the right compared with the first pixel point of the x-th row in the original image, so that each pixel point in the second target image becomes a third sub-pixel point, a first sub-pixel point and a second sub-pixel point arranged in sequence.
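For illustration only (this listing is not taken from the patent figures), the three sub-pixel phases can be written out directly, with "R", "G" and "B" standing for the first, second and third sub-pixel points; cycling through the three phases lets every physical sub-pixel point serve in a different logical pixel point in each of the three frames:

```python
# Sub-pixel arrangement of one pixel point in each frame, reading left to right.
ORIGINAL_PHASE      = ("R", "G", "B")  # original image
FIRST_TARGET_PHASE  = ("G", "B", "R")  # shifted right by one sub-pixel point
SECOND_TARGET_PHASE = ("B", "R", "G")  # shifted right by two sub-pixel points
```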
The obtaining the gray scale value of each sub-pixel point in the first target image according to the gray scale value of the sub-pixel point in the original image includes:
s210, acquiring gray scale values of all sub-pixel points in the first target pixel point according to the gray scale values of the sub-pixel points in the original pixel point corresponding to the first target pixel point and the sub-pixel point in the right adjacent pixel point of the original pixel point.
The obtaining the gray scale value of each sub-pixel in the first target pixel according to the gray scale values of the sub-pixels in the original pixel and the right neighboring pixel corresponding to the first target pixel, includes:
calculating the gray scale value of each sub-pixel point in the first target pixel point according to a first formula;
the first formula is:

$$G_{r1}(x_i) = \tfrac{2}{3}G_r(x_i) + \tfrac{1}{3}G_r(x_{i+1}),\qquad G_{g1}(x_i) = \tfrac{2}{3}G_g(x_i) + \tfrac{1}{3}G_g(x_{i+1}),\qquad G_{b1}(x_i) = \tfrac{2}{3}G_b(x_i) + \tfrac{1}{3}G_b(x_{i+1})$$

wherein $G_{r1}(x_i)$, $G_{g1}(x_i)$ and $G_{b1}(x_i)$ are respectively the gray scale values of the first, second and third sub-pixel points in the i-th first target pixel point of the x-th row; $G_r(x_i)$ and $G_r(x_{i+1})$ are respectively the gray scale values of the first sub-pixel point in the original pixel point corresponding to the first target pixel point and in the right-adjacent pixel point of that original pixel point; $G_g(x_i)$ and $G_g(x_{i+1})$ are respectively the gray scale values of the second sub-pixel point in those two pixel points; and $G_b(x_i)$ and $G_b(x_{i+1})$ are respectively the gray scale values of the third sub-pixel point in those two pixel points.
Specifically, the pixel point in the first target image is the first target pixel point. And acquiring the gray scale value of each sub-pixel point in the first target pixel point according to the gray scale value of the original pixel point and the sub-pixel point in the right adjacent pixel point of the original pixel point in the original image corresponding to the first target pixel point.
The sub-pixel points of the i-th first target pixel point in the x-th row are calculated by taking the gray scale value of the same-color sub-pixel point in the i-th pixel point of the x-th row of the original image and subtracting one third of the difference between that value and the gray scale value of the same-color sub-pixel point in the (i+1)-th pixel point of the same row; the specific formula is:

$$G_{r1}(x_i) = G_r(x_i) - \tfrac{1}{3}\bigl(G_r(x_i) - G_r(x_{i+1})\bigr)$$

and likewise for the second ($G_{g1}$) and third ($G_{b1}$) sub-pixel points, which simplifies to the first formula given above.
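As a purely illustrative sketch (not from the patent), the first formula can be applied to an entire channel plane at once, building on the planes extracted in the earlier sketch; column i of the result is the first-target value computed from original columns i and i+1:

```python
import numpy as np

def first_target_grayscales(channel: np.ndarray) -> np.ndarray:
    """First formula: G1(x_i) = (2/3) * G(x_i) + (1/3) * G(x_{i+1}).

    `channel` is an H x W plane of one sub-pixel colour; the result has W - 1
    columns, one per first target pixel point in each row.
    """
    return (2.0 / 3.0) * channel[:, :-1] + (1.0 / 3.0) * channel[:, 1:]
```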
The obtaining the gray scale value of each sub-pixel point in the second target image according to the gray scale value of the sub-pixel point in the original image includes:
s220, according to the gray scale values of the sub-pixel points in the original pixel point corresponding to the second target pixel point and the sub-pixel point in the right adjacent pixel point of the original pixel point, acquiring the gray scale values of the sub-pixel points in the second target pixel point.
The obtaining the gray scale value of each sub-pixel in the second target pixel according to the gray scale values of the sub-pixels in the original pixel and the right neighboring pixel corresponding to the second target pixel, includes:
calculating the gray scale value of each sub-pixel point in the second target pixel point according to a second formula;
the second formula is:

$$G_{r2}(x_i) = \tfrac{1}{3}G_r(x_i) + \tfrac{2}{3}G_r(x_{i+1}),\qquad G_{g2}(x_i) = \tfrac{1}{3}G_g(x_i) + \tfrac{2}{3}G_g(x_{i+1}),\qquad G_{b2}(x_i) = \tfrac{1}{3}G_b(x_i) + \tfrac{2}{3}G_b(x_{i+1})$$

wherein $G_{r2}(x_i)$, $G_{g2}(x_i)$ and $G_{b2}(x_i)$ are respectively the gray scale values of the first, second and third sub-pixel points in the i-th second target pixel point of the x-th row; $G_r(x_i)$ and $G_r(x_{i+1})$ are respectively the gray scale values of the first sub-pixel point in the original pixel point corresponding to the second target pixel point and in the right-adjacent pixel point of that original pixel point; $G_g(x_i)$ and $G_g(x_{i+1})$ are respectively the gray scale values of the second sub-pixel point in those two pixel points; and $G_b(x_i)$ and $G_b(x_{i+1})$ are respectively the gray scale values of the third sub-pixel point in those two pixel points.
Specifically, the pixel point in the second target image is the second target pixel point. And acquiring the gray scale value of each sub-pixel point in the second target pixel point according to the gray scale value of the original pixel point and the sub-pixel point in the right adjacent pixel point of the original pixel point in the original image corresponding to the second target pixel point.
The sub-pixel points of the i-th second target pixel point in the x-th row are calculated by taking the gray scale value of the same-color sub-pixel point in the i-th pixel point of the x-th row of the original image and subtracting two thirds of the difference between that value and the gray scale value of the same-color sub-pixel point in the (i+1)-th pixel point of the same row; the specific formula is:

$$G_{r2}(x_i) = G_r(x_i) - \tfrac{2}{3}\bigl(G_r(x_i) - G_r(x_{i+1})\bigr)$$

and likewise for the second ($G_{g2}$) and third ($G_{b2}$) sub-pixel points, which simplifies to the second formula given above.
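The corresponding illustrative sketch (again not from the patent) for the second formula differs only in the weights:

```python
def second_target_grayscales(channel):
    """Second formula: G2(x_i) = (1/3) * G(x_i) + (2/3) * G(x_{i+1})."""
    return (1.0 / 3.0) * channel[:, :-1] + (2.0 / 3.0) * channel[:, 1:]
```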
In the above manner, an enlarged image of m rows by 3n-2 columns of pixel points is generated. Specifically, after frame interpolation, the original image is enlarged by approximately 3 times: it comprises one original image of m rows by n columns of pixel points and two interpolated-frame images of m rows by n-1 columns of interpolated pixel points each, the latter two being the first target image and the second target image respectively.
It can also be seen from fig. 5 that: in the first target image, the first sub-pixel point at the leftmost side of each row and the second and third sub-pixel points at the rightmost side of each row do not form a complete pixel point with other sub-pixel points and are independent sub-pixel points; in the second target image, the first and second sub-pixel points at the leftmost side of each row and the third sub-pixel point at the rightmost side of each row do not form a complete pixel point with other sub-pixel points and are independent sub-pixel points.
Specifically, since the pixel points in the first target image are all shifted to the right by 1 sub-pixel point relative to the original image, the first sub-pixel point at the beginning of each row in the first target image becomes an independent sub-pixel point, and at the same time, the second and third sub-pixel points at the end of each row in the first target image also become independent sub-pixel points.
That is, the number of real pixel points of each line in the first target image is one less than that of the original image;
the leftmost side of each row in the first target image comprises a first virtual pixel point;
the rightmost side of each row in the first target image comprises a second virtual pixel point.
Further, the first virtual pixel point comprises a virtual second sub-pixel point, a virtual third sub-pixel point and a real first sub-pixel point;
the second virtual pixel point comprises a real second sub-pixel point, a real third sub-pixel point and a virtual first sub-pixel point.
Specifically, the first sub-pixel point at the leftmost side of each row and the second and third sub-pixel points at the rightmost side of each row in the first target image are independent sub-pixel points. Two virtual sub-pixel points are added at the leftmost side of each row of the first target image and, together with the leftmost independent sub-pixel point, form the first virtual pixel point; one virtual sub-pixel point is added at the rightmost side of each row and, together with the two rightmost independent sub-pixel points, forms the second virtual pixel point.
That is, two virtual sub-pixel points plus one real sub-pixel point form a virtual pixel point on the left side of each row of the first target image, and one virtual sub-pixel point plus two real sub-pixel points form a virtual pixel point on the right side of each row, which reduces the loss of interpolation information. The left virtual pixel point in the first target image actually has only one physical sub-pixel point, and the right virtual pixel point actually has only two physical sub-pixel points. The gray scale value of the physical sub-pixel point of the left virtual pixel point is defined as the average of the gray scale values of the adjacent sub-pixel points on its upper and right sides; the gray scale values of the two physical sub-pixel points of the right virtual pixel point are defined as the average of the gray scale values of the adjacent sub-pixel points on their upper and left sides.
Specifically, calculating a gray scale value of the first sub-pixel point in the first virtual pixel point according to a third formula;
the third formula is:

$$G_{r1'}(x) = \frac{G_r(x_1) + G_r(x_2)}{2}$$

wherein $G_{r1'}(x)$ is the gray scale value of the first sub-pixel point in the first virtual pixel point of the x-th row, $G_r(x_1)$ is the gray scale value of the first sub-pixel point in the 1st pixel point of the x-th row of the original image, and $G_r(x_2)$ is the gray scale value of the first sub-pixel point in the 2nd pixel point of the x-th row of the original image.
Calculating gray scale values of the second sub-pixel point and the third sub-pixel point in the second virtual pixel point according to a fourth formula;
the fourth formula is:

$$G_{g1'}(x) = \frac{G_g(x_n) + G_g(x_{n-1})}{2},\qquad G_{b1'}(x) = \frac{G_b(x_n) + G_b(x_{n-1})}{2}$$

wherein $G_{g1'}(x)$ is the gray scale value of the second sub-pixel point in the second virtual pixel point of the x-th row, $G_g(x_n)$ is the gray scale value of the second sub-pixel point in the n-th pixel point of the x-th row of the original image, and $G_g(x_{n-1})$ is the gray scale value of the second sub-pixel point in the (n-1)-th pixel point of the x-th row of the original image;
$G_{b1'}(x)$ is the gray scale value of the third sub-pixel point in the second virtual pixel point of the x-th row, $G_b(x_n)$ is the gray scale value of the third sub-pixel point in the n-th pixel point of the x-th row of the original image, and $G_b(x_{n-1})$ is the gray scale value of the third sub-pixel point in the (n-1)-th pixel point of the x-th row of the original image.
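A small illustrative sketch (not from the patent) of the edge values of the first target image, following the third and fourth formulas as reconstructed above:

```python
def first_target_edge_values(r, g, b):
    """Edge sub-pixel values of the first target image, per row.

    r, g, b are the H x W channel planes of the original image. Returns the
    value of the real first sub-pixel point of the left (first) virtual pixel
    point, and the real second and third sub-pixel points of the right
    (second) virtual pixel point, for every row.
    """
    left_r = (r[:, 0] + r[:, 1]) / 2.0     # third formula:  G_r1'(x)
    right_g = (g[:, -1] + g[:, -2]) / 2.0  # fourth formula: G_g1'(x)
    right_b = (b[:, -1] + b[:, -2]) / 2.0  # fourth formula: G_b1'(x)
    return left_r, right_g, right_b
```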
Further, since the pixel points in the second target image are all shifted to the right by 2 sub-pixel points relative to the original image, the first and second sub-pixel points at the beginning of each row in the second target image become independent sub-pixel points, and the third sub-pixel point at the end of each row in the second interpolation frame image also becomes an independent sub-pixel point.
That is, each row of real pixel points in the second target image is one less than the original image;
the leftmost side of each row in the second target image comprises a third virtual pixel point;
the rightmost side of each row in the second target image comprises a fourth virtual pixel point.
The third virtual pixel point comprises a virtual third sub-pixel point, a real first sub-pixel point and a real second sub-pixel point;
the fourth virtual pixel point comprises a real third sub-pixel point, a virtual first sub-pixel point and a virtual second sub-pixel point.
Specifically, the first and second sub-pixel points at the leftmost side of each row and the third sub-pixel point at the rightmost side of each row in the second target image are independent sub-pixel points. One virtual sub-pixel point is added at the leftmost side of each row of the second target image and, together with the two leftmost independent sub-pixel points, forms the third virtual pixel point; two virtual sub-pixel points are added at the rightmost side of each row and, together with the rightmost independent sub-pixel point, form the fourth virtual pixel point.
That is, one virtual sub-pixel point plus two real sub-pixel points form a virtual pixel point on the left side of each row of the second target image, and two virtual sub-pixel points plus one real sub-pixel point form a virtual pixel point on the right side of each row, which reduces the loss of interpolation information. The left virtual pixel point in the second target image actually has only two physical sub-pixel points, and the right virtual pixel point actually has only one physical sub-pixel point. The gray scale values of the two physical sub-pixel points of the left virtual pixel point are defined as the average of the gray scale values of the adjacent sub-pixel points on their upper and right sides; the gray scale value of the physical sub-pixel point of the right virtual pixel point is defined as the average of the gray scale values of the adjacent sub-pixel points on its upper and left sides.
Specifically, gray scale values of the first sub-pixel point and the second sub-pixel point in the third virtual pixel point are calculated according to a fifth formula;
the fifth formula is:

$$G_{r2'}(x) = \frac{G_r(x_1) + G_r(x_2)}{2},\qquad G_{g2'}(x) = \frac{G_g(x_1) + G_g(x_2)}{2}$$

wherein $G_{r2'}(x)$ is the gray scale value of the first sub-pixel point in the third virtual pixel point of the x-th row, $G_r(x_1)$ is the gray scale value of the first sub-pixel point in the 1st pixel point of the x-th row of the original image, and $G_r(x_2)$ is the gray scale value of the first sub-pixel point in the 2nd pixel point of the x-th row of the original image;
$G_{g2'}(x)$ is the gray scale value of the second sub-pixel point in the third virtual pixel point of the x-th row, $G_g(x_1)$ is the gray scale value of the second sub-pixel point in the 1st pixel point of the x-th row of the original image, and $G_g(x_2)$ is the gray scale value of the second sub-pixel point in the 2nd pixel point of the x-th row of the original image, where each row of the original image has n pixel points;
calculating the gray scale value of the third sub-pixel point in the fourth virtual pixel point according to a sixth formula;
the sixth formula is:

$$G_{b2'}(x) = \frac{G_b(x_n) + G_b(x_{n-1})}{2}$$

wherein $G_{b2'}(x)$ is the gray scale value of the third sub-pixel point in the fourth virtual pixel point of the x-th row, $G_b(x_n)$ is the gray scale value of the third sub-pixel point in the n-th pixel point of the x-th row of the original image, and $G_b(x_{n-1})$ is the gray scale value of the third sub-pixel point in the (n-1)-th pixel point of the x-th row of the original image.
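Likewise, an illustrative sketch (not from the patent) of the edge values of the second target image, following the reconstructed fifth and sixth formulas:

```python
def second_target_edge_values(r, g, b):
    """Edge sub-pixel values of the second target image, per row."""
    left_r = (r[:, 0] + r[:, 1]) / 2.0     # fifth formula: G_r2'(x)
    left_g = (g[:, 0] + g[:, 1]) / 2.0     # fifth formula: G_g2'(x)
    right_b = (b[:, -1] + b[:, -2]) / 2.0  # sixth formula: G_b2'(x)
    return left_r, left_g, right_b
```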
Referring again to fig. 1, the image processing method according to the present embodiment further includes the steps of:
and S300, sequentially and circularly displaying the original image, the first target image and the second target image on a display panel based on the gray scale value of each sub-pixel point.
Specifically, after the gray scale value of each sub-pixel point is obtained, the original image, the first target image and the second target image are displayed according to the frame rate based on those gray scale values: the display panel cyclically shows the original image as the first frame, the first target image as the second frame and the second target image as the third frame.
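A minimal illustrative sketch of the three-frame display cycle of step S300; `panel.show_frame` is a hypothetical display-driver interface, not an API defined in the patent:

```python
import itertools
import time

def display_cycle(panel, original, first_target, second_target, frame_rate_hz):
    """Cyclically display the original, first target and second target frames."""
    period = 1.0 / frame_rate_hz
    for frame in itertools.cycle((original, first_target, second_target)):
        panel.show_frame(frame)  # hypothetical panel-driver call
        time.sleep(period)       # crude pacing; a real driver would sync to vsync
```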
In summary, this embodiment provides an image processing method in which the gray scale values of the sub-pixel points in an original image are first acquired; a first target image and a second target image are then generated, and the gray scale value of each sub-pixel point in them is obtained from the gray scale values of the sub-pixel points in the original image; finally, the original image, the first target image and the second target image are displayed sequentially and cyclically on a display panel based on the gray scale values of the sub-pixel points. By multiplexing the sub-pixel points on the display panel, the method generates two intermediate interpolated frames through image enlargement and extraction without significantly increasing the physical resolution of the actual panel, and evenly distributes the original image and the interpolated frames over the multiplexed pixels to achieve a smoother transition between pixels. This moves the display from pixel-level resolution to an effect of sub-pixel-level resolution, noticeably reduces the graininess caused by a large pixel pitch, improves the viewing experience of the user, and brings a brand-new visual experience to the user.
It should be understood that, although the steps in the flowcharts shown in the drawings of this disclosure are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps in this disclosure are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least a portion of the steps of the present disclosure may include multiple sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, nor does the order in which the sub-steps or stages are performed necessarily occur in sequence, but may be performed alternately or alternately with at least a portion of the sub-steps or stages of other steps or other steps.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by signaling related hardware by a computer program stored on a non-transitory computer readable storage medium, which when executed may comprise the steps of the above-described embodiments of the methods. Any reference to memory, storage, database, or other medium used in embodiments provided by the present disclosure may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
Example two
Based on the above embodiments, the present disclosure further provides an image processing apparatus, whose functional block diagram is shown in fig. 6, including:
the original image acquisition module is used for acquiring gray scale values of sub-pixel points in an original image, and is specifically described in the first embodiment;
the target image acquisition module is used for generating a first target image and a second target image and acquiring the gray scale value of each sub-pixel point in the first target image and the second target image according to the gray scale value of the sub-pixel point in the original image, and the specific embodiment is as described in the first embodiment;
and the image display module is used for sequentially and circularly displaying the original image, the first target image and the second target image on a display panel based on the gray scale value of each sub-pixel point, and the specific embodiment is as described in the first embodiment.
Example III
As shown in fig. 7, based on the above image processing method, the present disclosure further provides a terminal, which includes a processor 10, a memory 20, and a display 30. Fig. 7 shows only some of the components of the terminal, but it should be understood that not all of the illustrated components are required to be implemented and that more or fewer components may alternatively be implemented.
The memory 20 may in some embodiments be an internal storage unit of the terminal, such as a hard disk or a memory of the terminal. The memory 20 may in other embodiments also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal. Further, the memory 20 may also include both an internal storage unit and an external storage device of the terminal. The memory 20 is used for storing application software installed in the terminal and various data, such as the program code installed in the terminal. The memory 20 may also be used to temporarily store data that has been output or is to be output. In one embodiment, the memory 20 stores an image processing program 40, and the image processing program 40 is executable by the processor 10 to implement the image processing method in the present application.
The processor 10 may in some embodiments be a central processing unit (Central Processing Unit, CPU), microprocessor or other data processing chip for executing program code or processing data stored in the memory 20, for example for performing the image processing method or the like.
The display 30 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like in some embodiments. The display 30 is used for displaying information at the terminal and for displaying a visual user interface. The components 10-30 of the terminal communicate with each other via a system bus.
In one embodiment, the following steps are implemented when the processor 10 executes the image processing program 40 in the memory 20:
acquiring gray scale values of sub-pixel points in an original image;
generating a first target image and a second target image, and acquiring gray scale values of sub-pixel points in the first target image and the second target image according to the gray scale values of the sub-pixel points in the original image;
and sequentially and circularly displaying the original image, the first target image and the second target image on a display panel based on the gray scale value of each sub-pixel point.
Each pixel point in the display panel consists of three sub-pixel points, wherein the three sub-pixel points in each pixel point form a square, and the intervals among the sub-pixel points are the same;
the distance between adjacent pixel points in the display panel is the same as the distance between adjacent sub-pixel points;
The arrangement mode of the sub-pixel points of each row in the display panel is that the first sub-pixel points, the second sub-pixel points and the third sub-pixel points are sequentially and circularly arranged.
Each pixel point in the original image is the first sub-pixel point, the second sub-pixel point and the third sub-pixel point which are sequentially arranged;
each pixel point in the first target image is the second sub-pixel point, the third sub-pixel point and the first sub-pixel point which are sequentially arranged;
each pixel point in the second target image is the third sub-pixel point, the first sub-pixel point and the second sub-pixel point which are sequentially arranged.
Wherein the generating the first target image and the second target image includes:
extracting a first target pixel point from the original image to generate the first target image, wherein the first target pixel point consists of the second sub-pixel point, the third sub-pixel point and the first sub-pixel point in the right adjacent pixel point of the original pixel point;
and extracting a second target pixel point from the original image to generate the second target image, wherein the second target pixel point is composed of the third sub-pixel point corresponding to the original pixel point and the first sub-pixel point and the second sub-pixel point in the right adjacent pixel point of the original pixel point.
The obtaining the gray scale value of each sub-pixel point in the first target image according to the gray scale value of the sub-pixel point in the original image includes:
and acquiring the gray scale value of each sub-pixel in the first target pixel according to the gray scale values of the sub-pixels in the original pixel and the sub-pixel in the right neighbor pixel corresponding to the first target pixel.
The obtaining the gray scale value of each sub-pixel point in the second target image according to the gray scale value of the sub-pixel point in the original image includes:
and acquiring the gray scale value of each sub-pixel in the second target pixel according to the gray scale values of the sub-pixels in the original pixel and the sub-pixel in the right neighbor pixel corresponding to the second target pixel.
The obtaining the gray scale value of each sub-pixel in the first target pixel according to the gray scale values of the sub-pixels in the original pixel and the sub-pixel in the right neighboring pixel corresponding to the first target pixel includes:
calculating the gray scale value of each sub-pixel point in the first target pixel point according to a first formula;
the first formula is:

$$G_{r1}(x_i) = \tfrac{2}{3}G_r(x_i) + \tfrac{1}{3}G_r(x_{i+1}),\qquad G_{g1}(x_i) = \tfrac{2}{3}G_g(x_i) + \tfrac{1}{3}G_g(x_{i+1}),\qquad G_{b1}(x_i) = \tfrac{2}{3}G_b(x_i) + \tfrac{1}{3}G_b(x_{i+1})$$

wherein $G_{r1}(x_i)$, $G_{g1}(x_i)$ and $G_{b1}(x_i)$ are respectively the gray scale values of the first, second and third sub-pixel points in the i-th first target pixel point of the x-th row; $G_r(x_i)$ and $G_r(x_{i+1})$ are respectively the gray scale values of the first sub-pixel point in the original pixel point corresponding to the first target pixel point and in the right-adjacent pixel point of that original pixel point; $G_g(x_i)$ and $G_g(x_{i+1})$ are respectively the gray scale values of the second sub-pixel point in those two pixel points; and $G_b(x_i)$ and $G_b(x_{i+1})$ are respectively the gray scale values of the third sub-pixel point in those two pixel points.
The obtaining the gray scale value of each sub-pixel in the second target pixel according to the gray scale values of the sub-pixels in the original pixel and the sub-pixel in the right neighboring pixel corresponding to the second target pixel includes:
calculating the gray scale value of each sub-pixel point in the second target pixel point according to a second formula;
the second formula is:
wherein G_r1(x_i), G_g1(x_i) and G_b1(x_i) are respectively the gray scale values of the first, second and third sub-pixel points in the i-th second target pixel point of the x-th row; G_r(x_i) and G_r(x_{i+1}) are respectively the gray scale values of the first sub-pixel point in the original pixel point corresponding to the second target pixel point and in the right adjacent pixel point of that original pixel point; G_g(x_i) and G_g(x_{i+1}) are respectively the gray scale values of the second sub-pixel point in the original pixel point corresponding to the second target pixel point and in the right adjacent pixel point of that original pixel point; and G_b(x_i) and G_b(x_{i+1}) are respectively the gray scale values of the third sub-pixel point in the original pixel point corresponding to the second target pixel point and in the right adjacent pixel point of that original pixel point.
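The first and second formulas themselves are not reproduced in this text. Purely as an assumed illustration, the sketch below averages each sub-pixel gray scale value of the original pixel point with the corresponding value of its right adjacent pixel point; the actual weighting is the one defined by the formulas of the disclosure and may differ.

```python
def assumed_target_values(original_pixel, right_neighbour):
    """original_pixel, right_neighbour: (R, G, B) gray scale values of an
    original pixel point and of its right adjacent pixel point.  Returns
    assumed gray scale values for the corresponding target pixel point by
    simple averaging, as an illustrative stand-in for the unreproduced
    first and second formulas."""
    return tuple((a + b) / 2.0 for a, b in zip(original_pixel, right_neighbour))

print(assumed_target_values((10, 20, 30), (40, 50, 60)))  # (25.0, 35.0, 45.0)
```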
Wherein, each row in the first target image has one fewer real pixel point than the corresponding row of the original image;
the leftmost side of each row in the first target image comprises a first virtual pixel point;
the rightmost side of each row in the first target image comprises a second virtual pixel point.
The first virtual pixel point comprises a virtual second sub-pixel point, a virtual third sub-pixel point and a real first sub-pixel point;
the second virtual pixel point comprises a real second sub-pixel point, a real third sub-pixel point and a virtual first sub-pixel point.
The gray scale value of the first sub-pixel point in the first virtual pixel point is calculated according to a third formula;
the third formula is:
wherein G_r1'(x) is the gray scale value of the first sub-pixel point in the first virtual pixel point of the x-th row, G_r(x_1) is the gray scale value of the first sub-pixel point in the 1st pixel point of the x-th row of the original image, and G_r(x_2) is the gray scale value of the first sub-pixel point in the 2nd pixel point of the x-th row of the original image;
calculating gray scale values of the second sub-pixel point and the third sub-pixel point in the second virtual pixel point according to a fourth formula;
the fourth formula is:
wherein G_g1'(x) is the gray scale value of the second sub-pixel point in the second virtual pixel point of the x-th row, G_g(x_n) is the gray scale value of the second sub-pixel point in the n-th pixel point of the x-th row of the original image, and G_g(x_{n-1}) is the gray scale value of the second sub-pixel point in the (n-1)-th pixel point of the x-th row of the original image, each row of the original image having n pixel points;
G_b1'(x) is the gray scale value of the third sub-pixel point in the second virtual pixel point of the x-th row, G_b(x_n) is the gray scale value of the third sub-pixel point in the n-th pixel point of the x-th row of the original image, and G_b(x_{n-1}) is the gray scale value of the third sub-pixel point in the (n-1)-th pixel point of the x-th row of the original image.
Wherein, each row in the second target image has one fewer real pixel point than the corresponding row of the original image;
the leftmost side of each row in the second target image comprises a third virtual pixel point;
the rightmost side of each row in the second target image comprises a fourth virtual pixel point.
The third virtual pixel point comprises a virtual third sub-pixel point, a real first sub-pixel point and a real second sub-pixel point;
the fourth virtual pixel point comprises a real third sub-pixel point, a virtual first sub-pixel point and a virtual second sub-pixel point.
The gray scale values of the first sub-pixel point and the second sub-pixel point in the third virtual pixel point are calculated according to a fifth formula;
the fifth formula is:
wherein G_r2'(x) is the gray scale value of the first sub-pixel point in the third virtual pixel point of the x-th row, G_r(x_1) is the gray scale value of the first sub-pixel point in the 1st pixel point of the x-th row of the original image, and G_r(x_2) is the gray scale value of the first sub-pixel point in the 2nd pixel point of the x-th row of the original image;
G_g2'(x) is the gray scale value of the second sub-pixel point in the third virtual pixel point of the x-th row, G_g(x_1) is the gray scale value of the second sub-pixel point in the 1st pixel point of the x-th row of the original image, and G_g(x_2) is the gray scale value of the second sub-pixel point in the 2nd pixel point of the x-th row of the original image, each row of the original image having n pixel points;
Calculating the gray scale value of the third sub-pixel point in the fourth virtual pixel point according to a sixth formula;
the sixth formula is:
wherein G_b2'(x) is the gray scale value of the third sub-pixel point in the fourth virtual pixel point of the x-th row, G_b(x_n) is the gray scale value of the third sub-pixel point in the n-th pixel point of the x-th row of the original image, and G_b(x_{n-1}) is the gray scale value of the third sub-pixel point in the (n-1)-th pixel point of the x-th row of the original image.
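The third to sixth formulas are likewise not reproduced here. As the variables listed above suggest, they derive each virtual sub-pixel value from the first two (left edge) or last two (right edge) pixel points of a row of the original image. The sketch below uses a simple average as an assumed stand-in for those expressions; it is illustrative only.

```python
def assumed_left_edge_value(g_first, g_second):
    """Assumed gray scale value for a virtual sub-pixel point at the left
    edge of a row, derived from the 1st and 2nd pixel points of the
    original row (cf. G_r(x_1) and G_r(x_2) above).  Simple average only;
    the disclosure's third and fifth formulas may differ."""
    return (g_first + g_second) / 2.0

def assumed_right_edge_value(g_last, g_second_last):
    """Assumed gray scale value for a virtual sub-pixel point at the right
    edge of a row, derived from the n-th and (n-1)-th pixel points of the
    original row (cf. G_b(x_n) and G_b(x_{n-1}) above).  Simple average
    only; the disclosure's fourth and sixth formulas may differ."""
    return (g_last + g_second_last) / 2.0

print(assumed_left_edge_value(10, 40))    # 25.0, an assumed stand-in for G_r1'(x)
print(assumed_right_edge_value(90, 60))   # 75.0, an assumed stand-in for G_b2'(x)
```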
Finally, it should be noted that: the above embodiments are merely for illustrating the technical solution of the present disclosure, and are not limiting thereof; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.
Claims (16)
1. An image processing method, comprising:
acquiring gray scale values of sub-pixel points in an original image;
generating a first target image and a second target image, and acquiring gray scale values of sub-pixel points in the first target image and the second target image according to the gray scale values of the sub-pixel points in the original image;
And sequentially and circularly displaying the original image, the first target image and the second target image on a display panel based on the gray scale value of each sub-pixel point.
2. The image processing method according to claim 1, wherein each pixel point in the display panel is composed of three sub-pixel points, the three sub-pixel points in each pixel point together form a square, and the pitches between the sub-pixel points are the same;
the distance between adjacent pixel points in the display panel is the same as the distance between adjacent sub-pixel points;
the arrangement mode of the sub-pixel points of each row in the display panel is that the first sub-pixel points, the second sub-pixel points and the third sub-pixel points are sequentially and circularly arranged.
3. The image processing method according to claim 2, wherein each pixel point in the original image is the first sub-pixel point, the second sub-pixel point, and the third sub-pixel point which are sequentially arranged;
each pixel point in the first target image is the second sub-pixel point, the third sub-pixel point and the first sub-pixel point which are sequentially arranged;
each pixel point in the second target image is the third sub-pixel point, the first sub-pixel point and the second sub-pixel point which are sequentially arranged.
4. The image processing method according to claim 3, wherein the generating the first target image and the second target image includes:
extracting a first target pixel point from the original image to generate the first target image, wherein the first target pixel point consists of the second sub-pixel point and the third sub-pixel point corresponding to the original pixel point, and the first sub-pixel point in the right adjacent pixel point of the original pixel point;
and extracting a second target pixel point from the original image to generate the second target image, wherein the second target pixel point is composed of the third sub-pixel point corresponding to the original pixel point and the first sub-pixel point and the second sub-pixel point in the right adjacent pixel point of the original pixel point.
5. The method according to claim 4, wherein the obtaining the gray-scale value of each sub-pixel in the first target image according to the gray-scale value of the sub-pixel in the original image comprises:
and acquiring the gray scale value of each sub-pixel in the first target pixel according to the gray scale values of the sub-pixels in the original pixel and the sub-pixel in the right neighbor pixel corresponding to the first target pixel.
6. The method according to claim 4, wherein the obtaining the gray-scale value of each sub-pixel in the second target image according to the gray-scale value of the sub-pixel in the original image comprises:
and acquiring the gray scale value of each sub-pixel in the second target pixel according to the gray scale values of the sub-pixels in the original pixel and the sub-pixel in the right neighbor pixel corresponding to the second target pixel.
7. The method according to claim 5, wherein the obtaining the gray-scale value of each sub-pixel in the first target pixel according to the gray-scale values of the sub-pixels in the original pixel and the sub-pixel in the right-adjacent pixel corresponding to the first target pixel includes:
calculating the gray scale value of each sub-pixel point in the first target pixel point according to a first formula;
the first formula is:
wherein G_r1(x_i), G_g1(x_i) and G_b1(x_i) are respectively the gray scale values of the first, second and third sub-pixel points in the i-th first target pixel point of the x-th row; G_r(x_i) and G_r(x_{i+1}) are respectively the gray scale values of the first sub-pixel point in the original pixel point corresponding to the first target pixel point and in the right adjacent pixel point of that original pixel point; G_g(x_i) and G_g(x_{i+1}) are respectively the gray scale values of the second sub-pixel point in the original pixel point corresponding to the first target pixel point and in the right adjacent pixel point of that original pixel point; and G_b(x_i) and G_b(x_{i+1}) are respectively the gray scale values of the third sub-pixel point in the original pixel point corresponding to the first target pixel point and in the right adjacent pixel point of that original pixel point.
8. The method of claim 6, wherein the obtaining the gray-scale value of each sub-pixel in the second target pixel according to the gray-scale values of the sub-pixels in the original pixel and the sub-pixel in the right-neighboring pixel of the original pixel corresponding to the second target pixel includes:
calculating the gray scale value of each sub-pixel point in the second target pixel point according to a second formula;
the second formula is:
wherein G_r1(x_i), G_g1(x_i) and G_b1(x_i) are respectively the gray scale values of the first, second and third sub-pixel points in the i-th second target pixel point of the x-th row; G_r(x_i) and G_r(x_{i+1}) are respectively the gray scale values of the first sub-pixel point in the original pixel point corresponding to the second target pixel point and in the right adjacent pixel point of that original pixel point; G_g(x_i) and G_g(x_{i+1}) are respectively the gray scale values of the second sub-pixel point in the original pixel point corresponding to the second target pixel point and in the right adjacent pixel point of that original pixel point; and G_b(x_i) and G_b(x_{i+1}) are respectively the gray scale values of the third sub-pixel point in the original pixel point corresponding to the second target pixel point and in the right adjacent pixel point of that original pixel point.
9. The image processing method according to claim 4, wherein the number of real pixels per line in the first target image is one less than that of the original image;
The leftmost side of each row in the first target image comprises a first virtual pixel point;
the rightmost side of each row in the first target image comprises a second virtual pixel point.
10. The image processing method according to claim 9, wherein the first virtual pixel point includes one virtual second sub-pixel point, one virtual third sub-pixel point, and one real first sub-pixel point;
the second virtual pixel point comprises a real second sub-pixel point, a real third sub-pixel point and a virtual first sub-pixel point.
11. The image processing method according to claim 9, wherein a gray-scale value of the first sub-pixel in the first virtual pixel is calculated according to a third formula;
the third formula is:
wherein G_r1'(x) is the gray scale value of the first sub-pixel point in the first virtual pixel point of the x-th row, G_r(x_1) is the gray scale value of the first sub-pixel point in the 1st pixel point of the x-th row of the original image, and G_r(x_2) is the gray scale value of the first sub-pixel point in the 2nd pixel point of the x-th row of the original image;
Calculating gray scale values of the second sub-pixel point and the third sub-pixel point in the second virtual pixel point according to a fourth formula;
the fourth formula is:
wherein G_g1'(x) is the gray scale value of the second sub-pixel point in the second virtual pixel point of the x-th row, G_g(x_n) is the gray scale value of the second sub-pixel point in the n-th pixel point of the x-th row of the original image, and G_g(x_{n-1}) is the gray scale value of the second sub-pixel point in the (n-1)-th pixel point of the x-th row of the original image, each row of the original image having n pixel points;
G_b1'(x) is the gray scale value of the third sub-pixel point in the second virtual pixel point of the x-th row, G_b(x_n) is the gray scale value of the third sub-pixel point in the n-th pixel point of the x-th row of the original image, and G_b(x_{n-1}) is the gray scale value of the third sub-pixel point in the (n-1)-th pixel point of the x-th row of the original image.
12. The image processing method according to claim 4, wherein the number of real pixels per line in the second target image is one less than that of the original image;
the leftmost side of each row in the second target image comprises a third virtual pixel point;
the rightmost side of each row in the second target image comprises a fourth virtual pixel point.
13. The image processing method according to claim 12, wherein the third virtual pixel point includes a virtual third sub-pixel point, a real first sub-pixel point, and a real second sub-pixel point;
the fourth virtual pixel point comprises a real third sub-pixel point, a virtual first sub-pixel point and a virtual second sub-pixel point.
14. The image processing method according to claim 13, wherein the gray scale values of the first and second sub-pixel points in the third virtual pixel point are calculated according to a fifth formula;
the fifth formula is:
wherein G_r2'(x) is the gray scale value of the first sub-pixel point in the third virtual pixel point of the x-th row, G_r(x_1) is the gray scale value of the first sub-pixel point in the 1st pixel point of the x-th row of the original image, and G_r(x_2) is the gray scale value of the first sub-pixel point in the 2nd pixel point of the x-th row of the original image;
G_g2'(x) is the gray scale value of the second sub-pixel point in the third virtual pixel point of the x-th row, G_g(x_1) is the gray scale value of the second sub-pixel point in the 1st pixel point of the x-th row of the original image, and G_g(x_2) is the gray scale value of the second sub-pixel point in the 2nd pixel point of the x-th row of the original image, each row of the original image having n pixel points;
calculating the gray scale value of the third sub-pixel point in the fourth virtual pixel point according to a sixth formula;
the sixth formula is:
wherein G_b2'(x) is the gray scale value of the third sub-pixel point in the fourth virtual pixel point of the x-th row, G_b(x_n) is the gray scale value of the third sub-pixel point in the n-th pixel point of the x-th row of the original image, and G_b(x_{n-1}) is the gray scale value of the third sub-pixel point in the (n-1)-th pixel point of the x-th row of the original image.
15. An image processing apparatus, characterized in that the apparatus comprises:
the original image acquisition module is used for acquiring gray scale values of sub-pixel points in the original image;
the target image acquisition module is used for generating a first target image and a second target image and acquiring gray scale values of sub-pixel points in the first target image and the second target image according to the gray scale values of the sub-pixel points in the original image;
and the image display module is used for sequentially and circularly displaying the original image, the first target image and the second target image on a display panel based on the gray scale value of each sub-pixel point.
16. A terminal, the terminal comprising: memory, a processor and an image processing program stored on the memory and executable on the processor, which image processing program when executed by the processor implements the steps of the image processing method according to any of claims 1-14.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2023/081166 WO2024187352A1 (en) | 2023-03-13 | 2023-03-13 | Image processing method and apparatus, and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116529759A true CN116529759A (en) | 2023-08-01 |
Family
ID=87390809
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202380008317.7A Pending CN116529759A (en) | 2023-03-13 | 2023-03-13 | Image processing method, device and terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116529759A (en) |
WO (1) | WO2024187352A1 (en) |
Also Published As
Publication number | Publication date |
---|---|
WO2024187352A1 (en) | 2024-09-19 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |