Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The anisotropic filtering method and device provided by the embodiments of the application mainly aim to optimize anisotropic filtering performance without any loss of accuracy.
The key terms related to the embodiments of the application are anisotropic filtering (AF for short), bilinear filtering (Bilinear), texture element (texel, T for short), pixel point (P for short), and sampling point (Sample, S for short).
Regarding anisotropy: if a group of four 2×2 adjacent pixels on the screen maps to a quadrilateral on the texture map and that quadrilateral is not square, there is an angular inclination between the texture map and the screen, i.e., anisotropy. In this case, if plain bilinear filtering is used, information in the distance becomes blurry; anisotropic filtering is then needed so that the textures on the surface of the inclined object remain clear and sharp. Anisotropic filtering performs additional filtering passes according to the included angle between the texture map and the screen, thereby achieving a better display effect.
In connection with the foregoing background section, although the sharpness of current anisotropic filtering methods is better than that of bilinear filtering, it comes at the cost of a significant decrease in performance. Taking AF16x (i.e., one pixel split into 16 samples) as an example and referring to fig. 1, the quadrilateral corresponding to the four pixels P0 to P3 is non-square, so a certain angle exists between the texture map and the display screen. According to this included angle, a pixel point is split into a plurality of sampling points S (which form a sampling point sequence); for example, the pixel point P0 in fig. 1 is split into 16 sampling points S. Bilinear filtering is then performed on the 16 sampling points S to obtain 16 intermediate results, and these 16 intermediate results are weighted and summed according to certain weights to obtain the final result of the pixel point P0. Therefore, although current anisotropic filtering achieves a better effect, the number of sampling points undergoing bilinear filtering is 16 times that of ordinary bilinear filtering, which greatly affects performance.
Next, to better explain the basic idea of the present application, the flow of the conventional anisotropic filtering method currently adopted is briefly described, as shown in fig. 2. Step S201 determines whether AF is on. When AF is on, step S202 is entered to calculate the AF parameters (such as LOD, Ratio, Step, Sample Number, and the weight of each sample S). An ellipse is determined from the received 4P (i.e., four pixels): the logarithm of the length of the ellipse's minor axis is taken as the LOD value; Ratio is the ratio of the major axis to the minor axis of the ellipse; Ratio rounded up to a multiple of 2 gives the Sample Number (i.e., the x in AFnx); and Step is the distance between two adjacent sampling points S in the UV direction, from which the coordinates of all sampling points S can be obtained. The weight of each middle sampling point S is 1/Ratio, and the weight of each of the two outermost sampling points S is (1 − (Sample Number − 2)/Ratio)/2. After the information of all sampling points S, including their coordinates and weights, has been calculated, the flow proceeds to steps S203 to S206, in which bilinear filtering is performed on all sampling points S with the sampling point S as the granularity. For each sampling point S, step S204 calculates the coordinates of the corresponding 4T (i.e., the four corresponding texture elements) and fetches the color information of the 4T from memory; step S205 calculates the weights of the corresponding 4T according to the coordinates of the sampling point S; and step S206 performs the bilinear filtering calculation according to the 4T color information and weights, yielding the bilinear filtering result of each sampling point S. At the end of the loop, the flow proceeds to step S207, where the bilinear filtering results of all sampling points S are accumulated to obtain the final result.
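As a concrete reference for the per-sample flow of fig. 2, the following is a minimal Python sketch of steps S203 to S207 (bilinear filtering per sampling point S, then weighted accumulation). The `texture` dictionary, the texel-center-at-integer-plus-0.5 convention, and the function names are illustrative assumptions, not part of the original method description.

```python
import math

def bilinear(texture, u, v):
    """Bilinear filter at sample (u, v) (steps S204-S206).

    Assumes texel centers sit at integer + 0.5 coordinates, so the
    four nearest texels start at floor(u - 0.5), floor(v - 0.5).
    """
    x0 = math.floor(u - 0.5)
    y0 = math.floor(v - 0.5)
    fx = (u - 0.5) - x0          # horizontal blend factor
    fy = (v - 0.5) - y0          # vertical blend factor
    result = 0.0
    for dy, wy in ((0, 1.0 - fy), (1, fy)):
        for dx, wx in ((0, 1.0 - fx), (1, fx)):
            result += texture[(x0 + dx, y0 + dy)] * wx * wy
    return result

def conventional_af(texture, samples, weights):
    """Conventional AF: one bilinear filter per sampling point S,
    then accumulate the per-sample results by weight (step S207)."""
    return sum(w * bilinear(texture, u, v)
               for (u, v), w in zip(samples, weights))
```

With n samples this performs 4×n texel multiplications, which is the cost the following sections set out to reduce.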
The application is mainly based on the following three points:
1. The distance between adjacent sampling points S is 1;
2. Each texel T is a square with a side length of 1;
3. Based on the previous two points, it can be known that there is at least one repeated texel T among the eight texels T corresponding to two adjacent sampling points S.
Based on point 3, i.e., two adjacent sampling points S share at least one identical texel T, the basic idea of the application is to avoid computing the same texel T repeatedly, thereby improving performance.
Before proving point 3, it is explained how the nearest four texels T are obtained from one coordinate. As shown in fig. 3, if texture elements T0 to T3 are to be the nearest four texture elements T, a square is constructed whose vertices are the center points of T0 to T3, i.e., the square enclosed by the left-side broken line in fig. 3. If the coordinates of a sampling point S1 fall within this square, the nearest four texture elements around S1 are T0 to T3; the side length of the square enclosed by the broken line is 1. Similarly, for the square enclosed by the right-side broken line in fig. 3, the nearest four texture elements around the sampling point S2 are T4 to T7.
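The dashed-square rule of fig. 3 can be written as a small helper. The following is a sketch assuming texel T(x, y) occupies the unit square [x, x+1) × [y, y+1), so that its center is (x + 0.5, y + 0.5); the function name is illustrative.

```python
import math

def nearest_four_texels(u, v):
    """Integer coordinates of the four texels nearest to sample (u, v).

    With texel centers at integer + 0.5, the sample lies inside the
    dashed square whose vertices are those four centers.
    """
    x0 = math.floor(u - 0.5)
    y0 = math.floor(v - 0.5)
    return [(x0, y0), (x0 + 1, y0), (x0, y0 + 1), (x0 + 1, y0 + 1)]
```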
Based on this, point 3 above is proved:
Assume that the eight texels T corresponding to adjacent sampling points S1 and S2 are not repeated, denoted T0 to T3 and T4 to T7, respectively. Since the sampling point S1 falls within the square enclosed by the left broken line, the distance L1 between S1 and the middle solid black line satisfies L1 > 0.5. Similarly, since the sampling point S2 falls within the square enclosed by the right broken line, the distance L2 between S2 and the middle solid black line satisfies L2 > 0.5. Therefore, the distance between S1 and S2 satisfies L12 ≥ L1 + L2 > 1, which contradicts point 1. It follows that at least one texture element T among the eight texture elements T corresponding to two adjacent sampling points S is repeated.
Based on the previous three points, it can be seen that when an original pixel point P is split into n sampling points S on the texture map, the number of texture elements T actually involved in the filtering calculation is necessarily smaller than 4×n.
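Point 3 can also be checked numerically. The sketch below sweeps sample positions and angles and confirms that two sampling points exactly one texel apart always share at least one of their nearest-four texels (texel centers at integer + 0.5 is an assumption of the sketch).

```python
import math

def nearest_four(u, v):
    # Four texels nearest to (u, v), texel centers at integer + 0.5.
    x0, y0 = math.floor(u - 0.5), math.floor(v - 0.5)
    return {(x0 + dx, y0 + dy) for dx in (0, 1) for dy in (0, 1)}

# Sweep angles of the sample line and start positions: two samples
# exactly one texel apart always share at least one texel.
min_shared = 4
for i in range(50):
    theta = math.pi / 2 * i / 49
    for j in range(40):
        u, v = 3 + j * 0.025, 3 + j * 0.0175
        s1 = nearest_four(u, v)
        s2 = nearest_four(u + math.cos(theta), v + math.sin(theta))
        min_shared = min(min_shared, len(s1 & s2))
assert min_shared >= 1
```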
As can be seen from the flow of the conventional anisotropic filtering method described above, the current method performs bilinear filtering with the sampling point S as the granularity, so the number of texture elements T undergoing bilinear filtering calculation is 4×n, and some texture elements T are computed repeatedly. The basic idea of the application is therefore to avoid repeated filtering calculations on the same texel T, thereby improving performance. To achieve this, the conventional anisotropic filtering method needs to be improved: instead of operating with the sampling point S as the granularity, the operation is performed with the texture element T as the granularity, and immediately after the information of all sampling points S is obtained, the information (such as the weights) of all required texture elements T is calculated.
The anisotropic filtering method provided by the application is described below in connection with specific embodiments, and the method can be executed by a terminal, a server, or other computer equipment.
In one embodiment, as shown in fig. 4, the anisotropic filtering method provided by the present application specifically includes the following steps:
In step S401, a sampling point sequence corresponding to the pixel point on the texture map is determined.
The pixel point P can be split into a plurality of sampling points on the texture map, and the plurality of sampling points form a sampling point sequence. For example, the pixel point P may be split into two (AF2x), four (AF4x), eight (AF8x), or sixteen (AF16x) sampling points on the texture map; the anisotropic filtering method provided by the present application is applicable to all AF cases. In this embodiment, the distance between adjacent sampling points in the sampling point sequence is equal to the side length of a texture element in the texture map, namely 1.
Step S402, determining a texture element group corresponding to each sampling point on the texture map in the sampling point sequence;
In this step, for each sampling point S in the sequence of sampling points, a respective plurality of corresponding texels T are found on the texture map to form a texel group.
Specifically, step S402 includes determining, for each sampling point, the plurality of texture elements on the texture map that are closest to the sampling point according to the coordinates of the sampling point on the texture map, and obtaining the corresponding texture element group from this plurality of texture elements. In particular, for each sampling point S, the four texels T closest to it may be determined on the texture map as its corresponding texel group.
Step S403, for each sampling point in the sampling point sequence, obtaining the weight of each texture element in the corresponding texture element group;
In this step, for each sampling point S in the sampling point sequence, the weight of each texel T in the corresponding texel group needs to be obtained before the subsequent filtering calculation is performed.
Step S404, based on the weight of each texture element in each texture element group, accumulating the weights corresponding to the same texture element to obtain the total weight of each texture element;
Since different sampling points S correspond to some identical texture elements T, this step adds the weights of each texture element T corresponding to different sampling points S as the total weight of the texture element T.
Step S405, filtering calculation is performed based on the total weight of each texture element and the color information of each texture element, so as to obtain a filtering calculation result corresponding to each texture element;
In this step, the filtering calculation is performed based on the total weight of each texture element and the color information of each texture element to obtain the corresponding filtering calculation result, where the filtering calculation performed on each texture element may comprise bilinear filtering. Since the number of texels involved in the filtering calculation is necessarily less than 4×n, AF performance is optimized; and since the weights of the texels concerned have already been calculated in the preceding steps, no further calculation of this information is needed in the filtering loop that takes texels as the granularity.
In some embodiments, step S405 specifically includes taking a plurality of texture elements as a loop granularity of filtering calculation, and calculating to obtain a filtering calculation result corresponding to each texture element according to the total weight and the color information of the corresponding texture element.
In this embodiment, the filtering calculation may be performed with, for example, four texture elements T as the granularity of the filtering calculation, and in each cycle, color information of the four corresponding texture elements T is extracted from the memory, and for each texture element T, a filtering calculation result corresponding to each texture element is calculated according to the total weight and the color information corresponding to each texture element T.
The filtering calculation result corresponding to each texture element is obtained from the total weight and the color information of that texture element; specifically, for each texture element, the corresponding total weight and color information are multiplied to obtain the filtering calculation result corresponding to that texture element.
Step S406, obtaining the color information of the pixel point according to the filtering calculation result corresponding to each texture element.
The step is mainly to integrate the filtering calculation results corresponding to each texture element T to obtain the color information of the pixel point P. The step S406 may specifically include accumulating the filtering calculation results corresponding to each texture element to obtain color information of the pixel point. That is, the filtering calculation results corresponding to each of the texture elements T obtained in step S405 are added as color information of the pixel point P.
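Steps S401 to S406 can be sketched end to end as follows. This is a minimal Python illustration under assumed conventions (a `texture` mapping from integer texel coordinates to color values, texel centers at integer + 0.5), not the device implementation.

```python
import math
from collections import defaultdict

def anisotropic_filter(texture, samples, sample_weights):
    """Texel-granularity AF (steps S401-S406): merge the bilinear
    weights of every texel across all samples first, then multiply
    each distinct texel by its total weight exactly once."""
    total_weight = defaultdict(float)

    # S402/S403: per sample, find the 4 nearest texels and their
    # bilinear weights; S404: accumulate weights per texel coordinate.
    for (u, v), ws in zip(samples, sample_weights):
        x0, y0 = math.floor(u - 0.5), math.floor(v - 0.5)
        fx, fy = (u - 0.5) - x0, (v - 0.5) - y0
        for dy, wy in ((0, 1.0 - fy), (1, fy)):
            for dx, wx in ((0, 1.0 - fx), (1, fx)):
                total_weight[(x0 + dx, y0 + dy)] += ws * wx * wy

    # S405/S406: one multiply per distinct texel, then accumulate.
    return sum(texture[t] * w for t, w in total_weight.items())
```

The merged result equals the conventional per-sample result, while the number of texel multiplications, `len(total_weight)`, is strictly less than 4×n whenever adjacent samples share texels.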
Next, an example of the anisotropic filtering method of the present application is described with reference to fig. 5. Step S501 determines that AF is on. When AF is on, step S502 is entered to calculate the AF parameters (such as LOD, Ratio, Step, Sample Number, and the weight of each sample S). After the information of all samples S is obtained in step S502, the weight information of all texture elements T corresponding to the samples S needs to be further obtained. Because some identical texture elements T are used by different samples S, the weights of each texture element T corresponding to different samples S are added up in step S503 as the total weight of that texture element T, after which steps S504 to S506 are entered. The number of texture elements T actually requiring bilinear filtering calculation is necessarily smaller than 4×n, so AF is optimized, and steps S504 to S506 no longer need to recompute the relevant information.
For the merging of the weights of the texels, in an embodiment, the adding the weights corresponding to the same texel based on the weights of each texel in the texel groups to obtain the total weight of each texel specifically includes:
Determining the texture elements with the same coordinates as the same texture element based on the coordinates of each texture element in each texture element group; the weights corresponding to the same texture element are accumulated to obtain the total weight of each texture element.
In this embodiment, for each texel T in each texel group, the texels T with the same coordinates are combined, and the weights thereof are accumulated, and the final weight is the total weight of the texel T.
Further, for each weight of each texel, in an embodiment, the obtaining the weight of each texel in the corresponding texel group for each sampling point in the sampling point sequence specifically includes:
And acquiring the weight and the coordinates of each texture element in the corresponding texture element group for each sampling point in the sampling point sequence. That is, for a plurality of sampling points S split on the texture map for the pixel point P, the coordinates of each texture element are acquired while the weights of the corresponding texture elements are acquired so that the weights of the same texture element are combined according to the coordinates of the texture elements.
In the embodiment, the weight and the coordinates of each texture element in the corresponding texture element group are obtained for each sampling point in the sampling point sequence, specifically including calculating the weight and the coordinates of each texture element in the corresponding texture element group according to the coordinates and the weights of the sampling points for each sampling point in the sampling point sequence. That is, for each sampling point S in the sampling point sequence, the sampling points S are cycled with granularity, and the coordinates and weights of the corresponding texture elements T are calculated according to the coordinates and weights of each sampling point S.
The weight merging performed at the granularity of the sampling point S (corresponding to step S503), which precedes the filtering calculation at 4T granularity, is described with reference to fig. 6. After the relevant information of the sampling points S is obtained in step S502, steps S601 to S603 may be entered: looping with the sampling point S as the granularity, the coordinates and weights of the four texture elements corresponding to each sampling point S are calculated from the coordinates and weight of that sampling point S; texture elements with the same coordinates are then merged and their weights accumulated; and the accumulated weight is the total weight of the texture element, used in the subsequent filtering calculation at 4T granularity.
It can be seen that although the conventional method only loops over the sampling points S, while the method provided by the present application loops over both the sampling points S and the texture elements T (seemingly more loop iterations), the number of filtering calculations, which is what actually affects AF performance, is reduced.
The efficiency with which the anisotropic filtering method provided by the application improves AF performance is analyzed below.
The anisotropic filtering method provided by the application can be suitable for all AF cases. And in the calculation process, all needed information is reserved, so that the method is a lossless optimization method. The efficiency of the optimization will vary from case to case, and the best and worst cases of the method will be mainly described below.
Two factors influence the optimization efficiency: ① the number n of sampling points S into which the pixel point P is split, and ② the included angle θ between the line connecting all sampling points S and the horizontal direction. It will be appreciated that the larger the number n of sampling points S, the larger the number of texture elements T, and the smaller n, the fewer the texture elements T; thus n does not affect the monotonicity of the texel-count function. For simplicity of calculation, n is therefore set to 16, and the influence of the included angle θ on the number of texture elements T is mainly described. Since the application mainly reduces the number of texture elements T involved in the filtering calculation, the optimization efficiency is measured as the ratio of the reduction in the number of texture elements T to 4×n (the number of texel filtering calculations required with S as the granularity).
Referring to fig. 7, the number of texels T that need to be calculated is approximately the area of the enclosing parallelogram. The height of the parallelogram is (n+1)·sin θ, and the length of the base varies with θ: the closer θ is to 0°, the longer the base, approaching n+1; the closer θ is to 90°, the closer the base is to 2. Because of the symmetry of this variation, only θ ∈ [45°, 90°] is discussed here. It is therefore first necessary to find the relationship between the base and the angle θ. As can be seen from fig. 7, the length of the base equals the average number of texture elements T occupied per row, and this average can be obtained by computing an expected value. Since the distance between adjacent sampling points S is 1, adjacent sampling points S are separated by cos θ and sin θ in the horizontal and vertical directions, respectively.
Referring to fig. 8, if the horizontal projection of the first of two adjacent sampling points S falls within an interval of length 1 − cos θ, the second sampling point S still falls within the dashed rectangle, so no new texture element T is used in the horizontal direction; if the first sampling point S falls within the remaining interval of length cos θ, the second sampling point S uses a new texture element T, in which case the row contains three texture elements T. The expected length of the base is therefore 2·(1 − cos θ) + 3·cos θ, and the number of texture elements T used is (2·(1 − cos θ) + 3·cos θ)·(n+1)·sin θ. Analyzing the derivative of this function shows that the maximum occurs at about θ = 68°, and the minima occur near θ = 45° and θ = 90°.
Substituting these angles into the formula gives a worst case of approximately 37.43 texels T and a best case of approximately 34 texels T. Compared with the 64 texels T calculated with S as the granularity, the improvement is therefore about 41.5% and 46.875%, respectively. Further, since the formula contains the factor n+1, the larger n is, the smaller the influence of the "+1" on the formula, meaning that the performance improvement becomes more pronounced as the number n of sampling points S grows.
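The estimate above can be checked numerically. The sketch below evaluates the texel-count function (2(1 − cos θ) + 3 cos θ)·(n + 1)·sin θ for n = 16 over θ ∈ [45°, 90°]; the function name is an illustrative assumption.

```python
import math

def texel_count(theta, n=16):
    """Estimated number of distinct texels touched: expected base
    length 2(1 - cos t) + 3 cos t times parallelogram height (n+1) sin t."""
    base = 2.0 * (1.0 - math.cos(theta)) + 3.0 * math.cos(theta)
    return base * (n + 1) * math.sin(theta)

# The maximum over [45°, 90°] (worst case) lands near 68°, at roughly
# 37.43 texels; θ = 90° gives 34 texels, versus 64 at S granularity.
worst = max(texel_count(math.radians(d)) for d in range(45, 91))
saving_90 = (64 - texel_count(math.radians(90))) / 64   # 46.875%
```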
The performance improvement will be described below by taking θ as 90 ° and a non-specific angle as examples.
For θ 90 °:
When anisotropic filtering generates a plurality of sampling points S, the distance between two adjacent sampling points S is 1, and each sampling point S is filtered in the bilinear manner described above, i.e., the four texture elements T around the sampling point S are combined according to the corresponding weights to calculate a new piece of color information, from which the value of the pixel point P is obtained. Taking AF4x as an example, as shown in fig. 9(a), P is the position where a pixel corresponds to the texture map, and this pixel P is split into four sampling points S, namely S0, S1, S2 and S3. The sampling point S0 takes the nearest four surrounding texture elements for filtering, namely T0, T1, T2 and T3; the sampling point S1 uses T2, T3, T4 and T5; the sampling point S2 uses T4, T5, T6 and T7; and the sampling point S3 uses T6, T7, T8 and T9. It can be seen that the texels T2 to T7 are each used twice. Therefore, the present application calculates the texture elements T0 to T3 of the sampling point S0 together with the texture elements T2 and T3 of the sampling point S1, and calculates the texture elements T4 and T5 of the sampling point S1 together with the texture elements T4 to T7 of the sampling point S2 and the texture elements T6 and T7 of the sampling point S3, thereby avoiding repeated multiplications on the same texture elements and improving performance.
Specifically, according to the method provided by the application, the coordinates and weights of the corresponding four texture elements T are calculated from the coordinates and weight of each sampling point S. Namely, the weights of the texture elements T0 to T3 corresponding to the sampling point S0 are Weight_S0_T0, Weight_S0_T1, Weight_S0_T2 and Weight_S0_T3; the weights of the texture elements T2 to T5 corresponding to the sampling point S1 are Weight_S1_T2, Weight_S1_T3, Weight_S1_T4 and Weight_S1_T5; the weights of the texture elements T4 to T7 corresponding to the sampling point S2 are Weight_S2_T4, Weight_S2_T5, Weight_S2_T6 and Weight_S2_T7; and the weights of the texture elements T6 to T9 corresponding to the sampling point S3 are Weight_S3_T6, Weight_S3_T7, Weight_S3_T8 and Weight_S3_T9.
Then, the texture elements T corresponding to all the sampling points S are merged by equal coordinates, the weights of texture elements T with the same coordinates are accumulated, and the resulting weights are used as the total weights of the texture elements T. Namely, the weight of T0 is Weight_S0_T0; the weight of T1 is Weight_S0_T1; the weight of T2 is (Weight_S0_T2 + Weight_S1_T2); the weight of T3 is (Weight_S0_T3 + Weight_S1_T3); the weight of T4 is (Weight_S1_T4 + Weight_S2_T4); the weight of T5 is (Weight_S1_T5 + Weight_S2_T5); the weight of T6 is (Weight_S2_T6 + Weight_S3_T6); the weight of T7 is (Weight_S2_T7 + Weight_S3_T7); the weight of T8 is Weight_S3_T8; and the weight of T9 is Weight_S3_T9.
After the coordinate and weight information of all required texture elements T has been obtained, the loop runs with 4T as the granularity, the bilinear filtering calculation is performed according to the obtained total weights and color information, and all the filtering calculation results of the split pixel point P are accumulated to obtain the final result of the pixel point P. This flow can be expressed by the following calculation formula:

P = S0*Weight_S0 + S1*Weight_S1 + S2*Weight_S2 + S3*Weight_S3
= T0*Weight_S0_T0 + T1*Weight_S0_T1 + T2*Weight_S0_T2 + T3*Weight_S0_T3 + T2*Weight_S1_T2 + T3*Weight_S1_T3 + T4*Weight_S1_T4 + T5*Weight_S1_T5 + T4*Weight_S2_T4 + T5*Weight_S2_T5 + T6*Weight_S2_T6 + T7*Weight_S2_T7 + T6*Weight_S3_T6 + T7*Weight_S3_T7 + T8*Weight_S3_T8 + T9*Weight_S3_T9
= T0*Weight_S0_T0 + T1*Weight_S0_T1 + T2*(Weight_S0_T2 + Weight_S1_T2) + T3*(Weight_S0_T3 + Weight_S1_T3) + T4*(Weight_S1_T4 + Weight_S2_T4) + T5*(Weight_S1_T5 + Weight_S2_T5) + T6*(Weight_S2_T6 + Weight_S3_T6) + T7*(Weight_S2_T7 + Weight_S3_T7) + T8*Weight_S3_T8 + T9*Weight_S3_T9.
According to the above calculation, the result obtained after optimization is unchanged, so the optimization does not lose accuracy. Originally 4 filtering passes (16 multiplications) were needed; after optimization only the equivalent of 2.5 filtering passes (10 multiplications) is needed, improving performance by about 37.5%. The more sampling points the AF generates, the more significant the performance improvement; at AF16x, performance can be improved by up to about 46.875%.
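The θ = 90° AF4x example can be verified with a short sketch. The texel indices follow fig. 9(a) (sample Si uses texels 2i to 2i+3, so consecutive samples share two texels); the random weights stand in for the actual bilinear weights Weight_Sx_Ty and are an assumption of the sketch.

```python
import random
from collections import defaultdict

random.seed(1)
# AF4x, theta = 90°: sample S_i uses texels (2i, 2i+1, 2i+2, 2i+3).
texel_ids = [(0, 1, 2, 3), (2, 3, 4, 5), (4, 5, 6, 7), (6, 7, 8, 9)]
color = [random.random() for _ in range(10)]          # colors of T0..T9
w = {(s, t): random.random() for s, ts in enumerate(texel_ids) for t in ts}

# Per-sample evaluation: 4 bilinear filters, 16 multiplications.
per_sample = sum(color[t] * w[(s, t)]
                 for s, ts in enumerate(texel_ids) for t in ts)

# Merged evaluation: accumulate weights per texel first,
# then one multiplication per distinct texel.
total = defaultdict(float)
for s, ts in enumerate(texel_ids):
    for t in ts:
        total[t] += w[(s, t)]
merged = sum(color[t] * tw for t, tw in total.items())

assert abs(per_sample - merged) < 1e-9
assert len(total) == 10   # 10 multiplications instead of 16
```

Both evaluations produce the same value, but the merged form performs 10 multiplications instead of 16, matching the 37.5% figure above.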
Referring to fig. 9(b), for the example in which θ is a non-special angle and AF8x is used, the loop is similarly run with the sampling point S as the granularity, and the coordinates and weights of the four texels T corresponding to each sampling point S are calculated. Namely, the weights of the texture elements T0, T1, T2 and T3 corresponding to the sampling point S0 are Weight_S0_T0, Weight_S0_T1, Weight_S0_T2 and Weight_S0_T3; the weights of T2, T3, T5 and T6 corresponding to S1 are Weight_S1_T2, Weight_S1_T3, Weight_S1_T5 and Weight_S1_T6; the weights of T4, T5, T7 and T8 corresponding to S2 are Weight_S2_T4, Weight_S2_T5, Weight_S2_T7 and Weight_S2_T8; the weights of T7, T8, T10 and T11 corresponding to S3 are Weight_S3_T7, Weight_S3_T8, Weight_S3_T10 and Weight_S3_T11; the weights of T9, T10, T13 and T14 corresponding to S4 are Weight_S4_T9, Weight_S4_T10, Weight_S4_T13 and Weight_S4_T14; the weights of T13, T14, T16 and T17 corresponding to S5 are Weight_S5_T13, Weight_S5_T14, Weight_S5_T16 and Weight_S5_T17; the weights of T12, T13, T15 and T16 corresponding to S6 are Weight_S6_T12, Weight_S6_T13, Weight_S6_T15 and Weight_S6_T16; and the weights of T15, T16, T18 and T19 corresponding to S7 are Weight_S7_T15, Weight_S7_T16, Weight_S7_T18 and Weight_S7_T19.
The texture elements T corresponding to all the sampling points S are then merged by equal coordinates, the weights of texture elements T with the same coordinates are accumulated, and the resulting weights are used as the total weights of the texture elements T. Namely, the weight of T0 is Weight_S0_T0; the weight of T1 is Weight_S0_T1; the weight of T2 is (Weight_S0_T2 + Weight_S1_T2); the weight of T3 is (Weight_S0_T3 + Weight_S1_T3); the weight of T4 is Weight_S2_T4; the weight of T5 is (Weight_S1_T5 + Weight_S2_T5); the weight of T6 is Weight_S1_T6; the weight of T7 is (Weight_S2_T7 + Weight_S3_T7); the weight of T8 is (Weight_S2_T8 + Weight_S3_T8); the weight of T9 is Weight_S4_T9; the weight of T10 is (Weight_S3_T10 + Weight_S4_T10); the weight of T11 is Weight_S3_T11; the weight of T12 is Weight_S6_T12; the weight of T13 is (Weight_S4_T13 + Weight_S5_T13 + Weight_S6_T13); the weight of T14 is (Weight_S4_T14 + Weight_S5_T14); the weight of T15 is (Weight_S6_T15 + Weight_S7_T15); the weight of T16 is (Weight_S5_T16 + Weight_S6_T16 + Weight_S7_T16); the weight of T17 is Weight_S5_T17; the weight of T18 is Weight_S7_T18; and the weight of T19 is Weight_S7_T19.
Then, the loop runs with 4T as the granularity, the bilinear filtering calculation is performed according to the obtained total weights and color information, and all the filtering calculation results of the split pixel point P are accumulated to obtain the final result of the pixel point P. The above procedure can be expressed by the following calculation formula:

P = T0*Weight_S0_T0 + T1*Weight_S0_T1 + T2*(Weight_S0_T2 + Weight_S1_T2) + T3*(Weight_S0_T3 + Weight_S1_T3) + T4*Weight_S2_T4 + T5*(Weight_S1_T5 + Weight_S2_T5) + T6*Weight_S1_T6 + T7*(Weight_S2_T7 + Weight_S3_T7) + T8*(Weight_S2_T8 + Weight_S3_T8) + T9*Weight_S4_T9 + T10*(Weight_S3_T10 + Weight_S4_T10) + T11*Weight_S3_T11 + T12*Weight_S6_T12 + T13*(Weight_S4_T13 + Weight_S5_T13 + Weight_S6_T13) + T14*(Weight_S4_T14 + Weight_S5_T14) + T15*(Weight_S6_T15 + Weight_S7_T15) + T16*(Weight_S5_T16 + Weight_S6_T16 + Weight_S7_T16) + T17*Weight_S5_T17 + T18*Weight_S7_T18 + T19*Weight_S7_T19.
The above flow shows that the original method, with S as the granularity, needs 8 bilinear filter passes (32 multiplications), while the present application, looping at 4T granularity, needs only the equivalent of 5 bilinear filter passes, i.e., 20 multiplications. Performance is improved by about 37.5%.
Therefore, on the basis of the original AF calculation method, the application improves performance by selecting and merging texture elements; the main idea is to avoid repeated operations on the same texture elements, thereby dynamically reducing the number of filtering computations and optimizing anisotropic filtering performance under the lossless condition. The main advantages in the processing flow and the technical effects are as follows:
① The application is a lossless optimization method, which ensures that the information of all required texture elements T is calculated into the final result;
② The more sampling points S the pixel is split into, the more obvious the performance improvement, which can approach 50%;
③ The application changes the Bilinear calculation from S granularity to 4T granularity. In the process, the coordinate and weight calculation of the texture elements T in the original method only needs to be moved ahead of the 4T loop.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or sub-steps.
Based on the same inventive concept, an embodiment of the application further provides an anisotropic filtering device for implementing the anisotropic filtering method described above. The implementation of the solution provided by the device is similar to that described in the method above, so for the specific limitations in one or more embodiments of the anisotropic filtering device provided below, reference may be made to the limitations of the anisotropic filtering method above, which are not repeated here.
In one embodiment, as shown in FIG. 10, an anisotropic filtering device 1100 is provided, the device comprising:
A sampling point determining module 1001, configured to determine a sampling point sequence corresponding to a pixel point on a texture map, where the distance between adjacent sampling points in the sampling point sequence is equal to the side length of a texture element in the texture map;
A texel determination module 1002, configured to determine a respective set of texels on the texture map for each sampling point in the sequence of sampling points;
A weight obtaining module 1003, configured to obtain, for each sampling point in the sampling point sequence, a weight of each texture element in the corresponding texture element group;
A weight accumulating module 1004, configured to accumulate weights corresponding to the same texture element based on the weight of each texture element in each texture element group, to obtain a total weight of each texture element;
A filtering calculation module 1005, configured to perform filtering calculation based on the total weight of each texture element and the color information of each texture element, to obtain a filtering calculation result corresponding to each texture element;
A result processing module 1006, configured to obtain the color information of the pixel point according to the filtering calculation result corresponding to each texture element.
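The cooperation of modules 1001 to 1006 can be illustrated with a minimal sketch. It assumes a one-dimensional sequence of sampling points with equal per-sample weights, a 2×2 bilinear footprint per sampling point, and a texture represented as a coordinate-to-color dictionary; the function names are hypothetical and the sketch is illustrative, not a definitive implementation of the device.

```python
import math
from collections import defaultdict

def bilinear_footprint(u, v):
    """Four nearest texels and their bilinear weights for one sampling
    point (texel determination 1002 / weight obtaining 1003)."""
    u0, v0 = math.floor(u - 0.5), math.floor(v - 0.5)
    fu, fv = (u - 0.5) - u0, (v - 0.5) - v0
    return [((u0,     v0    ), (1 - fu) * (1 - fv)),
            ((u0 + 1, v0    ), fu       * (1 - fv)),
            ((u0,     v0 + 1), (1 - fu) * fv),
            ((u0 + 1, v0 + 1), fu       * fv)]

def filter_pixel(samples, texture):
    """Accumulate per-texel weights across samples (1004), multiply each
    unique texel's weight by its color once (1005), and sum (1006)."""
    sample_weight = 1.0 / len(samples)         # equal weights: an assumption
    total = defaultdict(float)
    for u, v in samples:
        for coord, w in bilinear_footprint(u, v):
            total[coord] += sample_weight * w  # same coordinate => same texel
    color = sum(w * texture[coord] for coord, w in total.items())
    return color, len(total)                   # len(total) = unique texels hit
```

On a constant-color texture the result equals that color exactly, because the accumulated weights sum to 1. This illustrates why the merging is lossless: every texel's contribution is preserved, and only the redundant weight-color multiplications are removed.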
In one embodiment, the weight accumulation module 1004 is configured to determine, based on the coordinates of each texture element in each texture element group, the texture elements with the same coordinates as the same texture element, and accumulate the weights corresponding to the same texture element to obtain the total weight of each texture element.
In one embodiment, the weight obtaining module 1003 is configured to obtain, for each sampling point in the sampling point sequence, a weight and coordinates of each texture element in the corresponding texture element group.
In one embodiment, the weight obtaining module 1003 is configured to calculate, for each sampling point in the sampling point sequence, a weight and a coordinate of each texture element in the corresponding texture element group according to the coordinate and the weight of the sampling point.
In one embodiment, the filtering calculation module 1005 is configured to take a plurality of texture elements as the loop granularity of the filtering calculation, and to calculate the filtering calculation result corresponding to each texture element according to the total weight and the color information of the corresponding texture element.
In one embodiment, the filtering calculation module 1005 is configured to multiply, for each texture element, the corresponding total weight and the color information to obtain a respective filtering calculation result for each texture element.
In one embodiment, the result processing module 1006 is configured to accumulate the filtering calculation results corresponding to each of the texture elements to obtain color information of the pixel point.
In one embodiment, the texel determining module 1002 is configured to determine, for each sampling point, a plurality of texels on the texture map that are closest to the sampling point according to coordinates of the sampling point on the texture map, and obtain the corresponding texel group according to the plurality of texels.
In one embodiment, the filtering calculation comprises bilinear filtering.
The various modules in the anisotropic filtering device described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware in, or independent of, a processor in a computer device, or may be stored as software in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the above modules.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the steps of the method embodiments described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take various forms such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of these technical features are described; however, as long as there is no contradiction in a combination of technical features, it should be considered to be within the scope of this specification.
The foregoing examples represent only a few embodiments of the application, and their description is specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications may be made by those skilled in the art without departing from the spirit of the application, all of which fall within the protection scope of the application. Accordingly, the scope of protection of the application shall be determined by the appended claims.