CN101271588B - Reconstructable Geometry Shadow Map Method
- Publication number: CN101271588B
- Authority: CN (China)
Abstract
Description
Technical Field

The present invention relates to graphics processing, and more particularly to shadow rendering.

Background
In computer graphics, shadow mapping and shadow volumes are two commonly used real-time shadowing techniques. The shadow volume, proposed by Frank Crow in 1977, uses geometric methods to compute the regions occluded by three-dimensional (3-D) objects. The algorithm uses the stencil buffer to determine whether a given pixel (the test pixel) lies in shadow. The main advantage of shadow volumes is that they are accurate per pixel, whereas the accuracy of a shadow map depends on the size of the texture memory and on how the shadow is projected. Shadow volumes, however, require a large amount of hardware fill time, so they often run slower than shadow maps, especially for large and complex geometric scenes.
Shadow mapping is a technique for adding shadows to 3-D computer images, proposed by Lance Williams in 1978. The algorithm is widely used both in pre-rendered scenes and in real-time applications. The depths of the occluder and of the test pixel are compared from the light's point of view, that is, the method tests whether a given test pixel is visible to the light source, in order to establish the occluder's shadow. Shadow mapping is a simple and effective image-space method and is one of the shadow techniques commonly chosen when high speed is required. However, shadow maps suffer from aliasing errors and depth-bias issues, and overcoming these two shortcomings remains a research topic in shadow rendering.
Aliasing errors in shadow maps fall into two categories: perspective aliasing errors and projective aliasing errors. Perspective aliasing appears when shadow edges are magnified. Projective aliasing occurs when the light rays are nearly parallel to a geometric surface and stretch across a large depth range. Another problem with most shadow-map techniques is depth bias. To avoid false self-shadowing, Williams disclosed a constant depth offset technique that adds an offset to the sampled depth before comparing it with the true surface. Unfortunately, too large an offset can cause false non-shadowing (the occluder appears to float above the light receiver) and push the shadow back too far. In practice it is very difficult to choose the offset directly, and no universally acceptable value exists for every scene.
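For reference, a minimal sketch of the classical depth test being discussed here, including Williams' constant bias, is shown below. The array layout and the `bias` parameter are illustrative assumptions of this sketch, not details taken from the patent.

```python
import numpy as np

def standard_shadow_test(shadow_map, p_light, bias=1e-3):
    """Classical shadow-map test: compare the test pixel's light-space depth
    with the depth stored at its (x, y) texel in the shadow map.

    shadow_map : 2-D array of depths rendered from the light's viewpoint
    p_light    : (x, y, z) of the test pixel, x and y already in texel units,
                 z its depth in the light's canonical view volume
    bias       : Williams' constant depth offset; too large gives false
                 non-shadowing, too small gives false self-shadowing
    """
    x, y, z = p_light
    stored_depth = shadow_map[int(y), int(x)]   # nearest-texel lookup
    return z > stored_depth + bias              # True means the pixel is in shadow
```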
Summary of the Invention

The present invention provides a reconstructable geometry shadow map method that reduces the two kinds of aliasing error, perspective aliasing and projective aliasing, and addresses the false self-shadowing and false non-shadowing problems caused by depth bias.

The present invention proposes a reconstructable geometry shadow map method. First, with the light source as the viewpoint, the geometric information of a plurality of occluding triangles on the front faces of an object is stored. A consistency test is performed on a test pixel in order to find, among the occluding triangles, the occluding triangle corresponding to the test pixel. The occluding triangle corresponding to the test pixel contains an occluding point; with the light source as the viewpoint, the test pixel overlaps this occluding point. The geometric information of the occluding triangle and the position information of the test pixel are then used to reconstruct the depth value of the occluding point. Finally, the depth value of the occluding point is compared with the depth value of the test pixel to complete the shadow determination for the test pixel. The coordinates of the test pixel are (p.x, p.y, p.z), and the consistency test includes: selecting one of the occluding triangles; reading the geometric information of the selected occluding triangle, which includes its vertex coordinates (v0.x, v0.y, v0.z), (v1.x, v1.y, v1.z) and (v2.x, v2.y, v2.z); and computing the equation

In an embodiment of the invention, instead of computing the equation

In an embodiment of the invention, instead of computing the equation

Because the present invention stores, with the light source as the viewpoint, the geometric information of the triangles on the front surface of an object, the position information of the test pixel together with the stored geometric information can be used to reconstruct the depth value of the occluding point. Once the depth value of the occluding point is obtained, it can be compared with the depth value of the test pixel to complete the shadow determination for that test pixel.

By reducing perspective aliasing and projective aliasing, the reconstructable geometry shadow map method of the present invention produces accurate shadow edges.
Brief Description of the Drawings

FIG. 1 is a flowchart of a reconstructable geometry shadow map method according to an embodiment of the present invention.

FIG. 2 illustrates the spatial relationship among the shadow map, (part of) the object surface and a test pixel according to an embodiment of the present invention.

FIG. 3A illustrates two adjacent triangles TR0 and TR1.

FIG. 3B illustrates the rasterized regions AR0 and AR1 of the triangles TR0 and TR1 in FIG. 3A.

FIG. 3C shows example patterns of two sampling kernels according to the present invention.

FIG. 4A illustrates the projective aliasing errors produced by a standard shadow map.

FIG. 4B illustrates the projective aliasing result produced by a reconstructable geometry shadow map according to an embodiment of the present invention.

FIG. 5A shows a test scene produced by a standard shadow map with a constant depth offset of 1e-3.

FIG. 5B shows a test scene produced by a standard shadow map with a constant depth offset of 1e-6.

FIG. 5C shows the depth-offset test scene produced by a reconstructable geometry shadow map (depth offset 1e-6) according to an embodiment of the present invention.
Detailed Description

To make the above features and advantages of the present invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Those skilled in the art can implement the present invention with reference to the following embodiments. The embodiments may also be implemented as a computer program stored on a computer-readable storage medium, so that a computer can execute the reconstructable geometry shadow map method.
FIG. 1 is a flowchart of a reconstructable geometry shadow map method according to an embodiment of the present invention. The embodiment can handle multiple light sources; for simplicity and clarity, a single light source is used below to explain the method. In computer-rendered graphics, an object surface may be composed of many geometric primitives (for example, triangles or other shapes). This embodiment assumes the surface is composed of triangles. A person of ordinary skill in the art may render such a surface with any technique.
FIG. 2 illustrates the spatial relationship among the shadow map, (part of) the object surface and a test pixel according to an embodiment of the present invention. The scene can be rendered from the light's point of view. For a point light source this view can use a perspective projection; for a directional light an orthographic projection can be used. As shown in FIG. 2, the occluding object surface comprises triangles TR0, TR1, TR2 and TR3. During this rendering pass the information of each occluding triangle TR0 to TR3 is captured and stored in a geometry shadow map. That is, with the light source as the viewpoint, the geometric information of the primitives on the front surface of an object is stored (step S110). In this embodiment the geometric information may include the vertex coordinates of each primitive, for example the vertex coordinates of the occluding triangles TR0 to TR3, or a primitive index for each primitive. In the light canonical view volume and in the light view space the triangles are linear, which allows these occluding triangles to be reconstructed for point lights (and likewise for directional lights).
Step S120 then performs a consistency test on the test pixel in order to find one occluding primitive among all the primitives. The occluding primitive contains an occluding point Pd (in the geometry shadow map, with the light source as the viewpoint, the test pixel P overlaps the occluding point Pd). Step S120 may apply the geometry shadow map while the scene is rendered from the camera viewpoint. This processing has three main components. For each test pixel of an object (for example, the test pixel P in FIG. 2), the coordinates (p.x, p.y, p.z) of the pixel as seen from the light source are found first. The x and y values of (p.x, p.y, p.z) correspond to a position in the geometry-map texture and are used in the triangle consistency tests to find the occluding triangle.
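As a rough illustration of this first component, the mapping of a visible pixel into the light's canonical view volume can be sketched as below. The 4x4 matrix `eye_to_light_clip` is an assumed input that combines the light's view and projection transforms with the inverse camera view; the patent text does not spell out this matrix.

```python
import numpy as np

def to_light_canonical(p_eye, eye_to_light_clip):
    """Map an eye-space point to (p.x, p.y, p.z) in the light canonical view volume.

    p_eye             : 3-vector, position of the test pixel in eye space
    eye_to_light_clip : assumed 4x4 matrix (light projection @ light view @ inverse camera view)
    """
    p = eye_to_light_clip @ np.append(p_eye, 1.0)
    p = p[:3] / p[3]     # perspective divide into the canonical view volume
    return p             # (p.x, p.y) locate the geometry-map sample, p.z is the depth to test
```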
Step S120 finds that the occluding triangle of the test pixel P is TR0. Step S130 then uses the geometric information of the occluding primitive and the position information of the test pixel to reconstruct the depth value of the occluding point. That is, the geometric information stored in step S110 is used to reconstruct the depth value of the occluding point of pixel P (for example, the depth value of the occluding point Pd in FIG. 2).

Step S140 then compares the depth value of the occluding point Pd with the depth value of the test pixel P to complete the shadow determination for P. The z value of the test pixel P (its depth in the light canonical view volume) is tested against the depth reconstructed from the occluding triangle TR0. Finally, the tested pixel is rendered either in shadow or in light. If there are multiple light sources, a separate geometry shadow map is used for each light.
A person of ordinary skill in the art can implement this embodiment from the description above. Detailed examples of the steps in FIG. 1 are given below, although the invention is not limited to these implementations. FIG. 2 shows a point light in light view space transformed into the light canonical view volume, where it behaves like a directional light. Assume the scene in the light canonical view volume consists of the four adjacent triangles TR0, TR1, TR2 and TR3.
First (step S110), the triangles TR0 to TR3 are projected and rasterized into their corresponding regions AR0, AR1, AR2 and AR3 of the geometry shadow map. Every texel in each region AR0 to AR3 contains the geometric information of its corresponding triangle (vertex coordinates in this embodiment); for example, a texel in region AR0 contains the vertex coordinates (v0.x, v0.y, v0.z), (v1.x, v1.y, v1.z) and (v2.x, v2.y, v2.z) of triangle TR0. Apart from storing geometric information in the shadow map (the prior art stores depth values rather than geometric information), the operation of step S110 is almost identical to standard shadow mapping. For a point light, the scene is transformed into the light canonical view volume and the three vertex coordinates of each triangle are stored in that triangle's rasterized region of the shadow map. Alternatively, vertex coordinates can also be taken from the adjacent triangles; in FIG. 2, for example, the six vertices of the triangles TR1, TR2 and TR3 adjacent to TR0 are also stored in the rasterized region of TR0. For a directional light, the vertex coordinates are stored in the canonical view volume of the light currently being processed, whose rays are parallel to the z axis.
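A minimal CPU-side sketch of step S110 might look as follows. It stores nine floats (three light-space vertices) per covered texel instead of one depth. The `rasterize` helper, which maps a triangle's canonical (x, y) footprint to the texels it covers, is an assumed input; any conservative triangle rasterizer would do.

```python
import numpy as np

def build_geometry_shadow_map(triangles, resolution, rasterize):
    """Store, for every covered texel, the three vertices of the occluding
    triangle that rasterizes to it (step S110).

    triangles  : list of (3, 3) arrays, vertices already transformed into the
                 light canonical view volume
    resolution : side length of the square geometry shadow map
    rasterize  : assumed helper; rasterize(tri_xy, resolution) yields the
                 (ix, iy) texels covered by the triangle's (x, y) footprint
    """
    # 9 floats per texel: v0.xyz, v1.xyz, v2.xyz (a standard shadow map stores 1 depth)
    geom_map = np.full((resolution, resolution, 9), np.nan, dtype=np.float32)
    for tri in triangles:
        tri = np.asarray(tri, dtype=np.float32)
        for ix, iy in rasterize(tri[:, :2], resolution):
            # for simplicity a later triangle simply overwrites an earlier one here
            geom_map[iy, ix] = tri.reshape(-1)
    return geom_map
```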
Next, a visible pixel P in eye space is transformed into the coordinates (p.x, p.y, p.z) of the light canonical view volume. The consistency test of step S120 may include selecting one of the primitives (for example, one of the triangles TR0 to TR3). Step S120 may include reading the geometric information of the selected primitive (for example, if triangle TR0 is selected, the geometric information of region AR0 is read from the geometry shadow map). The geometric information may include the vertex coordinates of the primitive, for example the vertex coordinates (v0.x, v0.y, v0.z), (v1.x, v1.y, v1.z) and (v2.x, v2.y, v2.z) of triangle TR0. With the two-dimensional (2-D) coordinates (p.x, p.y), the corresponding sampling point T in the geometry shadow map can be found. Step S120 may then include computing Equation 1:

to obtain the three-dimensional (3-D) barycentric coordinates (w1, w2, w3) of the occluding point Pd with respect to the vertices of triangle TR0. From the barycentric coordinates (w1, w2, w3) of the occluding point Pd it is determined whether the selected primitive (triangle TR0) is consistent.
For every visible pixel P, the occluding triangle TR0 must be located correctly so that the depth value of the occluding point Pd can then be reconstructed from the geometric information stored in the geometry shadow map. This procedure is the so-called triangle "consistency test". However, sampling the texture maps at the test-pixel coordinates (x, y) does not necessarily return information about the triangle TR0 that actually occludes the test pixel P. If the three barycentric coordinates (w1, w2, w3) computed from Equation 1 all lie in the range [0, 1] (meaning the triangle occludes the test pixel), the triangle test is said to be consistent; otherwise the test is inconsistent. If the selected primitive is found to be consistent, that primitive (triangle TR0) is the occluding primitive of the test pixel P.
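Equation 1 itself is not reproduced in this text. One plausible reading that matches the surrounding description (solve a small linear system for the barycentric weights of the sample against the triangle's x and y vertex coordinates, then check the [0, 1] range) is sketched below; the `eps` tolerance is an assumption of the sketch.

```python
import numpy as np

def consistency_test(tri, p_xy, eps=0.0):
    """Triangle consistency test (step S120).

    tri  : (3, 3) array of vertex coordinates (v0, v1, v2) read from the geometry map
    p_xy : (p.x, p.y) of the test pixel in the light canonical view volume

    Returns (consistent, w) with w = (w1, w2, w3) the barycentric weights.
    """
    A = np.array([[tri[0, 0], tri[1, 0], tri[2, 0]],
                  [tri[0, 1], tri[1, 1], tri[2, 1]],
                  [1.0,       1.0,       1.0      ]])
    b = np.array([p_xy[0], p_xy[1], 1.0])
    try:
        w = np.linalg.solve(A, b)            # enforces w1 + w2 + w3 = 1
    except np.linalg.LinAlgError:
        return False, None                   # degenerate (zero-area) triangle
    consistent = bool(np.all(w >= -eps) and np.all(w <= 1.0 + eps))
    return consistent, w
```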
Because of the finite resolution of the shadow map, triangle tests may give inconsistent results, and the lower the texture resolution, the more likely this becomes. FIG. 3A shows two adjacent triangles TR0 and TR1 at finite resolution, and FIG. 3B shows their rasterized regions AR0 and AR1. Under finite resolution, region AR0 is the rasterized region of triangle TR0 and region AR1 is the rasterized region of triangle TR1. Point T is the sampled point; it has the same (x, y) coordinates as the visible pixel P being tested. Through the sampling point T, however, this embodiment accesses texel A, which carries the geometric information of triangle TR0. As shown in FIG. 3B, the sampling point T should actually fall inside the rasterized region of triangle TR1, so the information of TR0 may lead to a wrong depth reconstruction because of the finite resolution (the sampling point T is mistakenly treated as an occluding point of triangle TR0). The sampling point T' in FIG. 3B has a similar problem.
With the geometric information of the adjacent triangles, the occluding triangle that blocks the tested pixel P can be found by sampling around the corresponding point T. However, when two adjacent regions are rasterized, the geometric information of a neighboring triangle may be unavailable. To address this, the embodiment adds sampling points so that the geometric information of more triangles is covered, which increases the chance of finding a consistent triangle test. FIG. 3C shows example patterns of the two sampling kernels around T and T'. If the tested pixel P is occluded by several layers of geometry, the kernel also sorts the depth results of all consistent triangle tests and takes the minimum as the final depth value of the occluding point. Taking the kernel pattern around T as an example, in addition to accessing the texel carrying the information of region AR0 to evaluate the sampling point T, the texel carrying the information of region AR0 is accessed to compute the depth value at sampling point T2, the texel carrying the information of region AR2 is accessed to compute the depth value at sampling point T1, and the texel carrying the information of region AR1 is accessed to compute the depth values at sampling points T3 and T4. The depth results of all consistent triangle tests (the depth values at T, T1, T2, T3 and T4) are then sorted, and the minimum is taken as the final depth value of the occluding point Pd.
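The kernel logic of this paragraph can be sketched as follows, reusing `consistency_test` from the sketch above. The exact tap offsets of FIG. 3C are not given numerically in the text, so the `offsets` default below is only a placeholder cross pattern.

```python
import numpy as np

def kernel_depth(geom_map, p, offsets=((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))):
    """Evaluate several kernel taps around the test pixel's sample position,
    keep every consistent triangle test, and return the minimum reconstructed
    depth as the depth of the occluding point Pd.

    geom_map : geometry shadow map; each texel holds 9 floats (three vertices)
    p        : (p.x, p.y, p.z) of the test pixel in the light canonical view volume
    offsets  : placeholder texel offsets standing in for the FIG. 3C pattern
    """
    res = geom_map.shape[0]
    cx = int((p[0] * 0.5 + 0.5) * res)        # canonical [-1, 1] range to texel index
    cy = int((p[1] * 0.5 + 0.5) * res)
    depths = []
    for dx, dy in offsets:
        ix, iy = cx + dx, cy + dy
        if not (0 <= ix < res and 0 <= iy < res) or np.isnan(geom_map[iy, ix, 0]):
            continue                             # tap outside the map or no triangle stored
        tri = geom_map[iy, ix].reshape(3, 3)
        ok, w = consistency_test(tri, p[:2])
        if ok:
            depths.append(float(w @ tri[:, 2]))  # interpolated depth of this candidate
    return min(depths) if depths else None       # None: no consistent triangle found
```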
Choosing an appropriate kernel pattern is important for accuracy. A larger kernel usually provides higher accuracy than a smaller one, but a large kernel containing many sampling points can hurt performance. The particular kernel patterns shown in FIG. 3C achieve comparable accuracy with fewer samples. The number of samples can be reduced further by capping the total number of triangle consistency tests performed for a test pixel.
When the texture resolution is subcritical for the tested pixel P (so that some occluding triangles cannot be stored in the shadow map), the corresponding triangle tests are necessarily inconsistent. In that case the triangle tests are ordered by their weighted distance to the central triangle, and the triangle information with the closest-distance weight is used for the reconstruction. Under the reasonable assumption that the reconstructed occluding point lies in the same plane as the closest-distance triangle, the weighted distance can be computed as a Euclidean distance.
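The text does not give a formula for this weighted distance. One possible interpretation, sketched below purely as an assumption, ranks the inconsistent candidates by how far their barycentric weights fall outside [0, 1] and then extrapolates the depth from the plane of the best-ranked triangle, which matches the same-plane assumption stated above.

```python
import numpy as np

def fallback_depth(candidates, p_xy):
    """Fallback when every triangle test of the kernel is inconsistent.

    candidates : list of (3, 3) vertex arrays gathered by the kernel taps
    p_xy       : (p.x, p.y) of the test pixel in the light canonical view volume
    """
    best = None
    for tri in candidates:
        ok, w = consistency_test(tri, p_xy)
        if w is None:
            continue                                   # skip degenerate triangles
        # assumed ranking key: distance of the weights from the valid [0, 1] range
        dist = float(np.linalg.norm(w - np.clip(w, 0.0, 1.0)))
        depth = float(w @ tri[:, 2])                   # planar extrapolation of the depth
        if best is None or dist < best[0]:
            best = (dist, depth)
    return None if best is None else best[1]
```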
After the correct triangle information has been obtained, the depth value of the occluding point of the tested pixel can be reconstructed. Through triangle interpolation, the depth value of the occluding point Pd inside the occluding triangle TR0 is reconstructed. After the weights have been computed from Equation 1, the depth value T.z of the occluding point Pd in step S130 can be reconstructed with the following equation:

Alternatively, combining Equation 1 and Equation 2 gives Equation 3:

The depth value T.z of the occluding point Pd in step S130 can thus be reconstructed with Equation 3. Equation 3, however, requires the inverse of a 3x3 matrix. Current graphics processing unit (GPU) hardware does not directly support a 3x3 matrix inverse, so the inverse must be decomposed into ordinary arithmetic logic unit (ALU) instructions. The ALU instruction sequence does not guarantee precision and may introduce additional error into the inverse, which affects the final reconstructed depth value.

To mitigate this problem, the embodiment rewrites Equation 3 as the following equivalent equation:

Therefore, the depth value T.z of the occluding point Pd in step S130 can also be reconstructed with Equation 4.
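Equations 1 to 4 are referenced above but their images are not reproduced in this text. Purely as an assumed reconstruction consistent with the surrounding description (Equation 1: a 3x3 system for the barycentric weights; Equation 2: linear interpolation of the vertex depths; Equations 3 and 4: the combined form, with Equation 4 avoiding an explicit matrix inverse), one common formulation is:

```latex
\begin{pmatrix} w_1 \\ w_2 \\ w_3 \end{pmatrix}
  = \begin{pmatrix} v_0.x & v_1.x & v_2.x \\ v_0.y & v_1.y & v_2.y \\ 1 & 1 & 1 \end{pmatrix}^{-1}
    \begin{pmatrix} p.x \\ p.y \\ 1 \end{pmatrix},
\qquad
T.z = w_1\, v_0.z + w_2\, v_1.z + w_3\, v_2.z .
```

In the same spirit as Equation 4, the explicit inverse can be eliminated with Cramer's rule (signed areas), for example:

```latex
w_1 = \frac{(v_1.x - p.x)(v_2.y - p.y) - (v_2.x - p.x)(v_1.y - p.y)}{D}, \quad
w_2 = \frac{(v_2.x - p.x)(v_0.y - p.y) - (v_0.x - p.x)(v_2.y - p.y)}{D}, \quad
w_3 = 1 - w_1 - w_2,
\qquad D = (v_1.x - v_0.x)(v_2.y - v_0.y) - (v_2.x - v_0.x)(v_1.y - v_0.y).
```

Whether this matches the patent's actual Equation 4 cannot be confirmed from this text; it is offered only as an inverse-free equivalent of the barycentric system above.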
Finally, by comparing the light canonical view volume depth values of the occluding point Pd and the pixel P, that is, by comparing T.z with p.z, the shadow determination for pixel P is completed (step S140).
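Putting the sketches above together, the per-pixel decision of steps S120 to S140 could be driven as follows; `eye_to_light_clip`, `geom_map` and the small `bias` value are the same assumed inputs used in the earlier sketches.

```python
def is_in_shadow(p_eye, eye_to_light_clip, geom_map, bias=1e-6):
    """Full per-pixel shadow decision: reconstruct Pd's depth and compare it with p.z."""
    p = to_light_canonical(p_eye, eye_to_light_clip)   # light-space coordinates of the pixel
    t_z = kernel_depth(geom_map, p)                    # find the occluder and rebuild Pd's depth
    if t_z is None:
        return False                                   # no occluder recorded for this sample
    return p[2] > t_z + bias                           # step S140: depth comparison
```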
FIG. 4A illustrates the projective aliasing errors produced by a standard shadow map. The scene in FIG. 4A is a quadrilateral plate floating above a ground plane, so the plate casts a strip-shaped shadow on the plane. The lower-left corner of FIG. 4A shows an enlarged view of this strip shadow, in which the projective aliasing produced by the traditional standard shadow map is clearly visible. In contrast, FIG. 4B shows the projective aliasing result produced by the reconstructable geometry shadow map according to an embodiment of the present invention. That is, FIG. 4B uses the new algorithm introduced in the embodiments above, the Reconstructable Geometry Shadow Map (RGSM), as the solution to the aliasing problem. The scene in FIG. 4B is the same as in FIG. 4A. It is evident from FIG. 4B that the projective aliasing produced with the RGSM algorithm of this embodiment is greatly reduced.
Another problem with most shadow-map techniques is depth bias. FIGS. 5A, 5B and 5C show the same depth-offset test scene, a house with a railing. FIG. 5A shows the scene produced by a standard shadow map with a constant depth offset of 1e-3, used to avoid false self-shadowing; that is, the depth offset is added to the sampled depth before it is compared with the true surface. Because the offset in FIG. 5A is too large, false non-shadowing occurs (the occluder appears to float above the light receiver) and the shadow recedes too far. In practice it is very difficult to choose the offset directly, and no acceptable value can be found for every scene. FIG. 5B, for example, shows the standard shadow map with a constant depth offset of 1e-6. The smaller offset removes the false non-shadowing but introduces false self-shadowing, as shown in FIG. 5B. FIG. 5C shows the depth-offset test scene produced by the reconstructable geometry shadow map according to an embodiment of the present invention; that is, FIG. 5C uses the RGSM algorithm introduced above as the solution to the depth-bias problem, with the same depth offset of 1e-6 as FIG. 5B. As FIG. 5C makes clear, the RGSM algorithm of this embodiment can use an extremely small depth offset without producing false self-shadowing.
In summary, this embodiment guarantees pixel-wise depth accuracy and has the following advantages:

1. By reducing perspective aliasing and projective aliasing, it produces accurate shadow edges. It also removes shadow-edge jittering in dynamic scenes.

2. Compared with other shadow-map techniques, this embodiment can use a very small depth offset. By setting a single fixed offset value, a programmer using RGSM can meet the needs of most applications and produce correct images free of false self-shadowing and false non-shadowing.

3. With the same output shadow quality and high execution speed, it uses only a small amount of memory compared with a standard shadow map.
The above is only a preferred embodiment of the present invention and is not intended to limit its scope. Anyone familiar with the art may make further improvements and changes on this basis without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the claims of this application.
The reference symbols in the drawings are briefly described as follows:

A, B: texels

AR0, AR1, AR2, AR3: corresponding regions in the geometry shadow map

P: test pixel

Pd: occluding point

S110-S140: steps of the reconstructable geometry shadow map method according to an embodiment of the present invention

TR0, TR1, TR2, TR3: triangles on the surface of the occluding object

T, T': sampling points.
Claims (17)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US98272707P | 2007-10-26 | 2007-10-26 | |
US60/982,727 | 2007-10-26 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101271588A CN101271588A (en) | 2008-09-24 |
CN101271588B true CN101271588B (en) | 2012-01-11 |
Family
ID=40005539
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2008100961357A (granted as CN101271588B, active) | Reconstructable Geometry Shadow Map Method | 2007-10-26 | 2008-05-06 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN101271588B (en) |
TW (1) | TWI417808B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101882324B (en) * | 2010-05-19 | 2012-03-28 | Beihang University | Real-time rendering method of soft shadow based on two-way penumbra |
FR2988891A1 (en) * | 2012-03-29 | 2013-10-04 | Thomson Licensing | METHOD FOR ESTIMATING OPACITY LEVEL IN A SCENE AND CORRESPONDING DEVICE |
US9083960B2 (en) * | 2013-01-30 | 2015-07-14 | Qualcomm Incorporated | Real-time 3D reconstruction with power efficient depth sensor usage |
US10074211B2 (en) | 2013-02-12 | 2018-09-11 | Thomson Licensing | Method and device for establishing the frontier between objects of a scene in a depth map |
CN104966313B (en) * | 2015-06-12 | 2017-09-19 | Zhejiang University | A Geometric Shadow Map Method for Triangle Reconstruction |
CN109712211B (en) * | 2018-12-21 | 2023-02-10 | Xi'an Hengge Digital Technology Co., Ltd. | Efficient body shadow generation method based on OSG |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5870097A (en) * | 1995-08-04 | 1999-02-09 | Microsoft Corporation | Method and system for improving shadowing in a graphics rendering system |
US6208361B1 (en) * | 1998-06-15 | 2001-03-27 | Silicon Graphics, Inc. | Method and system for efficient context switching in a computer graphics system |
US6903741B2 (en) * | 2001-12-13 | 2005-06-07 | Crytek Gmbh | Method, computer program product and system for rendering soft shadows in a frame representing a 3D-scene |
Application events:
- 2008-05-06: CN application CN2008100961357A filed; granted as CN101271588B (active)
- 2008-05-21: TW application TW97118693A filed; granted as TWI417808B (active)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104966297A (en) * | 2015-06-12 | 2015-10-07 | Zhejiang University | General assistant technique for generating shadow through shadow map |
CN104966297B (en) * | 2015-06-12 | 2017-09-12 | Zhejiang University | A kind of method that general echo generates shade |
Also Published As
Publication number | Publication date |
---|---|
TWI417808B (en) | 2013-12-01 |
CN101271588A (en) | 2008-09-24 |
TW200919369A (en) | 2009-05-01 |
Legal Events

- C06: Publication
- PB01: Publication
- C10: Entry into substantive examination
- SE01: Entry into force of request for substantive examination
- C14: Grant of patent or utility model
- GR01: Patent grant