US20130207977A1 - Method and Apparatus for Rendering Translucent and Opaque Objects - Google Patents

Info

Publication number
US20130207977A1
US 20130207977 A1 (application US 13/763,974)
Authority
US
United States
Prior art keywords
tag
buffer
primitive
pixel
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/763,974
Inventor
John W. Howson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Imagination Technologies Ltd
Original Assignee
Imagination Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Imagination Technologies Ltd filed Critical Imagination Technologies Ltd
Priority to US 13/763,974
Publication of US 20130207977 A1
Current legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/40 Hidden part removal
    • G06T 15/405 Hidden part removal using Z-buffer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/40 Hidden part removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/62 Semi-transparency

Definitions

  • When the parameter fetch unit 50 determines that there are no more triangles in the current tile, it signals to the pass spawning unit 56 to flush any remaining valid tags from the tag buffers to the TSU 62.
  • The parameter fetch unit then proceeds to read the parameter list for the next tile and repeats the process until all tiles that make up the final image have been rendered. It should be noted that all of the units, with the exception of the parameter fetch, can be modified to operate on multiple pixels in parallel, thereby speeding up the process.
  • The HSR unit 58 has access to a depth and stencil buffer 64 in which the depth and stencil values for each pixel within each triangle are stored.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

A method and apparatus are provided for rendering three-dimensional computer graphic images which include both translucent and opaque objects. A list of objects which may be visible in the image is determined, and for each object in the list a determination is made as to whether or not it may be visible at each pixel. A data tag is stored for a translucent object determined to be visible at a pixel, and the data tag and object data are passed to a texturing and shading unit when the translucent object is determined to be overwriting a location in the tag buffer already occupied by another data tag.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of application 11/787,893, filed on Apr. 4, 2007, which is a continuation of application 10/795,561, filed on Mar. 5, 2004, now abandoned, which claims priority from GB 0317479.4, filed on Jul. 25, 2003. All of these applications are incorporated herein in their entireties for all purposes.
  • This invention relates to a 3-dimensional computer graphics system and in particular to methods and apparatus which reduce the number of times the data assigned to each pixel have to be modified when rendering an image in such a system.
  • Tile based rendering systems are known. These break down an image to be rendered into a plurality of rectangular blocks or tiles. The way in which this is done and the subsequent texturing and shading performed is shown schematically in FIG. 1. This shows a geometry processing unit 2 which receives the image data from an application and transforms it into screen space using a well-known method. The data is then supplied to a tiling unit 4, which inserts the screen space geometry into object lists for a set of defined rectangular regions, or tiles, 6. Each list contains primitives (surfaces) that exist wholly or partially in a sub-region of a screen (i.e. a tile). A list exists for every tile on the screen, although it should be borne in mind that some lists may have no data in them.
  • Data then passes tile by tile to a hidden surface removal unit 8 (HSR) and from there to a texturing and shading unit 10 (TSU). The HSR unit processes each primitive in the tile and passes to the TSU only data about visible pixels.
  • Many images comprise both opaque and translucent objects. In order to correctly render such an image, the HSR unit must pass “layers” of pixels which need to be shaded to the TSU. This is because more than one object will contribute to the image data to be applied to a particular pixel. For example the view from the inside of a building looking through a pane of dirty glass requires both the geometry visible through the glass, and then the pane of glass itself to be passed to the TSU. This process is referred to as “pass spawning”.
  • Typically, a tile based rendering device of the type shown in FIG. 1 will use a buffer to hold a tag for the front most object for each pixel in the tile currently being processed. A pass is typically spawned whenever the HSR unit 8 processes a translucent object, before the visibility test is performed. This results in all currently visible tags stored in the buffer, followed by the visible pixels of the translucent object, being sent to the TSU, i.e. more than one set of pixel data is passed for each pixel.
  • The flow diagram of FIG. 2 illustrates this approach. In this, a determination is made at 12 as to whether or not a primitive being processed is opaque. If it is not, then the buffer of tags is sent at 14 to the TSU 10. All visible tags for the non-opaque primitives are then also passed to the TSU at 15, and the HSR unit 8 then moves onto the next primitive at 16. If the primitive is determined to be opaque at step 12 then its tags are written into the buffer at 16 before moving onto the next primitive at 18. The tag is a piece of data indicating which object is visible at a pixel. More than one tag per pixel is required when translucent objects cover opaque objects.
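The known rule of FIG. 2 can be sketched in Python. This is an illustrative model only; `naive_pass_spawn` and its data layout are hypothetical helpers, not the hardware implementation:

```python
def naive_pass_spawn(primitives, num_pixels=4):
    """Each primitive is (tag, opaque?, covered pixel indices)."""
    tag_buffer = [None] * num_pixels      # front most visible tag per pixel
    sent_to_tsu = []                      # (tag, pixel) pairs passed downstream
    for tag, opaque, pixels in primitives:
        if opaque:
            for p in pixels:              # step 16: write tags into the buffer
                tag_buffer[p] = tag
        else:
            # step 14: flush every currently visible tag to the TSU
            for p, t in enumerate(tag_buffer):
                if t is not None:
                    sent_to_tsu.append((t, p))
                    tag_buffer[p] = None
            # step 15: pass the translucent primitive's visible tags as well
            for p in pixels:
                sent_to_tsu.append((tag, p))
    return tag_buffer, sent_to_tsu

# An opaque triangle followed by a translucent one spawns a pass immediately,
# even though a later opaque triangle may obscure both.
buf, sent = naive_pass_spawn([("T1", True, [0, 1]), ("T2", False, [1])])
```

Note how the pass is spawned as soon as the translucent primitive arrives, regardless of whether its pixels are ultimately visible in the final image.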
  • Use of the approach above means that opaque pixels that are not covered by translucent pixels, and are potentially obscured by further opaque objects, may be passed to the TSU unnecessarily. In addition to this, a translucent object is passed to the TSU even if an opaque object subsequently obscures it.
  • SUMMARY OF THE INVENTION
  • The presence of the tag buffer in the above description enables modifications to be made to the pass spawning rules (also described above) that allow the removal of some unnecessary passes.
  • In an embodiment of the present invention, rather than spawning a pass at the point a translucent object is seen, the translucent tags are rendered into the tag buffer in the same manner as opaque objects and a pass is only spawned at the point a visible translucent pixel is required to be written to a location that is already occupied. Further, as the translucent object tags are now being rendered into the tag buffer there is no need to pass them immediately to the TSU. Therefore, in the event of them being subsequently obscured they may be discarded.
  • The invention is defined with more precision in the appended claims to which reference should now be made.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Preferred embodiments of the invention will now be described in detail by way of example with reference to the accompanying drawings in which:
  • FIG. 1 shows a block diagram of a tile based rendering system discussed above;
  • FIG. 2 shows a flow chart of a known pass spawning system;
  • FIG. 3 (a)-(c) shows a sequence of three triangles being rendered using modified pass spawning rules embodying the invention;
  • FIG. 4 is a flow diagram of the embodiment of the invention;
  • FIG. 5 is an enhancement to FIG. 4; and
  • FIG. 6 is a block diagram of an embodiment of the invention.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENT
  • In FIG. 3 a sequence of three triangles is shown being rendered using a set of modified pass spawning rules. In FIG. 3 a, an opaque triangle T1 is rendered into a tag buffer. If a translucent triangle T2 is rendered on top of the visible opaque pixels then the HSR unit must pass those pixels to the TSU before it can continue to rasterise the translucent tags. The opaque triangle is encountered in scan line order as T2 is rasterised. Thus all the previously rasterised pixels of T1 and the pixels of T2 are passed to the TSU. This leaves the remainder of the translucent object as shown in FIG. 3 b. An opaque triangle T3 is then rendered into the tag buffer as shown in FIG. 3 c. This triangle T3 obscures all of T1 and T2.
  • It will be seen that tags from T1 and T2 have been passed unnecessarily to the TSU in spite of the improved rules. The triangles passed unnecessarily are shown by dotted lines in FIG. 3 c. In addition to this, if T3 had been translucent and was subsequently obscured by another object, all tags from T2 would be passed to the TSU.
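The modified rule, and the FIG. 3 sequence above, can be modelled with a single tag buffer. This is a hedged sketch under assumed names and data layout, not the device itself:

```python
def deferred_pass_spawn(primitives, num_pixels=4):
    """Each primitive is (tag, opaque?, covered pixel indices)."""
    tag_buffer = [None] * num_pixels
    sent_to_tsu = []
    for tag, opaque, pixels in primitives:
        for p in pixels:
            if opaque:
                # opaque simply overwrites; any obscured tag is discarded unsent
                tag_buffer[p] = tag
            else:
                if tag_buffer[p] is not None:
                    # conflict: spawn a pass, flushing the whole tag buffer
                    for q, t in enumerate(tag_buffer):
                        if t is not None:
                            sent_to_tsu.append((t, q))
                            tag_buffer[q] = None
                # the translucent tag is now held back in the buffer
                tag_buffer[p] = tag
    return tag_buffer, sent_to_tsu
```

Running the FIG. 3 sequence (T1 opaque, T2 translucent overlapping it, T3 opaque obscuring both) shows T1's tags flushed at the conflict, while T2's remaining tags are discarded when T3 overwrites them, never reaching the TSU.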
  • If more than one tag buffer is provided then this can further reduce the number of unnecessary passes. In a system with N tag buffers a minimum of N obscured passes can be removed. This is achieved by switching to a new tag buffer each time an attempt is made to write a translucent pixel to an occupied location, until all tag buffers have been written to. At this point, the first tag buffer is passed to the TSU.
  • If an opaque pixel is written to the current tag buffer this will result in the same pixel being invalidated in all other buffers, thus removing any obscured pixels. This can be done as any pixel that is written to by an opaque primitive will only be composed of the data generated by that primitive. The flow chart of FIG. 4 illustrates how the pass spawning rules behave in this case.
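A minimal model of the multiple tag buffer rules above. All names are hypothetical, and the choice of which buffer to flush when every buffer is occupied is a simplification of the circular scheme described in the text:

```python
def multi_buffer_spawn(primitives, num_pixels=4, num_buffers=2):
    """primitives: (tag, opaque?, pixel indices). Returns (buffers, tags sent)."""
    buffers = [[None] * num_pixels for _ in range(num_buffers)]
    current = 0
    sent = []

    def flush(i):
        for q, t in enumerate(buffers[i]):
            if t is not None:
                sent.append((t, q))
                buffers[i][q] = None

    for tag, opaque, pixels in primitives:
        for p in pixels:
            if opaque:
                buffers[current][p] = tag
                for i in range(num_buffers):   # invalidate obscured tags everywhere else
                    if i != current:
                        buffers[i][p] = None
            else:
                placed = False
                for i in range(num_buffers):   # search buffers in circular order
                    cand = (current + i) % num_buffers
                    if buffers[cand][p] is None:
                        buffers[cand][p] = tag
                        current = cand
                        placed = True
                        break
                if not placed:                 # every buffer occupied: spawn a pass
                    current = (current + 1) % num_buffers
                    flush(current)             # flush the oldest buffer in circular order
                    buffers[current][p] = tag
    return buffers, sent
```

With two buffers, three stacked translucent pixels force exactly one flush, and an opaque write silently invalidates a buffered translucent tag rather than sending it.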
  • After the start at 20 of processing an object, as in FIG. 2, a determination is made as to the type of the object at 22 and 24. If an object is determined to be opaque at 22 then the system executes the opaque processing path at 100, if not and the object is determined to be punch through at 24 then punch through processing path is executed at 300 otherwise translucent processing is executed at 200.
  • If the object is opaque it is processed from 100 on a pixel by pixel basis. For each pixel within the object the system first determines its visibility at 102. If a pixel is not visible then the system skips to 108 to determine if there are any more pixels left in the object. If a pixel is visible then its tag is written into the current tag buffer at 104 and the tags in all other buffers are cleared at 106. The system then determines at 108 if any more pixels are left to process from the current object. If there are, it moves to the next pixel at 110 and continues processing from 102. If there are no more pixels to process in the object then the system moves to the next object at 112 and returns to 20.
  • 3D computer graphics often use what are termed “punch through” objects. These objects use a back end test to determine if a pixel should be drawn. For example, before a pixel is written to the frame buffer its alpha value can be compared against a reference using one of several compare modes. If the result of this comparison is true then the pixel is determined to be visible; if false then it is not. Pixels that are determined to not be visible do not update the depth buffer. It should be noted that this test can be applied to both opaque and partially translucent objects. This technique is common in 3D game applications because it allows complex scenes such as forests to be modelled using relatively few polygons and because a traditional Z buffer can correctly render punch through translucency irrespective of the order in which polygons are presented to the system.
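The back end test described above amounts to comparing a pixel's alpha against a reference value under a selectable compare mode. The set of modes below mirrors those commonly found in graphics APIs and is an assumption for illustration, not a list taken from the patent:

```python
import operator

# Hypothetical compare-mode table for the punch through back end test.
COMPARE_MODES = {
    "never":    lambda a, ref: False,
    "less":     operator.lt,
    "lequal":   operator.le,
    "equal":    operator.eq,
    "gequal":   operator.ge,
    "greater":  operator.gt,
    "notequal": operator.ne,
    "always":   lambda a, ref: True,
}

def punch_through_test(alpha, ref, mode="gequal"):
    """True means the pixel is drawn and may update the depth buffer."""
    return COMPARE_MODES[mode](alpha, ref)
```

A pixel failing this test is simply discarded, leaving the depth buffer untouched, which is what lets a conventional Z buffer handle punch through translucency in any polygon order.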
  • As the update of the depth buffer is dependent on the results of the back end test, tile based rendering (TBR) systems must handle these objects in a special manner. Existing TBR systems will first spawn a pass as if the punch through objects were transparent. The punch through object is then passed to the TSU, which then feeds back visibility information to the HSR unit, which updates the depth buffer as necessary.
  • The handling of punch through objects can be optimised. If punch through is applied to an opaque object it will either be fully visible or not visible at all, which allows such objects to be treated as opaque with respect to the flushing of the tag buffers. Specifically, at the point an opaque punch through object is received, any pixels that are determined to be visible by the hidden surface removal (HSR) unit are passed directly to the TSU. The TSU then feeds back pixels that are valid to the HSR unit which will, for the fed back pixels, update the depth buffer as appropriate and invalidate the same pixels in the tag buffer. The latter is possible as the act of passing the punch through pixels to the TSU means that they have already been drawn, so any valid tags at the same locations in the tag buffer are no longer needed. If multiple tag buffers are present then the tags are invalidated across all buffers.
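The opaque punch through feedback loop just described can be sketched as follows. The function and its argument layout are hypothetical; the alpha comparison stands in for whatever back end test the TSU applies:

```python
def process_opaque_punch_through(visible_pixels, alpha_of, ref, depth_of,
                                 depth_buffer, tag_buffers):
    """visible_pixels: pixel indices the HSR unit found visible.
    alpha_of / depth_of: per-pixel alpha and depth values (dicts).
    ref: alpha reference for the back end test."""
    sent_to_tsu = list(visible_pixels)        # visible pixels go straight to the TSU
    # TSU back end test; only passing pixels are fed back to the HSR unit
    fed_back = [p for p in sent_to_tsu if alpha_of[p] >= ref]
    for p in fed_back:
        depth_buffer[p] = depth_of[p]         # HSR updates depth for valid pixels
        for buf in tag_buffers:               # already drawn: stale tags invalidated
            buf[p] = None
    return sent_to_tsu, fed_back
```

Pixels that fail the test leave both the depth buffer and the tag buffers untouched, while passing pixels clear their tag locations across every buffer, exactly because they have already been drawn.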
  • Partially transparent punch through data requires all tag buffers up to and including the current tag buffer to be flushed to the TSU. This is because the transparency may need to be blended with any overlapped tags currently contained in the tag buffer. Alternatively, the punch through tags may be passed to the TSU with their states modified such that they will not update the frame buffer image and their tags are written to the next buffer as dictated by the punch through test. This allows objects that lie under the punch through object that are subsequently obscured not to be rendered. However, this is at the cost of potentially rendering the punch through object twice, once to determine pixel visibility and once to render the final image if it is not obscured. The impact of rendering the punch through object twice could be reduced by splitting the TSU pixel's shading state into that required to determine punch through state and that required to render the final image.
  • As far as the flow diagram of FIG. 4 is concerned, where the object is determined to be punch through at 24, the system jumps to Process Punch Through 300. The object is then processed pixel by pixel, first determining visibility at 302. If a pixel is not visible the system skips to 324. If a pixel is visible a further test is made at 304 to determine if the object pixel is also transparent, and if this is determined to be the first visible pixel within the object at 306 then all tag buffers up to and including the current buffer are flushed at 308, i.e. sent to the TSU. The test for translucency is performed per pixel so that the tag buffers do not get flushed in the event of the object not being visible. The pixels for the object itself are then sent to the TSU at 310, where texturing and shading are applied at 312 using well-known methods. A punch through test is then applied at 314 and the validity of the pixel determined at 316. If the pixel is found to be invalid at 316, e.g. it fails the alpha test, the system skips to 324. If the pixel is valid its coordinates are passed back to the HSR unit at 318, which will then store the pixel's depth value to the depth buffer at 320 and invalidate the corresponding tag in all tag buffers at 322. The system then determines if there are any more pixels to be processed in the object at 324. If there are, it moves to the next pixel at 326 and jumps back to 302 to continue processing. If no more pixels are present in the object the system moves to the next object and returns to 20.
  • When the punch through determination at 24 is negative then the object must be translucent and the system jumps to Process Translucent 200. The object is then processed pixel by pixel, first determining visibility at 202. If a pixel is not visible the system skips to 222. If the pixel is visible the system determines if the location in the current tag buffer is occupied at 204. If the current tag buffer location is determined to be occupied the system will move to the next tag buffer at 206. A determination is then made as to whether or not there are any valid tags in the buffer at 208 and, if there are, they are sent to the TSU at 210 and the tag buffer reset at 212. If there are no valid tags then a tag is written at 220 and the system goes on to the next pixel or object as described for opaque objects.
  • As opaque objects invalidate tags across all buffers, the pass spawning rules can be further extended such that passes only spawn when no tag buffer can be found into which a translucent pixel can be written. FIG. 5 illustrates these updated rules which can be used to replace the portion of the flow diagram of FIG. 4 surrounded by a dotted line. Instead of the determination at 208 as to whether there are any valid tags in the buffer, a determination is made as to whether or not this buffer has been looked at before. If it has, then the flow moves onto 210 and 212. If it has not, flow passes back to 204 where a determination is made as to whether or not the tag location is occupied. If it is, then the diagram moves to the next tag buffer at 240 before again determining whether or not that buffer has been looked at at 242.
  • A further enhancement can be made to single and multiple buffer implementations. Rather than flushing the whole tag buffer at the point where no unoccupied location can be found for a translucent object, only those tags that would be overwritten by the translucent pixel are flushed to the TSU. The main disadvantage of this approach is that an object may be submitted to the TSU only partially, and therefore many times over, leading to additional state fetch and set-up costs in the TSU. This could be alleviated by submitting all pixels with the same tag value to the TSU rather than only those that are overlapped by the translucent object. Alternatively, the tag buffer could be subdivided into square/rectangular sections such that when the above condition occurs only the section of the tag buffer containing the conflict would be flushed. This approach also potentially results in multiple submissions of tags to the TSU, but to a lesser extent.
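The first refinement, submitting all pixels that share the conflicting tag rather than flushing the whole buffer, could be sketched like this (a hypothetical helper; its name and interface are not from the patent):

```python
def flush_conflicting_tag(buf, x, y, tsu_submit):
    """Flush only the pixels carrying the same tag as the one about to be
    overwritten at (x, y), so the conflicting object reaches the TSU in
    one piece instead of fragment by fragment."""
    victim = buf[y][x]
    if victim is None:
        return
    for yy, row in enumerate(buf):
        for xx, tag in enumerate(row):
            if tag == victim:
                tsu_submit(tag, xx, yy)   # shade this fragment of the object
                row[xx] = None            # and invalidate its tag
```

The section-based alternative would instead restrict the two loops to the square or rectangular region of the buffer containing (x, y).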
  • A block diagram of a preferred embodiment of the invention is shown in FIG. 6. This comprises a parameter fetch unit 50 which reads in per-tile lists of triangles and rasterisation state from a per triangle list 52. These are then passed to a scan converter 54 and to a pass spawn control unit 56 respectively. The scan converter 54 generates position, depth and stencil values for each pixel within each triangle and passes them to an HSR unit 58. The HSR unit determines the visibility of each pixel and passes this information on to the pass spawning control unit (PSC) 56. This unit has access to two or more tag buffers 60 which are cycled through in a circular manner. For opaque pixels the PSC unit writes the tag of the triangle to the current tag buffer and invalidates the corresponding location in the other buffers. For a translucent pixel the PSC unit checks whether the location in the current tag buffer is valid. If it is, then it switches to the other tag buffer. If the tag location there is not valid, it writes the translucent tag and moves on to the next pixel. If the tag location is valid, then the tag buffer is flushed to the texturing and shading unit 62. At this point all locations in the tag buffer are marked as invalid and the translucent tag is written to the buffer. For an opaque punch through pixel the PSC passes all visible pixels directly to the TSU 62. The TSU determines pixel validity as appropriate and returns the status of those pixels to the HSR and PSC units 58, 56. The HSR and PSC units then update the depth buffer and invalidate locations in the tag buffers, respectively, for valid pixels. Translucent punch through pixels behave the same as opaque punch through pixels except that the PSC unit will flush all currently valid tags in the tag buffers to the TSU before proceeding. This process is repeated for all pixels in all triangles within the tile.
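The per-pixel behaviour of the PSC 56 might be summarised in code as follows, under the assumption of exactly two tag buffers; the class and method names are illustrative, not taken from the patent:

```python
class TwoBufferPSC:
    """Pass spawn control over two tag buffers, as described for FIG. 6."""

    def __init__(self, width, height):
        self.buffers = [[[None] * width for _ in range(height)]
                        for _ in range(2)]
        self.current = 0
        self.tsu = []  # tags flushed to the texturing and shading unit 62

    def flush(self, index):
        for row in self.buffers[index]:
            for x, tag in enumerate(row):
                if tag is not None:
                    self.tsu.append(tag)
                    row[x] = None

    def opaque_pixel(self, x, y, tag):
        # Opaque: write to the current buffer and invalidate the
        # corresponding location in the other buffer.
        self.buffers[self.current][y][x] = tag
        self.buffers[1 - self.current][y][x] = None

    def translucent_pixel(self, x, y, tag):
        # Translucent: switch buffers on a conflict; if the other buffer
        # also holds a valid tag at this location, flush it before writing.
        if self.buffers[self.current][y][x] is not None:
            self.current = 1 - self.current
            if self.buffers[self.current][y][x] is not None:
                self.flush(self.current)
        self.buffers[self.current][y][x] = tag

    def translucent_punch_through(self):
        # Translucent punch through: flush all currently valid tags
        # before the pixels proceed to the TSU.
        self.flush(0)
        self.flush(1)
```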
  • When the parameter fetch unit 50 determines that there are no more triangles in the current tile, it signals to the pass spawning unit 56 to flush any remaining valid tags from the tag buffers to the TSU 62. The parameter fetch unit then proceeds to read the parameter list for the next tile and repeats the process until all tiles that make up the final image have been rendered. It should be noted that all of the units, with the exception of the parameter fetch, can be modified to operate on multiple pixels in parallel, thereby speeding up the process.
  • The HSR unit 58 has access to a depth and stencil buffer 64 in which the depth and stencil values for each pixel within each triangle are stored.

Claims (20)

I claim:
1. A method for rendering an image from 3-dimensional object descriptions, comprising:
determining, for a set of pixels in the image, a list of objects that may be visible in a pixel of the set of pixels;
scan converting an object from the list of objects to produce depth and position information for the object at the pixel;
using the depth and position information to determine whether or not the object is visible at the pixel;
storing a tag indicating the identity of the object, if the object is determined to be visible at the pixel, in a tag buffer of a plurality of tag buffers; and
if the tag is overwriting a location in the tag buffer already occupied by another tag, then passing a tag already occupying a location in the plurality of tag buffers to a texturing and shading unit, and passing data for the object indicated by the tag, that was passed to the texturing and shading unit, to the texturing and shading unit.
2. The method for rendering an image from 3-dimensional object descriptions of claim 1, further comprising determining whether the object is translucent and before storing the tag, determining whether a candidate tag buffer location already includes a valid tag, and if so, then cycling to a subsequent tag buffer and repeating the determining until finding a candidate tag buffer location that has no valid tag for the pixel and then storing the tag in that tag buffer, and otherwise performing the overwriting of the location with the valid tag.
3. The method for rendering an image from 3-dimensional object descriptions of claim 1, further comprising determining whether the object is opaque and responsively invalidating tags stored in other tag buffers of the plurality of tag buffers pertaining to that pixel so that the tag is not overwriting a location in the tag buffer that stores a valid tag.
4. The method for rendering an image from 3-dimensional object descriptions of claim 1, further comprising cycling the plurality of tag buffers when storing the tag.
5. The method for rendering an image from 3-dimensional object descriptions of claim 4, wherein the tag that is passed to the texturing and shading unit is from a first tag buffer in the cycle.
6. A method of image rasterization from a 3-D scene description, comprising:
writing a tag to a location in a current tag buffer of a plurality of tag buffers, responsive to determining that an opaque primitive is visible at a pixel, the tag indicating which primitive is visible at that pixel;
processing a translucent primitive by determining whether the translucent primitive is visible, pixel by pixel, for a set of pixels, and if the translucent primitive is visible, then determining if the location in the current tag buffer is occupied, and
if the location in the current tag buffer is occupied then iteratively moving to a next tag buffer, until identifying a tag buffer in which a tag indicating the translucent primitive can be written or
determining that no tag buffer of the plurality of tag buffers can receive the tag indicating the translucent primitive, and
then responsively spawning a pass to process a primitive identified by a tag in the plurality of tag buffers.
7. A system for rendering an image from a 3-D scene description, comprising:
a source of a list of primitives, that are within one or more pixels of the image;
a parameter fetch unit operable to fetch parameters for a primitive in the list of primitives;
a scan converter coupled to receive parameters for the primitive from the parameter fetch unit and to produce depth and position of each primitive for each pixel of the one or more pixels;
a hidden surface removal unit coupled with the scan converter and operable to remove the primitive from further processing with respect to each of the pixels, if the primitive is not visible at that pixel, and otherwise to retain the primitive, based on the depth and the position of the primitive at each of the pixels;
a Texture and Shading Unit (TSU);
a plurality of tag buffers, wherein the tag buffers store tags that each are a piece of data that indicates a particular primitive, and the plurality of tag buffers are operable to be cycled to provide a current tag buffer;
a pass spawn controller (PSC) coupled to receive rasterization state from the parameter fetch unit and coupled with the plurality of tag buffers, the PSC being operable
to determine whether the primitive is translucent or opaque,
if the primitive is opaque then to store a tag indicating the primitive in the current tag buffer of the plurality of tag buffers, and
if the primitive is translucent then to cycle the plurality of tag buffers and check for presence of a valid tag, until reaching a tag buffer without a valid tag, and then storing a tag identifying the primitive in that tag buffer, or after checking each tag buffer and finding that all tag buffers include a valid tag, then flushing at least one tag from the plurality of tag buffers to the TSU and passing data for the object indicated by the tag to the TSU.
8. The system for rendering an image from a 3-D scene description of claim 7, wherein the tag that is flushed is from a first tag buffer in the cycle.
9. The system for rendering an image from a 3-D scene description of claim 7, wherein each of the TSU and the PSC operate on multiple pixels in parallel.
10. The system for rendering an image from a 3-D scene description of claim 7, wherein the one or more pixels are from a screen-space tile, and the hidden surface removal unit operates on all of the pixels of the screen-space tile before moving to a subsequent screen-space tile.
11. The system for rendering an image from a 3-D scene description of claim 7, wherein the hidden surface removal unit is operable to determine whether a punch-through object is to be treated as being opaque, and responsively to pass data for any pixel where the punch-through object is visible to the TSU, and wherein the TSU is operable to feedback data to the hidden surface removal unit.
12. The system for rendering an image from a 3-D scene description of claim 11, wherein the hidden surface removal unit is operable to update depth buffer locations for pixels identified in the feedback data from the TSU and invalidate locations in the plurality of tag buffers relating to those pixels.
13. The system for rendering an image from a 3-D scene description of claim 7, wherein the hidden surface removal unit is operable to determine whether a punch-through object is to be treated as being at least partially translucent and to flush all tag buffer locations relating to pixels where the punch-through object is visible to the TSU.
14. The system for rendering an image from a 3-D scene description of claim 12, wherein the TSU is operable to split state for the punch-through object between state required to determine punch-through state and that required to render the final image.
15. The method of image rasterization from a 3-D scene description of claim 6, wherein the spawning of the pass to process the primitive identified by the tag in the plurality of tag buffers comprises flushing a section of the current tag buffer in which the tag is to be written.
16. The method of image rasterization from a 3-D scene description of claim 6, wherein the spawning of the pass to process the primitive identified by the tag in the plurality of tag buffers comprises flushing only the tag in a location of the current tag buffer in which the tag is to be written, and writing the tag to the flushed location in the current tag buffer.
17. The method of image rasterization from a 3-D scene description of claim 6, wherein the set of pixels are the pixels of a screen-space tile, and further comprising flushing remaining valid tags in the plurality of tag buffers in response to determining that there are no more triangles that may be visible in the screen-space tile.
18. The method of image rasterization from a 3-D scene description of claim 6, further comprising determining whether a punch-through object is to be treated as being opaque, and responsively to pass data for any pixel where the punch-through object is visible to a texturing and shading unit.
19. The method of image rasterization from a 3-D scene description of claim 18, further comprising updating depth buffer locations for pixels identified in feedback data from the texturing and shading unit, and invalidating locations in the plurality of tag buffers relating to those pixels.
20. The method of image rasterization from a 3-D scene description of claim 6, further comprising determining whether a punch-through object is to be treated as being at least partially translucent and responsively flushing all tag buffer locations relating to pixels where the punch-through object is visible to the TSU.
US13/763,974 2003-07-25 2013-02-11 Method and Apparatus for Rendering Translucent and Opaque Objects Abandoned US20130207977A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/763,974 US20130207977A1 (en) 2003-07-25 2013-02-11 Method and Apparatus for Rendering Translucent and Opaque Objects

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
GB0317479A GB2404316B (en) 2003-07-25 2003-07-25 Three-Dimensional computer graphics system
GB0317479.4 2003-07-25
US10/795,561 US20050017970A1 (en) 2003-07-25 2004-03-05 Three-dimensional computer graphics system
US11/787,893 US8446409B2 (en) 2003-07-25 2007-04-18 Method and apparatus for rendering computer graphic images of translucent and opaque objects
US13/763,974 US20130207977A1 (en) 2003-07-25 2013-02-11 Method and Apparatus for Rendering Translucent and Opaque Objects

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/787,893 Continuation US8446409B2 (en) 2003-07-25 2007-04-18 Method and apparatus for rendering computer graphic images of translucent and opaque objects

Publications (1)

Publication Number Publication Date
US20130207977A1 true US20130207977A1 (en) 2013-08-15

Family

ID=27772710

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/795,561 Abandoned US20050017970A1 (en) 2003-07-25 2004-03-05 Three-dimensional computer graphics system
US11/787,893 Active 2027-03-30 US8446409B2 (en) 2003-07-25 2007-04-18 Method and apparatus for rendering computer graphic images of translucent and opaque objects
US13/763,974 Abandoned US20130207977A1 (en) 2003-07-25 2013-02-11 Method and Apparatus for Rendering Translucent and Opaque Objects

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US10/795,561 Abandoned US20050017970A1 (en) 2003-07-25 2004-03-05 Three-dimensional computer graphics system
US11/787,893 Active 2027-03-30 US8446409B2 (en) 2003-07-25 2007-04-18 Method and apparatus for rendering computer graphic images of translucent and opaque objects

Country Status (5)

Country Link
US (3) US20050017970A1 (en)
EP (1) EP1649428B1 (en)
JP (1) JP4602334B2 (en)
GB (1) GB2404316B (en)
WO (1) WO2005015503A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10074210B1 (en) * 2017-07-25 2018-09-11 Apple Inc. Punch-through techniques for graphics processing
US10497085B2 (en) 2017-01-04 2019-12-03 Samsung Electronics Co., Ltd. Graphics processing method and system

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070139430A1 (en) * 2005-12-21 2007-06-21 Microsoft Corporation Rendering "gadgets" with a browser
GB2439129B (en) * 2006-06-12 2008-11-12 Imagination Tech Ltd Parameter compaction in a tile based rendering system
JP4370438B2 (en) * 2007-06-27 2009-11-25 Necシステムテクノロジー株式会社 Vector image drawing apparatus, vector image drawing method and program
GB201004673D0 (en) * 2010-03-19 2010-05-05 Imagination Tech Ltd Processing of 3D computer graphics data on multiple shading engines
KR102101834B1 (en) 2013-10-08 2020-04-17 삼성전자 주식회사 Image processing apparatus and method
GB2520366B (en) * 2013-12-13 2015-12-09 Imagination Tech Ltd Primitive processing in a graphics processing system
GB2520365B (en) 2013-12-13 2015-12-09 Imagination Tech Ltd Primitive processing in a graphics processing system
GB2522868B (en) * 2014-02-06 2016-11-02 Imagination Tech Ltd Opacity testing for processing primitives in a 3D graphics processing systemm
US9760968B2 (en) 2014-05-09 2017-09-12 Samsung Electronics Co., Ltd. Reduction of graphical processing through coverage testing
US9842428B2 (en) * 2014-06-27 2017-12-12 Samsung Electronics Co., Ltd. Dynamically optimized deferred rendering pipeline
US9361697B1 (en) 2014-12-23 2016-06-07 Mediatek Inc. Graphic processing circuit with binning rendering and pre-depth processing method thereof
GB2534567B (en) 2015-01-27 2017-04-19 Imagination Tech Ltd Processing primitives which have unresolved fragments in a graphics processing system
GB2546810B (en) 2016-02-01 2019-10-16 Imagination Tech Ltd Sparse rendering
GB2602027B (en) * 2020-12-15 2024-08-21 Samsung Electronics Co Ltd Display apparatus

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5818456A (en) * 1996-04-30 1998-10-06 Evans & Sutherland Computer Corporation Computer graphics system with adaptive pixel multisampler
JP3495189B2 (en) * 1996-06-19 2004-02-09 株式会社ソニー・コンピュータエンタテインメント Drawing apparatus and drawing method
US5923333A (en) * 1997-01-06 1999-07-13 Hewlett Packard Company Fast alpha transparency rendering method
JPH1166340A (en) * 1997-08-20 1999-03-09 Sega Enterp Ltd Device and method for processing image and recording medium recording image processing program
AU5686299A (en) * 1998-08-20 2000-03-14 Raycer, Inc. Method and apparatus for generating texture
GB2343601B (en) * 1998-11-06 2002-11-27 Videologic Ltd Shading and texturing 3-dimensional computer generated images
GB2343600B (en) * 1998-11-06 2003-03-12 Videologic Ltd Depth sorting for use in 3-dimensional computer shading and texturing systems
WO2001001352A1 (en) * 1999-06-28 2001-01-04 Clearspeed Technology Limited Method and apparatus for rendering in parallel a z-buffer with transparency
GB2355633A (en) * 1999-06-28 2001-04-25 Pixelfusion Ltd Processing graphical data
US6457034B1 (en) * 1999-11-02 2002-09-24 Ati International Srl Method and apparatus for accumulation buffering in the video graphics system


Also Published As

Publication number Publication date
US20070211048A1 (en) 2007-09-13
GB2404316B (en) 2005-11-30
JP4602334B2 (en) 2010-12-22
WO2005015503A3 (en) 2005-04-07
GB0317479D0 (en) 2003-08-27
US8446409B2 (en) 2013-05-21
EP1649428B1 (en) 2019-06-12
GB2404316A (en) 2005-01-26
EP1649428A2 (en) 2006-04-26
JP2006528811A (en) 2006-12-21
WO2005015503A2 (en) 2005-02-17
US20050017970A1 (en) 2005-01-27


Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION