
Lecture 8:

Geometric Queries
Interactive Computer Graphics
Stanford CS248, Winter 2019
Geometric queries — motivation

Intersecting rays and triangles (ray tracing)

Intersecting triangles (collisions)

Closest point on surface queries


Example: closest point queries
Q: Given a point in space (e.g., a new sample point), how do
we find the closest point on a given surface?
Q: Does implicit/explicit representation make this easier?
Q: Does our half-edge data structure help?
Q: What’s the cost of the naïve algorithm?
Q: How do we find the distance to a single triangle anyway?

???



Many types of geometric queries
Plenty of other things we might like to know:
- Do two triangles intersect?
- Are we inside or outside an object?
- Does one object contain another?
- ...

Data structures we’ve seen so far not really designed for this...
Need some new ideas!
TODAY: come up with simple (aka: slow) algorithms
NEXT TIME: intelligent ways to accelerate geometric queries



Warm up: closest point on point
Given a query point (p1,p2), how do we find the closest point
on the point (a1,a2)?

(p1, p2)

(a1, a2)

Bonus question: what’s the distance?



Slightly harder: closest point on line
Now suppose I have a line Nᵀx = c, where N is the unit normal
How do I find the point on the line closest to my query point p?

Many ways to do it — e.g., project p onto the line: x = p + (c − Nᵀp)N



Harder: closest point on line segment
Two cases: endpoint or interior
Already have basic components:
- point-to-point
- point-to-line
Algorithm?
- find closest point on line
- check if it is between endpoints
- if not, take closest endpoint
How do we know if it’s between endpoints?
- write closest point on line as a + t(b − a)
- if t is between 0 and 1, it’s inside the segment!
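The steps above can be sketched directly: compute t for the infinite line, then clamp it to [0, 1]. A minimal sketch (the function name is my own):

```python
def closest_point_on_segment(p, a, b):
    # Write the closest point on the line through a, b as a + t*(b - a),
    # then clamp t to [0, 1] so the result stays on the segment.
    n = len(a)
    ab = [b[i] - a[i] for i in range(n)]
    ap = [p[i] - a[i] for i in range(n)]
    denom = sum(x * x for x in ab)
    if denom == 0.0:               # degenerate segment: a == b
        return list(a)
    t = sum(ap[i] * ab[i] for i in range(n)) / denom
    t = max(0.0, min(1.0, t))      # t outside [0,1] -> closest endpoint
    return [a[i] + t * ab[i] for i in range(n)]
```

The clamp handles both cases at once: interior points keep their projected t, endpoint cases snap to t = 0 or t = 1.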
Even harder: closest point on triangle in 2D
What are all the possibilities for the closest point?
Almost just minimum distance to three line segments:

Q: What about a point inside the triangle?


Closest point on triangle in 3D
Not so different from 2D case
Algorithm:
- Project point onto plane of triangle
- Use half-space tests to classify point (vs. half plane)
- If inside the triangle, we’re done!
- Otherwise, find closest point on associated vertex or edge

By the way, how do we find the closest point on a plane?

Same expression as closest point on a line: p + (c − Nᵀp)N
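That expression translates directly to code; a minimal sketch (function name is mine, N assumed to be a unit normal):

```python
def closest_point_on_plane(p, n, c):
    # Plane: dot(n, x) = c, with n a unit normal.
    # Closest point to p: p + (c - dot(n, p)) * n
    d = c - sum(n[i] * p[i] for i in range(3))
    return [p[i] + d * n[i] for i in range(3)]
```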
Closest point on triangle mesh in 3D?
Conceptually easy:
- loop over all triangles
- compute closest point to current triangle
- keep globally closest point
Q: What’s the cost?
What if we have billions of faces?
NEXT TIME: Better data structures!
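The brute-force loop above can be sketched end to end. The per-triangle step below follows the standard Voronoi-region classification (as in Ericson’s Real-Time Collision Detection) rather than anything specific to these slides; all names are my own:

```python
def closest_point_on_triangle(p, a, b, c):
    # Classify p against the triangle's vertex/edge/face regions.
    def sub(u, v): return [u[i] - v[i] for i in range(3)]
    def dot(u, v): return sum(u[i] * v[i] for i in range(3))
    ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
    d1, d2 = dot(ab, ap), dot(ac, ap)
    if d1 <= 0 and d2 <= 0: return list(a)            # vertex region a
    bp = sub(p, b)
    d3, d4 = dot(ab, bp), dot(ac, bp)
    if d3 >= 0 and d4 <= d3: return list(b)           # vertex region b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:               # edge region ab
        t = d1 / (d1 - d3)
        return [a[i] + t * ab[i] for i in range(3)]
    cp = sub(p, c)
    d5, d6 = dot(ab, cp), dot(ac, cp)
    if d6 >= 0 and d5 <= d6: return list(c)           # vertex region c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:               # edge region ac
        t = d2 / (d2 - d6)
        return [a[i] + t * ac[i] for i in range(3)]
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0: # edge region bc
        t = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return [b[i] + t * (c[i] - b[i]) for i in range(3)]
    denom = 1.0 / (va + vb + vc)                      # interior: project
    v, w = vb * denom, vc * denom
    return [a[i] + v * ab[i] + w * ac[i] for i in range(3)]

def closest_point_on_mesh(p, triangles):
    # O(n) brute force: keep the globally closest candidate.
    best, best_d2 = None, float('inf')
    for (a, b, c) in triangles:
        q = closest_point_on_triangle(p, a, b, c)
        d2 = sum((p[i] - q[i]) ** 2 for i in range(3))
        if d2 < best_d2:
            best, best_d2 = q, d2
    return best
```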



Closest point to implicit surface?
If we change our representation of geometry, algorithms can change
completely
E.g., how might we compute the closest point on an implicit surface
described via its distance function?
One idea:
- start at the query point
- compute gradient of distance (using, e.g., finite differences)
- take a little step (decreasing the distance)
- repeat until we’re at the surface (zero distance)
Better yet: just store the closest point for each grid cell! (speed/memory trade-off)
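The iteration above can be sketched as follows, assuming f is a signed distance function; the step size, tolerances, and function name are my own choices:

```python
def project_to_surface(f, p, step=0.5, eps=1e-5, tol=1e-6, max_iters=1000):
    # Walk from the query point p downhill on |f| until f(p) ~ 0.
    p = list(p)
    for _ in range(max_iters):
        d = f(p)
        if abs(d) < tol:                 # at the surface (zero distance)
            break
        # Finite-difference estimate of the gradient of the distance.
        g = []
        for i in range(3):
            q = list(p)
            q[i] += eps
            g.append((f(q) - d) / eps)
        n = sum(x * x for x in g) ** 0.5 or 1.0
        # Step against the gradient, scaled by the current distance.
        p = [p[i] - step * d * g[i] / n for i in range(3)]
    return p
```

For a true distance function the gradient has unit length, so each step of size `step * d` shrinks the remaining distance geometrically.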
Different query: ray-mesh intersection
A “ray” is an oriented line starting at a point
Think about a ray of light traveling from the sun
Want to know where a ray pierces a surface
Why?
- GEOMETRY: inside-outside test
- RENDERING: visibility, ray tracing
- ANIMATION: collision detection
Might pierce surface in many places!



Ray equation
Can express a point along the ray in terms of its origin o, unit direction d, and “time” t:

r(t) = o + td


Intersecting a ray with an implicit surface
Recall implicit surfaces: all points x such that f(x) = 0
Q: How do we find points where a ray pierces this surface?
Well, we know all points along the ray: r(t) = o + td
Idea: replace “x” with “r” in 1st equation, and solve for t
Example: unit sphere, f(x) = |x|² − 1. Substituting r(t) gives

|o + td|² − 1 = t² + 2(o·d)t + |o|² − 1 = 0

(note |d|² = 1 since d is a unit vector), which the quadratic formula solves for t.

Why two solutions?
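The substitution can be sketched directly; assuming |d| = 1, the quadratic simplifies to t² + 2bt + c = 0 with b = o·d and c = |o|² − 1 (function name is my own):

```python
def intersect_unit_sphere(o, d):
    # Substitute r(t) = o + t d into |x|^2 - 1 = 0 and solve
    #   t^2 + 2 (o . d) t + |o|^2 - 1 = 0        (assumes |d| = 1)
    b = sum(o[i] * d[i] for i in range(3))
    c = sum(x * x for x in o) - 1.0
    disc = b * b - c
    if disc < 0:
        return None                  # ray misses the sphere
    s = disc ** 0.5
    return (-b - s, -b + s)          # entry and exit "times"
```

The two roots are exactly the two solutions the slide asks about: the ray enters the sphere once and exits once.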
Ray-plane intersection
Suppose we have a plane Nᵀx = c
- N - unit normal
- c - offset
How do we find intersection with ray r(t) = o + td?
Key idea: again, replace the point x with the ray equation r(t):

Nᵀ(o + td) = c

Now solve for t:

t = (c − Nᵀo) / (Nᵀd)

And plug t back into the ray equation to get the hit point r(t).
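Those two steps can be sketched in a few lines (function name and the parallel-ray threshold are my own):

```python
def intersect_ray_plane(o, d, n, c):
    # Plane: dot(n, x) = c.  Substitute x = o + t*d and solve:
    #   t = (c - dot(n, o)) / dot(n, d)
    nd = sum(n[i] * d[i] for i in range(3))
    if abs(nd) < 1e-12:
        return None                  # ray parallel to the plane
    t = (c - sum(n[i] * o[i] for i in range(3))) / nd
    if t < 0:
        return None                  # intersection behind the ray origin
    return [o[i] + t * d[i] for i in range(3)]
```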



Ray-triangle intersection
Triangle is in a plane...
Algorithm:
- Compute ray-plane intersection
- Q: What do we do now?



Barycentric coordinates (as ratio of areas)

Any point x in the triangle with vertices a, b, c can be written as

x = a + β(b − a) + γ(c − a) = (1 − β − γ)a + βb + γc = αa + βb + γc,  with α + β + γ = 1

Barycentric coords are ratios of signed areas:

α = A_A / A   β = A_B / A   γ = A_C / A

Why must coordinates sum to one?
Why must coordinates be between 0 and 1?

Useful: area of the triangle formed by points a, b, x:

A_C = ½ (b − a) × (x − a)
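The area-ratio definition can be evaluated directly with cross products. A minimal 3D sketch (function name is mine): projecting each sub-triangle’s cross product onto the full triangle’s normal yields the signed areas, so points outside the triangle get negative coordinates.

```python
def barycentric(x, a, b, c):
    # alpha = A_A / A, beta = A_B / A, gamma = 1 - alpha - beta,
    # with signed areas taken via cross products against the normal.
    def sub(u, v): return [u[i] - v[i] for i in range(3)]
    def dot(u, v): return sum(u[i] * v[i] for i in range(3))
    def cross(u, v):
        return [u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0]]
    n = cross(sub(b, a), sub(c, a))          # 2A times the unit normal
    nn = dot(n, n)
    alpha = dot(cross(sub(c, b), sub(x, b)), n) / nn   # area opposite a
    beta  = dot(cross(sub(a, c), sub(x, c)), n) / nn   # area opposite b
    gamma = 1.0 - alpha - beta                          # coords sum to 1
    return alpha, beta, gamma
```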
Ray-triangle intersection
Algorithm:
- Compute ray-plane intersection
- Q: What do we do now?
- A: Compute barycentric coordinates of hit point?
- If barycentric coordinates are all positive, point is in triangle

Many different techniques if you care about efficiency



Another way: ray-triangle intersection
▪ Parameterize triangle given by vertices p0, p1, p2 using barycentric coordinates:

f(u, v) = (1 − u − v)p0 + up1 + vp2

▪ Can think of a triangle as an affine map of the unit triangle in the (u, v) plane:

f(u, v) = p0 + u(p1 − p0) + v(p2 − p0)


Another way: ray-triangle intersection
Plug parametric ray equation directly into equation for points on triangle:

p0 + u(p1 − p0) + v(p2 − p0) = o + td

Solve for u, v, t — moving td to the left gives a 3×3 linear system:

[ p1 − p0   p2 − p0   −d ] [ u  v  t ]ᵀ = o − p0

Writing the matrix as M, the solution M⁻¹(o − p0) transforms the triangle back to the unit triangle in the (u, v) plane, and transforms the ray’s direction to be orthogonal to that plane. It’s a point-in-2D-triangle test now!
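Solving that 3×3 system with Cramer’s rule gives the classic Möller–Trumbore test; a minimal sketch (function name and the degeneracy threshold are my own):

```python
def ray_triangle(o, d, p0, p1, p2, eps=1e-12):
    # Solve [p1-p0  p2-p0  -d][u v t]^T = o - p0 via Cramer's rule.
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0]]
    e1, e2 = sub(p1, p0), sub(p2, p0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:
        return None                      # ray parallel to triangle plane
    inv = 1.0 / det
    tvec = sub(o, p0)
    u = dot(tvec, pvec) * inv
    if u < 0 or u > 1:
        return None                      # outside in the u direction
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv
    if v < 0 or u + v > 1:
        return None                      # outside in the v direction
    t = dot(e2, qvec) * inv
    return (t, u, v) if t >= 0 else None # hit behind origin -> miss
```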
One more query: mesh-mesh intersection
GEOMETRY: How do we know if a mesh intersects itself?
ANIMATION: How do we know if a collision occurred?



Warm up: point-point intersection
Q: How do we know if p intersects a?
A: ...check if they’re the same point!

(p1, p2)

(a1, a2)



Slightly harder: point-line intersection
Q: How do we know if a point intersects a given line?
A: ...plug it into the line equation!

p
Nᵀx = c



Line-line intersection
Two lines: aᵀx = b and cᵀx = d
Q: How do we find the intersection?
A: See if there is a simultaneous solution
Leads to a 2×2 linear system:

[ a1 a2 ; c1 c2 ] [ x1 ; x2 ] = [ b ; d ]
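The 2×2 system can be solved with Cramer’s rule; a minimal sketch for lines aᵀx = b and cᵀx = d (function name and the parallel-line threshold are my own):

```python
def intersect_lines(a, b, c, d):
    # Stack the two line equations into a 2x2 system:
    #   [a0 a1] [x]   [b]
    #   [c0 c1] [y] = [d]
    det = a[0] * c[1] - a[1] * c[0]
    if abs(det) < 1e-12:
        return None                  # parallel (or nearly parallel) lines
    x = (b * c[1] - d * a[1]) / det  # Cramer's rule
    y = (a[0] * d - c[0] * b) / det
    return [x, y]
```

Note how the determinant test foreshadows the next slide: nearly parallel lines have a tiny determinant, so the solution is numerically unstable.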



Degenerate line-line intersection?
What if lines are almost parallel?
Small change in normal can lead to big change in intersection!
Instability very common, very important with geometric
predicates. Demands special care (e.g., analysis of matrix).

See for example Shewchuk, “Adaptive Precision Floating-Point Arithmetic and Fast Robust Geometric Predicates”
Triangle-triangle intersection?
Lots of ways to do it
Basic idea:
- Q: Any ideas?
- One way: reduce to edge-triangle intersection
- Check where each edge passes through the other triangle’s plane (a ray-triangle test)
- Then do an interval test
What if triangle is moving?
- Important case for animation
- Can think of triangles as prisms in time
- Turns dynamic problem (in nD + time) into purely
geometric problem in (n+1)-dimensions
Ray-scene intersection
Given a scene defined by a set of N primitives and a ray r, find the
closest point of intersection of r with the scene
“Find the first primitive the ray hits”
p_closest = NULL
t_closest = inf
for each primitive p in scene:
    t = p.intersect(r)
    if t >= 0 && t < t_closest:
        t_closest = t
        p_closest = p

(Assume p.intersect(r) returns the value of t corresponding to the point of intersection with ray r)

Complexity? O(N)
Can we do better? Of course… but you’ll have to wait until next class
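The loop above can be made concrete with a toy primitive type. A minimal sketch mirroring the slide’s p.intersect(r) convention (returning t ≥ 0 on a hit, a negative value on a miss); the Sphere class and ray_scene name are my own:

```python
class Sphere:
    def __init__(self, center, radius):
        self.center, self.radius = center, radius

    def intersect(self, o, d):
        # Smallest t >= 0 where the ray o + t*d hits, or -1 on a miss
        # (assumes d is a unit vector).
        oc = [o[i] - self.center[i] for i in range(3)]
        b = sum(oc[i] * d[i] for i in range(3))
        c = sum(x * x for x in oc) - self.radius ** 2
        disc = b * b - c
        if disc < 0:
            return -1.0
        t = -b - disc ** 0.5
        return t if t >= 0 else -1.0

def ray_scene(o, d, primitives):
    # The slide's loop: keep the closest non-negative hit.
    p_closest, t_closest = None, float('inf')
    for p in primitives:
        t = p.intersect(o, d)
        if 0 <= t < t_closest:
            t_closest, p_closest = t, p
    return p_closest, t_closest
```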
Rendering via ray casting:
(one common use of ray-scene intersection tests)



Rasterization and ray casting are two
algorithms for solving the same problem:
determining “visibility from a camera”



Recall triangle visibility:
Question 1: what samples does the triangle overlap? (“coverage”)
Question 2: what triangle is closest to the camera in each sample? (“occlusion”)
The visibility problem
What scene geometry is visible at each screen sample?
- What scene geometry projects onto screen sample points? (coverage)
- Which geometry is visible from the camera at each sample? (occlusion)

[Figure: pinhole camera at (0,0); a scene point (x,z) projects to x/z on the virtual sensor along the −z axis]



Basic rasterization algorithm
Sample = 2D point
Coverage: 2D triangle/sample tests (does projected triangle cover 2D sample point)
Occlusion: depth buffer
initialize z_closest[] to INFINITY   // store closest-surface-so-far for all samples
initialize color[]                   // store scene color for all samples
for each triangle t in scene:        // loop 1: over triangles
    t_proj = project_triangle(t)
    for each 2D sample s in frame buffer:   // loop 2: over visibility samples
        if (t_proj covers s)
            compute color of triangle at sample
            if (depth of t at s is closer than z_closest[s])
                update z_closest[s] and color[s]

“Given a triangle, find the samples it covers”


(finding the samples is relatively easy since they are
distributed uniformly on screen)

More efficient hierarchical rasterization:


For each TILE of image
If triangle overlaps tile, check all samples in tile



The visibility problem (described differently)
In terms of casting rays from the camera:
- Is a scene primitive hit by a ray originating from a point on the virtual
sensor and traveling through the aperture of the pinhole camera?
(coverage)
- What primitive is the first hit along that ray? (occlusion)

[Figure: rays with origin o and direction d cast from the virtual sensor through the pinhole camera at (0,0) into the scene]



Basic ray casting algorithm
Sample = a ray in 3D
Coverage: 3D ray-triangle intersection tests (does ray “hit” triangle)
Occlusion: closest intersection along ray

initialize color[]   // store scene color for all samples
for each sample s in frame buffer:   // loop 1: over visibility samples (rays)
    r = ray from s on sensor through pinhole aperture
    r.min_t = INFINITY   // only store closest-so-far for current ray
    r.tri = NULL
    for each triangle tri in scene:  // loop 2: over triangles
        if (intersects(r, tri)) {    // 3D ray-triangle intersection test
            if (intersection distance along ray is closer than r.min_t)
                update r.min_t and r.tri = tri
        }
    color[s] = compute surface color of triangle r.tri at hit point

Compared to rasterization approach: just a reordering of the loops!


“Given a ray, find the closest triangle it hits.”



Basic rasterization vs. ray casting
Rasterization:
- Proceeds in triangle order (for all triangles)
- Store entire depth buffer (requires access to 2D array of fixed size)
- Do not have to store entire scene geometry in memory
- Naturally supports unbounded size scenes
Ray casting:
- Proceeds in screen sample order (for all rays)
- Do not have to store closest depth so far for the entire screen (just the
current ray)
- This is the natural order for rendering transparent surfaces (process
surfaces in the order they are encountered along the ray: front-to-back)
- Must store entire scene geometry for fast access



In other words…
Rasterization is an efficient implementation of ray casting where:
- Ray-scene intersection is computed for a batch of rays
- All rays in the batch originate from same origin
- Rays are distributed uniformly in plane of projection
(Note: not uniform distribution in angle… angle between rays is smaller
away from view direction)



Generality of ray-scene queries
What object is visible to the camera?
What light sources are visible from a point on a surface (is a surface in shadow?)
What reflection is visible on a surface?

In contrast, rasterization is a highly specialized solution for computing visibility for a set of uniformly distributed rays originating from the same point (most often: the camera)
Direct illumination + reflection + transparency

Image credit: Henrik Wann Jensen


Global illumination solution

Image credit: Henrik Wann Jensen


Direct illumination



Sixteen-bounce global illumination



Shadows

Image credit: Grand Theft Auto V


How to compute if a surface point is in shadow?
Assume you have an algorithm for ray-scene intersection…
[Figure: surface point P with two light sources L1 and L2]


A simple shadow computation algorithm
Trace ray from point P to location Li of each light source
If the ray hits a scene object before reaching the light source… then P is in shadow
[Figure: surface point P with two light sources L1 and L2]


Recall rasterization / ray casting relationship
Rasterization is an efficient implementation of ray casting where:
- Ray-scene intersection is computed for a batch of rays
- All rays in the batch originate from same origin
- Rays are distributed uniformly in plane of projection
(Note: not uniform distribution in angle… angle between rays is
smaller away from view direction)



Shadow mapping: ray origin for rasterization need not be the scene’s camera position [Williams 78]
1. Place camera at position of a point light source
2. Render scene to compute depth to closest object to light along uniformly distributed “shadow rays” (answer stored in depth buffer)
3. Store precomputed shadow ray intersection results in a texture

“Shadow map” = depth map from perspective of a point light
(stores the closest intersection along each precomputed shadow ray in a texture)

Image credits: Segal et al. 92, NVIDIA
Shadow texture lookup approximates visibility result when shading fragment at P
Precomputed shadow rays shown in red: the distance to the closest object in the scene is precomputed and stored in a texture map (“shadow map”)


Shadow mapping pseudocode
(this logic would be implemented in a fragment shader)

Given world-space point Pworld, light position L, and light direction D:
- Transform Pworld into “light space”, defined by the light position at the origin and −Z aligned with D
- Project the transformed point into Pproj
- Look up the value in the shadow map at (Pproj.x, Pproj.y)
- If the value from the shadow map is less than |L − P|, then point P is in shadow
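The final comparison step can be sketched in isolation, assuming the shadow map has already been rendered and is stored as a table from texel coordinates to the closest depth seen from the light; the function name and the bias term (a common guard against self-shadowing “acne”) are my own:

```python
def shadow_lookup(shadow_map, p_proj, dist_to_light, bias=1e-3):
    # shadow_map[(ix, iy)] holds the closest scene depth recorded from
    # the light along that texel's precomputed shadow ray.
    ix, iy = int(p_proj[0]), int(p_proj[1])
    stored = shadow_map.get((ix, iy), float('inf'))
    # P is in shadow if something closer to the light was recorded.
    return dist_to_light > stored + bias
```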
Shadow aliasing due to shadow map undersampling

Shadows computed using shadow map

Correct hard shadows


(result from computing shadow directly using ray
tracing)

Image credit: Johnson et al. TOG 2005
Next time: spatial acceleration data structures
Testing every primitive in scene to find ray-scene intersection
is slow!
Consider linearly scanning through a list vs. binary search
- can apply this same kind of thinking to geometric queries



Acknowledgements
Thanks to Keenan Crane for presentation resources

