Dynamic Weather System for Project: Dark
Kaming Chan
Today, we would like to share details of the weather system for our upcoming mobile game, Project: Dark.
First of all, let me introduce our studio.
More Fun is one of Tencent Games' studios, and it was founded around 2010.
We have made a number of titles based on Chinese and Japanese anime IPs.
We will soon launch a mobile FPS; let's take a look at the trailer.
This is our first realistic shooting title, and we have learnt a lot during the development process.
Our game is based on a heavily customized UE4, in order to bring in a lot of console-grade features.
For example, dynamic global illumination, and in-game climate and time-of-day changes.
All these features aim to make the PvP experience more interesting and attractive.
[click]
First, we will focus on the dynamic sky rendering solution in our weather system.
This includes atmospheric scattering, volumetric clouds, and the related optimizations for bringing them to mobile.
Please note that our techniques also apply to portable gaming consoles like the Nintendo Switch and the Steam Deck.
[click]
After that, we will share some of the weather effects related to the sky.
[click]
If we still have time, we will cover some bonus slides about the design of our
weather system.
First, I will talk about atmospheric scattering and the sky background.
Atmospheric scattering is complex and involves a lot of physics,
so I will try to explain it in a simple way.
Our atmosphere consists of different particles and molecules. The sky color is determined by sunlight being scattered by these particles.
[click] For particles smaller than the wavelength of light, Rayleigh scattering occurs, which favours the blue frequencies, and that's why we have a blue sky.
[click] For particles of a similar size to the wavelength, Mie scattering occurs and all frequencies of light are scattered roughly equally; this makes haze or fog appear white, or pale blue near the bottom of the sky.
[click] For particles that are much larger, non-selective scattering occurs, for example the water droplets or ice crystals that make up clouds. They scatter all frequencies equally, so clouds appear white while the sky is blue.
[click]
To render the atmosphere, we have to ray march along the view direction
(say from point B to A), calculate the amount of scattering reaching each
sampling point P_i,
and then integrate the contributions together.
However, this is too heavy for games and real-time applications, so we usually use
3D / 4D look-up tables to speed up the process.
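As a rough sketch in generic notation (not the exact symbols from the slide), the single-scattered luminance along the view ray and the transmittance it uses can be written as:

L(A, \vec{v}) = \int_{B}^{A} T(A, P)\, \sigma_s(P)\, \phi(\theta)\, T(P, \mathrm{Sun})\, E_{\mathrm{sun}}\, dP
T(a, b) = \exp\!\left( -\int_{a}^{b} \sigma_t(x)\, dx \right)

where T is the transmittance, \sigma_s and \sigma_t are the scattering and extinction coefficients, \phi is the phase function, and E_sun is the sun illuminance. This nested per-pixel integral is what the look-up tables are there to avoid.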
Let's take a look at Unreal Engine's solution.
In UE4, it uses a series of look-up tables to greatly reduce the amount of calculation.
Simply put, it generates four essential LUTs with compute shaders every frame. They correspond to the transmittance LUT,
[Click] the MultiScatter LUT for fast high-order multiple scattering lookups,
[Click] the sky view LUT, which precomputes the color of the distant sky,
and a froxel volume of pre-computed scattering & transmittance values, mainly used by the aerial perspective effect.
[Click] The scene is rendered by looking up these LUTs, and here is an official screenshot.
So we did some optimizations to UE’s implementation:
[Click]
First, we discard the aerial perspective data, since we only care about the distant sky background and we use height fog for our scene instead of aerial perspective.
[Click]
All remaining LUTs are 2D, so we can now use pixel shaders to update them.
This is very important since many mobile devices still have poor compute shader support.
[Click]
Then we spread the evaluation of each LUT across 16 frames, 48 frames in total, so we only evaluate a few pixels per frame, which is acceptable on low-end devices.
This is fine for our game since the time of day changes gradually.
[Click]
To further reduce the amount of calculation, we use a hemi-octahedron
parameterization for the SkyViewLUT and discard everything below the horizon.
This not only saves 50% of the ray marching, but also saves the expensive square root
instruction when looking up the sky color.
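For reference, here is a minimal sketch of the kind of hemi-octahedral mapping we mean (the helper names and exact convention are our illustration, not necessarily the shipped code); encoding a direction into the LUT's UV is pure arithmetic, with no trigonometry and no square root:

// Hemi-octahedral mapping for the upper hemisphere (z >= 0).
float2 DirToHemiOctUV(float3 v)
{
    // Project onto the hemi-octahedron, then onto the xy plane.
    float2 p = v.xy * (1.0 / (abs(v.x) + abs(v.y) + v.z));
    // Rotate the diamond into a square and remap to [0, 1] for the LUT lookup.
    return float2(p.x + p.y, p.x - p.y) * 0.5 + 0.5;
}

float3 HemiOctUVToDir(float2 uv)
{
    float2 e = uv * 2.0 - 1.0;
    // Undo the rotation, then reconstruct z from the octahedron constraint.
    float2 t = float2(e.x + e.y, e.x - e.y) * 0.5;
    return normalize(float3(t.x, t.y, 1.0 - abs(t.x) - abs(t.y)));
}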
[Click]
Surprisingly, we found that these updates are fast enough to be done on the CPU!
So we do that for legacy devices.
Here are some comparisons between the original and optimized versions, at 3 different times of day.
[click]
[click]
[click]
There are some differences around the sun disk, because we use a relatively low resolution for the LUT,
but our artists think this is acceptable,
and the performance is significantly improved, as shown in the next slide.
We can see the hemi-octahedron projection saves nearly 40% of GPU time when rendering the sky.
And here is one small suggestion about the sky intensity.
Sometimes artists may need to dim the sky for more dramatic effects,
and most likely they will first try adjusting the sun intensity directly.
[click] Not only is this physically incorrect, it is also not friendly to our optimization, as it causes flickering.
[click] Let's take a look at this video; the sky color changes like stop motion.
[click] For the sky, we can simply multiply in a sky illuminance factor after the LUT lookup.
[click] For the scene, our system first calculates a scene light intensity
according to the sun angle,
and then we use a separate multiplier for artists to adjust it.
[click] So the flickering is gone, and we get a more reasonable scattering result.
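A minimal sketch of the idea (the variable and helper names are our own illustration): the dimming factor is applied after the cached lookup, so the LUT itself never changes and the amortized update stays stable.

// Sky: dim after the LUT lookup so the cached data stays untouched.
float3 skyColor = SkyViewLUT.SampleLevel(LUTSampler, DirToHemiOctUV(viewDir), 0).rgb * SkyIlluminanceScale;

// Scene: derive a base intensity from the sun angle, then expose a separate artist multiplier.
float sceneLightIntensity = ComputeSceneLightIntensity(sunZenithAngle) * ArtistSceneLightScale; // assumed helpers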
That's all about the atmosphere.
Now RC will continue with the volumetric cloud rendering, modelling and shading.
So here we talk about volumetric cloud.
[click]
First, our work is heavily based on the course from the Horizon team.
So if you’re interested in a more detailed explanation of volumetric cloud, please check that out. It’s a
really great course.
For now, I’ll just do a quick review.
[click]
With this method, to get the cloud in a specific view direction, you launch the view ray
[click]
into the cloud layer and do raymarching inside it.
[click]
For example, these points are the raymarching steps.
In each step, we calculate how much light the current sample point receives.
[click]
We'll talk later about how to calculate this.
After knowing how much light the current sample receives, we need to calculate how
much of it will scatter further towards the camera.
[click]
And third, also very important, such raymarching methods are expected to be
expensive.
[click]
So we’ll talk about how we get it running on mobile phones.
By the way, you may notice that the clouds in the intro video look quite different from this.
That's because the video was recorded quite early, and we've actually improved
the shading many times since.
So here’s the modeling part.
[click]
[click]
So we add a weather map.
It's a 2D texture viewing the scene from the top down.
It covers 40 kilometers in our case, and tiles beyond the edge.
Each of its pixels contains a coverage value, which is directly added to the noise,
so a higher value makes the cloud denser.
It also has a cloud type value, which is used in the next part.
[click]
This is a texture called the cloud profile.
In real life, different cloud types have different shapes over altitude, and the weather
map only covers the XY plane.
We use this cloud profile texture to describe what each cloud type looks like
based on altitude,
so each cloud can have a different shape at different height levels.
Here are some more details about the weather map.
Although the weather map is just a 2D texture,
we don't paint it manually; rather, we use a small system to generate it dynamically.
[click]
In our system, the weather map is made of what we call cloud masks.
[click]
It's a standard Unreal actor that can be placed in the level,
so users can use the built-in transform tool to indicate where the cloud is, as you can see in the image on the right.
[click]
Each cloud mask also has an assigned material to specify the drawing content.
For example, in the image on the right, you can see I'm dragging a cloud mask actor.
This mask is assigned a material that outputs a white color with a sphere mask.
White means higher coverage, so the cloud becomes denser.
We could also output black, so the cloud at that position gets erased.
[click]
Also, to control the clouds conveniently for different weather, we have two
global values, named global coverage and global cloud type,
which are just material parameters passed to the cloud mask material.
So the material can respond to these two global parameters and change its drawing
content accordingly.
[click]
So, to use this system, we would have one big cloud mask covering the whole sky,
rendering a basic weather map. It could use some Worley noise to create some
basic clouds.
[click]
Next, we would use more small cloud masks, like the ones you see in the image, to add or
remove some clouds and make sure the final result looks good.
The next one is the cloud profile.
As I mentioned, it's used for controlling cloud shapes over altitude, for each cloud type.
[click]
In the first two channels, we store the base noise range, which is used for remapping the base noise.
[click]
The B channel stores a density scale, so we can have different density based on altitude.
[click]
And the A channel stores a detail noise flip.
This basically allows you to flip your detail noise, so at 0.0 you get some sharp detail, and at 1.0 you get some bubble-like details.
We chose these 4 parameters because we think they affect the shape the most. Of course you can use different parameters.
[click]
These parameters are packed into a tiny LUT for the shader to use.
For this texture, X is the cloud type and Y is the normalized altitude.
In our case its size is 6 by 16.
[click]
Last, we use the built-in curve tool in Unreal, so users can create the LUT easily in the
editor.
This is what the tool looks like; you can see we have 4 curves, one for each channel. X is the
normalized altitude, and Y is the value.
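As a minimal sketch of how such a profile might be applied during density sampling (the names and the exact remapping here are our assumptions, not the shipped shader):

// Sample the 6 x 16 cloud profile: X = cloud type, Y = normalized altitude within the layer.
float4 profile = CloudProfileLUT.SampleLevel(LinearClampSampler, float2(cloudType, normalizedAltitude), 0);

// RG: remap range for the base noise, B: density scale over altitude, A: detail noise flip.
float baseShape = saturate((baseNoise + coverage - profile.r) / max(profile.g - profile.r, 1e-4));
float detail    = lerp(detailNoise, 1.0 - detailNoise, profile.a);  // flip towards "bubbly" detail as A goes to 1
float density   = saturate(baseShape - detail * DetailErosionScale) * profile.b;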
That’s all about cloud modeling.
Let’s talk about shading.
[click]
Basically, we have the following components for lighting up the cloud:
[click]
The most basic one, single scattering,
[click]
Then ambient,
[click]
And multiple scattering.
[click]
There are also atmospheric scattering and emissive, but they won't be covered here.
That's because, for performance reasons, our game doesn't calculate full atmospheric scattering
on the clouds.
Emissive is also straightforward, so it is not covered.
Before we go into details, we need to know a physical law called the Beer-Lambert law.
For example, in this graph, we shoot a light beam from the sun into the cloud and assume the intensity of
the beam is 1.0.
[click]
What’s the intensity after it reaches the sample point?
[click]
To solve this, we need to know the integral of the extinction coefficient along the segment.
The extinction coefficient is a value that scales linearly with the density of the cloud,
so the denser the cloud is, the less light reaches our sample point.
[click]
This value is what we call the optical depth.
[click]
With the optical depth, we can calculate the transmittance with a simple exp operator, and this is the
Beer-Lambert law.
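Written out in generic notation, the optical depth and the resulting Beer-Lambert transmittance along a segment from a to b are:

\tau(a, b) = \int_{a}^{b} \sigma_t(x)\, dx
T(a, b) = e^{-\tau(a, b)}

where \sigma_t is the extinction coefficient, proportional to the local cloud density.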
[click]
The second use is when we calculate how much light is scattered towards the camera:
as the light needs to pass through the cloud from the sample point,
it gets attenuated along the path.
For single scattering, where the light comes directly from the sun,
[click]
we calculate the energy by multiplying the sun light, the shadow term, and the phase function.
[click]
For the shadow, we use 4 samples to calculate the optical depth,
and we distribute the samples in a squared-distance pattern,
[click]
like in the image on the right,
so close occlusion is evaluated more accurately.
Also, when sampling, the detail noise is not considered, and we use a higher LOD for every sample.
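A minimal sketch of this shadow ray, assuming a hypothetical coarse density helper and illustrative constants:

// 4-tap optical depth towards the sun; sample distances follow a squared pattern (1, 4, 9, 16) / 16,
// so nearby occluders get shorter, more accurate segments.
float opticalDepth = 0.0;
float prevDist = 0.0;
[unroll]
for (int i = 1; i <= 4; ++i)
{
    float dist = ShadowRayLength * float(i * i) / 16.0;
    float3 p   = samplePos + sunDir * 0.5 * (prevDist + dist);             // sample at the segment midpoint
    // Coarse sample: no detail noise, increasing mip level per tap (assumed helper).
    opticalDepth += SampleCloudExtinctionCoarse(p, /*mipLevel*/ i) * (dist - prevDist);
    prevDist = dist;
}
float sunTransmittance = exp(-opticalDepth);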
[click]
We mix two HG (Henyey-Greenstein) functions to form the final phase function.
HG is a function where
[click]
you basically input the dot product of the view direction and the sun direction, VoL
here, and a parameter g, which controls the overall shape of the function.
[click]
g here is greater than -1 and smaller than 1. When it's 0, the function is
isotropic, meaning it returns the same value for every direction.
Otherwise it becomes sharper, which means light is more likely to keep going
forward when scattered. [click]
For the final phase, we just mix two HG functions with different g values.
The first one has a g value close to 1, which gives sharp forward
scattering;
this simulates the silver lining effect you see on backlit
clouds.
The second has a negative value close to 0, which keeps clouds in the opposite direction
from getting too dark.
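Here is a minimal sketch of the Henyey-Greenstein function and the dual-lobe mix; the g values and blend weight are illustrative, not our shipped numbers:

float HenyeyGreenstein(float VoL, float g)
{
    float g2 = g * g;
    return (1.0 - g2) / (4.0 * 3.14159265 * pow(1.0 + g2 - 2.0 * g * VoL, 1.5));
}

// Forward lobe (g near 1) for the silver lining; slight backward lobe so back-lit clouds don't go black.
float DualLobePhase(float VoL)
{
    return lerp(HenyeyGreenstein(VoL, 0.8), HenyeyGreenstein(VoL, -0.15), 0.5);
}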
For the ambient part,
first of all, this is not physical at all;
we just apply a color based on altitude.
[click]
For the sky ambient, it's calculated like this:
basically, higher cloud gets more sky color.
By the way, this is actually a trick from UE4's volumetric cloud, and we find it works pretty well.
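A minimal sketch of the trick, with names of our own:

// Higher samples in the cloud layer receive more sky color.
float heightFrac     = saturate((samplePos.z - CloudLayerBottom) / (CloudLayerTop - CloudLayerBottom));
float3 skyAmbient    = SkyAmbientColor * heightFrac;
// The ground ambient described next uses the same idea with the height inverted.
float3 groundAmbient = GroundAmbientColor * (1.0 - heightFrac);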
[click]
Here we can see the result without the sky ambient.
You can see that when the cloud is mostly shadowed, it's hard to tell the shape;
everything is just the same color.
[click]
And with it on,
it adds a nice blue tint in the shadowed parts,
and the overall shape is much clearer.
We also have a ground ambient, for light coming from the ground.
This is calculated the same way, but with the height inverted,
so lower cloud gets more ground color.
The ground color is calculated by treating the ground as a pure-color Lambertian surface and computing its
reflection of the main light source.
[click]
Here we can see the result without the ground ambient;
you can see the bottom of the cloud is a bit too dark.
[click]
And with ground ambient on.
Now the bottom is brighter,
and it’s closer to what we would see in daily life.
Next is multiple scattering,
where part of the light bounces more than once.
It is essential for a correct cloud appearance.
Because clouds are so white, when a light beam hits a cloud particle, most of it bounces away and
keeps travelling inside the cloud rather than being absorbed.
This effect becomes stronger the deeper you go into the cloud,
and it leads to some very counter-intuitive behaviour.
You can see in this photo that the inner cloud, which you would believe to be more shadowed, is actually
brighter than the outer one.
So, for simulating this effect, we had some expectations.
[click]
First, it should be somewhat physical.
We can have some parameters for artists, but we need something solid for artists to start with.
[click]
Second, it should be able to brighten clouds that are not too deep.
[click]
Third, we should see dark edges on the cloud surface.
[click]
And last, we don't want to add too many parameters.
[click]
In our final solution, we have three parts for the multiple scattering:
a multiple scattering approximation, which is used for calculating a more correct
overall brightness;
an in-scattering probability, for creating the dark edges on the cloud;
and an artist scale for more control.
So, for the first part,
[click]
we use a method proposed by Wrenninge for offline rendering, which was then adapted by Hillaire to
real-time volumetric cloud rendering.
And here’s a quick review of it.
[click]
First, the scattering contribution is scaled down for each successive octave;
second, the extinction used for shadowing is scaled down as well, so light reaches deeper;
and third, the phase function is made more isotropic by moving the g value towards 0.
[click]
In the Frostbite and Unreal implementations, at most 3 octaves are used.
We made a small change: we fix the hyper-parameters of the method, so we can precompute a
LUT ahead of time, with any octave count we want.
The LUT is calculated under unit illuminance
and is indexed by optical depth and sun-dot-view.
So at runtime, after calculating single scattering we already have the optical depth, and we can then index into the
LUT to get the multiple scattering.
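A minimal sketch of the octave sum baked into the LUT (the falloff names are ours, and the HG helper is the one sketched earlier; following Wrenninge, the scattering falloff should not exceed the extinction falloff for energy conservation):

// Evaluated offline for each (opticalDepth, VoL) LUT entry, under unit illuminance.
float MultiScatterOctaves(float opticalDepth, float VoL)
{
    float result = 0.0;
    float a = 1.0, b = 1.0, c = 1.0;
    for (int i = 0; i < NUM_OCTAVES; ++i)
    {
        // Octave i: attenuated scattering, lighter shadowing, more isotropic phase.
        result += a * exp(-b * opticalDepth) * HenyeyGreenstein(VoL, BaseG * c);
        a *= ScatterFalloff;     // e.g. 0.5
        b *= ExtinctionFalloff;  // e.g. 0.5 (keep ScatterFalloff <= ExtinctionFalloff)
        c *= PhaseFalloff;       // e.g. 0.5
    }
    return result;
}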
And here’s a quick comparison
This is without multiple scattering.
You can see that with only single scattering,
the light can't reach deep inside the cloud,
and the cloud looks more like smoke.
And this is with multiple scattering enabled.
It's apparent that the overall brightness is now more accurate, more like clouds in real life.
But we still don't have the dark edge effect.
So, to add the dark edge effect,
we use a method similar to the one in Horizon.
[click] We have this magic code: basically, it takes a low-detail (LOD) density sample and outputs a 0-1 value for the
in-scattering intensity.
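We can't reproduce the exact snippet here, but a sketch in the spirit of that approach looks like this (a "powder"-style term driven by a coarse density sample; the constant is illustrative):

// Returns a 0..1 in-scattering probability: thin or edge regions (low surrounding density)
// scatter less towards the eye, which darkens the cloud edges.
float InScatterProbability(float lodDensity) // coarse, low-mip density around the sample
{
    return saturate(1.0 - exp(-lodDensity * PowderStrength)); // e.g. PowderStrength around 8
}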
Here's a quick comparison.
This is without the in-scattering estimation.
Notice the circled parts;
it's hard to tell the shape there.
And here we add the in-scattering estimation.
Now you should notice all the cloud edges are darker, which is expected,
and the circled part also has more detail now.
So that’s everything we need, to calculate scattering at each sample point.
The next part is about how to integrate these samples together. Let’s look at the code.
[click]
In the raymarching shader, we start with initial values:
the final scattering, which is the final color we see, is zero,
and the transmittance to camera, meaning the transmittance between the camera and the sample point,
is 1.0, since we haven't started raymarching yet.
As we raymarch through the cloud, the scattering value grows and the transmittance shrinks.
Then there is a for loop for stepping through the cloud.
[click]
In the loop body, we first calculate the extinction coefficient, which corresponds to our modeling
part,
and the final energy, which is the sum of all our scattering parts.
[click]
Then we accumulate the final scattering like this:
the scattering the camera actually receives is the product of these parts.
[click]
Then we update the transmittance to camera by multiplying in the transmittance of the current
segment.
And that's it.
[click]
But this integration has a problem: the result can be inconsistent depending on whether the
scattering or the transmittance is updated first.
In the end we used the method proposed by Hillaire in his great course, which gives a
consistent result. Be sure to check it out.
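Putting it together, here is a sketch of the loop with Hillaire's analytic per-step integration substituted in (the helper names are ours):

float3 finalScattering       = 0.0;
float  transmittanceToCamera = 1.0;

for (int i = 0; i < NumSteps; ++i)
{
    float3 p          = rayStart + rayDir * stepLength * (i + 0.5);
    float  extinction = SampleCloudExtinction(p);                          // from the modeling part
    // In-scattered radiance at p, already multiplied by the scattering coefficient:
    float3 luminance  = SingleScatter(p) + Ambient(p) + MultiScatter(p);

    // Energy-conserving integration of the scattering over this step (Hillaire):
    float  stepTransmittance = exp(-extinction * stepLength);
    float3 integratedScatter = (luminance - luminance * stepTransmittance) / max(extinction, 1e-6);

    finalScattering       += transmittanceToCamera * integratedScatter;
    transmittanceToCamera *= stepTransmittance;
}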
KM will continue with the optimization techniques in our solution.
Thank you, RC. As he explained, rendering a realistic cloudscape involves a large amount of calculation in
each step of the ray marching.
[Click]
For example, we need 32 to 64 steps for each ray, and every step involves many texture reads
and ALU instructions.
It is just too heavy to do this at 1080p in screen space on mobile.
[Click]
Again, our friend the hemi-octahedron comes to the rescue.
We project the sky with a hemi-octahedron mapping, and cache all the ray marching results in a 512 x
512 2D texture.
[Click]
Since the weather conditions and the sun direction change gradually in our game, we can split
the cache update across multiple frames.
[Click]
In every frame, we use the cached results to composite the cloud scattering with the
sky.
So, what are the advantages?
[Click] First, it is view-location and FOV independent, so we can reuse the cache for any dynamic
reflection captures.
[Click]
For example, our game uses planar reflections for water surfaces, and we don't need to do extra ray
marching for the reflected view.
[Click]
Another advantage is that the planar movement of the clouds is relatively slow in octahedron space, so reprojection
works pretty well with it.
We can effectively restrict the amount of calculation to avoid overheating the mobile GPU.
Inspired by previous checkerboard rendering techniques, we can save 50% of the rays that need
to be calculated per frame.
[Click] Firstly, we have one full-sized render target called R, which contains the resolved results.
[Click]
We split the pixels of R into two half-sized render targets called E and O, which store the even and odd pixels
of each row in R.
We perform ray marching into E or O in a round-robin manner.
[Click] By doing so, we can update only half of the pixels without any stencil masking, and we can
precisely control how much the GPU writes back to main memory.
[Click] And we use this piece of code to convert the SvPosition into ray marching
direction.
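We can't show the exact snippet, but a sketch of the idea looks like this (the helper names and the even/odd convention are our assumptions):

// Rebuild the full-resolution cache pixel from the half-sized E / O target,
// then decode its hemi-octahedral UV into a ray marching direction.
float2 halfPixel    = floor(SvPosition.xy);
float2 fullResPixel = float2(halfPixel.x * 2.0 + (bOddPass ? 1.0 : 0.0), halfPixel.y);
float2 uv           = (fullResPixel + 0.5) / CacheResolution;   // e.g. 512 x 512 cache
float3 rayDir       = HemiOctUVToDir(uv);                       // decode sketched earlier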
After we finish updating either E or O, we resolve its results into R immediately, so they are
available for display.
[click] We simply copy the results, and this is actually good enough.
If there is any difference between the original and the reconstructed position, it means this location
doesn't belong to the input.
[Click] And this close-up video shows the checkerboard update and resolve in action.
[wait 5 seconds]
It is hard to notice any artifacts.
We can further restrict the amount of calculation by using slicing.
[click] The idea is simple: we use a scissor rectangle to constrain which portion of the render target will
be updated,
as you can see from the green rectangle in the screenshot.
[click] Each render target is split into 4 to 16 slices, and we only update one slice per frame.
[Click]
For example, say each target has 4 slices, shown as 4 rows per target,
so it will take 8 frames to update both E and O.
[Click] We start with E, and we only ray march one row per frame.
[click] [click] [click] At this moment, the update of E is completed, so we
resolve it into R, and its content will be displayed on screen.
[Click]
However, the performance gain from slicing doesn't come for free!
[Click] Slicing causes stop-motion-like cloud movement, as shown in this video.
To solve this problem, we can interpolate the cache sampling direction for the skybox.
[Click] Assume we are looking at point B in the current frame, and we are now 1 frame past the last
update cycle.
[Click] Since the amount of cloud movement is known, we can trace backward and find point C.
As we stated earlier, 32 to 64 steps are needed for each ray in order to render a beautiful cloudscape,
so we would like to optimize a bit further to save more GPU power for other effects.
[click] In each frame, we apply a global offset to every ray, and this offset is updated after each ray
marching cycle.
Then we blend the incoming result with the history stored in the cache.
[Click]
By doing so, we are virtually evaluating many samples for each ray over time,
and the results usually converge within several frames.
[click] In our game, 16 steps per frame can achieve a good result.
However, ghosting will appear when the cloud is moving too fast (say 100 m/s).
Again, we can apply reprojection to the direction used for looking up the history value.
[Click] During the ray marching process, we calculate a weighted center for each ray according to the
cloud transmittance.
[Click] Then we subtract the cloud movement from the weighted center and get point P.
[Click] Subtracting the viewer location from P roughly gives the previous ray marching
direction.
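A minimal sketch of this reprojection, with names of our own:

// Accumulated during ray marching: a transmittance-weighted average sample position.
weightedPosSum += samplePos * stepWeight;   // e.g. stepWeight = transmittanceToCamera * (1 - stepTransmittance)
weightSum      += stepWeight;

// After the march: reproject the history lookup direction.
float3 cloudCenter = weightedPosSum / max(weightSum, 1e-6);
float3 prevPoint   = cloudCenter - CloudMovementSinceLastUpdate;   // subtract the known cloud movement
float3 historyDir  = normalize(prevPoint - CameraPosition);        // roughly the previous ray marching direction
float2 historyUV   = DirToHemiOctUV(historyDir);                   // where to read the cached history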
[Click] And here is the result of applying the reprojection; we can see the ghosting is
greatly reduced.
Besides discussing how to reduce the amount of calculation, we would also like to share how we minimize
the required memory storage and bandwidth,
since both of them are important on mobile.
[Click] Normally we use half floats to store the raymarching results, which takes 2MB for a 512 x 512 cache.
[Click] This is acceptable for mainstream devices.
[Click] However, half floats are not so friendly on older devices, so we have to consider using an 8-bit
RGBA format, which is not trivial:
[Click] first, we use physically based lighting, so all inputs and outputs are in high dynamic range;
[Click] secondly, there is a very strong phase function peak towards the sun direction, which makes the
situation worse.
[Click] Meanwhile, we also need to consider numerical stability for the
temporal up-sampling.
So we came up with a "normalization" trick that is applied to the scattering outputs.
Firstly, we divide the scattering by a phase term, which lowers the peak value.
[Click]
Please note that we are not dividing directly by the raw phase function, but by a blend with the
isotropic version;
this avoids over-compressing the shadowed pixels around the peak.
[Click]
43
[Click]
Finally, we do a gamma 2.2 encode.
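A rough sketch of the encode and decode, with our own parameter names (the extra luminance scale is our assumption):

// Encode HDR scattering into an 8-bit RGBA cache entry.
float  isotropicPhase = 1.0 / (4.0 * 3.14159265);
float  phaseNorm      = lerp(isotropicPhase, DualLobePhase(VoL), PhaseNormBlend);                 // blend towards the isotropic phase
float3 encoded        = pow(saturate(scattering / (phaseNorm * MaxEncodedLuminance)), 1.0 / 2.2); // gamma 2.2 encode

// Decode when resolving / compositing:
float3 decoded = pow(encoded, 2.2) * phaseNorm * MaxEncodedLuminance;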
And here are some performance numbers measured on an iPhone 11 with different scalability profiles.
In addition, we also measured the performance on a GTX 1070 graphics card with the same test
scene.
For the non-optimized version, the ray marching takes 10ms on mobile.
For the highest quality on mobile, we just divide the cache into 4 slices, and it takes 0.6ms on mobile and
only 0.09ms on desktop.
We double the slice count for the middle quality, and the cost scales down accordingly on mobile.
For the lowest quality, which will be used on legacy devices, we turn on HDR compression and reduce
the cache size to 256 x 256.
It takes 220us, and we can see the effect of the HDR compression when comparing to the non-compressed
version.
For the resolve pass, it takes 100us on mobile and only 0.01ms on desktop, which
is negligible.
Now, let's move on to some dynamic weather effects in Project: Dark.
The first one is cloud shadows.
Since we have already computed the cloud transmittance for the whole sky, we can project it onto the
ground for a cloud shadow effect.
The idea is simple: first, we trace a ray from the shading location towards the sun.
[Click]
Then we find the intersection of this ray with the bottom of the cloud layer.
[Click]
After that, we subtract the earth's center from this point to get our sampling direction.
[Click]
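A minimal sketch of this lookup, assuming a generic ray-sphere intersection helper and that the cache stores transmittance in its alpha channel:

// Intersect a ray from the shading point towards the sun with the bottom of the cloud layer.
float  t      = RaySphereIntersect(shadingPos, SunDirection, EarthCenter, CloudLayerBottomRadius); // assumed helper
float3 hitPos = shadingPos + SunDirection * t;

// Subtract the earth's center to get the direction used to index the cached sky transmittance.
float3 lookupDir   = normalize(hitPos - EarthCenter);
float  cloudShadow = CloudCache.SampleLevel(LinearClampSampler, DirToHemiOctUV(lookupDir), 0).a;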
The next part is rain.
Here's a short clip we recorded in the editor.
It contains the rain particles and the material wetness effect.
[wait finish and click]
[click]
So the rain itself is rendered using particles.
We’ve measured that on GPU, it’s far more performant than using a spindle.
You can see the test result on the right.
[click]
And the surface wetness is created by adjusting the material parameters, using the method from
Waylon.
But we have currently removed the puddle and ripple effects, because they are costly on mobile.
[click]
We also support fast occlusion map updates by making use of CSM scrolling.
So the occlusion map is rendered at runtime,
and if it drifts too far from the camera center, it scrolls itself and reuses any existing part where possible.
[click]
However, there are still some major problems.
Firstly, the occlusion sampling is pretty heavy:
if we want smooth edges, we need 4-tap PCF. We're considering using ESM,
which can be pre-filtered.
Also, some materials in the game don't have specular, for example the tree
you just saw in the video. And if the graphics settings are low, all the materials are set
to be fully rough and have no specular.
There are many other problems as well. For example, if it's raining, then there is little sun
light and the shadows are gone, so the whole scene looks too
simple; without AO, players just think it's ugly. So we also have to do
something about that.
And what about reflections? Since everything becomes smooth, without correct
reflections it just feels wrong, and we don't have SSR for now.
Next is lightning.
We use real-life references to make the lightning effect as realistic as possible,
and I'd like to share some key points for making realistic lightning in real-time rendering.
The animation on the right is taken from a YouTube channel called The Slow Mo Guys. It clearly shows the
lightning process.
[click]
First is the lightning leader,
which is the spider-web-like path growing from the cloud to the ground.
In real life this is barely visible to human eyes because it's so fast,
but it's cool, so we decided to implement it.
This stage creates what we call the "lightning channel" when it reaches the ground.
Once the channel is created, a huge current flows through it,
[click]
and this is what we call a return stroke.
The stroke heats up the air to a very high temperature, causing the flash and
thunder.
[click]
After the return stroke, there can be some re-strokes:
strokes that happen after the return stroke, following the same channel.
These are usually smaller than the return stroke and happen 3 to 4 times on average,
creating the flickering light you would see.
To create the lightning, we need to model it.
Here we use a fractal algorithm to create the lightning channel.
The algorithm is straightforward: we start with a straight line from start to end.
[click]
Then we pick its middle point as a new node and offset it within the plane perpendicular to the original line.
[click]
Then we just repeat this process recursively on the child segments, until the segments are short enough.
But that's only one branch. We need a spider-web-like structure to simulate the lightning leader
process.
To create branches, we do a random branching test while splitting.
For example, here is the splitting result from the last page.
Assume we have just split the orange segments.
[click]
If the random test passes, we create a new node at the end,
and its parent is the current splitting node,
so the new segment is the red one.
[click]
Then we just shorten it by some percentage and rotate it away from the original direction by some
degrees.
[click]
And we do the recursive splitting on the created branch as well.
Then we convert the result into a quad-list mesh,
and here is the result.
[click]
We also store some info in the vertex color.
The R channel tags whether a vertex belongs to the main branch, so we can toggle showing only the main branch.
The G channel stores a normalized distance from the lightning start; we use this for the cloud-to-ground
growing animation.
[click]
And here's how it looks when animated. I've slowed down the speed of the lightning leader so we can
see it clearly.
So here's how it looks in the game,
and I'll also share how the scene and the clouds are lit up.
[click]
For lighting up the scene,
we simply boost the main light
and calculate the intensity with a squared falloff, using the distance between the camera and the lightning.
So the shadows are not correct, but it happens really fast, so it is hard to notice.
[click]
For lighting the clouds,
we add the lighting while rendering the skybox;
the intensity is based on an exponential falloff.
The cloud position is calculated by tracing against the cloud layer, so the result is not one hundred percent
accurate, but it is still good enough for mobile.
[Check time]
So in the next part,
I'd like to share some extra things about user experience and software engineering in our system.
First is how we define a weather in our system.
Normally we would think we could just have a parameter struct containing all the weather
parameters,
for example cloud coverage, fog density, etc.
Then a different weather is just a different group of values.
But in our case, we want our users to have as much flexibility as possible,
so the weather parameters are not hard-coded.
[click]
Instead, users can directly bring any property under control by using a level sequence.
The sequence is evaluated based on the time of day or the sun angle, depending on the user setting,
so these weather parameters can change with time.
[click]
Second, we allow multiple sequences to work at the same time, by layering them in an order.
Values in a higher sequence override those in a lower sequence.
Each layer also has an opacity value, so by controlling the opacity you can
fade the sequence in and out.
In this way, users can group weather elements into different sequences,
for example one layer for cloud and one layer for rain, and then combine the final
weather using different opacities.
[click]
It should be noted that level sequences by default don't support blending operations;
we did some coding to parse the level sequences and evaluate the values
manually.
So finally, a weather in our system is just a preset of layer opacity values.
[click]
[click]
First, don't try to create an out-of-the-box solution.
Each project has its own very specific needs.
For example, not every project needs volumetric clouds and a physically based atmosphere,
and some projects may have special requirements for materials, such as every material needing to
plug in some material function, etc.
If you're building an out-of-the-box solution, such requirements force you to maintain a branch for
each project.
And in Unreal, materials and blueprints are hard to version-manage since they're binary assets; maintaining
multiple branches will definitely be problematic.
[click]
We ended up with a solution that tries to decouple all the features.
All the features are separated into actors, components, or material functions,
and users can just assemble these pieces in the editor, using the blueprint editor or the
material editor.
This means more setup work, but much less headache in the future.
We also have what we call an "example" setup, which contains as many features
as possible, so users can follow the example to get started.
Today we have shared how our weather system renders a realistic sky and some basic weather effects.
But it is not the end!
Last but not least, we would like to say thank you to MoreFun for providing us with a lot of support,
and especially to our technical director, Milo, for giving us many good suggestions while preparing this
presentation.
Here are the references, and we have tried our best to list all of them.
We are also working on various areas of game technology, such as GPU-driven rendering, real-time
global illumination, fluid simulation, and physical animation.
If you are interested, please visit our website, send e-mails to KM, or simply scan this QR code.
Thank you very much, and please remember to fill in the feedback for our
session.