As most know, Unity has a built-in LOD (level of detail) system. When added to a root model, it can be used to swap renderers based on the size of the model on the screen. For most cases this works fine -- and increasing the scale of the object will automatically make the transition happen from farther away. This also means that smaller objects, such as rocks on the ground, will fade out much sooner than, say, entire buildings. Makes sense, and it's pretty easy to fine-tune the fade distance while in the editor.
But wait, what if the objects have to fade out at the same distance? What if you have a complex building made up of several parts -- the foundation, the walls, and a bunch of platforms on top, like just about any structure in Sightseer? With Unity's LOD system, there are two immediate issues. First, Sightseer's renderers are not available until they are dynamically generated at run-time: a bunch of smaller objects get merged together into one larger one in order to save on draw calls and CPU overhead (culling), so the final dimensions aren't known in advance, and it's not possible to fine-tune the fade distance. Second, since Unity's LOD is based on the final object's dimensions rather than distance, renderers of varying sizes will fade out at different times.
I noticed it right away in Sightseer with trees even before player outposts were introduced. Trees are split into groups by fixed size cells, and all the trees inside each cell are merged into draw calls. Some cells may be full of trees, while others can only have a couple. Since the dimensions of the final renderer vary greatly, this caused some groups of trees to fade in at the horizon, while others wouldn't appear until the player got very close, even though they were adjacent to each other in the world.
The issue only got worse when player outposts were introduced. Player outposts are made from dozens and sometimes even hundreds of small objects -- foundations, walls, and many other types of props -- and Sightseer's code groups them together by material, then merges them into the fewest draw calls possible (on a separate thread so as not to impact performance). The end result: a variety of renderer sizes, all of which should fade in and out together. With Unity's LOD system that simply wasn't possible. I had player outposts appear piece by piece as I drove towards them -- often with objects on top of foundations appearing to float in mid-air. Not good.
Another issue I ran into with LODGroup is that since it's based on the size of the object on the screen, as the camera moves around the vehicle in 3rd-person view, or zooms in and out, objects near the player's vehicle would swap their LOD levels, or even fade in and out. This is not ideal for Sightseer, and I imagine for other 3rd-person games as well. Objects fading in and out while the camera moves around a stationary vehicle looks jarring at best. Furthermore, it hurts performance, as the LOD checks have to be performed all the time. It's actually the same issue I ran into with Unity's grass, but more on that in a separate post.
At first, I experimented with hacking the LODGroup to work based on distance. I tried adding it before the renderers were generated, and was actually successful in getting the trees to fade out when I wanted them to. Unfortunately, the same trick didn't seem to work with the player outposts. I never did figure out why...
Eventually I decided to write my own system. The most basic example of a LOD system is to have a script on the renderer that checks the distance between the player's avatar and the object itself, and enables/disables the renderer based on that. It's simple and controllable -- but of course this basic approach doesn't include any kind of fading in or out.
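For illustration, here is that basic approach sketched in Python rather than as a Unity script -- the vector math stands in for Vector3.Distance, and "fade_distance" is a hypothetical per-object threshold:

```python
import math

# A sketch of the most basic distance-based LOD check, in Python rather than
# as a Unity MonoBehaviour. "fade_distance" is a hypothetical per-object value.
def renderer_enabled(player_pos, object_pos, fade_distance):
    """Return True if the renderer should be active at this distance."""
    dx = player_pos[0] - object_pos[0]
    dy = player_pos[1] - object_pos[1]
    dz = player_pos[2] - object_pos[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= fade_distance
```

In practice you'd compare squared distances to skip the square root, and add a bit of hysteresis so objects sitting right at the threshold don't flicker on and off.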
As I delved into Unity's code that handles fading between different LOD levels (the same code that fades between renderers), I actually discovered another downside of Unity's LOD system: it requires a texture! Behold, the ApplyDitherCrossFade function from the UnityCG include file:
void ApplyDitherCrossFade(half3 ditherScreenPos)
{
    half2 projUV = ditherScreenPos.xy / ditherScreenPos.z;
    projUV.y = frac(projUV.y) * 0.0625 /* 1/16 */ + unity_LODFade.y; // quantized lod fade by 16 levels
    clip(tex2D(_DitherMaskLOD2D, projUV).a - 0.5);
}
As you can see, it samples a dithering texture in order to calculate dithering -- something that is trivial to do in code instead. With the number of texture samplers limited to 16 in total, that actually hurts quite a bit. Although to be fair, I'm guessing most games won't run into this particular limitation.
When working on my own LOD system I decided to simply add LOD support to all of Tasharen's shaders. Sightseer doesn't use Unity's shaders due to some issues explained in a previous post, so adding dithering was a trivial matter -- but let's go over it step by step.
First, we need a function that will compute the screen coordinates for dithering. This is Unity's ComputeDitherScreenPos function from UnityCG.cginc:
half3 ComputeDitherScreenPos(float4 clipPos)
{
    half3 screenPos = ComputeScreenPos(clipPos).xyw;
    screenPos.xy *= _ScreenParams.xy * 0.25;
    return screenPos;
}
That function accepts the clip-space vertex position -- something everyone already calculates in the vertex shader:
o.vertex = UnityObjectToClipPos(v.vertex);
Simply save the coordinates, passing them to the fragment shader:
o.dc = ComputeDitherScreenPos(o.vertex);
The next step is to take these coordinates in the fragment shader, do some magic with them and clip() the result, achieving a dithering effect for fading in the geometry pixel by pixel.
void DitherCrossFade(half3 ditherScreenPos)
{
    half2 projUV = ditherScreenPos.xy / ditherScreenPos.z;
    projUV.xy = frac(projUV.xy + 0.001) + frac(projUV.xy * 2.0 + 0.001);
    half dither = _Dither - (projUV.y + projUV.x) * 0.25;
    clip(dither);
}
Instead of using an expensive texture sample like Unity does, I use the frac() function to achieve a similar-looking effect. The only notable part of the entire function is the "_Dither" value -- a uniform that's basically the fade alpha. In fact, you can use the main color's alpha instead to make it possible to fade out solid objects!
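To convince yourself that the frac() trick really behaves like a dither pattern, the same math can be reproduced outside the shader. The sketch below is my own verification in Python, not Unity code: it evaluates the per-pixel threshold over the repeating tile and confirms that the fraction of pixels surviving clip() grows along with _Dither.

```python
def dither_threshold(px, py):
    # Mirrors the shader: projUV is the pixel coordinate divided by 4,
    # and frac(x) behaves like x % 1.0 for non-negative values.
    u = px * 0.25
    v = py * 0.25
    u = (u + 0.001) % 1.0 + (u * 2.0 + 0.001) % 1.0
    v = (v + 0.001) % 1.0 + (v * 2.0 + 0.001) % 1.0
    return (u + v) * 0.25

def visible_fraction(dither):
    # Fraction of a repeating 4x4 pixel tile that survives
    # clip(dither - threshold); clip() discards negative values.
    kept = sum(1 for x in range(4) for y in range(4)
               if dither - dither_threshold(x, y) >= 0.0)
    return kept / 16.0
```

At _Dither = 0 every pixel is clipped, at _Dither = 1 every pixel survives, and the pattern steps through intermediate coverage levels in between -- exactly what a fade needs.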
Here's the entire shader, for your convenience.
Shader "Unlit/Dither Test"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _Dither ("Dither", Range(0, 1)) = 1.0
    }

    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD0;
                float3 dc : TEXCOORD1;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            fixed _Dither;

            half3 ComputeDitherScreenPos(float4 clipPos)
            {
                half3 screenPos = ComputeScreenPos(clipPos).xyw;
                screenPos.xy *= _ScreenParams.xy * 0.25;
                return screenPos;
            }

            void DitherCrossFade(half3 ditherScreenPos)
            {
                half2 projUV = ditherScreenPos.xy / ditherScreenPos.z;
                projUV.xy = frac(projUV.xy + 0.001) + frac(projUV.xy * 2.0 + 0.001);
                half dither = _Dither - (projUV.y + projUV.x) * 0.25;
                clip(dither);
            }

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.dc = ComputeDitherScreenPos(o.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                DitherCrossFade(i.dc);
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}
So how does the fading between two renderers happen, you may wonder? It's simple: both are drawn for the time it takes them to fade in or out. You may think "omg, but that's twice the draw calls!", and while that's true, it's only for a short time, and doing so doesn't hurt the fill rate thanks to the clip(): the pixels drawn by one renderer should be clipped by the other. Here is the modified version of the shader with an additional property, "Dither Side":
Shader "Unlit/Dither Test"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _Dither ("Dither", Range(0, 1)) = 1.0
        _DitherSide ("Dither Side", Range(0, 1)) = 0.0
    }

    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD0;
                float3 dc : TEXCOORD1;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            fixed _Dither;
            fixed _DitherSide;

            inline half3 ComputeDitherScreenPos(float4 clipPos)
            {
                half3 screenPos = ComputeScreenPos(clipPos).xyw;
                screenPos.xy *= _ScreenParams.xy * 0.25;
                return screenPos;
            }

            inline void DitherCrossFade(half3 ditherScreenPos)
            {
                half2 projUV = ditherScreenPos.xy / ditherScreenPos.z;
                projUV.xy = frac(projUV.xy + 0.001) + frac(projUV.xy * 2.0 + 0.001);
                half dither = _Dither - (projUV.y + projUV.x) * 0.25;
                clip(lerp(dither, -dither, _DitherSide));
            }

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.dc = ComputeDitherScreenPos(o.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                DitherCrossFade(i.dc);
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}
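The lerp(dither, -dither, _DitherSide) line is what keeps the two renderers from shading the same pixels. In the simplest case, where both renderers evaluate the same dither value with opposite _DitherSide settings, every pixel passes exactly one of the two clip() tests. Here's a quick Python sketch of that property -- my own verification of the shader math, not Unity code:

```python
def dither_threshold(px, py):
    # Same per-pixel threshold as the shader's frac()-based dither.
    u = (px * 0.25 + 0.001) % 1.0 + (px * 0.5 + 0.001) % 1.0
    v = (py * 0.25 + 0.001) % 1.0 + (py * 0.5 + 0.001) % 1.0
    return (u + v) * 0.25

def survives(dither, side, threshold):
    d = dither - threshold
    value = d * (1.0 - side) + (-d) * side  # lerp(d, -d, side)
    return value >= 0.0  # clip() discards negative values

# True if every pixel in the repeating 4x4 tile is drawn by exactly one side.
def complementary(dither):
    return all(survives(dither, 0.0, dither_threshold(x, y)) !=
               survives(dither, 1.0, dither_threshold(x, y))
               for x in range(4) for y in range(4))
```

For any dither value that doesn't land exactly on one of the pattern's thresholds, complementary() holds -- which is why briefly drawing both renderers doesn't cost extra fill rate.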
For the renderer that's fading in, pass the dither amount and leave _DitherSide at 0. For the renderer that's fading out, pass (1.0 - dither amount), and 1.0 for _DitherSide. I recommend using material property blocks for this. In fact, in Sightseer I wrote an extension that lets me do renderer.AddOnRender(func), where "func" receives a MaterialPropertyBlock to modify:
using UnityEngine;

/// <summary>
/// Simple per-renderer material block that can be altered from multiple sources.
/// </summary>

public class CustomMaterialBlock : MonoBehaviour
{
    Renderer mRen;
    MaterialPropertyBlock mBlock;

    public OnWillRenderCallback onWillRender;
    public delegate void OnWillRenderCallback (MaterialPropertyBlock block);

    void Awake ()
    {
        mRen = GetComponent<Renderer>();
        if (mRen == null) enabled = false;
        else mBlock = new MaterialPropertyBlock();
    }

    void OnWillRenderObject ()
    {
        if (mBlock != null)
        {
            mBlock.Clear();
            if (onWillRender != null) onWillRender(mBlock);
            mRen.SetPropertyBlock(mBlock);
        }
    }
}
/// <summary>
/// Allows for renderer.AddOnRender convenience functionality.
/// </summary>

static public class CustomMaterialBlockExtensions
{
    static public CustomMaterialBlock AddOnRender (this Renderer ren, CustomMaterialBlock.OnWillRenderCallback callback)
    {
        UnityEngine.Profiling.Profiler.BeginSample("Add OnRender");
        var mb = ren.GetComponent<CustomMaterialBlock>();
        if (mb == null) mb = ren.gameObject.AddComponent<CustomMaterialBlock>();
        mb.onWillRender += callback;
        UnityEngine.Profiling.Profiler.EndSample();
        return mb;
    }

    static public void RemoveOnRender (this Renderer ren, CustomMaterialBlock.OnWillRenderCallback callback)
    {
        UnityEngine.Profiling.Profiler.BeginSample("Remove OnRender");
        var mb = ren.GetComponent<CustomMaterialBlock>();
        if (mb != null) mb.onWillRender -= callback;
        UnityEngine.Profiling.Profiler.EndSample();
    }
}
In the end, Sightseer's LOD system grew a lot more advanced, supporting colliders as well as renderers (after all, expensive concave mesh colliders don't need to be active unless the player is near) -- but at its core, the most confusing part was figuring out how to manually fade out renderers. I hope this helps someone else in the future!