Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - ArenMook

1
Dev Blog / July 21, 2017 - Grass
« on: July 21, 2017, 08:12:37 PM »
In Sightseer, a big part of the game is exploring the large procedurally-generated world, so it has to look as nice as possible. Flat terrain generally doesn't, so adding some grass seemed only logical. Since I was still using Unity's terrain system, for all its faults, I decided to see if it could redeem itself by offering some nice-looking grass.

Well... it did, sort of. The final result did look a little better:



But what about looking at it from the top?



If you can't see any grass in that picture, you're not alone. There are two ways of drawing grass in Unity. The first is the approach you see in the picture: the grass simply uses the world's up vector as its own. It looks consistent from the side, but effectively makes the grass invisible when viewed from above. The other approach is to use screen-aligned quads, where the top of the monitor is considered to be "up" regardless of the camera's orientation. This approach is even worse -- not only does the grass turn as the camera rotates / tilts, but it also looks very weird when viewed from above. I'll spare you the pic.

Still, neither of those limitations is as bad as the next one: the performance hit:



Since the grass update is based on the position of the camera, in a game featuring a 3rd-person camera that orbits around the vehicle, such as Sightseer, that grass update happens very, very frequently -- often more than once per second! Predictably, a 300+ ms hiccup every time that happens is simply unacceptable -- and that's with moderately sparse grass.

Ideally I wanted grass to be more dense, like this:



With grass set to that density, the game was spending more time updating grass than on everything else combined.

At this point I found myself wondering just what kind of use case the original developers of this grass system had in mind with it being so stupidly slow. Maybe it was only meant for extremely small worlds, or extremely sparse grass, or both... I don't know. All I know is that it was simply unusable, and that I had to write my own.

And so I did.

So, how can one do super fast grass? Let's start by examining Unity's grass. With it, for each generated terrain, grass information has to be baked in right away for the entire terrain, like a texture -- the same way splat information is passed to it. If a small part of the grass information changes, the entire texture has to be updated. What happens with this data is anyone's guess, as it happens somewhere deep inside Unity and the end developer has no control over it.

Does it have to be this way? Certainly not. First, grass information for anything outside the player's immediate area is completely irrelevant. Who cares what the grass is supposed to look like 1 km away? It's not visible, so it doesn't matter. Second, I don't know why Unity's updates take so damn long to complete, but grass needs to be split into patches (and as far as I could tell, Unity does that -- at least for drawing the grass). As such, patch-based distance checks to determine whether the grass should be updated should be extremely fast. Similarly, there is no need to update a patch unless it goes out of the player's range. When it does, the patch should be repositioned to the opposite side of the visible "bubble" around the player and re-filled. The "bubble" looks like this:



Last but not least, actual placement information for the grass should be based on some data that's available in a separate thread. Since Sightseer sets the terrain heightmap, the base heightmap data can be used as-is. All that the main thread should be doing is updating the VBOs (draw buffers) after the grass has been generated.
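To make that flow concrete, here's a rough sketch of a patch refill using the WorkerThread class described further down this page -- the placement logic, class and field names are simplified placeholders for illustration, not Sightseer's actual grass code.

    using UnityEngine;

    // Hypothetical patch refill: placement happens on a worker thread, the mesh update on the main thread.
    public class GrassPatchExample : MonoBehaviour
    {
        public Mesh mesh;    // this patch's grass mesh
        Vector3[] mVerts;

        public void Refill (float[,] heightmap, Vector3 origin, float cellSize, int bladesPerSide)
        {
            WorkerThread.Create(delegate ()
            {
                // Worker thread: sample the heightmap data and build the placement positions
                mVerts = new Vector3[bladesPerSide * bladesPerSide];
                for (int z = 0, i = 0; z < bladesPerSide; ++z)
                {
                    for (int x = 0; x < bladesPerSide; ++x, ++i)
                    {
                        float h = heightmap[z, x];
                        mVerts[i] = origin + new Vector3(x * cellSize, h, z * cellSize);
                    }
                }
            },
            delegate ()
            {
                // Main thread: push the generated data into the mesh (the actual VBO update)
                mesh.Clear();
                mesh.vertices = mVerts;
                // ...triangles, UVs and colors would be filled in the same way
            });
        }
    }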

Finally, the grass itself shouldn't be based on quads like Unity's -- it should be based on meshes. A simple "bush" of grass made up of 3 quads intersecting in a V-like 3D pattern is the most trivial example. Since it's based on meshes, it's possible to have said meshes be of different shapes, complexity, and most importantly -- size. Furthermore, since it's shaped in a V-like pattern, it looks good even when viewed from above. Of course, since the grass should end up in a single draw call, it's important to have all those meshes use some kind of grass atlas, letting them share the same material.

In the end, it took only a few hours to write a basic grass system, then a couple more days to perfect it (and a couple more weeks of playing the game to iron out weird glitches <_<). The performance difference was obvious within the first few hours, however:



You're seeing that right: 0.2 milliseconds to update grass of much greater density than what was taking Unity's system 300+ milliseconds per frame. I expected as much, which is why I was so surprised that Unity's grass performed so horribly. This is how it looked in the game:



It looks much more dense than what I had with Unity's grass, and is very much visible from above:



In fact, I was immediately curious how the grass would look if I enabled shadows on it and increased its size to make it look even denser:



Very nice indeed, although the shadows are a little too obvious from above:



There is one other thing I did with the grass, and it's pretty important: I colored it based on the underlying terrain. Doing so is simple: I render the terrain into a texture using a top-down camera that's updated when the player moves far enough. This texture is sampled by the grass shader, tinting its normally black-and-white albedo texture with the color of the terrain underneath. This makes the grass blend perfectly regardless of what's underneath -- whether it's the sand-blasted savanna or the lush grassland -- without any need for developer input. In fact, since Sightseer's terrain is fully procedural and smoothly transitions from one biome to the next, this part was just as important to have as performance.
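Here's a small sketch of what that top-down camera setup could look like -- the render texture size, layer name, and global texture name are assumptions for illustration, not the names the actual grass shader uses.

    using UnityEngine;

    public class TerrainColorCameraExample : MonoBehaviour
    {
        public Transform player;
        public float coverage = 200f;       // world-space size the texture covers
        public float updateDistance = 50f;  // re-render after the player moves this far

        Camera mCam;
        RenderTexture mTex;
        Vector3 mLastPos;

        void Start ()
        {
            mTex = new RenderTexture(256, 256, 16);
            mCam = gameObject.AddComponent<Camera>();
            mCam.orthographic = true;
            mCam.orthographicSize = coverage * 0.5f;
            mCam.cullingMask = LayerMask.GetMask("Terrain"); // render the terrain only
            mCam.targetTexture = mTex;
            mCam.enabled = false; // rendered manually, not every frame

            // The grass shader would sample this texture (it would also need the covered
            // area's origin and size to compute its UVs)
            Shader.SetGlobalTexture("_TerrainColorTex", mTex);
        }

        void Update ()
        {
            if ((player.position - mLastPos).magnitude > updateDistance)
            {
                mLastPos = player.position;
                transform.position = mLastPos + Vector3.up * 500f;
                transform.rotation = Quaternion.LookRotation(Vector3.down, Vector3.forward);
                mCam.Render();
            }
        }
    }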

The end result? See for yourself:



All that for 0.2 ms every ~2-3 seconds while driving around.

2
Dev Blog / July 21, 2017 - Custom LOD system
« on: July 21, 2017, 07:06:46 PM »
As most know, Unity has a built-in LOD (level of detail) system. When specified on a root model, it can be used to swap renderers based on the size of the model on the screen. For most cases this is quite fine -- and increasing the scale of the object will automatically make the transition happen from farther away. This also means that smaller objects, such as rocks on the ground, will fade out much sooner than, say, entire buildings. Makes sense, and it's pretty easy to fine-tune the fade distance while in the editor.



But wait, what if the objects have to fade out at the same distance? What if you have a complex building made up of several parts -- the foundation, the walls, and a bunch of platforms on top, like just about any structure in Sightseer? With Unity's LOD system, there are two immediate issues. First, Sightseer's renderers are not available until they are dynamically generated at run-time: a bunch of smaller objects get merged together into one larger one in order to save on draw calls and CPU overhead (culling), so their size is not known in advance and the fade distance cannot be fine-tuned. Second, because Unity's LOD is based on the final object's dimensions rather than distance, objects of varying sizes will fade out at different times.

I noticed it right away in Sightseer with trees, even before player outposts were introduced. Trees are split into groups by fixed-size cells, and all the trees inside each cell are merged into a handful of draw calls. Some cells may be full of trees, while others may only have a couple. Since the dimensions of the final renderer vary greatly, some groups of trees faded in at the horizon, while others wouldn't appear until the player got very close -- even though they were adjacent to each other in the world.

The issue only got worse when player outposts were introduced. Player outposts are made from dozens and sometimes even hundreds of small objects -- foundations, walls, and many other types of props -- and Sightseer's code groups them together by material, then merges them into the fewest draw calls possible (on a separate thread, so as not to impact performance). The end result is a variety of renderer sizes, all of which should fade in and out together. With Unity's LOD system that simply wasn't possible. I had player outposts appear piece by piece as I drove towards them -- often with objects on top of foundations appearing to float in mid-air. Not good.

Another issue I ran into with LODGroup is that since it's based on the size of the object on the screen, objects near the player's vehicle would swap their LOD levels, or even fade in and out, as the camera moved around the vehicle in 3rd-person view or zoomed in and out. This is not ideal for Sightseer, and I imagine for other 3rd-person games as well. Objects fading in and out while the camera moves around a stationary vehicle looks jarring at best. Furthermore, it hurts performance, as the LOD checks have to be performed all the time. It's actually the same issue I ran into with Unity's grass, but more on that in a separate post.

At first, I tried to hack the LODGroup to work based on distance. I experimented with what happens when it's added before the renderers, and was actually successful in getting the trees to fade out when I wanted them to. Unfortunately, the same trick didn't seem to work with the player outposts. I never did figure out why...

Eventually I decided to write my own system. The most basic example of a LOD system is a script on the renderer that checks the distance between the player's avatar and the object, enabling or disabling the renderer based on that. It's simple and controllable -- but of course this basic approach doesn't include any kind of fading in or out.
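In code, that most basic version could look something like this -- a minimal sketch, with the player reference and range left as placeholder fields:

    using UnityEngine;

    // Minimal distance-based LOD: enables the renderer only when the player is within range.
    public class DistanceLODExample : MonoBehaviour
    {
        public Transform player;
        public float visibleRange = 300f;

        Renderer mRen;

        void Awake () { mRen = GetComponent<Renderer>(); }

        void Update ()
        {
            bool visible = (player.position - transform.position).sqrMagnitude < visibleRange * visibleRange;
            if (mRen.enabled != visible) mRen.enabled = visible;
        }
    }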

As I delved into Unity's code that handles fading between different LOD levels (the same code that fades between renderers), I actually discovered another downside of Unity's LOD system: it requires a texture! Behold, the ApplyDitherCrossFade function from the UnityCG include file:
    void ApplyDitherCrossFade(half3 ditherScreenPos)
    {
        half2 projUV = ditherScreenPos.xy / ditherScreenPos.z;
        projUV.y = frac(projUV.y) * 0.0625 /* 1/16 */ + unity_LODFade.y; // quantized lod fade by 16 levels
        clip(tex2D(_DitherMaskLOD2D, projUV).a - 0.5);
    }
As you can see, it samples a dithering texture in order to calculate dithering -- something that is trivial to do in code instead. With the number of texture registers limited to 16 total, that actually hurts quite a bit. Although to be fair, I'm guessing most games won't run into this particular limitation.

When working on my own LOD system I decided to simply add LOD support to all of Tasharen's shaders. Sightseer doesn't use Unity's shaders due to some issues explained in a previous post, so adding dithering was a trivial matter -- but let's go over it step by step.

First, we need a function that will compute the screen coordinates for dithering. This is Unity's ComputeDitherScreenPos function from UnityCG.cginc:
    half3 ComputeDitherScreenPos(float4 clipPos)
    {
        half3 screenPos = ComputeScreenPos(clipPos).xyw;
        screenPos.xy *= _ScreenParams.xy * 0.25;
        return screenPos;
    }
That function accepts the clip-space vertex position -- something everyone already calculates in the vertex shader:
    o.vertex = UnityObjectToClipPos(v.vertex);
Simply save the coordinates, passing them to the fragment shader:
    o.dc = ComputeDitherScreenPos(o.vertex);
The next step is to take these coordinates in the fragment shader, do some magic with them and clip() the result, achieving a dithering effect for fading in the geometry pixel by pixel.
    void DitherCrossFade(half3 ditherScreenPos)
    {
        half2 projUV = ditherScreenPos.xy / ditherScreenPos.z;
        projUV.xy = frac(projUV.xy + 0.001) + frac(projUV.xy * 2.0 + 0.001);
        half dither = _Dither - (projUV.y + projUV.x) * 0.25;
        clip(dither);
    }
Instead of using an expensive texture sample like Unity does, I use the frac() function to achieve a similar looking effect. The only notable part of the entire function is the "_Dither" value -- a uniform that's basically the fade alpha. In fact, you can use the main color's alpha instead to make it possible to fade out solid objects!

Here's the entire shader, for your convenience.
    Shader "Unlit/Dither Test"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
            _Dither("Dither", Range(0, 1)) = 1.0
        }

        SubShader
        {
            Tags { "RenderType" = "Opaque" }
            LOD 100

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                };

                struct v2f
                {
                    float4 vertex : SV_POSITION;
                    float2 uv : TEXCOORD0;
                    float3 dc : TEXCOORD1;
                };

                sampler2D _MainTex;
                float4 _MainTex_ST;
                fixed _Dither;

                half3 ComputeDitherScreenPos(float4 clipPos)
                {
                    half3 screenPos = ComputeScreenPos(clipPos).xyw;
                    screenPos.xy *= _ScreenParams.xy * 0.25;
                    return screenPos;
                }

                void DitherCrossFade(half3 ditherScreenPos)
                {
                    half2 projUV = ditherScreenPos.xy / ditherScreenPos.z;
                    projUV.xy = frac(projUV.xy + 0.001) + frac(projUV.xy * 2.0 + 0.001);
                    half dither = _Dither - (projUV.y + projUV.x) * 0.25;
                    clip(dither);
                }

                v2f vert (appdata v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.dc = ComputeDitherScreenPos(o.vertex);
                    o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    DitherCrossFade(i.dc);
                    return tex2D(_MainTex, i.uv);
                }
                ENDCG
            }
        }
    }
So how does the fading between two renderers happen, you may wonder? It's simple: both are drawn for the time it takes them to fade in/out. You may think "omg, but that's twice the draw calls!", and while that's true, it's only for a short time, and doing so doesn't affect the fill rate thanks to the clip(): the pixels drawn by one renderer are clipped by the other. Here is the modified version of the shader with an additional property, "Dither Side":
    Shader "Unlit/Dither Test"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
            _Dither("Dither", Range(0, 1)) = 1.0
            _DitherSide("Dither Side", Range(0, 1)) = 0.0
        }

        SubShader
        {
            Tags { "RenderType" = "Opaque" }
            LOD 100

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                };

                struct v2f
                {
                    float4 vertex : SV_POSITION;
                    float2 uv : TEXCOORD0;
                    float3 dc : TEXCOORD1;
                };

                sampler2D _MainTex;
                float4 _MainTex_ST;
                fixed _Dither;
                fixed _DitherSide;

                inline half3 ComputeDitherScreenPos(float4 clipPos)
                {
                    half3 screenPos = ComputeScreenPos(clipPos).xyw;
                    screenPos.xy *= _ScreenParams.xy * 0.25;
                    return screenPos;
                }

                inline void DitherCrossFade(half3 ditherScreenPos)
                {
                    half2 projUV = ditherScreenPos.xy / ditherScreenPos.z;
                    projUV.xy = frac(projUV.xy + 0.001) + frac(projUV.xy * 2.0 + 0.001);
                    half dither = _Dither.x - (projUV.y + projUV.x) * 0.25;
                    clip(lerp(dither, -dither, _DitherSide));
                }

                v2f vert (appdata v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.dc = ComputeDitherScreenPos(o.vertex);
                    o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    DitherCrossFade(i.dc);
                    return tex2D(_MainTex, i.uv);
                }
                ENDCG
            }
        }
    }
For the renderer that's fading in, pass the dither amount and leave _DitherSide at 0. For the renderer that's fading out, pass (1.0 - dither amount), and 1.0 for _DitherSide. I recommend using material property blocks. In fact, in Sightseer I wrote an extension that lets me do renderer.AddOnRender(func), where "func" receives a MaterialPropertyBlock to modify:
    using UnityEngine;

    /// <summary>
    /// Simple per-renderer material block that can be altered from multiple sources.
    /// </summary>

    public class CustomMaterialBlock : MonoBehaviour
    {
        Renderer mRen;
        MaterialPropertyBlock mBlock;

        public OnWillRenderCallback onWillRender;
        public delegate void OnWillRenderCallback (MaterialPropertyBlock block);

        void Awake ()
        {
            mRen = GetComponent<Renderer>();
            if (mRen == null) enabled = false;
            else mBlock = new MaterialPropertyBlock();
        }

        void OnWillRenderObject ()
        {
            if (mBlock != null)
            {
                mBlock.Clear();
                if (onWillRender != null) onWillRender(mBlock);
                mRen.SetPropertyBlock(mBlock);
            }
        }
    }

    /// <summary>
    /// Allows for renderer.AddOnRender convenience functionality.
    /// </summary>

    static public class CustomMaterialBlockExtensions
    {
        static public CustomMaterialBlock AddOnRender (this Renderer ren, CustomMaterialBlock.OnWillRenderCallback callback)
        {
            UnityEngine.Profiling.Profiler.BeginSample("Add OnRender");
            var mb = ren.GetComponent<CustomMaterialBlock>();
            if (mb == null) mb = ren.gameObject.AddComponent<CustomMaterialBlock>();
            mb.onWillRender += callback;
            UnityEngine.Profiling.Profiler.EndSample();
            return mb;
        }

        static public void RemoveOnRender (this Renderer ren, CustomMaterialBlock.OnWillRenderCallback callback)
        {
            UnityEngine.Profiling.Profiler.BeginSample("Remove OnRender");
            var mb = ren.GetComponent<CustomMaterialBlock>();
            if (mb != null) mb.onWillRender -= callback;
            UnityEngine.Profiling.Profiler.EndSample();
        }
    }
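For illustration, here's a minimal sketch of how the fade values could be fed to the two renderers using this extension. The component, field names and fade duration are hypothetical -- only AddOnRender and the _Dither / _DitherSide shader properties come from the code above.

    using UnityEngine;
    using System.Collections;

    // Hypothetical example: cross-fades between two LOD renderers over one second.
    public class LODCrossFadeExample : MonoBehaviour
    {
        public Renderer fadingIn;   // renderer that should appear
        public Renderer fadingOut;  // renderer that should disappear
        float mFade = 0f;

        IEnumerator Start ()
        {
            // Both renderers update their property blocks just before rendering
            fadingIn.AddOnRender(block => { block.SetFloat("_Dither", mFade); block.SetFloat("_DitherSide", 0f); });
            fadingOut.AddOnRender(block => { block.SetFloat("_Dither", 1f - mFade); block.SetFloat("_DitherSide", 1f); });

            // Advance the fade over one second, then disable the old renderer
            while (mFade < 1f)
            {
                mFade = Mathf.Min(1f, mFade + Time.deltaTime);
                yield return null;
            }
            fadingOut.enabled = false;
        }
    }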
In the end, while Sightseer's LOD system ended up being a lot more advanced and made to support colliders as well as renderers (after all, expensive concave mesh colliders don't need to be active unless the player is near), at its core the most confusing part was figuring out how to manually fade out renderers. I hope this helps someone else in the future!

3
Dev Blog / Feb 22, 2017 - Seamless Audio
« on: February 22, 2017, 09:54:44 AM »
This is less of a blog post and more of a tutorial on how to make seamless audio properly, as I seem to find myself explaining this often...  :-\

In games it's often necessary to make audio loop perfectly -- whether it's a combat music track or simply the hum of a vehicle's engine -- and making any sound loopable is actually pretty easy in Audacity, a free audio editing tool.

1. Start by opening the track in Audacity, selecting it (CTRL+A) and copying it (CTRL+C).



2. Click on the end of your track and paste the copy (CTRL+V) so that it's effectively duplicated at the end.



3. Hold Shift and use the scroll wheel with the mouse over the timeline to zoom in for a closer look. What you can hear, you can also see -- and if there is a break in the smoothness of the audio waves, there will be a noticeable discontinuity when the track loops. In my case there is one, so on to the next step!



4. Choose the Tracks -> Add New -> Stereo Track (or just Audio Track if you're working with a mono sound). This adds a second layer, just like in Photoshop.

5. You can now choose the Time Shift Tool and drag the pasted copy down onto the new track so that it overlaps the end a little. When working with music, try to match the waves so that they align. Ideally you want to overlap a few seconds of audio if possible. With music it's often easier to align by whole seconds, as it generally has a consistent beat. In my case I overlapped exactly 2 seconds at the end.



6. Select the overlapped section by choosing the Selection Tool again and hit Space to listen to it. Does it sound right, or is it all disjointed? If it sounds bad, you didn't align the waves properly. Go back to step 5 and move the second layer around on its timeline until it matches and sounds better.



7. Now it's time to cross-fade the audio, making it blend. Select the top overlapped part and use the Effect -> Cross Fade Out. Repeat the process with the bottom track, but this time choose Cross Fade In. The idea is to make the audio of one track fade out while the audio of the second track fades in.



8. Time to combine the two tracks into a new one: CTRL+SHIFT+M, or choose the Tracks -> Mix and Render to New Track menu option.



9. We now have a track that blends nicely, but the blend happens right in the middle of this new track, and we want it to be at the beginning and the end! Select the blended section, copy it (CTRL+C), zoom out with the scroll wheel, and paste the segment onto the first layer by clicking on its end -- then use the Time Shift Tool again to snap it into place.



10. Delete the second track. We no longer need it. Just click the "X" button in its top left corner.

11. Select the entire first track's length on both layers. We don't need it anymore either. Just drag with the mouse after choosing the Selection Tool again and press the "DEL" key on your keyboard.

12. Almost there! Zoom in on the end (Shift + scroll wheel) and select the more complete track's section right below the pasted segment and delete it as well (DEL).



13. CTRL+A to select everything, then Tracks -> Mix and Render. And there you have it: a perfectly looping track. Export it via the File -> Export Audio menu option.

I hope this explanation helps someone else. I had to figure it out by experimenting, and there's probably a better way -- but this one works for me.

4
NGUI 3 Support / Website migrated to another host
« on: December 02, 2016, 11:40:05 PM »
I got tired of the intermittent slow speed and accessibility issues of my previous web host and moved to a new one. Seems to be faster so far...

5
Dev Blog / November 2nd, 2016
« on: November 02, 2016, 07:14:15 AM »
Since the last dev post, I've been hard at work on the follow-up game to Windward. So hard at work that I've once again neglected documenting what it was I was actually doing. Ahem. Well, the good news is that the game is coming along very nicely and I've started letting a small closed pre-alpha test group of players have a go at it. The first play test was all about exploring the gigantic procedural world. The second involved building bases. Now a third play test is on the horizon with the functional stuff added in (resource gathering and processing).

The game does look quite nice now, I'll give it that.





The only issue is finding suitable art for what I have in mind. The Unity Asset Store is a fantastic place to find things, but every artist has their own distinct style, so simply taking different models and dropping them into the game is not an option. Plus, since I do have a specific art style in mind (clean, futuristic) and I want the players to be able to customize the colors of their own bases, I've had a couple of challenges to overcome.

With the need to let players customize the look of their bases, I can't simply use diffuse textures. I also can't just apply colors on top of an existing pre-colored material -- that simply won't look good. Instead, what I need is a mask texture: a special texture that defines which parts should be colored by which of the material's colors.

In Windward, my ship textures were also using masks. The red channel held the grayscale texture. The green channel was used as the mask -- white pixels meant color A was used, while black pixels meant color B. The blue channel contained the AO mask using secondary UVs. In total, only 3 channels were used (and each ship only used a single 512x512 texture), which was one of the reasons why the entire game was only 120 megabytes in size. The final texture looked like this:



There were several downsides to this approach. First, having so much detail in two of the channels (red and blue) didn't play well with texture compression, resulting in visible artifacts. I could have made it better by moving one of them to the alpha channel, but at the time I simply ended up turning off texture compression for ship textures instead. The second downside was only having one channel for color masking. This meant I could only have 2 colors, which is obviously not much.

For this new game (which still doesn't have a name, by the way!), I wanted to raise the bar a bit. I am targeting PC and Linux with this game (sorry Mac users, but OSX still doesn't support half-a-decade-old features like compute shaders!), so all those mobile platform limitations I had to squeeze into with Windward are not an issue here.

The first thing I did was split the mask information into a separate texture. To specify weights for 4 distinct colors I only need 3 texture channels, with the 4th being calculated as saturate(1 - (r+g+b)), but I also wanted to make it possible to mark certain parts of the texture as not affected by any color, so in the end I used all 4 channels.

The actual mask texture is very easy to create by taking advantage of Photoshop's layers. Start with the background color, then add layers for the second and third channels (red and green). Set all 3 layers to have a Color Overlay modifier of red, green and blue, respectively. This makes it trivial to mark regions and, if necessary, have additional layers for details -- it's much easier to work with layers than with channels. For the remaining (4th) color I just add another layer with a black color overlay -- black because of the (1 - rgb) calculation used in the shader. This still leaves the alpha channel free, and as I mentioned, I use it to mark parts of the texture that should not be color-tinted at all. Fine details such as mesh grates get masked like that, and so do any lit regions, if there are any.
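To illustrate the math, here's a small sketch of the per-pixel tinting the shader would perform, written out in C# for readability. The blending formula follows the description above; the function name, parameter names, and the assumption that alpha = 1 means "untinted" are mine, not the actual shader's.

    using UnityEngine;

    public static class MaskTintExample
    {
        // mask.rgb selects between colors 1-3, the remainder goes to color 4,
        // and mask.a (assumed here) marks areas that should not be tinted at all.
        public static Color Tint (Color diffuse, Color mask, Color c1, Color c2, Color c3, Color c4)
        {
            float w4 = Mathf.Clamp01(1f - (mask.r + mask.g + mask.b)); // saturate(1 - (r+g+b))
            Color tint = c1 * mask.r + c2 * mask.g + c3 * mask.b + c4 * w4;
            tint = Color.Lerp(tint, Color.white, mask.a); // alpha-masked areas keep the plain diffuse color
            return diffuse * tint;
        }
    }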

So that leaves the diffuse texture... Remember how I said that color-tinting the existing diffuse texture doesn't look good? This is why my diffuse textures are very bright. The brighter they are, the better they tint. Pure white, for example -- white looks great!



But wait, you might ask... that doesn't sound right. Where would the detail come from? Well, that's the thing: why have the detail baked into the diffuse texture when it can be separate? Not only does that make it possible to have higher-resolution details separate from the other textures, it also makes it possible to share them between different materials, and better still -- to swap them in and out based on what the player made the object with. Take a building, for example. What's the difference between a building made out of concrete and one made out of bricks? Same exact shape, same exact ambient occlusion, same masks... the only difference is the base material, so why not make it come from a separate texture?

Better still, since I have 4 colors per material, why not have 4 distinct sets of material properties, such as metallic, smoothness, and the detail texture blend values? Well, in the end, that's exactly what I ended up doing:



This approach makes the objects highly player-customizable, and the final result looks excellent as well:



As an added bonus over Windward's approach, since the textures don't mix details in the RGB channels, texture compression can be used without any noticeable degradation in quality. In the screenshot above, the AO actually comes from the diffuse texture's alpha channel, and the normal map was created using a custom normal-map-maker tool I wrote a few months ago, which uses multiple LOD/mipmap levels via downsampling to produce much better-looking normal maps than what Unity offers. Not quite Crazy Bump good, but close! I'll probably end up releasing it on the Asset Store at some point if there is interest.

6
TNet 3 Support / Starlink UI kit is now free -- pick up yours today!
« on: August 02, 2016, 10:18:32 AM »
For those that wanted a more extended lobby server example with a channel list and all that, the Starlink UI kit is now free, and it has full TNet integration that handles both LAN and internet server discovery, hosting, channel creation / channel list, as well as in-game and lobby chat. It does use NGUI for its UI, however, so it assumes you have that.

7
TNet 3 Support / How to use the WorkerThread script
« on: August 01, 2016, 02:54:40 PM »
One of my most useful tools has always been the WorkerThread class, and although not really relevant to anything in TNet itself, I decided to include it in the package in case you too find it useful.

In simplest terms, it's a thread pool that's really easy to use.
    WorkerThread.Create(delegate ()
    {
        // Do something here that takes up a lot of time
    },
    delegate ()
    {
        Debug.Log("Worker thread finished its long operation!");
    });
In the code above, the first delegate is going to be executed on one of the worker threads created by the WorkerThread class. The class will automatically create several and will reuse them for all of your future jobs; as such, there are no memory allocations happening at run-time. The second delegate is optional, and will execute on the main thread (in the Update() function) once the first delegate has completed its execution.

This dual-delegate approach trivializes the creation of complex jobs. To pass arguments, you can simply take advantage of how anonymous delegates capture variables. For example, this code will take the current terrain and flip it upside down:
    var td = Terrain.activeTerrain.terrainData;
    var size = td.heightmapResolution;
    var heightmap = td.GetHeights(0, 0, size, size);

    WorkerThread.Create(delegate ()
    {
        for (int y = 0; y < size; ++y)
        {
            for (int x = 0; x < size; ++x)
            {
                heightmap[y, x] = 1f - heightmap[y, x];
            }
        }
    },
    delegate ()
    {
        td.SetHeights(0, 0, heightmap);
    });
The WorkerThread class works both at run time and at edit time, though at edit time it executes both delegates right away. Currently the project I'm working on uses WorkerThread everywhere -- from ocean height sampling, to merging trees, to generating procedural terrain and textures.

Questions? Ask away.

8
Dev Blog / July 24, 2016 - Windy detour
« on: July 24, 2016, 10:12:18 AM »
June was an interesting month. I randomly wondered if I could add a dragon to Windward just for the fun of it. It took only a few minutes to find a suitable model on the Unity Asset Store and about half an hour to rig it up to animate based on movement vectors. I then grabbed a flying ship from Windward, replaced its mesh with the dragon and gave it a shot. It immediately looked fun, so somehow I ended up spending the next several weeks adding tough dragon boss fight encounters to Windward, complete with unique loot that changes the look of some key in-game effects based on what the player has equipped. The dragon fights themselves were a fun feature to add and made me think back to the days of raiding in WoW. Ah, memories.



With that odd detour out of the way, I had another look at the various prototypes I had so far and decided to narrow the scope of the next game a bit. First, I'm not going to go with the whole planetary-scale orbit-to-surface stuff -- mainly because of the sheer size of it all. The difficulties of dealing with massive planetary scales aside, if a game world is the size of Earth, even at 1/10th scale, there's going to be a tremendous amount of emptiness out there. Think driving across the state for a few hours. Entertaining? To some, maybe. But in a game? Certainly not.

But anyway... game design decisions aren't worth talking about just yet. Once the game project is out of the prototype and design stage, maybe then.

The past two weeks I actually spent integrating a pair of useful packages together -- Ceto Ocean and Time of Day. Both are quite excellent and easy to modify. The Ceto ocean kit in particular occupied most of my time, from optimizations to tweaks. I integrated it with the custom BRDF I made earlier, fixed a variety of little issues and wrote a much more robust buoyancy script, which is a far, far better way of doing ship mechanics than the weird invisible-wheel approach I was taking in Windward. I'll likely post a video about it later.

With my focus on optimizations, I've been keeping an eye on what it would take to have an endless terrain streamed in around the player, and the results have been promising. In Windward, the trees were generated by instantiating a ton of individual trees in the right places, subdividing the region into smaller squares, then merging all the trees in each square into a group. The process worked fine, but had two drawbacks.

First, it was using Unity's Mesh.CombineMeshes() function, which, while it works well, requires the objects to be present and doesn't allow per-vertex modifications for things like varying colors between trees. Second, with the merging process taking just over 100 milliseconds, it's really not suitable for streamed terrains -- a 100 millisecond pause is very noticeable during gameplay. And so, I set out to optimize it.



The first approach I tried was using custom mesh merging to see the effect. It was predictably slower -- almost 170 ms.



Next I moved the code that actually combines the mesh data onto a separate thread:



While it was now spread out across multiple frames, 95 ms on the first frame was still way too much. Thinking about it, I first focused on the small stuff: I replaced mesh.colors with mesh.colors32 and moved the matrix creation code into the part that runs on the separate thread instead of the main one. With a few other minor changes, such as replacing Generic.List with TNet.List, the update was down to below 70 ms:



Getting closer. The next step was to eliminate the interim instantiation of objects. After all, if all I want is the final merged mesh, why instantiate game objects first only to merge and remove them? It makes a lot more sense to skip the instantiation altogether and go straight to merging, pulling the mesh data from the original prefabs. This also fixes another issue I noticed: I was pulling the mesh data from every object individually by calling mesh.vertices and other functions on each of the MeshFilters' meshes. Adding a cache would save a ton of memory. Perhaps you've noticed those 15 MB+ memory allocations in the profiler snapshots above -- this was the reason.
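As a rough illustration of that cache, here's a minimal sketch: mesh data is read once per unique source mesh and reused for every instance. The class and field names are hypothetical, not Sightseer's actual code.

    using System.Collections.Generic;
    using UnityEngine;

    // Hypothetical cache: reads vertex data once per unique mesh instead of once per instance.
    public static class MeshDataCache
    {
        public class Data
        {
            public Vector3[] vertices;
            public Vector3[] normals;
            public Vector2[] uvs;
            public int[] triangles;
        }

        static Dictionary<Mesh, Data> mCache = new Dictionary<Mesh, Data>();

        public static Data Get (Mesh mesh)
        {
            Data data;
            if (!mCache.TryGetValue(mesh, out data))
            {
                // First request: pull the arrays from the mesh (this allocates), then cache them
                data = new Data
                {
                    vertices = mesh.vertices,
                    normals = mesh.normals,
                    uvs = mesh.uv,
                    triangles = mesh.triangles,
                };
                mCache[mesh] = data;
            }
            return data;
        }
    }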

With the changes in place, the cost of merging 2205 trees was down to 16.9 milliseconds with memory usage down below half a meg:



In this case the trees themselves are just an example -- they are very simple and I will likely replace them with something that looks better. Still, for the sake of a test they were perfect. Who knows what I may end up using this script for? Random vegetation? Rocks? Even just debris in city ruins -- either way, this multi-threaded, optimized merging script should now be up to the task, and the extra variation in color hues makes this approach look much better than Unity's built-in mesh merging. All in all, another handy tool.


9
Dev Blog / May 23, 2016 - The ugly shores
« on: May 23, 2016, 08:04:32 PM »
The worst part about using textures is that there is always a limit to how detailed they can get. Even with an 8k texture for the Earth's map, it was still looking quite ugly at the orbital height of the International Space Station (~400 km), let alone closer to the ground. Take a screenshot from a 20 km altitude, for example:



That blurry mess is supposed to be the shoreline. Needless to say, improvements were direly needed. First I tried the most trivial and naive approach: increasing the resolution of the textures. It occurred to me that the 8192x4096 cylindrical-projection texture I was using could be split up into 6 separate ones -- one per face of the quad sphere. Not only would this give me better resolution to work with, but it would also make it possible to greatly increase the detail around the poles while reducing memory usage. Better still, I could also skew the pixels the same way I was skewing the vertices of the generated quad sphere mesh -- doing so improves the pixel density in the center while reducing it in the corners. This is useful because by default the corners end up with a higher-than-normal pixel density and the center ends up with a lower one -- my code simply balances them out.

I quickly wrote a tool capable of splitting a single cylindrical-projection texture into 6 square ones and immediately noticed that with six 2048x2048 textures I get equal or better pixel density around the equator, and a much, much improved quality around the poles. A single 8k by 4k texture takes 21.3 MB in DXT1 format, while six 2k textures take a combined 16.2 MB. Better quality and reduced size? Yes please!
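For reference, here's a rough sketch of the core lookup such a splitting tool performs -- converting a direction on a cube face into cylindrical (equirectangular) UVs. It ignores the vertex-skewing step described above, the longitude convention depends on the source texture, and all names are my own.

    using UnityEngine;

    public static class CubeFromCylindricalExample
    {
        // Converts a world-space direction into UVs on a cylindrical (equirectangular) texture.
        public static Vector2 DirectionToCylindricalUV (Vector3 dir)
        {
            dir.Normalize();
            float u = Mathf.Atan2(dir.x, -dir.z) / (2f * Mathf.PI) + 0.5f; // longitude
            float v = Mathf.Asin(dir.y) / Mathf.PI + 0.5f;                 // latitude
            return new Vector2(u, v);
        }

        // Direction for a pixel on the +Z cube face; x and y are in the -1..1 range.
        public static Vector3 FaceDirection (float x, float y)
        {
            return new Vector3(x, y, 1f).normalized;
        }
    }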

Unfortunately, increasing the pixel density -- even raising the 6 textures to 8k -- ultimately failed to produce better results past a certain zoom level. The 8k textures still started to look pretty bad below 20 km altitude, which made perfect sense: it's only 4x4 pixels where there was 1x1 before. If it looked awful at 20 km altitude, it would look exactly as awful at 5 km with the increased resolution. Considering that I was looking for something that could go all the way down to ground level, more work was clearly needed.

The next thing I tried was to add a visible water mesh that would be drawn on top of the terrain. The problem was that the terrain is actually extremely flat in many places of the world: the vast majority of the heightmap's values reside in the 0-5 range, with 5-255 covering everything else. Worse still, the heightmap wasn't providing any values below sea level. NASA actually has two separate sets of heightmaps: one for above sea level and one for below. Unfortunately the underwater heightmaps lacked resolution and were quite blurry, but just out of curiosity I merged them into a single heightmap to see the effect. This effectively cut the above-ground heightmap resolution in half and still had the same issue: large parts of the world are so close to sea level that the heightmap doesn't indicate them as being above ground at all.

At this point I asked myself: why am I bothering with the detail under the sea? Why have a heightmap for that at all? The game isn't set underwater, so why bother?

Grumbling at myself for wasting time, I grabbed a different texture from NASA -- a simple black-and-white representation of the continents, with white representing the landmasses and black representing the water. I assumed that a value of 0.5 means sea level and modified the sampled vertex heights so that they were affected not only by the heightmap, but by this landmass mask as well. Everything below 0.5 gets smoothly lowered and everything above 0.5 gets smoothly raised, resulting in a visible shoreline. All that was left was to slap a water sphere on top, giving me this:
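Here's a small sketch of that height adjustment, assuming the heightmap and landmass mask are both sampled in the 0-1 range; the blend curve, strength value and function names are hypothetical, not the actual generator code.

    using UnityEngine;

    public static class ShorelineExample
    {
        // Pushes terrain below/above sea level apart based on the landmass mask (0 = water, 1 = land).
        public static float AdjustHeight (float height, float landmass, float shoreStrength = 0.02f)
        {
            float offset = (landmass - 0.5f) * 2f;          // -1 under water, +1 on land
            offset = Mathf.Sign(offset) * offset * offset;  // flatten the transition near the shore
            return height + offset * shoreStrength;
        }
    }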



Better. The next step was to get rid of the blue from the terrain texture. This was a bit more annoying and involved repeated Photoshop layer modifications, but the end result was better still:



Now there was an evident shoreline, crisp and clear all the way from orbit to the surface. Unfortunately the polygon-based approach suffered from a few... drawbacks. The first was Z-fighting, which I fully expected; it was manageable by adjusting the near clip plane or by using multiple cameras to improve near-to-far clip precision. The other problem was less straightforward. Take the following screenshot of the texture-based approach, for example:



While a bit on the blurry side even from such a high altitude, it still looks better than the polygon-based approach:



Why is that? Two reasons. First, the polygon resolution decreases as the camera gets farther from the planet, resulting in bigger triangles, which in turn lead to less defined shores. Second, the triangulation is constant, and due to the way vertex interpolation works, edges that follow the triangulation look different from edges that are perpendicular to it. This is why the north-east and south-west parts of the Arabian peninsula look all jagged while the south-east part looks fine.

Fortunately, the triangulation is easy enough to fix by adding code that ensures it follows the shores.



The bottom-left part of the picture still looks jagged, though, but that's because of inadequate mesh resolution at high distances from the planet. Solving that is a simple matter of lowering the observer transform so that it's closer to the ground while remaining underneath the camera:



This approach looks sharp both up high in orbit and close to the ground:



Here's a side-by-side comparison shot of the North American Great Lakes from an altitude of 300 km:



And another one a little bit more zoomed in:



Now, up to this point the water was simply rendered using a solid-color shader that writes to depth. The fun part came when I decided to add some transparency to the shallow water in order to soften the edges when close to the ground. While the transparency was easily achievable by comparing each pixel against the sampled depth, I quickly ran into issues with other effects that require depth, such as post-processed fog. Since the transparent water wasn't writing to depth, I was suddenly faced with the underwater terrain being shaded like this:



The most obvious way to fix this would be to draw the terrain and other opaque geometry, draw the transparent water, then draw the water again filling only the depth buffer, followed by the remaining transparent objects. Unfortunately, as far as I can tell, this is not possible in Unity: all opaque objects are always drawn before all transparent objects, regardless of their render queue. It doesn't seem possible to insert a transparently-rendered object into the opaque geometry pass, so I had to resort to less-than-ideal hacks.

I tried various solutions to address it, from modifying the water shader to modifying the fog shader itself, but in the end I settled on the simplest approach: ray-sphere intersection. I made the fragment shader intersect the view ray with the planet's sphere to determine the near intersection point and the point on the ray closest to the sphere's center. If the closest point lies below the water level and the near intersection point lies in front of the sampled depth value, I move the sampled depth back to the near intersection point:
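Here's the same ray-sphere math sketched out in C# for clarity -- the actual fix lives in the fragment shader, and these names are mine, not the shader's.

    using UnityEngine;

    public static class WaterDepthFixExample
    {
        // Returns an adjusted depth along the view ray (rayDir must be normalized): if the ray
        // dips below the water sphere and enters it before the sampled depth, clamp the depth
        // to the entry point.
        public static float AdjustDepth (Vector3 rayOrigin, Vector3 rayDir, Vector3 center, float waterRadius, float sampledDepth)
        {
            Vector3 toCenter = center - rayOrigin;
            float tClosest = Vector3.Dot(toCenter, rayDir);                 // distance to the closest point on the ray
            float closestDistSq = toCenter.sqrMagnitude - tClosest * tClosest;
            float rSq = waterRadius * waterRadius;

            if (closestDistSq < rSq) // the closest point lies below the water level
            {
                float tNear = tClosest - Mathf.Sqrt(rSq - closestDistSq);   // near intersection with the water sphere
                if (tNear > 0f && tNear < sampledDepth) return tNear;
            }
            return sampledDepth;
        }
    }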



While this approach works fine for now, I can imagine it breaking as the planet gets larger and floating-point values start losing precision... I'll just have to keep my mind open to other ways of addressing this issue in the future.

10
Dev Blog / May 9th - Your Own Reflection
« on: May 09, 2016, 12:00:41 AM »
A few months ago I was working on the ship builder functionality of the upcoming game, and around the same time I was playing around with reflection -- the ability to automatically find functions on all classes marked with specific attributes, to be precise. I needed this particular feature for TNet 3: I wanted to eliminate the need to register RCCs (custom object instantiation functions). I added that feature without any difficulty: simply get all assemblies, run through each class and then the functions of that class, and keep a list of the ones that have a specific attribute. Implementing it got me thinking though... what if I were to expand on this idea a bit? Why not use the same approach to add certain game functionality? Wouldn't it be cool if I could right-click an object in the game and have the code automatically gather all flagged custom functionality on that object and display it somehow? Or better yet, make it interactable?

Picture this: a modder adds a new part to the game -- some kind of sensor, for example. Upon right-clicking on this part, a window can be brought up that shows the part's properties: a toggle for whether the part is active, a slider for its condition, a label showing how much power it's currently consuming, etc. There aren't that many types of data that can be shown: a toggle, a slider, a label... Other types may include a button (for a function instead of a property), or maybe an input field for an editable property. So how can this be done? Well, quite easily, as it turns out.

First, there needs to be a custom attribute that can be used to flag functionality that should be displayed via UI components. I called it simply "GameOption":
    [AttributeUsage(AttributeTargets.Field | AttributeTargets.Property, AllowMultiple = false)]
    public class GameOption : Attribute
    {
        public MonoBehaviour target;
        public FieldOrProperty property;

        public virtual object value { get { return Get(target); } set { Set(target, value); } }

        public object Get (object target)
        {
            if (target != null && property != null) return property.GetValue(target);
            return null;
        }

        public T Get<T> () { return Get<T>(target); }

        public T Get<T> (object target)
        {
            if (target != null && property != null) return property.GetValue<T>(target);
            return default(T);
        }

        public virtual void Set (object target, object val)
        {
            if (isReadOnly || target == null) return;
            if (property != null) property.SetValue(target, val);
        }
    }
Next, there needs to be a function that can be used to retrieve all game options on the desired type:
    // Caching the result is always a good idea!
    static Dictionary<Type, List<GameOption>> mOptions = new Dictionary<Type, List<GameOption>>();

    static public List<GameOption> GetOptions (this Type type)
    {
        List<GameOption> list = null;

        if (!mOptions.TryGetValue(type, out list))
        {
            list = new List<GameOption>();
            mOptions[type] = list;

            var flags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;
            var fields = type.GetFields(flags);

            for (int b = 0, bmax = fields.Length; b < bmax; ++b)
            {
                var field = fields[b];

                if (field.IsDefined(typeof(GameOption), true))
                {
                    GameOption opt = (GameOption)field.GetCustomAttributes(typeof(GameOption), true)[0];
                    opt.property = FieldOrProperty.Create(type, field);
                    list.Add(opt);
                }
            }

            var props = type.GetProperties(flags);

            for (int b = 0, bmax = props.Length; b < bmax; ++b)
            {
                var prop = props[b];
                if (!prop.CanRead) continue;

                if (prop.IsDefined(typeof(GameOption), true))
                {
                    GameOption opt = (GameOption)prop.GetCustomAttributes(typeof(GameOption), true)[0];
                    opt.property = FieldOrProperty.Create(type, prop);
                    list.Add(opt);
                }
            }
        }
        return list;
    }
Of course it's even more handy to have this on the GameObject itself:
    static public List<GameOption> GetOptions (this GameObject go)
    {
        return go.GetOptions<GameOption>();
    }

    static public List<T> GetOptions<T> (this GameObject go) where T : GameOption
    {
        List<T> options = new List<T>();
        MonoBehaviour[] mbs = go.GetComponents<MonoBehaviour>();

        for (int i = 0, imax = mbs.Length; i < imax; ++i)
        {
            MonoBehaviour mb = mbs[i];
            List<GameOption> list = mb.GetType().GetOptions();

            for (int b = 0; b < list.size; ++b)
            {
                GameOption opt = list[b] as T;

                if (opt != null)
                {
                    opt = opt.Clone();
                    opt.target = mb;
                    options.Add(opt);
                }
            }
        }
        return options;
    }
So now I can have a property like this in a custom class:
    public class CustomClass : MonoBehaviour
    {
        [GameOption]
        public float someValue { get; set; }
    }
...and I can do this:
    var options = gameObject.GetOptions<GameOption>();
    foreach (var opt in options)
    {
        opt.value = 123.45f;
        Debug.Log(opt.value);
    }
Better still, I can inherit a custom attribute from GameOption and have custom code handle both the getter and the setter, and I can filter exactly what kind of custom attribute gets retrieved using the gameObject.GetOptions&lt;DesiredAttributeType&gt;() call. With a way of retrieving custom properties in place, all that's left is to draw them automatically after some action.

That is actually quite trivial using NGUI. I simply registered a generic UICamera.onClick delegate, and inside it I collect the options using gameObject.GetOptions, then display them using an appropriate prefab. For example:
    if (opt.value is float) // draw it as a slider
I also register an event listener on the appropriate UI element itself (in the case above, a slider), so that when the value changes, I simply set opt.value to the new one. So there -- the mod content creator no longer needs to worry about creating custom UI elements at all. All they need to do is mark the desired fields or properties with [GameOption], and they will show up via right-click. Simple!
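As a sketch of what that dispatch could look like (the prefab fields and the Attach* helpers are hypothetical -- only GetOptions and the GameOption value come from the code above):

    using UnityEngine;

    public class OptionWindowExample : MonoBehaviour
    {
        // Hypothetical prefabs for each supported widget type
        public GameObject sliderPrefab, togglePrefab, labelPrefab;

        public void Show (GameObject target)
        {
            foreach (var opt in target.GetOptions<GameOption>())
            {
                var val = opt.value;

                if (val is float) AttachSlider(opt);        // editable number -> slider
                else if (val is bool) AttachToggle(opt);    // editable flag -> checkbox
                else AttachLabel(opt);                      // anything else -> read-only label
            }
        }

        // Each Attach* helper would instantiate the matching prefab and hook its
        // change event so that it writes back into opt.value.
        void AttachSlider (GameOption opt) { /* ... */ }
        void AttachToggle (GameOption opt) { /* ... */ }
        void AttachLabel (GameOption opt) { /* ... */ }
    }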

Of course I then went on to make it more advanced than that -- adding an optional sorting index and category values (so that the order of the properties that show up can be controlled via the index, and filtered using the category). I also added support for buttons -- that is, I simply expanded the attribute to include methods:
    AttributeTargets.Field | AttributeTargets.Property | AttributeTargets.Method
...and added a MethodInfo to go with the FieldOrProperty, as well as an Invoke() function to trigger it. I also added support for a Range(min, max) property for sliders, and popup lists for drop-down selections... I could go on, but there is no need to complicate the explanation further. The point is, this approach is highly customizable and very powerful:

C# reflection is fun!

11
Dev Blog / Apr 17, 2016 - Once Upon a BRDF...
« on: April 17, 2016, 08:02:20 PM »
As I predicted in my previous post, this week was spent texturing the planetary terrain. Little did I expect to delve as deeply into Unity's graphics as I did, though. It was a classic example of "one thing leads to another"...

The week started with me simply writing code to generate cylindrical mapping UVs alongside the vertices in order to get a basic texture stretched over the terrain and see how it looks. It wasn't bad per se, but it wasn't quite good either:





The shadows were way too dark, which makes perfect sense, really: there was no ambient lighting aside from what came from the Milky Way skybox. It was most noticeable during sunset and sunrise. Take the following image, for example:



As you can see, the entire terrain is just a black blob. The sky is still fairly well lit, meaning there should be plenty of indirect illumination coming from it to light the terrain. So the question was: how to address it? A simple solution would be to change the ambient lighting from the space skybox to a color instead, but that approach wouldn't work well because it would give everything a flat, uniform color.

Another approach within Unity is to specify a gradient. This would work well, wouldn't it? Sure would, if the terrain was flat. Perhaps I'm wrong, but I saw no matrix in there that could be set in order to actually transform the direction of the gradient. In other words, since there is no way to choose an "up" vector, this approach wouldn't work either.

The same problem prevented me from using Unity's "default skybox" also. There is simply no way to change the "up" vector.

This leaves just the static cubemap skybox-based approach that I was already using. If only there was a way to update the contents of this skybox in real-time based on what's around the player, right? Well, turns out there is. The legacy approach, which I started with first, was to just render into a cubemap and then use this cubemap in the shader somewhere. It was promising, but unfortunately cubemaps rendered this way don't have mip-maps, meaning LOD downsampling wasn't working. In case you wonder why this is necessary... think of indirect lighting as a very blurry 360 degree photo. If you don't blur it, then you will be able to see reflections of the world instead of soft ambient lighting. In fact, it's that last part -- reflections -- that triggered a thought within my head: what about reflection probes?

Unity now has a feature called reflection probes that you can sprinkle liberally throughout your scene in order to get much more realistic looking reflections on objects in your scene. The way it works is by rendering cubemaps at multiple points in the scene, and then simply blending between them in order to apply more realistic reflections to objects moving through the scene. There is something called "light probe group" as well, but it didn't seem to actually do anything when I tried to use it, and in any case based on what I've read it seems they are geared more toward static scenes.

Reflection probes though -- they are most certainly not limited to being static. In fact, simply adding a probe to a blank scene immediately changes how objects using the Standard shader look (assuming the material is smooth). Better still, it was possible to sample the reflection probe's cubemap at reduced LOD levels, thus getting that blurry ambient lighting cubemap I was looking for! So that's when I thought to myself: why not just have a real-time reflection probe follow the main camera around, rendering only the really big objects into it, such as the planet (using a simple sphere), its atmosphere, and the skybox itself? The result was immediately promising:





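The probe-follows-the-camera setup can be as simple as the following -- a minimal sketch, where the layer names and the refresh-every-frame cadence are assumptions for illustration rather than what the game actually uses:

using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch of a real-time reflection probe that follows the main camera around.
public class FollowCameraProbe : MonoBehaviour
{
    public Camera target;
    ReflectionProbe mProbe;

    void Start ()
    {
        if (target == null) target = Camera.main;

        mProbe = gameObject.AddComponent<ReflectionProbe>();
        mProbe.mode = ReflectionProbeMode.Realtime;
        mProbe.refreshMode = ReflectionProbeRefreshMode.ViaScripting;
        mProbe.cullingMask = LayerMask.GetMask("PlanetSphere", "Atmosphere");
        mProbe.size = Vector3.one * 100000f;  // big enough to always contain the camera
        mProbe.resolution = 128;              // ambient lighting doesn't need much detail
    }

    void LateUpdate ()
    {
        if (target == null) return;
        transform.position = target.transform.position;
        mProbe.RenderProbe();  // re-render the cubemap (this could easily be throttled)
    }
}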
Unfortunately, as I quickly learned, there were downsides that ultimately forced me to write a custom BRDF. In technical terms, BRDF stands for "Bidirectional Reflectance Distribution Function". In more common terms, it's just a function that accepts various parameters related to the material and light and spits out the final color the material should have. Unity 5 started using them, and there are 3 different versions available. I tried all 3, but none had the look I was after. The default BRDF was the most promising for well-lit scenes, but its handling of specular lighting was just flat out weird. Take this picture for example:



The tops of those hills are all a uniform greyish color. In fact, as the sun's angle gets more and more shallow and the specular grazing angle increases, the material becomes more and more washed out, and there is no way to turn this off. Take the following 100% black sphere as another example. Albedo and specular are both pure black, and yet when you look at it from a shallow angle to the light, it starts becoming brighter and brighter:



Where is that color coming from? The light -- its specular contribution. But why isn't there any way to get rid of it? Not every material has such strong specular reflections. What about materials like Vantablack when it's time to create a stealth coating? Or the more common case: eliminating specular highlights from the terrain while leaving them intact on the water, like this:



Worse still, the standard BRDF's ambient lighting comes from what seems to be a static source. That is, if the light moves and creates different lighting conditions at run-time, materials lit by the Standard shader don't seem to pick up the change. Take the following sphere, placed in a blank scene with the default skybox and a reflection probe set to render only that skybox. I simply hit Play, then changed the angle of the light to that of a sunset:





The sphere is still lit like it was day time! That's obviously completely unacceptable. Playing around with the settings reveals that the reflections come from the reflection probe and look proper, but the global illumination term seems to come from a static texture of some kind, and as such doesn't change at run-time.

But hey, the best part about the new rendering pipeline is that it's fully exposed so it can be modified, so I got started on my own custom lighting path. The first thing I did was replace the "ShadeSHPerPixel" call in my copy of the UnityGI_Base function (found in UnityGlobalIllumination.cginc) with a call that samples the reflection probe's cubemap. Since I only have one reflection probe and don't plan on having more, I didn't need to do any blending and only sampled the first one:
#if UNITY_SHOULD_SAMPLE_SH
    // BUG: This flat out doesn't work when a skybox-based ambient lighting is used with the skybox updated at run-time
    //gi.indirect.diffuse = ShadeSHPerPixel (normalWorld, data.ambient);

    // This gives similar results and also happens to work properly
    float4 diffuse = UNITY_SAMPLE_TEXCUBE_LOD(unity_SpecCube0, normalWorld, 0.8 * UNITY_SPECCUBE_LOD_STEPS);
    gi.indirect.diffuse = DecodeHDR(diffuse, unity_SpecCube0_HDR).rgb;
#endif
This gave nice enough results, but just to make it extra blurry and eliminate the visible corners in the cubemap, I wrote code to do instant blurring:
// Fast, imprecise version:
//float4 diffuse = UNITY_SAMPLE_TEXCUBE_LOD(unity_SpecCube0, normalWorld, 0.8 * UNITY_SPECCUBE_LOD_STEPS);
//gi.indirect.diffuse = DecodeHDR(diffuse, unity_SpecCube0_HDR).rgb;

// Smoother but slower version:
float3 right = normalize(cross(float3(0.0, 1.0, 0.0), normalWorld));
float3 up = normalize(cross(normalWorld, right));
const float sampleFactor = 0.9 * UNITY_SPECCUBE_LOD_STEPS;
const float jitterFactor = 0.3;

float4 diffuse = (UNITY_SAMPLE_TEXCUBE_LOD(unity_SpecCube0, normalWorld, sampleFactor) +
    UNITY_SAMPLE_TEXCUBE_LOD(unity_SpecCube0, lerp(normalWorld,  up, jitterFactor), sampleFactor) +
    UNITY_SAMPLE_TEXCUBE_LOD(unity_SpecCube0, lerp(normalWorld, -up, jitterFactor), sampleFactor) +
    UNITY_SAMPLE_TEXCUBE_LOD(unity_SpecCube0, lerp(normalWorld,  right, jitterFactor), sampleFactor) +
    UNITY_SAMPLE_TEXCUBE_LOD(unity_SpecCube0, lerp(normalWorld, -right, jitterFactor), sampleFactor)) * 0.2;

gi.indirect.diffuse = DecodeHDR(diffuse, unity_SpecCube0_HDR).rgb;
With the global illumination taken care of, it was time to tackle the specular issues. I did that by creating a custom BRDF and extending the output struct's specular color to be a half4 instead of a half3 -- that is, I simply wanted to take advantage of its unused alpha channel. In the last part of the BRDF function I made good use of it:
    specularTerm = max(0, specularTerm * specColor.a * specColor.a * nl);

    half diffuseTerm = disneyDiffuse * nl;
    half grazingTerm = saturate(oneMinusRoughness * specColor.a + (1.0 - oneMinusReflectivity));
    return half4(diffColor * (gi.diffuse + light.color * diffuseTerm) +
        light.color * FresnelTerm (specColor.rgb, lh) * specularTerm +
        surfaceReduction * gi.specular * FresnelLerp (specColor.rgb, grazingTerm, nv) * specColor.a, 1.0);
In simple terms, all this does is attenuate the specular term's strength by the specular color's alpha, making it possible to get rid of it completely. It's based purely on observation and tweaking until I got the results I wanted. Now, with a simple custom specular shader that uses the texture's alpha channel to control what should be affected by specular highlights and what shouldn't, I was able to get the "custom" results in the screenshots above and below.

The last thing I did was to write a custom fog version that also sampled the same exact diffuse term in order to have the fog be colored by the ambient lighting as well. This created a colored fog that blends much better with the environment around it.





I'm attaching the BRDF + specular shader for those that need it.


12
Dev Blog / Apr 10, 2016 - Planetary terrain
« on: April 10, 2016, 12:28:44 PM »
With the atmospheric shaders in solid shape, I decided it was time to go back to planetary terrain generation. In the previous post on this topic I explained how I succeeded in reducing memory usage to a fraction of what it used to be, but I still had a long way to go before the planetary terrain was anything close to usable.

First, the seams. Due to how graphics hardware works, when vertices on one mesh don't perfectly overlap vertices on an adjacent mesh, visible seams appear in the game. In the last screenshot of the aforementioned post I show exactly that issue -- there are vertices on one mesh that don't have a corresponding vertex on the adjacent mesh. Fortunately it's easy enough to fix by trimming edge triangles on the denser of the two meshes when it's adjacent to a mesh with a lower subdivision level:



With that out of the way, I moved the entire subdivision process onto worker threads using the handy WorkerThread class I created a while back. It's capable of spawning threads up to a limit of 2x the CPU core count, after which point it queues up functions and executes them as threads free up. The only thing that still needed to be done on the main thread was the actual setting of the mesh geometry and the collider mesh. Unfortunately the latter was always the performance hog, taking twice as long as all the other operations executed in the same frame combined:



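The gist of that worker class -- cap the thread count, queue anything past the cap -- might look roughly like this. It's a simplified sketch, not the actual TNet implementation, and anything that touches Unity objects still has to be handed back to the main thread:

using System;
using System.Collections.Generic;
using System.Threading;

// Simplified sketch of a capped worker pool: run jobs on background threads,
// but never more than 2x the core count at once.
public static class SimpleWorkerPool
{
    static readonly Queue<Action> mQueue = new Queue<Action>();
    static readonly int mMaxThreads = Environment.ProcessorCount * 2;
    static int mActive = 0;

    public static void Enqueue (Action job)
    {
        lock (mQueue)
        {
            if (mActive < mMaxThreads) { ++mActive; StartThread(job); }
            else mQueue.Enqueue(job);
        }
    }

    static void StartThread (Action job)
    {
        new Thread(() =>
        {
            // Keep pulling queued jobs until there's nothing left to do
            for (Action current = job; current != null; )
            {
                current();

                lock (mQueue)
                {
                    if (mQueue.Count > 0) current = mQueue.Dequeue();
                    else { --mActive; current = null; }
                }
            }
        }) { IsBackground = true }.Start();
    }
}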
How to speed up the collider baking? Well, first, I don't actually need colliders on anything but the deepest subdivision patches. This eliminated the majority of them right away. The rest? I simply staggered them out -- rather than creating all colliders as soon as the data becomes available, I made it so that only one collider can be created per frame. This only takes about 2 milliseconds per frame, which is perfectly acceptable. This effectively makes the entire planet generate seamlessly and without any hiccups all the way down to the 10th subdivision:



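A minimal sketch of that one-collider-per-frame staggering (not the actual implementation -- the class and method names here are made up):

using System.Collections.Generic;
using UnityEngine;

// Patches push their finished collision meshes into a queue, and only one
// MeshCollider gets its mesh assigned (and therefore baked by PhysX) per frame.
public class ColliderScheduler : MonoBehaviour
{
    struct Entry { public MeshCollider collider; public Mesh mesh; }

    static readonly Queue<Entry> mPending = new Queue<Entry>();

    // Called by terrain patches when their collision mesh is ready
    static public void Schedule (MeshCollider col, Mesh mesh)
    {
        mPending.Enqueue(new Entry { collider = col, mesh = mesh });
    }

    void LateUpdate ()
    {
        // Assign at most one collider mesh per frame -- roughly 2 ms of work,
        // instead of a big hitch when many patches finish on the same frame
        if (mPending.Count > 0)
        {
            var e = mPending.Dequeue();
            if (e.collider != null) e.collider.sharedMesh = e.mesh;
        }
    }
}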
The next step was to actually generate a terrain based on some useful data. Naturally, since I already had a high-res height map of Earth (8192x4096), I simply wrote a script that samples its height values:

using UnityEngine;
using TNet;

[RequireComponent(typeof(QuadSphere))]
public class EquirectangularHeightmap : MonoBehaviour
{
    public Texture2D texture;

    public double lowestPoint = 0d;
    public double highestPoint = 8848d;
    public double planetRadius = 6371000d;

    const float invPi = 1f / Mathf.PI;
    const float invTwoPi = 0.5f / Mathf.PI;

    void Awake ()
    {
        if (texture != null)
        {
            // Cache the red channel of the height map as a flat float array
            float[] data;
            {
                Color[] cols = texture.GetPixels();
                data = new float[cols.Length];
                for (int i = 0, imax = cols.Length; i < imax; ++i) data[i] = cols[i].r;
            }

            int width = texture.width;
            int height = texture.height;
            var sphere = GetComponent<QuadSphere>();
            var min = (float)(sphere.radius * lowestPoint / planetRadius);
            var max = (float)(sphere.radius * highestPoint / planetRadius);

            // Convert the unit-sphere normal into equirectangular UVs, then sample the height map
            sphere.onSampleHeight = delegate(ref Vector3d normal)
            {
                var longitude = Mathf.Atan2((float)normal.x, (float)normal.z);
                var latitude = Mathf.Asin((float)normal.y);
                longitude = Mathf.Repeat(0.5f - longitude * invTwoPi, 1f);
                latitude = latitude * invPi + 0.5f;
                return Mathf.Lerp(min, max, Interpolation.BicubicClamp(data, width, height, longitude, latitude));
            };
        }
    }
}

It looked alright from high orbit, but what about zooming in? Not so much:



Fortunately, a couple of years ago I was working on a game prototype for which I wrote various interpolation techniques. The screenshot above uses basic bilinear filtering. Switching the code to bicubic filtering produces a much more pleasant result:



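For those curious what "bicubic" means here, a generic Catmull-Rom bicubic sampler over a single-channel float array looks roughly like this. It's a sketch for illustration only, not the actual Interpolation.BicubicClamp used by the script above:

using UnityEngine;

// Generic Catmull-Rom bicubic sampling over row-major single-channel data.
public static class BicubicSample
{
    public static float Sample (float[] data, int width, int height, float u, float v)
    {
        // Convert normalized [0..1] coordinates to pixel space, centered on texel centers
        float x = u * width - 0.5f;
        float y = v * height - 0.5f;
        int ix = Mathf.FloorToInt(x);
        int iy = Mathf.FloorToInt(y);
        float fx = x - ix;
        float fy = y - iy;

        // Interpolate 4 rows horizontally, then interpolate those results vertically
        float[] rows = new float[4];

        for (int j = 0; j < 4; ++j)
        {
            int yy = iy - 1 + j;
            rows[j] = Cubic(
                Pixel(data, width, height, ix - 1, yy),
                Pixel(data, width, height, ix,     yy),
                Pixel(data, width, height, ix + 1, yy),
                Pixel(data, width, height, ix + 2, yy), fx);
        }
        return Cubic(rows[0], rows[1], rows[2], rows[3], fy);
    }

    // Catmull-Rom cubic through p1 and p2, with p0 and p3 acting as tangent helpers
    static float Cubic (float p0, float p1, float p2, float p3, float t)
    {
        return p1 + 0.5f * t * (p2 - p0 + t * (2f * p0 - 5f * p1 + 4f * p2 - p3 +
            t * (3f * (p1 - p2) + p3 - p0)));
    }

    // Clamp lookups to the edges of the source data
    static float Pixel (float[] data, int width, int height, int x, int y)
    {
        x = Mathf.Clamp(x, 0, width - 1);
        y = Mathf.Clamp(y, 0, height - 1);
        return data[y * width + x];
    }
}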
Hermite spline filtering is even better:



Still very blurry though, and increasing texture resolution is not an option. Solution? Add some noise! I opted to go with a pair of noises: a 5 octave ridged multifractal, and a 5 octave Perlin noise that results in this terrain:



Combining the Hermite-filtered texture samples with the noise results in this:



It looks excellent all the way down to ground level with 16 subdivisions:



At that subdivision level the vertex resolution is 19.2 meters. Since I'll most likely go with planets at 1/10th of their actual size for gameplay reasons, that will give me a resolution of under 2 meters per vertex, which should make it possible to have pretty detailed terrain.

The downside of this approach right now is having to sample that massive texture... 8192*4096*4 = 134.2 megabytes needs to be allocated just to parse it via texture.GetPixels32(). Another 134.2 MB is needed because I have to convert Color32 to float before it can be used for interpolation. Not nice... I wish there was a way to read just a single channel of a texture... The obvious way around it would be to use smaller textures, but that would cause even more detail to be lost. I'm thinking of simply caching the result by saving it to a file after parsing the texture once.
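That caching idea could be as simple as dumping the float array to disk after the first parse -- something along these lines (a sketch; the file format and path handling are placeholders):

using System.IO;
using UnityEngine;

// Parse the heightmap's red channel once, save the floats to a raw binary file,
// and on subsequent runs load that file instead of parsing the huge texture again.
static class HeightmapCache
{
    public static float[] LoadOrCreate (Texture2D tex, string path)
    {
        if (File.Exists(path))
        {
            byte[] bytes = File.ReadAllBytes(path);
            float[] cached = new float[bytes.Length / 4];
            System.Buffer.BlockCopy(bytes, 0, cached, 0, bytes.Length);
            return cached;
        }

        // First run: still pays the full parsing cost once
        Color32[] cols = tex.GetPixels32();
        float[] data = new float[cols.Length];
        for (int i = 0; i < cols.Length; ++i) data[i] = cols[i].r / 255f;

        byte[] raw = new byte[data.Length * 4];
        System.Buffer.BlockCopy(data, 0, raw, 0, raw.Length);
        File.WriteAllBytes(path, raw);
        return data;
    }
}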

In case you're wondering, it takes 50 milliseconds to generate a planet going down to 16th subdivision level in the Unity Editor:



The memory usage goes up by 300 MB when generating the planet, and since 134*2=268 MB of that is due to sampling the epic heightmap texture, that means the entire planet only takes ~30 MB in memory. Not bad!

I'm looking forward to seeing how it will look when I put some textures on it -- but that's going to be my focus next week.

Speaking of textures, I also tweaked the atmospheric shader from the previous post a little, making it more vibrant and improving the day/night transition to make it more realistic:



I achieved the improved fidelity by splitting the scattering from one color into two. One color is used for atmospheric scattering, and another is used for scattering close to the ground (terrain, clouds). I had to do it because with just one color it doesn't seem possible to have both a vibrant blue sky and a reddish tint on the night-time cloud transition. A blue sky causes the night-time transition to look yellow, and to get a reddish transition I had to make the sky light a turquoise color, which obviously looks pretty terrible. Using two scattering colors gave me the best of both worlds:










13
Dev Blog / Apr 2, 2016 - In pursuit of better skies
« on: April 02, 2016, 11:55:36 AM »
In my previous post I mentioned that the atmosphere was "good enough" to move on to other tasks. But... the transition from day to night really bugged me... Simply put, I thought the transition looked too washed out, and I wanted to fix that.

My solution involved restarting from scratch and redoing the shaders, paying closer attention to the scattering process. The finalized effect proved to be much better than my first attempt, although with the downside of being limited to the 2.5% atmosphere height again. The visual thickness of 2.5% using the new shader was actually thicker than my first attempt set to 5% though, so I just left it like that. The visual benefit was well worth it:



The next step was fixing the clouds... The cylindrical texture I used before was not only low quality, but it also looked like crap around the poles. I needed something that not only looked good, but was also capable of zooming all the way down to ground level without looking seriously blurred. Of course this isn't something that can be solved with higher resolution textures alone. Some basic math: the circumference of the Earth is 40,075 km, and the max texture size in Unity is 8192, which gives an equatorial resolution of 4.89 km per pixel. That's pretty terrible. Another approach was needed.

After some thought, I decided to turn the cylindrical cloud map texture into a tileable square with a base size of 4096x4096. I used the same basic projection logic I used for the quad sphere to calculate the UV coordinates both horizontally and at the poles. This gave me better per-km resolution than an 8192 texture, with the added benefit of looking great both at the equator and at the poles:



If only it would still look great when zoomed in, right? Well... the most obvious way of adding detail at zoomed levels is to use detail textures. But... we're dealing with clouds, so what would the detail texture be? Why... the same exact texture! Instead of adding detail to an existing blurred texture, I decided to blend between the same exact texture, but sampled at two different resolutions (1x zoom and 4x zoom). The result was just denser clouds that could be zoomed in farther, but eventually still looked washed out when zoomed in enough.

That's when I thought to myself: why not do this continuously? Use the camera's height to determine the 2 closest zoom levels and feed those values to the shader. The shader then uses the 2 sets of UVs to sample the cloud texture and blends between them using the height-based blending value. That way I can continuously blend between 2 samples of the texture at ever-increasing zoom levels based on the camera's height:
if (heightFactor < 0.5f)
{
    atmosphericBlending.x = 16f;    // Tex0 UV coordinate multiplier
    atmosphericBlending.y = 1f;     // Tex0 blending weight
    atmosphericBlending.z = 8f;     // Tex1 UV coordinate multiplier
    atmosphericBlending.w = 0f;     // Tex1 blending weight
}
else if (heightFactor < 2f)
{
    float f = (heightFactor - 0.5f) / 1.5f;
    atmosphericBlending.x = 16f;
    atmosphericBlending.y = 1f - f;
    atmosphericBlending.z = 8f;
    atmosphericBlending.w = f;
}
else if (heightFactor < 6f)
{
    float f = (heightFactor - 2f) / 4f;
    atmosphericBlending.x = 8f;
    atmosphericBlending.y = 1f - f;
    atmosphericBlending.z = 4f;
    atmosphericBlending.w = f;
}
else if (heightFactor < 18f)
{
    float f = (heightFactor - 6f) / 12f;
    atmosphericBlending.x = 4f;
    atmosphericBlending.y = 1f - f;
    atmosphericBlending.z = 2f;
    atmosphericBlending.w = f;
}
else if (heightFactor < 54f)
{
    float f = (heightFactor - 18f) / 36f;
    atmosphericBlending.x = 2f;
    atmosphericBlending.y = 1f - f;
    atmosphericBlending.z = 1f;
    atmosphericBlending.w = f;
}
else
{
    atmosphericBlending.x = 2f;
    atmosphericBlending.y = 0f;
    atmosphericBlending.z = 1f;
    atmosphericBlending.w = 1f;
}
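The hand-off to the shader isn't shown above, but it boils down to pushing that vector into the cloud material once per frame -- roughly like this, with the material property and texture names being placeholders rather than the actual ones:

using UnityEngine;

public class CloudBlendingFeeder : MonoBehaviour
{
    // Illustrative names -- the actual shader property names aren't given in the post
    public Material cloudMaterial;

    // atmosphericBlending is the Vector4 filled in by the height-based branches shown above
    public void Apply (Vector4 atmosphericBlending)
    {
        cloudMaterial.SetVector("_AtmosphericBlending", atmosphericBlending);

        // On the shader side the idea is simply (CG-like pseudocode, kept as a comment):
        //   half4 c0 = tex2D(_CloudTex, uv * _AtmosphericBlending.x) * _AtmosphericBlending.y;
        //   half4 c1 = tex2D(_CloudTex, uv * _AtmosphericBlending.z) * _AtmosphericBlending.w;
        //   half4 clouds = c0 + c1;   // the two weights always sum to 1
    }
}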
And just like that, I had clouds that could be zoomed all the way down to the ground and looked fantastic both far away and up close. The blending was very subtle when traveling at the speed of a real-world rocket -- so subtle as to not be noticeable. Even with the camera test I set up, which was moving very quickly, the transitions were all smooth:



I made the clouds be affected by the scattering color for a more natural looking transition:



Next up came the night lights. The square nature of the pixels was very noticeable up close, so something had to be done. The approach I settled on simply took the night lights texture and blurred it slightly, resulting in smoother edges. I then blended a detail texture with the night lights map to achieve a much higher effective resolution than the original:





It still looks great even from far away:



The last thing I did was duplicate the cloud detail code in the shader and make it sample a noise map texture that I then applied to the terrain itself. This added a subtle variation to the terrain's texture that improves how it looks when zoomed in. The snow-covered mountain peaks still looked rather bad up close because of the strong contrast between neighboring pixels, but I addressed that by simply adding extra snow to make the transition less jarring. I did that by sampling the height map, disturbing it a little using my trusty LOD-based noise map explained above, and adding extra whiteness around the cliffs:



Unfortunately there was nothing this could do to make the shoreline less blurry when zoomed in... but I have some ideas on how to address that, which I will explore in a future post. I'm sure simply adding terrain deformations based on a heightmap + noise will improve the look quite a bit. In the meantime, I have some decent new 3440x1440 backgrounds!






14
Dev Blog / Mar 27, 2016 - Why is the sky blue?
« on: March 27, 2016, 07:07:07 AM »
After the strenuous trials of the basics of planetary generation, I decided that I'd had enough math for a bit and that it was time to work on something shiny instead.

Coincidentally, while I was banging my head against the planetary wall, Neil was learning the basics of atmospheric scattering. He, like many before him, started with the GPU Gems 2 article by Sean O'Neil from over a decade ago. It certainly seemed promising. Needing no texture lookups or any kind of pre-generated data not only means the planetary atmospheric properties can be easily modded by players, but it also opens up the possibility of terraforming planets over time. Who's to say that, with a concentrated player effort, it shouldn't be possible to terraform Mars from its CO2 atmosphere into a nitrogen-oxygen mix? Doing so would certainly change its reddish hue to a blue one. Would be nice to make it change gradually over time, wouldn't it?

Anyhow, I digress... So Neil started with the GPU Gems approach... and quickly ran into several limitations. First, the shader assumes that the atmosphere's height is always going to be 2.5% of the planet's radius, and I thought it would be much nicer not to have this limitation. 5% looks better, and I'm still not fully committed to a 1:1 scale for planets -- but that's something I'll delve into in a later post.

Another limitation is that O'Neil's approach uses several shaders: one set for when the observer is inside the atmosphere, and another for when the observer is outside it. While it's not a big issue to swap between the two, there are enough differences between the shaders to make the change noticeable. That, to me, is unacceptable. All transitions must be completely seamless.

The last limitation was something I discovered once I delved into the task of atmospheric rendering myself -- the inability to control the colors properly, especially during the sunset. The screenshots on the GPU Gems page show a proper sunset, but no matter what I tried, I couldn't get it to look quite like that.

Of course I also surveyed the available assortment of similar solutions on the Asset Store. The most promising of them -- AS3 atmospheric scatter -- was broken in Unity 5, used per-vertex shading along with pre-generated textures, and looked even worse than the GPU Gems approach. It also still suffered from the visible shader transition from the outer to the inner atmosphere. Worse still, adjusting the light properties at run-time had no effect on how the planet looked. Apparently the light's color and intensity get baked into whatever lookup textures it uses. So, in other words -- scrap:



Years ago I also picked up FORGE3D's planets. I'll admit, it was -- and still is -- a nice pack. It has a nice assortment of beautiful planets with thoughtful little details, such as having separate cloud maps for the sides of the planet and top/bottom to avoid the visual artifacts. I'm sure I'll be able to use some of the textures from it in the future. The way the shaders are structured also makes them quite suitable for procedurally generated content -- just specify the textures to use for the shader, and you've got a beautiful planet. The downside? Their planets are all external-view only, and everything is very art-side -- meaning lots of very high res textures.

Other kits were less useful. Etherea1 is completely broken in Unity 5. Space Graphics Toolkit is also partially broken in Unity 5. The Blacksmith atmospheric scattering is planet-side only, is not realistic, and overall rather unwieldy... I can go on, but needless to say, I was not able to find anything that actually does the job I was looking for, so I had to give it a go myself.

So far I've spent just over a week on it. I continued what Neil started with the GPU Gems approach as it's the one that was "almost" there and got it to a state that both simplified the code and also made it more robust. I was able to eliminate the 2.5% limit I mentioned and fix the outer-inner transition issue (in fact, only one shader is needed to draw the atmosphere now, not two)... and I got it all looking moderately acceptable. The most challenging part for me was getting my head around the whole "scattering" deal -- and that was because it was difficult for me to visualize. I had to resort to writing a shader to show me what happens:



At the time of this writing, the atmosphere is still very much a work in progress. Unfortunately from my experience the GPU Gems shader-based scattering is not very flexible. There is still a matter of discontinuity between the atmosphere and surface shaders that I want to eliminate (I want one shader to "just work" for both), and it's rather difficult to get the effects I want. I've also not been successful in getting the sunset to look realistic. I am not seeing an orange sky until after the sun has set, and even then it's limited to the horizon where the sun disappeared, while the rest of the sky remains completely black. I will keep at it and see if I can improve it still, but for now it does look acceptable enough to move forward.

One thing's for sure though... I'm going to need to get some better textures.








15
Dev Blog / Mar 27, 2016 - How (not) to generate a planet
« on: March 27, 2016, 04:40:27 AM »
Has it really been a month already since the last log entry? Time certainly flew by...

Let's see what happened since then... After a bit of a struggle, I finally managed to migrate to Unity 5. The reason for that ended up being the desire to have more control over the rendering process. There were visual rendering artifacts with the tri-planar approach when specularity was added to the mix; I wasn't able to get them fixed on Unity 4's side, but I did fix them in Unity 5 by normalizing... the s.Normal in the lighting function, I think? I don't even remember anymore... I updated the shader I included at the end of the previous post after I got it working.

Curiously enough, terrain generation code is a lot slower in Unity 5 -- but only in the editor. In Unity 4 it was taking an average of 1150 milliseconds to generate the cratered terrain from the first post. The same task takes Unity 5 anywhere from 1650 to 3600 milliseconds to complete. Oddly enough, doing a build produces the opposite results. Unity 5 64-bit stand-alone build created the terrain 15% faster than Unity 4 -- which is why I ultimately decided to ignore it.

Moving forward, I created two sphere generation algorithms -- one using the quad sphere described in the previous post, and another using an icosahedron. As it turned out, the icosahedron's triangles are not perfect after all, and they do get skewed -- which is something I should have realized, in hindsight. There's less skewing than with the quad sphere, but it still happens:



The quad sphere has a nice advantage over the icosahedron sphere: its UV coordinates are very simple, and if desired, one could even map 6 square textures to it without using any kind of projection or blending, simply because it's always a matter of working with quads. There was still the outstanding issue of vertices near the corners being skewed to the point of being less than a quarter of the size of the ones in the center, but I was able to resolve it by pre-skewing all vertices using a simple mathematical operation:
x = (x * 0.7 + x * x * 0.3);
The hardest part was figuring out the inverse of that operation, as I needed to be able to take any 3D world position on the sphere and convert it to a 2D position on the sphere's side, both for determining how to subdivide the sphere and for figuring out which regions the player should currently listen to (multiplayer). After busting my head a bit over this high school math problem -- one so far in the past as to almost appear beyond me -- I was able to figure it out:
static double InverseTransform (double x)
{
    const double d233 = 0.7 / 0.3;
    const double d116 = d233 * 0.5;
    const double d136 = d116 * d116;

    if (x < 0.0) return -(System.Math.Sqrt(-x / 0.3 + d136) - d116);
    else return System.Math.Sqrt(x / 0.3 + d136) - d116;
}
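As a quick sanity check, the forward transform written in its sign-symmetric form (x * |x| rather than x * x -- that part is my assumption, based on what the negative branch of InverseTransform seems to imply) round-trips cleanly:

static double ForwardTransform (double x)
{
    // Sign-symmetric version of the skewing operation shown earlier
    return x * 0.7 + x * System.Math.Abs(x) * 0.3;
}

// InverseTransform(ForwardTransform(0.5))   == 0.5
// InverseTransform(ForwardTransform(-0.25)) == -0.25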
After I was done with that math problem, I went straight into... more math. How to subdivide the quad sphere into patches? How to do it so that no two adjacent patches differ by more than one subdivision level? And more importantly, how to do it in a memory-efficient manner? Naturally I didn't want to do any of this math stuff... too much math already. I figured I'd just start coding first and do that silly unnecessary math later.

I first started with the most naive approach, just to see it working. I figured -- hey, KSP used a 10-level-deep subdivision for its planets. I'll just go ahead and pre-generate all the nodes right away, making it easy to know which node has which neighbor and parent, so traversing them will be a breeze!

Well, here's how it went... First time I hit play, I sat there twiddling thumbs for over 10 seconds while it generated those nodes. They weren't game objects, mind you -- not at all. They were simple instances of a custom QuadSphereNode class I created that had a bunch of references (to siblings, to potential renderers, meshes, etc). No geometry was ever generated -- not yet. I was merely creating the nodes. So naturally, the 10 second wait to generate enough nodes for just 1 planet was a surprise to me. And then I looked at my memory usage... Unity was at 3.5 gigabytes of RAM. Ouch!

That's when I decided to get my head out of my ass and do some math, again.
6 nodes, 10 subdivisions:
0 = 6
1 = 24
2 = 96
3 = 384
4 = 1536
5 = 6144
6 = 24576
7 = 98304
8 = 393216
9 = 1572864
Total = 2,097,150 nodes
So to generate 2 million nodes, Unity was taking 10 seconds and eating up over 3 gigs of RAM, leading me to guesstimate that each node was adding ~1,500 bytes of RAM. The word "unacceptable" didn't quite cover it.

Not willing to delve into the whole "doing subdivision properly" logic just yet, I decided to see if I could reduce the size of my classes first. That's when I learned that SizeOf() doesn't report sizes properly in Unity -- it doesn't take unassigned fields pointing to class types into account. If you ever need to use it, use it in Visual Studio instead. Long story short, by simply moving the geometry-related stuff out into its own class, and then leaving a null-by-default field pointing to that geometry for each node, I was able to reduce memory usage to 33% (1 gigabyte), and the generation time to 2.3 seconds.
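In other words, the layout ended up being something like this (field names are illustrative, not the actual QuadSphereNode members):

using UnityEngine;

// Tiny per-node bookkeeping, with all the heavy geometry-related data behind
// a reference that stays null for the vast majority of nodes.
public class QuadSphereNode
{
    public QuadSphereNode parent;
    public QuadSphereNode[] children;   // null until the node actually gets subdivided

    public class Geometry
    {
        public Vector3[] vertices;
        public int[] triangles;
        public MeshRenderer renderer;
        public MeshCollider collider;
    }

    public Geometry geometry;   // null by default -- allocated only for visible patches
}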

Of course a gigabyte and 2.3 seconds was still too much to be used in practice, so I finally decided to delve into proper subdivision logic. On the bright side, it didn't take as long as I thought it would to get the "what's a neighbor of what" code working in a very efficient manner and to generate meshes on the fly, on demand. On the downside, at the time of this writing I still haven't actually finished, per se. What the code does right now is this: given the directional vector and the distance to the surface of the planet, it figures out what subdivision level is required. It then dives right into the subdivision logic, creating only the nodes that are needed to reach the 4 nodes closest to the observer. After that's done, it propagates outwards from those 4 nodes, creating higher and higher level neighbors until the entire sphere is covered. The code had to take into account neighboring sides of the sphere (and account for their varied rotation) and ensure that no patch ends up next to another patch that's more than 1 subdivision level above or below it.

Long story short, memory usage is negligible now, and it takes a mere 134 milliseconds to generate a planet going all the way down to the 10th subdivision -- including all the meshes necessary to render it, all the renderers and all the colliders. Over half of that time is spent baking PhysX collision data, according to the profiler:



You may notice the memory usage in there as well. A mere 12.3 MB were allocated during the planet's creation. So yes... from 10,000 milliseconds and 3 gigabytes not including geometry, down to 63 milliseconds and 12.3 megabytes including the geometry. Not bad! The final mesh looks like this:

