Author Topic: May 23, 2016 - The ugly shores

ArenMook

May 23, 2016 - The ugly shores
« on: May 23, 2016, 08:04:32 PM »
The worst part about using textures is that there is always a limit to how detailed they can get. Even with an 8k texture map for the Earth, the planet still looked quite ugly at the orbital altitude of the International Space Station (~400 km), let alone closer to the ground. Take a screenshot from a 20 km altitude, for example:



That blurry mess is supposed to be the shoreline. Needless to say, improvements were direly needed. First I tried the most trivial and naive approach: increasing the resolution of the textures. It occurred to me that the 8192x4096 cylindrical projection texture I was using could be split up into 6 separate ones -- one per face of the quad sphere. Not only would this give me better resolution to work with, but it would also make it possible to greatly increase the detail around the poles while reducing memory usage. Better still, I could also skew the pixels the same way I was skewing the vertices of the generated quad sphere mesh -- doing so would improve the pixel density in the center while reducing it in the corners. This is useful because by default the corners end up with a higher-than-normal pixel density and the center ends up with a lower one -- my code simply balances them out.

I quickly wrote a tool capable of splitting a single cylindrical projection texture into 6 square ones, and immediately noticed that with six 2048x2048 textures I got equal or better pixel density around the equator, and much, much improved quality around the poles. A single 8k by 4k texture takes 21.3 MB in DXT1 format, while six 2k textures take a combined 16.2 MB. Better quality and reduced size? Yes please!
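For the curious, the core of such a splitter might look roughly like this -- a minimal sketch, not the actual tool; the class and method names are mine, and the pixel skewing mentioned above is omitted. For every pixel of a cube face, figure out which direction on the sphere it covers, then sample the cylindrical (equirectangular) source at the matching longitude and latitude:

```csharp
using UnityEngine;

public static class CubemapSplitter
{
    // Extracts one square cube-map face from an equirectangular source texture.
    // 'faceRotation' orients the face (identity = the face looking down +Z).
    public static Texture2D ExtractFace (Texture2D source, Quaternion faceRotation, int size)
    {
        var face = new Texture2D(size, size, TextureFormat.RGB24, false);

        for (int y = 0; y < size; ++y)
        {
            for (int x = 0; x < size; ++x)
            {
                // -1..1 coordinates on the face's plane
                float u = (x + 0.5f) / size * 2f - 1f;
                float v = (y + 0.5f) / size * 2f - 1f;

                // Direction from the sphere's center through this pixel
                Vector3 dir = (faceRotation * new Vector3(u, v, 1f)).normalized;

                // Convert the direction to longitude/latitude, then to source UVs
                float lon = Mathf.Atan2(dir.x, dir.z);               // -PI..PI
                float lat = Mathf.Asin(Mathf.Clamp(dir.y, -1f, 1f)); // -PI/2..PI/2
                float sx = lon / (2f * Mathf.PI) + 0.5f;
                float sy = lat / Mathf.PI + 0.5f;

                face.SetPixel(x, y, source.GetPixelBilinear(sx, sy));
            }
        }
        face.Apply();
        return face;
    }
}
```

Calling this six times with the six face orientations produces the six square textures.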

Unfortunately, increasing the pixel density, even raising the 6 textures to 8k size, always failed to produce better results past a certain zoom level. The 8k textures still started to look pretty bad below 20 km altitude, which made perfect sense -- it's only 4x4 pixels where there was 1x1 before. If it looked awful at 20 km altitude, it would look exactly as awful at 5 km with the increased textures. Considering that I was looking for something that could go all the way down to ground level, more work was clearly needed.

The next thing I tried was to add a visible water mesh that would then be drawn on top of the terrain. The problem with that was that the terrain is actually extremely flat in many places of the world: the vast majority of the heightmap's values reside in the 0-5 range, with the remaining values spread across 5-255. Worse still, the heightmap wasn't providing any values below sea level. NASA actually has two separate sets of heightmaps for that: one for above sea level, and another for below it. Unfortunately the underwater heightmaps lacked resolution and were quite blurry, but just out of curiosity I merged them into a single heightmap to see the effect. This effectively cut the above-ground heightmap's resolution in half, and still suffered from the same issue: large parts of the world sit so close to sea level that the heightmap doesn't register them as being above ground.
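For reference, such a merge could be as simple as packing both maps into a single 8-bit channel -- this is just my guess at how it might be done, not the actual code, but it shows where the halved above-ground resolution comes from:

```csharp
// Hypothetical sketch: the below-sea-level map gets packed into 0..127 and the
// above-sea-level map into 128..255, so each half only keeps 7 bits of precision.
static class HeightmapMerge
{
    public static byte Merge (byte aboveSeaLevel, byte belowSeaLevel, bool isLand)
    {
        return isLand
            ? (byte)(128 + aboveSeaLevel / 2)   // 128..255 = above sea level
            : (byte)(belowSeaLevel / 2);        // 0..127 = below sea level
    }
}
```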

At this point I asked myself: why am I bothering with the detail under the sea? Why have a heightmap for that at all? The game isn't set underwater, so why bother?

Grumbling at myself for wasting time, I grabbed a different texture from NASA -- a simple black-and-white representation of the continents, with white for the landmasses and black for the water. I simply assumed that a value of 0.5 means sea level and modified the sampled vertices' heights so that they were affected not only by the heightmap, but by this landmass mask as well. Everything below 0.5 would get smoothly lowered, and everything above 0.5 would get smoothly raised, resulting in a visible shoreline (a rough sketch of that adjustment follows the screenshot). All that was left was to slap a water sphere on top, giving me this:



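Here's the promised sketch of that height adjustment. The post doesn't include the actual code, so the names, parameters and exact falloff below are my own -- the point is simply that the landmass mask pushes everything below 0.5 down and everything above 0.5 up, regardless of what the heightmap says:

```csharp
using UnityEngine;

static class ShorelineHeight
{
    // height01 = sampled heightmap value (0..1)
    // land01   = sampled landmass mask   (0..1, where 0.5 is assumed to be sea level)
    // Returns the elevation applied to the vertex, negative below sea level.
    public static float Elevation (float height01, float land01, float maxHeight, float shoreDepth)
    {
        // Remap the mask so that 0.5 becomes 0: negative over water, positive over land,
        // with a smooth falloff on either side of the shoreline.
        float shore = Mathf.SmoothStep(-1f, 1f, land01);

        // Water gets smoothly lowered; land gets smoothly raised and keeps the heightmap's shape.
        return (shore < 0f) ? shore * shoreDepth : shore * (shoreDepth + height01 * maxHeight);
    }
}
```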
Better. The next step was to get rid of the blue in the terrain texture. This was a bit more annoying and involved repeated Photoshop layer modifications, but the end result was better still:



Now there was an evident shoreline, crisp and clear all the way from orbit to the surface. Unfortunately the polygon-based approach suffered from a few... drawbacks. The first was Z-fighting, which I fully expected. It was manageable by adjusting the near clip plane or by using multiple cameras to improve the near-to-far clip precision. The other problem was less straightforward. Take the following screenshot of the texture-based approach, for example:



While a bit on the blurry side even from such a high altitude, it still looks better than the polygon-based approach:



Why is that? Two reasons. First, the polygon resolution decreases as the camera gets farther from the planet, resulting in bigger triangles, which in turn lead to less defined shores. Second, the triangulation is constant, and due to the way vertex interpolation works, edges that follow the triangulation look different from edges that run perpendicular to it. This is why the north-east and south-west parts of the Arabian peninsula look all jagged while the south-east part looks fine.

Fortunately the triangulation is easy enough to fix by adding code that ensures it follows the shores.
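One way such code could work -- this is my sketch of the idea, not the actual implementation -- is to pick, for each terrain quad, the diagonal whose two endpoints have the most similar landmass values, so the shared edge runs along the shoreline instead of cutting across it:

```csharp
using System.Collections.Generic;
using UnityEngine;

static class ShoreTriangulation
{
    // i00..i11 are the quad's four vertex indices (00 = bottom-left, 11 = top-right),
    // land[] holds the per-vertex landmass mask values.
    public static void AddQuad (List<int> indices, float[] land, int i00, int i10, int i01, int i11)
    {
        // The diagonal whose endpoints differ the least runs along the shore rather than across it.
        bool splitAlongMainDiagonal = Mathf.Abs(land[i00] - land[i11]) <= Mathf.Abs(land[i10] - land[i01]);

        if (splitAlongMainDiagonal)
        {
            indices.Add(i00); indices.Add(i11); indices.Add(i10);
            indices.Add(i00); indices.Add(i01); indices.Add(i11);
        }
        else
        {
            indices.Add(i00); indices.Add(i01); indices.Add(i10);
            indices.Add(i10); indices.Add(i01); indices.Add(i11);
        }
    }
}
```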



The bottom-left part of the texture still looks jagged, but that is because of inadequate mesh resolution at high distances from the planet. Solving that one is a simple matter of lowering the observer transform so that it's closer to the ground while remaining underneath the camera (a rough sketch of this follows the screenshot):



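In code, that observer adjustment might look something like the following -- a minimal sketch with made-up names and numbers, assuming the terrain subdivision is driven by a separate "observer" transform:

```csharp
using UnityEngine;

// Attach to the camera; 'observer' is whatever transform the terrain subdivision samples.
public class GroundObserver : MonoBehaviour
{
    public Transform planet;                 // planet's center
    public Transform observer;               // transform that drives the mesh subdivision
    public float surfaceRadius = 6371000f;   // sea-level radius
    public float maxAltitude = 20000f;       // how far above the surface the observer may rise

    void LateUpdate ()
    {
        Vector3 toCamera = transform.position - planet.position;
        float distance = toCamera.magnitude;
        if (distance < 1f) return;

        // Keep the observer directly underneath the camera, but pulled down toward the ground
        float clamped = Mathf.Min(distance, surfaceRadius + maxAltitude);
        observer.position = planet.position + toCamera * (clamped / distance);
    }
}
```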
This approach looks sharp both up high in orbit and close to the ground:



Here's a side-by-side comparison shot of the North American Great Lakes from an altitude of 300 km:



And another one a little bit more zoomed in:



Now, up to this point the water was simply rendered using a solid color shader that wrote to depth. The fun part came when I decided to add some transparency to shallow water in order to soften the edges a bit when close to the ground. While the transparency itself was easy to achieve by comparing the depth sampled from the depth buffer against the water surface's own depth, I quickly ran into issues with other effects that rely on depth, such as post-processed fog. Since the transparent water was no longer writing to depth, I was suddenly faced with the underwater terrain being shaded like this:



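As an aside, the depth comparison that drives the shallow-water transparency is roughly the following. In the project this math would live in the water's fragment shader; it's written here as plain C# purely for illustration, and the names are mine:

```csharp
using UnityEngine;

static class WaterAlpha
{
    // waterDepth   = eye-space depth of the water surface at this pixel
    // sceneDepth   = eye-space depth sampled from the depth buffer (the terrain below)
    // fadeDistance = how deep the water has to be before it becomes fully opaque
    public static float Alpha (float waterDepth, float sceneDepth, float fadeDistance)
    {
        // The deeper the terrain sits below the water surface, the more opaque the water.
        return Mathf.Clamp01((sceneDepth - waterDepth) / fadeDistance);
    }
}
```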
The most obvious way to fix this would be to draw the terrain and other opaque geometry, then the transparent water, then the water again filling only the depth buffer, and finally the remaining transparent objects. Unfortunately, as far as I can tell, this is not possible in Unity. All opaque objects are always drawn before all transparent objects, regardless of their render queue. It doesn't seem possible to insert a transparent object into the opaque geometry pass, so I had to resort to less-than-ideal hacks.

I tried various solutions, from modifying the water shader to modifying the fog shader itself, but in the end I settled on the simplest approach: ray-sphere intersection. I made the fragment shader do a ray intersection with the planet's sphere to determine the near intersection point and the point on the ray closest to the sphere's center. If that closest point lies below the water level and the near intersection point lies in front of the sampled depth value, then I move the sampled depth back to the near intersection point:



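Expressed as plain C# for clarity (the real thing runs in the fragment shader, and all names here are mine), that fix-up boils down to something like this:

```csharp
using UnityEngine;

static class WaterDepthFix
{
    // origin/dir describe the view ray (dir must be normalized), sampledDist is the distance
    // reconstructed from the depth buffer, center/waterRadius describe the water sphere.
    public static float FixDepth (Vector3 origin, Vector3 dir, float sampledDist,
                                  Vector3 center, float waterRadius)
    {
        Vector3 toCenter = center - origin;
        float tClosest = Vector3.Dot(toCenter, dir);      // distance along the ray to the closest point
        Vector3 closest = origin + dir * tClosest;        // point on the ray closest to the sphere's center
        float distToCenterSq = (closest - center).sqrMagnitude;
        float radiusSq = waterRadius * waterRadius;

        // The ray never dips below the water level -- nothing to fix
        if (distToCenterSq > radiusSq) return sampledDist;

        // Near intersection of the ray with the water sphere
        float tNear = tClosest - Mathf.Sqrt(radiusSq - distToCenterSq);

        // If the water surface lies in front of whatever the depth buffer contains,
        // move the sampled depth back onto the water surface
        return (tNear > 0f && tNear < sampledDist) ? tNear : sampledDist;
    }
}
```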
This approach works fine for now, but I can imagine it breaking as the planet gets larger and floating-point values start losing precision... I'll just have to keep my mind open for other potential ways to address this issue in the future.