Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - ArenMook

1
Dev Blog / Sep 16, 2018 - Rendering Planetoids
« on: September 16, 2018, 05:13:37 PM »
As part of R&D for Project 5: Sightseer, I was looking into various ways of replacing Unity's terrain with something more flexible. Among my options was revisiting the planetary rendering system I wrote two years ago. Of course, adapting a spherical planetary terrain to replace the flat one of a game that has been in development for 2.5 years is probably not the best idea in itself... and I'm still on the fence about using it... but I did get something interesting out of it: a way to generate planetoids.

The original idea was inspired by a Star Citizen video from a few years back where one of the developers was editing a planetary terrain by basically dragging splat textures onto it -- mountains and such, and the system would update immediately. Of course nothing like that exists in Unity, or on its Asset Store, so the only way to go was to try developing this kind of system myself.

So I started with a simple thought... what's the best way to seamlessly texture a sphere? Well, to use a cube map, of course! A cube map is just a texture projected to effectively envelop the entire sphere without seams, so if one were to render a cube map from the center of the planet, it would be possible to add details to it in the form of textured quads.
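In Unity terms, that render-to-cube-map step could look something like this minimal sketch (HeightmapBaker and the _Cube global are illustrative names, not the actual project's code; the RenderTexture must have its dimension set to Cube):

using UnityEngine;

public class HeightmapBaker : MonoBehaviour
{
    public Camera bakeCamera;     // placed at the center of the source sphere
    public RenderTexture cubeMap; // a RenderTexture whose dimension is set to Cube

    public void Bake ()
    {
        // 63 = bitmask covering all six cube map faces
        bakeCamera.RenderToCubemap(cubeMap, 63);

        // Make the result available to the planetoid's shaders
        Shader.SetGlobalTexture("_Cube", cubeMap);
    }
}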

I started with a simple GPU-generated simplex noise shader that uniformly textured my sphere.



Doesn't look like much, but that's just the source object that's meant to act as a spherical height map. Rendering it into a cube map produces a cubic height map that can then be used to texture the actual planet. Of course, using the rendered cube map as-is wouldn't look very good -- it would simply be the same sphere as above, but with lighting applied to it. More work is needed to make it look more interesting.

First -- this is a height map, so height displacement should happen. This is simple enough to do in the vertex shader, by displacing the vertex along its normal based on the sampled height value.

v.vertex.xyz += v.normal * texCUBElod(_Cube, float4(v.normal, 0.0)).r * _Displacement;

This adds some height variance to the renderer, and combined with the basic assignment of the sampled texture to Albedo this makes the planetoid a little bit more interesting:



Of course, the lighting is not correct at this point: although the vertex positions get offset, the normals do not. Plus, the vertex resolution is a lot lower than that of the source height map texture. So what should ideally happen is that the normal map gets calculated at run-time by sampling the height map values around each pixel. This process is different from the usual bump mapping technique because our texture is cubic rather than 2D. In a way, it's actually simpler. The challenge, as I learned, lies in calculating the normal itself.

With 2D textures, calculating normals from a heightmap is trivial: sample the height difference from +X to -X, then another from +Y to -Y, use those as the normal's X and Y, and resolve Z from the other two. With triangle-based meshes it's also simple: loop through the triangles, calculate each triangle's normal, add it to each of the 3 vertices, then normalize the result at the end. But with cube map textures? There is technically no left/right or up/down. Unprojecting each of the 6 faces would produce 2D textures, but then they wouldn't blend correctly with their neighbors.

I spent a lot of time trying to generate the "perfect" normal map from a cube map heightmap texture, and in the end never got it quite perfect. I think the best solution would be to handle the sides (+X, -X, +Z, -Z) and the top/bottom (+Y, -Y) separately, then blend the result, but in my case I just did it without blending, simply using each normal along with the side's tangent (a simple right-pointing vector that I pass to each side's shader) to calculate the binormal. I then use the tangent and binormal to rotate the normal, creating the 4 sampling points used to calculate the modified normal.
inline float4 AngleAxis (float radians, float3 axis)
{
    radians *= 0.5;
    axis = axis * sin(radians);
    return float4(axis.x, axis.y, axis.z, cos(radians));
}

inline float3 Rotate (float4 rot, float3 v)
{
    float3 a = rot.xyz * 2.0;
    float3 r0 = rot.xyz * a;
    float3 r1 = rot.xxy * a.yzz;
    float3 r2 = a.xyz * rot.w;

    return float3(
        dot(v, float3(1.0 - (r0.y + r0.z), r1.x - r2.z, r1.y + r2.y)),
        dot(v, float3(r1.x + r2.z, 1.0 - (r0.x + r0.z), r1.z - r2.x)),
        dot(v, float3(r1.y - r2.y, r1.z + r2.x, 1.0 - (r0.x + r0.y))));
}

inline float SampleHeight (float3 normal) { return texCUBE(_Cube, normal).r; }

float3 CalculateNormal (float3 n, float4 t, float textureSize)
{
    float pixel = 3.14159265 / textureSize;
    float3 binormal = cross(n, t.xyz) * (t.w * unity_WorldTransformParams.w);
    float3 x0 = Rotate(AngleAxis(-pixel, binormal), n);
    float3 x1 = Rotate(AngleAxis(pixel, binormal), n);
    float3 z0 = Rotate(AngleAxis(-pixel, t.xyz), n);
    float3 z1 = Rotate(AngleAxis(pixel, t.xyz), n);

    float4 samp;
    samp.x = SampleHeight(x0);
    samp.y = SampleHeight(x1);
    samp.z = SampleHeight(z0);
    samp.w = SampleHeight(z1);
    samp = samp * _Displacement + 1.0;

    x0 *= samp.x;
    x1 *= samp.y;
    z0 *= samp.z;
    z1 *= samp.w;

    float3 right = (x1 - x0);
    float3 forward = (z1 - z0);
    float3 normal = cross(right, forward);
    normal = normalize(normal);

    if (dot(normal, n) <= 0.0) normal = -normal;
    return normal;
}
The world normal is calculated in the fragment shader and is then used in the custom lighting function in place of the surface output's .Normal.
float3 worldNormal = normalize(IN.worldNormal);
float3 objectNormal = normalize(mul((float3x3)unity_WorldToObject, worldNormal));
float height = SampleHeight(objectNormal);
objectNormal = CalculateNormal(objectNormal, _Tangent, 2048.0);
o.WorldNormal = normalize(mul((float3x3)unity_ObjectToWorld, objectNormal));
o.Albedo = height.xxx;
o.Alpha = 1.0;
The result looks like this:



Now in itself, this is already quite usable as a planetoid / asteroid for space games, but it would be great to add some other biomes, craters and hills to it. The biomes can be done by changing the noise shader, or by adding a second noise on top of the first one, that will alpha blend with the original. Remember: in our case the height map texture is a rendered texture, so we can easily add additional alpha-blended elements to it, including an entire sphere.

This is how a secondary "biome" sphere that adds some lowlands looks:



Blended together it looks like this on the source sphere:



When rendered into a cube map and displayed on the planetoid, it looks like this:



Adding craters and other features to the surface at this point is as simple as finding good height map textures, assigning them to quads, and placing them on the surface of the sphere (a small positioning sketch follows the picture):



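Positioning such a stamp quad is mostly a matter of pushing it out to the sphere's surface and orienting it along the surface normal. A minimal sketch with illustrative names (not the actual project's code):

using UnityEngine;

public static class StampPlacer
{
    // 'direction' picks the spot on the sphere where the stamp should go
    public static void Place (Transform stamp, Vector3 sphereCenter, float radius, Vector3 direction)
    {
        direction.Normalize();
        stamp.position = sphereCenter + direction * radius;

        // Unity's built-in quad faces its -Z axis, so point +Z back at the center
        stamp.rotation = Quaternion.LookRotation(-direction);
    }
}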
The final result looks excellent from far away:



Unfortunately, when zooming in, especially in areas with flatter terrain, obvious ridges form:



This happens because 8-bit colors are simply inadequate when it comes to handling the height variance found in terrains. So what can be done? The first obvious thing I tried was to pack the height value into 2 channels using Unity's EncodeFloatRG function in the original biome shaders:
// Instead of this:
return half4((noise * 0.5 + 0.5).xxx, 1.0);
// I did this:
return half4(EncodeFloatRG(noise * 0.5 + 0.5), 0.0, 1.0);
The SampleHeight function was then changed to:
inline float SampleHeight (float3 normal) { return DecodeFloatRG(texCUBE(_Cube, normal).rg); }
This certainly helped the biomes look smooth, but the stamp textures (craters, mountains, etc.) are still regular textures, so there is obviously no benefit to taking this approach with them. Worse still, the alpha blending is still 8-bit! You can see that the transition walls (the slope on the right side of the picture below) are still pixelated, because alpha blending doesn't get the 16-bit precision of the EncodeFloatRG approach.



Worse still, the source sphere became unrecognizable:



So what can be done to fix this? Well, since the stamp source heightmap textures always use 8 bits per channel, there isn't a whole lot that can be done here, short of creating your own custom heightmaps and saving them as 16-bit-per-channel textures -- which sadly means ruling out all the height maps readily available online, like the ones I used. The transitions between biomes can be eliminated by using a single shader that contains all the biomes and blends them together before outputting the final color in RG format. So, technically, if one were to avoid stamp textures, it's possible to have no visible jaggies.

At this point, some of you reading this may wonder, "why don't you just change the render texture format to float, you noob?" -- I tried... Never mind that a 2048 RGBA cube map is already 128 MB in size -- there is no option to make it anything else. It's not possible to make it bigger, and it's not possible to change the format to anything other than RGBA:



I tried doing this via code -- the RFloat format flat out failed. Unity did seem to accept the RGBAFloat format, but then the final texture was always black, even though there were no errors or warnings in the console log. Plus, a texture size of 512 MB was kind of... excessive to begin with. So what's left to try at this point? I suppose I could change the process to render six 2D textures instead of a single cube map, which would allow me to use the RFloat format, but that would also mean visible seams between the faces, since the normal-generation process wouldn't be able to sample adjacent pixels.

I could also try using the RGBAFloat format, rendering height into R and the normal's XY into the G & B channels. This would, in theory, be the best approach, as the final planetoid wouldn't need to keep calculating normal maps on the fly. Of course, the memory usage would then be 512 MB per planetoid at a texture resolution of 2048... so again, rather excessive.

If you have any thoughts on the matter, and especially if you've done this kind of stuff before, let me know! I can be found on Tasharen's Discord.

2
NGUI's support forums are a wealth of information, filled over 6 years of dedicated support, but times are changing and so should the support. The best place to ask new questions that haven't already been answered on the forums is in Discord: https://discord.gg/tasharen -- look for the #ngui-support section.

For those who don't know, Discord is a text / voice communication program, accessible via your browser.

I hate forums. Always did. Chat is much easier, and with much better response times to boot, as I'm always there when I'm awake.

3
TNet 3 Support / Integrating TNet with Steam Networking
« on: November 23, 2017, 03:08:47 AM »
Wrote this in the Dev Blog forum as it's rather descriptive (and long). If you're a Steamworks developer using TNet, and you want to be able to use Steam's networking with your TNet-powered game, you will find this useful:

http://www.tasharen.com/forum/index.php?topic=15512.0

4
Dev Blog / Nov 23, 2017 -- Integrating TNet with Steam Networking
« on: November 23, 2017, 03:07:24 AM »
Back in the Windward days, I would sometimes get comments from players saying they couldn't join their friends' games, no matter what they tried. Let's face it: while we devs find it trivial to open up a port on the router, and TNet does indeed use UPnP to do this automatically, players are less savvy and can sometimes be behind firewalls that even UPnP can't breach. Fortunately, Steam has ways around this, and has taken care of all of it with its networking API. It uses UDP to simulate TCP-like functionality, but since it's UDP, NAT punchthrough is easy to do. Better still, if NAT punchthrough fails, Steam allows using its servers as relays, still making it possible for two players to play together.

Of course there are limitations: first, both players must be using Steam. But hey, let's face it -- Steam is the best platform out there for gamers. Is there a reason NOT to use it? Second, the packet size is limited to just over 1 MB -- but quite frankly if your game is sending out packets greater than 1 MB in size, you're probably doing something wrong. And last but not least, the API itself is a little... weird. To explain just what I mean by that, let's look at the steps required with the latest version of TNet (from the Pro repository as of this writing).

First, you will want to grab Steamworks.NET here: https://github.com/rlabrecque/Steamworks.NET

Next, let's start by making a new controller / wrapper class. I called mine "Steam" for simplicity.
using UnityEngine;
using Steamworks;
using TNet;

// This class uses Awake/Update and gameObject, so it must derive from MonoBehaviour
public partial class Steam : MonoBehaviour
{
    CSteamID userID;

    void Awake ()
    {
        SteamAPI.Init();
        SteamUserStats.RequestCurrentStats();
        userID = SteamUser.GetSteamID();
        DontDestroyOnLoad(gameObject);
    }

    void OnDestroy () { SteamAPI.Shutdown(); }

    void Update () { SteamAPI.RunCallbacks(); }
}
With the script attached to a game object in your first scene, the Steamworks API will be initialized, and it will be shut down when your application quits. The Update() function simply lets Steam do its thing.

Next, we need to create a special connection wrapper class for Steam to use with TNet. By default, TNet uses its own sockets for communication, but since we'll be using Steam here, we need to bypass that. Fortunately, the latest Pro version of TNet has a way to specify an IConnection object for every TcpProtocol, which essentially inserts its operations in between TNet and its sockets, making all of this possible. I chose to nest this class inside the Steam class, but it's up to you where you place it.
public partial class Steam
{
    [System.NonSerialized] static System.Collections.Generic.Dictionary<CSteamID, TcpProtocol> mOpen = new System.Collections.Generic.Dictionary<CSteamID, TcpProtocol>();
    [System.NonSerialized] static System.Collections.Generic.HashSet<CSteamID> mClosed = new System.Collections.Generic.HashSet<CSteamID>();

    class P2PConnection : IConnection
    {
        public CSteamID id;
        public bool connecting = false;
        public bool disconnected = false;

        public bool isConnected { get { return !disconnected; } }

        public bool SendPacket (Buffer buffer) { return SteamNetworking.SendP2PPacket(id, buffer.buffer, (uint)buffer.size, EP2PSend.k_EP2PSendReliable); }

        public void ReceivePacket (out Buffer buffer) { buffer = null; }

        public void OnDisconnect ()
        {
            if (!disconnected)
            {
                disconnected = true;

                // Let the other side know we're done, so it stops sending us packets
                var buffer = Buffer.Create();
                buffer.BeginPacket(Packet.Disconnect);
                buffer.EndPacket();
                SteamNetworking.SendP2PPacket(id, buffer.buffer, (uint)buffer.size, EP2PSend.k_EP2PSendReliable);
                buffer.Recycle();

                lock (mOpen)
                {
                    mOpen.Remove(id);
                    if (!mClosed.Contains(id)) mClosed.Add(id);
                }

                if (TNManager.custom == this) TNManager.custom = null;
            }
        }
    }
}
So what does the P2PConnection class do? Not much. It keeps the Steam ID, since that's the "address" of each "connection". I use both terms in quotation marks because instead of addresses, Steam's packets are sent directly to players, and players are identified by their Steam ID. Likewise, no "connections" are ever established with Steam's API. Remember how I said that the API itself is a bit weird? Well, this right here is what I meant. Instead of the expected workflow where a connection must first be established and acknowledged before packets start flowing, Steam's approach is different: you simply start sending packets to your friend like you're the best buddies in the world. Your friend's client gets the packets along with a special notification asking whether to accept the incoming packets or not. If the client chooses to accept them, they can be received immediately. There is no "decline" option. In fact, no notification is sent back to the first player at all, and trying to send one will actually auto-accept the packets! So the options are: accept the packets, or ignore them, leaving the other player wondering.

Anyway, so back to P2PConnection. ReceivePacket() can't be handled here, because packets don't arrive via sockets. Instead they arrive in one place, and must then be queued in the right place -- which we'll get to in a bit. For now, the only two useful functions in that class are SendPacket -- which simply calls the appropriate SteamNetworking API function, and the OnDisconnect notification. This one needs some explanation.

Since there is no concept of "connections" with Steam's API, we have to account for this ourselves. So to keep it short, we're simply sending a Disconnect packet to the other player when we're done. We're also keeping a list of known open and closed "connections" (and I'm going to tire of using quotation marks by the end of this post...). So to sum it up, when TNet says that the connection is closed, we still send out a Disconnect packet to the other player, ensuring that they know to stop sending us packets.

Moving on -- we have the custom connection class for TNet. We should now use it. Let's start by writing the Connect function:
void ConnectP2P (CSteamID id)
{
    if (TNManager.custom == null && !TNManager.isConnected && !TNManager.isTryingToConnect && mInst != null)
    {
        CancelInvoke("CancelConnect");

        var p2p = new P2PConnection();
        p2p.id = id;
        p2p.connecting = true;
        TNManager.custom = p2p;
        TNManager.client.stage = TcpProtocol.Stage.Verifying;

        // Request an ID right away, just like TNet does when a TCP connection is established
        var buffer = Buffer.Create();
        var writer = buffer.BeginPacket(Packet.RequestID);
        writer.Write(Player.version);
        writer.Write(TNManager.playerName);
        writer.Write(TNManager.playerData);
        var size = buffer.EndPacket();
        SteamNetworking.SendP2PPacket(id, buffer.buffer, (uint)size, EP2PSend.k_EP2PSendReliable);
        buffer.Recycle();

        Invoke("CancelConnect", 8f);
    }
#if UNITY_EDITOR
    else Debug.Log("Already connecting, ignoring");
#endif
}
Inside the ConnectP2P function we create our custom P2PConnection object and assign it as TNManager.custom -- meaning it will be used by TNManager's TcpProtocol for all communication instead of sockets. We also immediately send out a packet requesting the ID. TNet does this whenever a TCP connection is established, so we should follow the same path. This packet will be received by the other player (the one hosting the game server), and a response will be sent back, actually activating the connection.

One other thing the function does is call the "CancelConnect" function via a delayed Invoke, which simply acts as a time-out:
void CancelConnect ()
{
    var p2p = TNManager.custom as P2PConnection;

    if (p2p != null && p2p.connecting)
    {
        TNManager.client.stage = TcpProtocol.Stage.NotConnected;
        TNManager.onConnect(false, "Unable to connect");
        TNManager.custom = null;
    }
}
It's also useful to have a string-accepting version of the Connect function, for convenience:
static public bool Connect (string str)
{
    ulong steamID;

    if (mInst != null && isActive && !str.Contains(".") && ulong.TryParse(str, out steamID))
    {
        mInst.ConnectP2P(new Steamworks.CSteamID(steamID));
        return true;
    }
    return false;
}
So -- we now have a way to start the connection with a remote player. We now need to handle this operation on the other side. To do that, we need to subscribe to a few events. First is the P2PSessionRequest_t callback -- this is the notification that effectively asks you if you want to receive packets from the other player. Ignoring it is one option, but simply calling AcceptP2PSessionWithUser is more useful. Just in case though, we only do it if there is a game server running. We also need to handle the error notification:
// Callbacks are added to a list so they don't get discarded by GC
List<object> mCallbacks = new List<object>();

void Start ()
{
    // P2P connection request
    mCallbacks.Add(Callback<P2PSessionRequest_t>.Create(delegate (P2PSessionRequest_t val)
    {
        if (TNServerInstance.isListening) SteamNetworking.AcceptP2PSessionWithUser(val.m_steamIDRemote);
    }));

    // P2P connection error
    mCallbacks.Add(Callback<P2PSessionConnectFail_t>.Create(delegate (P2PSessionConnectFail_t val)
    {
        Debug.LogError("P2P Error: " + val.m_steamIDRemote + " (" + val.m_eP2PSessionError + ")");
        CancelInvoke("CancelConnect");
        CancelConnect();
    }));
}
With this done, the server-hosting client is now able to start accepting the packets. We now need to actually receive them. To do that, let's expand the Update() function:
// Buffer used to receive data
static byte[] mTemp;

void Update ()
{
    SteamAPI.RunCallbacks();

    uint size;
    if (!SteamNetworking.IsP2PPacketAvailable(out size)) return;

    CSteamID id;

    lock (mOpen)
    {
        for (;;)
        {
            // Grow the temporary buffer as needed
            if (mTemp == null || mTemp.Length < size) mTemp = new byte[size < 4096 ? 4096 : size];

            if (SteamNetworking.ReadP2PPacket(mTemp, size, out size, out id))
                AddPacketP2P(id, mTemp, size);

            if (!SteamNetworking.IsP2PPacketAvailable(out size)) return;
        }
    }
}
The code above simply checks: is there a packet to process? If so, it enters the receiving loop, where data is read into a temporary buffer and then placed into the individual buffers that TNet expects -- basically what TNet does under the hood when receiving packets. Since we're doing the receiving, we also need to do the splitting. Each packet is added to the appropriate queue by calling the AddPacketP2P function, which we will write now:
static void AddPacketP2P (CSteamID id, byte[] data, uint size)
{
#if UNITY_EDITOR
    if (!Application.isPlaying) return;
#endif
    TcpProtocol tcp;

    if (mOpen.TryGetValue(id, out tcp))
    {
        // Existing connection
        var p2p = tcp.custom as P2PConnection;
        if (p2p != null && p2p.connecting) p2p.connecting = false;
    }
    else if (TNServerInstance.isListening)
    {
        // New connection: create a server-side player that communicates via P2P
        var p2p = new P2PConnection();
        p2p.id = id;

        lock (mOpen)
        {
            tcp = TNServerInstance.AddPlayer(p2p);
            mOpen[id] = tcp;
            mClosed.Remove(id);
        }
    }
    else if (TNManager.custom != null)
    {
        // New connection on the client side: we must be in the process of connecting
        var p2p = TNManager.custom as P2PConnection;
        if (p2p == null) return;

        p2p.id = id;
        tcp = TNManager.client.protocol;

        lock (mOpen)
        {
            mOpen[id] = tcp;
            mClosed.Remove(id);
        }
    }
    else return;

    tcp.OnReceive(data, 0, (int)size);
}
The AddPacketP2P function first checks if it's an existing connection. If it is, the connection is marked as no longer trying to connect, and the packet is added to the TcpProtocol's receiving queue. If the connection is not yet open, we check whether a game server is running. If it is, a new P2PConnection is created and a new player gets added on the server. This player won't have an IP address or an open TCP socket; instead, it holds a reference to the P2PConnection, which it will use for communication.

Last but not least, if the game server is not running, the function checks whether TNManager has its own P2P reference set. We assigned it in ConnectP2P(), so this check effectively makes sure that we are trying to connect. If it passes, the packet is added to the TNManager client's incoming queue.

If all else fails, the packet is simply ignored.

So that's that! This is all you need to effectively replace TNet's networking functionality with Steam Networking. Before you go, though, you may want to make it possible for people to right-click their friends in the Steam friends list and use the "Join game" option. To do this, you need to set the rich presence's "connect" key:
public void AllowFriendsToJoin (bool allow)
{
    if (allow) SteamFriends.SetRichPresence("connect", "+connect " + userID);
    else SteamFriends.SetRichPresence("connect", "");
}
Simply call Steam.AllowFriendsToJoin(true) when you want to make it possible for them to join. Personally, I placed it inside a function called from TNManager.onConnect, but it's up to you where you call it.

You will also need to subscribe to the Join Request like so:
// Join a friend
mCallbacks.Add(Callback<GameRichPresenceJoinRequested_t>.Create(delegate (GameRichPresenceJoinRequested_t val)
{
    var addr = val.m_rgchConnect;
    addr = addr.Replace("+connect ", "");
    if (!Connect(addr)) TNManager.Connect(addr);
}));
When a player chooses the "Join Friend" option from within a game, GameRichPresenceJoinRequested_t will be triggered. When a player uses the same option while the game isn't running, GameRichPresenceJoinRequested_t won't be sent; instead, a "+connect <string>" command-line argument will be passed to the game's executable -- so you will want to handle that yourself.
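A minimal sketch of handling that command-line case, placed inside the same Steam class and called once on startup (it reuses the Connect/TNManager.Connect pair from the callback above):

static void CheckCommandLine ()
{
    var args = System.Environment.GetCommandLineArgs();

    for (int i = 0; i + 1 < args.Length; ++i)
    {
        if (args[i] == "+connect")
        {
            var addr = args[i + 1];
            if (!Connect(addr)) TNManager.Connect(addr);
            break;
        }
    }
}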

Anyway, that's it! This is all you need to make your TNet-powered game use Steam Networking. I hope this helps someone!

You can grab the full file here.

5
Dev Blog / Nov 11, 2017 -- How to edit UnityEngine.dll
« on: November 11, 2017, 09:41:34 AM »
I'm currently working on performance optimizations for Project 5: Sightseer, and a part of that involved editing UnityEngine.dll to fix a nasty bug Unity introduced back in ~2014 that they seem to refuse to fix. That bug is an absurd amount of GC allocations coming from the AttributeHelperEngine.

The bug in question is glaringly obvious to anyone with even a little proficiency in C#, and stems from the lack of caching of expensive GetAttributes calls: https://github.com/MattRix/UnityDecompiled/blob/master/UnityEngine/UnityEngine/AttributeHelperEngine.cs

Amusingly, the bug was reported to Unity years ago, alongside the code required to fix it (https://fogbugz.unity3d.com/default.asp?746364_pjnmdhk7c9imgdsk)... and yet Unity refused to do anything about it, claiming that a future redesign of the system would fix it. Well, guys -- fast forward several years -- the bug is still there, all the way up to Unity 2017.2, and no action has been taken to address it.

Here's the thing about closing bugs that affect people today in the hope that a future redesign will fix them later: until this "later" comes, the problem keeps affecting all 4.5 million Unity developers, and it can be (and usually is) years before it gets resolved! And if someone submits a bug report with actual code showing how to fix it -- why not fix it? Boggles my mind...

Anyway -- this post isn't meant to be a rant about Unity's choices -- I'll do that in another one. Instead, let me explain how you -- the developer -- can fix this problem yourself, to an extent. Fortunately, this particular problem lives on the side of Unity contained in the UnityEngine.dll file, and compiled C# is quite easy to modify. The first thing we need to do is make a new C# project in Visual Studio.

I was editing Unity 5.6.4f1, so I made the project target .NET Framework 3.5. The "Output type" needs to be a Class Library, as we need to create a DLL with the edited functions first.

Compile this code into a DLL:
using System;

namespace UnityEngine
{
    internal class AttributeHelperEngine
    {
        private static Type GetParentTypeDisallowingMultipleInclusion (Type type)
        {
            return null;
        }

        private static Type[] GetRequiredComponents (Type klass)
        {
            return null;
        }
    }
}
This simple DLL doesn't reference any Unity classes, so there is no need to reference UnityEngine.dll. Compile the DLL (I targeted Release) and move it into the solution folder, or somewhere you can find it. I called mine FixUnityEngine.dll. If you choose to fix the bug by adding caching instead, like in the bug report's suggested fix, you will need to reference UnityEngine.dll. Personally, I saw no adverse effects in Sightseer from simply returning 'null' -- it worked just fine.
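For reference, a cached variant along the lines of the bug report's suggestion might look like the sketch below. It is an assumption on my part, not the report's exact code: it relies on RequireComponent's public m_Type0/1/2 fields and, unlike the stub above, must be compiled against UnityEngine.dll:

using System;
using System.Collections.Generic;

namespace UnityEngine
{
    internal class AttributeHelperEngine
    {
        // Cache the reflection results so GetCustomAttributes runs once per type
        static readonly Dictionary<Type, Type[]> mRequired = new Dictionary<Type, Type[]>();

        private static Type[] GetRequiredComponents (Type klass)
        {
            Type[] result;

            if (!mRequired.TryGetValue(klass, out result))
            {
                var list = new List<Type>();

                foreach (RequireComponent rc in klass.GetCustomAttributes(typeof(RequireComponent), true))
                {
                    if (rc.m_Type0 != null) list.Add(rc.m_Type0);
                    if (rc.m_Type1 != null) list.Add(rc.m_Type1);
                    if (rc.m_Type2 != null) list.Add(rc.m_Type2);
                }

                result = (list.Count > 0) ? list.ToArray() : null;
                mRequired.Add(klass, result);
            }
            return result;
        }
    }
}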

The next step is to create a program that will replace the code in one DLL (UnityEngine.dll) with code from another (FixUnityEngine.dll). Since I no longer needed the code above, I simply commented it out, choosing to reuse the project instead of making a new one -- but if you plan on editing your replacement code, you may want to create a separate VS solution.

The API that lets us devs replace C# code is part of Mono.Cecil, which, interestingly enough, actually ships as part of the Visual Studio installation -- at least in the current version (2017). Here's all the code needed to edit the DLL:
using System;
using Mono.Cecil;

public class Application
{
    static MethodDefinition Extract (AssemblyDefinition asm, string type, string func)
    {
        var mod = asm.MainModule;
        if (mod == null) return null;

        var existingType = mod.GetType(type);
        if (existingType == null) return null;

        var methods = existingType.Methods;

        foreach (var method in methods)
        {
            if (method.Name == func)
            {
                return method;
            }
        }
        return null;
    }

    static bool Replace (AssemblyDefinition original, AssemblyDefinition replacement, string type, string func)
    {
        var method0 = Extract(original, type, func);
        var method1 = Extract(replacement, type, func);

        if (method0 != null && method1 != null)
        {
            method0.Body = method1.Body;
            Console.WriteLine("Replaced " + type + "." + func);
            return true;
        }

        Console.WriteLine("Unable to replace " + type + "." + func);
        return false;
    }

    static int Main (string[] args)
    {
        var dll0 = "C:/Projects/FixUnityEngine/UnityEngine.dll";
        var dll1 = "C:/Projects/FixUnityEngine/FixUnityEngine.dll";

        var asm0 = AssemblyDefinition.ReadAssembly(dll0);
        var asm1 = AssemblyDefinition.ReadAssembly(dll1);

        Replace(asm0, asm1, "UnityEngine.AttributeHelperEngine", "GetParentTypeDisallowingMultipleInclusion");
        Replace(asm0, asm1, "UnityEngine.AttributeHelperEngine", "GetRequiredComponents");

        asm0.Write("C:/Projects/FixUnityEngine/UnityEngine_edited.dll");

        Console.ReadKey();
        return 0;
    }
}
You may notice that I'm referencing a local copy of UnityEngine.dll -- I chose to copy it to the project's folder, but you can reference it all the way in Program Files if you like. Its default location is "C:\Program Files\Unity\Editor\Data\Managed\UnityEngine.dll".

So what does the code do? It simply reads the two DLLs and replaces the body of one function with another! In the replacement DLL I kept the same namespace, class name, and function names for consistency (as far as I can tell, this isn't actually necessary), and since I did, the code to perform the replacement ended up being shorter.

Once you compile and run the program, it will spit out an edited version of the DLL (UnityEngine_edited.dll). Simply close all instances of Unity and replace C:\Program Files\Unity\Editor\Data\Managed\UnityEngine.dll with this version. That's it.

Want to test the result? Here's a test program for you:
using UnityEngine;

public class Test : MonoBehaviour
{
    private void Update ()
    {
        var go = gameObject;
        if (Input.GetKeyDown(KeyCode.Alpha1))
        {
            for (int i = 0; i < 1000; ++i)
            {
                var t2 = go.AddComponent<Test2>();
                Destroy(t2);
            }
        }
    }
}
using UnityEngine;

public class Test2 : MonoBehaviour {}
The actual GC amounts and timings will vary greatly with project complexity (the more C# scripts you have, the slower the whole process becomes, thanks to Unity), but this is what I was seeing before and after the edit:



There's nothing I can do about Unity calling these useless functions, and indeed in my project doing so wastes 0.16 ms per call to AddComponent... but at least the 325 MB of memory allocation is gone. Yay for small victories.

6
Dev Blog / July 21, 2017 - Grass
« on: July 21, 2017, 08:12:37 PM »
In Sightseer, a big part of the game is exploring the large procedurally-generated world, so it has to look as nice as possible. Flat terrains generally don't, so adding some grass seemed only logical. Since I was still using Unity's terrain system, for all its faults, I decided to see if it could redeem itself by offering some nice-looking grass.

Well... it did, sort of. The final result did look a little better:



But what about looking at it from the top?



If you can't see any grass in that picture, you're not alone. There are two ways of drawing grass in Unity. The first is the approach you see in the picture: the grass simply uses the world's up vector as its own. It looks consistent from the side, but effectively makes the grass invisible when viewed from above. The other approach is to use screen-aligned quads, where the top of the monitor is considered "up" regardless of the camera's direction. This approach is even worse -- not only does the grass turn as the camera rotates / tilts, but it also looks very weird when viewed from above. I'll spare you the pic.

Still, neither of those limitations is as bad as the next one: the performance hit:



Since the grass update is based on the position of the camera, in a game featuring a 3rd-person camera that orbits around the vehicle, such as Sightseer, that grass update happens very, very frequently -- often more than once per second! Predictably, a 300+ ms hiccup every time that happens is simply unacceptable -- and that's with moderately sparse grass, at that!

Ideally I wanted grass to be more dense, like this:



With grass set to that density, the game was spending more time updating grass than on everything else combined.

At this point I found myself wondering just what kind of use case the original developers of this grass system had in mind with it being so stupidly slow. Maybe it was only meant for extremely small worlds, or extremely sparse grass, or both... I don't know. All I know is that it was simply unusable, and that I had to write my own.

And so I did.

So, how can one do super fast grass? Let's start by examining Unity's. With it, grass information has to be baked in right away for each generated terrain in its entirety, like a texture -- the same way splat information is passed to it. If a small part of the grass information changes, the entire texture has to be updated. What happens with this data is anyone's guess, as it all happens somewhere deep inside Unity, and the end developer has no control over it.

Does it have to be this way? Certainly not. First, grass information for anything outside the player's immediate area is completely irrelevant. Who cares what the grass is supposed to look like 1 km away? It's not visible, so it doesn't matter. Second, I don't know when or why Unity's updates take so long to complete, but grass needs to be split into patches (and as far as I could tell, Unity does that -- at least for drawing the grass). With patches, distance checks to determine whether the grass should be updated become extremely fast. Similarly, there is no need to update a patch unless the player goes out of range; when that happens, the patch should be repositioned to the opposite side of the visible "bubble" around the player and re-filled (a sketch of this wrap-around follows below). The "bubble" looks like this:



Last but not least, the actual placement information for the grass should be based on data that's available on a separate thread. Since Sightseer sets the terrain heightmap itself, the base heightmap data can be used as-is. All the main thread should be doing is updating the VBOs (draw buffers) after the grass has been generated.
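To make the repositioning idea concrete, here's a minimal sketch of the wrap-around logic (GrassPatch and Refill are illustrative names, not Sightseer's actual code):

using UnityEngine;

public class GrassPatch
{
    public Vector3 center;
    public void Refill () { /* queue regeneration on a worker thread */ }
}

public static class GrassBubble
{
    // Keep every patch within half a bubble of the player by wrapping it to
    // the opposite side and refilling it, instead of creating new patches
    public static void Recenter (GrassPatch[] patches, Vector3 playerPos, float bubbleSize)
    {
        float half = bubbleSize * 0.5f;

        foreach (var p in patches)
        {
            var offset = p.center - playerPos;
            bool moved = false;

            if (offset.x > half) { p.center.x -= bubbleSize; moved = true; }
            else if (offset.x < -half) { p.center.x += bubbleSize; moved = true; }

            if (offset.z > half) { p.center.z -= bubbleSize; moved = true; }
            else if (offset.z < -half) { p.center.z += bubbleSize; moved = true; }

            if (moved) p.Refill();
        }
    }
}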

Finally, the grass itself shouldn't be based on quads like Unity's -- it should be based on meshes. A simple "bush" of grass made up of 3 quads intersecting in a 3D V-like pattern is the most trivial example. Since it's based on meshes, it's possible to have said meshes be of different shapes, complexity and, most importantly, size. Furthermore, since it's shaped in a V-like pattern, it looks good even when viewed from above. Of course, since the grass should end up in a single draw call, it's important to have all those meshes use some kind of grass atlas, letting them share the same material.
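Here's a minimal sketch (not Sightseer's actual code) of building such a bush mesh: three quads rotated 60 degrees apart around the vertical axis; tilting each quad slightly outward would produce the V shape described above:

using System.Collections.Generic;
using UnityEngine;

public static class GrassBush
{
    public static Mesh Create ()
    {
        var verts = new List<Vector3>();
        var uvs = new List<Vector2>();
        var tris = new List<int>();

        for (int i = 0; i < 3; ++i)
        {
            var rot = Quaternion.Euler(0f, 60f * i, 0f);
            int b = verts.Count;

            verts.Add(rot * new Vector3(-0.5f, 0f, 0f));
            verts.Add(rot * new Vector3(0.5f, 0f, 0f));
            verts.Add(rot * new Vector3(-0.5f, 1f, 0f));
            verts.Add(rot * new Vector3(0.5f, 1f, 0f));

            // One rectangle of a hypothetical grass atlas; adjust per bush type
            uvs.Add(new Vector2(0f, 0f));
            uvs.Add(new Vector2(1f, 0f));
            uvs.Add(new Vector2(0f, 1f));
            uvs.Add(new Vector2(1f, 1f));

            // Two triangles per quad; a double-sided grass shader handles back faces
            tris.AddRange(new int[] { b, b + 2, b + 1, b + 1, b + 2, b + 3 });
        }

        var mesh = new Mesh();
        mesh.SetVertices(verts);
        mesh.SetUVs(0, uvs);
        mesh.SetTriangles(tris, 0);
        mesh.RecalculateNormals();
        return mesh;
    }
}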

In the end, it took only a few hours to write a basic grass system, then a couple more days to perfect it (and a couple more weeks of playing the game to iron out weird glitches <_<). The end result, performance-wise, was obvious within the first few hours, however:



You're seeing that right: 0.2 milliseconds to update grass of much greater density than what was taking Unity's grass 300+ milliseconds per frame. I expected as much, which is why I was so surprised that Unity's grass performed so horribly. This is how it looked in the game:



It looks much denser than what I had with Unity's grass, and it is very much visible from above:



In fact, I was immediately curious how the grass would look if I enabled shadows on it and increased its size to make it look even denser:



Very nice indeed, although the shadows are a little too obvious from above:



There is one other thing I did with the grass... and it's pretty important: I colored it based on the underlying terrain. Doing so is simple: I render the terrain into a texture using a top-down camera that's updated whenever the player moves far enough. This texture is sampled by the grass shader, tinting its normally black-and-white albedo texture with the color of the terrain underneath. This makes the grass blend perfectly, regardless of what's underneath -- whether it's the sand-blasted savanna or the lush grassland -- without any need for developer input. In fact, since Sightseer's terrain is fully procedural and smoothly transitions from one biome to the next, this part was as important to have as the performance.
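The Unity-side setup for that could look something like this minimal sketch (the camera rig and the _TerrainTint global are illustrative, not Sightseer's actual names):

using UnityEngine;

public class GrassTint : MonoBehaviour
{
    public Camera topDownCamera;      // orthographic camera looking straight down
    public RenderTexture tintTexture; // target texture the grass shader will sample

    public void UpdateTint (Vector3 playerPos)
    {
        topDownCamera.transform.position = playerPos + Vector3.up * 100f;
        topDownCamera.transform.rotation = Quaternion.LookRotation(Vector3.down);
        topDownCamera.targetTexture = tintTexture;
        topDownCamera.Render();

        // The shader needs to know where the capture happened so it can
        // derive its sampling UVs from world-space coordinates
        Shader.SetGlobalTexture("_TerrainTint", tintTexture);
        Shader.SetGlobalVector("_TerrainTintOrigin", playerPos);
    }
}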

The end result? See for yourself:



All that for 0.2 ms every ~2-3 seconds while driving around.

7
Dev Blog / July 21, 2017 - Custom LOD system
« on: July 21, 2017, 07:06:46 PM »
As most know, Unity has a built-in LOD (level of detail) system. When specified on a root model, it can be used to swap renderers based on the size of the model on the screen. For most cases this is quite fine -- and increasing the scale of the object will automatically make the transition happen from farther away. This also means that smaller objects, such as rocks on the ground, will fade out much sooner than, say, entire buildings. Makes sense, and it's pretty easy to fine-tune the fade distance while in the editor.



But wait -- what if the objects have to fade out at the same distance? What if you have a complex building made up of several parts -- the foundation, the walls, and a bunch of platforms on top, like just about any structure in Sightseer? With Unity's LOD system, there are two immediate issues. First, Sightseer's renderers are not available until they are dynamically generated at run-time, when a bunch of smaller objects get merged together into one larger one in order to save on draw calls and CPU overhead (culling). Since the dimensions are not known in advance, it's not possible to fine-tune the fade distance in the editor. Second, since Unity's LOD is based on the final object's dimensions rather than on distance, renderers of varying sizes will fade out at different times.

I noticed it right away in Sightseer with trees even before player outposts were introduced. Trees are split into groups by fixed size cells, and all the trees inside each cell are merged into draw calls. Some cells may be full of trees, while others can only have a couple. Since the dimensions of the final renderer vary greatly, this caused some groups of trees to fade in at the horizon, while others wouldn't appear until the player got very close, even though they were adjacent to each other in the world.

The issue only got worse when player outposts were introduced. Player outposts are made from dozens and sometimes even hundreds of small objects -- foundations, walls, and many other types of props -- and Sightseer's code groups them together by material, then merges them into the fewest draw calls possible (on a separate thread, so as to not impact performance). The end result: a variety of renderer sizes, all of which should fade in and out together. With Unity's LOD system that simply wasn't possible. I had player outposts appear piece by piece as I drove towards them -- often with objects on top of foundations appearing to float in mid-air. Not good.
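As an aside, the merging step itself can be done with Unity's built-in CombineMeshes. A minimal single-threaded sketch (Sightseer's own implementation does the vertex work on a separate thread instead):

using System.Collections.Generic;
using UnityEngine;

public static class MeshMerger
{
    // Merges a group of renderers that share one material into a single mesh
    public static Mesh Merge (List<MeshFilter> parts)
    {
        var combine = new CombineInstance[parts.Count];

        for (int i = 0; i < parts.Count; ++i)
        {
            combine[i].mesh = parts[i].sharedMesh;
            combine[i].transform = parts[i].transform.localToWorldMatrix;
        }

        var merged = new Mesh();
        merged.CombineMeshes(combine, true); // true = single sub-mesh, single draw call
        return merged;
    }
}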

Another issue I ran into with LODGroup is that since it's based on the size of the object on the screen, objects near the player's vehicle swap their LOD levels, or even fade in/out, as the camera moves around the vehicle in 3rd-person view or zooms in/out. This is not ideal for Sightseer, and I imagine for other 3rd-person games: objects fading in and out while the camera moves around a stationary vehicle look jarring at best. Furthermore, it hurts performance, as the LOD checks have to be performed all the time. It's actually the same issue I ran into with Unity's grass, but more on that in a separate post.

At first, I tried to hack the LODGroup to work based on distance. I experimented with what happens when it's added before the renderers, and was actually successful in getting the trees to fade out when I wanted them to. Unfortunately, the same trick didn't seem to work with the player outposts. I never did figure out why...

Eventually I decided to write my own system. The most basic example of a LOD system is a script on the renderer that checks the distance between the player's avatar and the object, enabling or disabling the renderer based on that. It's simple and controllable -- but of course this basic approach doesn't include any kind of fading in or out.
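A minimal sketch of that basic distance-based approach, without any fading:

using UnityEngine;

public class DistanceLOD : MonoBehaviour
{
    public Transform player;        // the player's avatar / vehicle
    public float maxDistance = 100f;

    Renderer mRen;

    void Awake () { mRen = GetComponent<Renderer>(); }

    void Update ()
    {
        bool visible = Vector3.Distance(player.position, transform.position) < maxDistance;
        if (mRen.enabled != visible) mRen.enabled = visible;
    }
}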

As I delved into Unity's code that handles fading between different LOD levels (the same code that fades between renderers), I discovered another downside of Unity's LOD system: it requires a texture! Behold the ApplyDitherCrossFade function from UnityCG's include file:
void ApplyDitherCrossFade(half3 ditherScreenPos)
{
    half2 projUV = ditherScreenPos.xy / ditherScreenPos.z;
    projUV.y = frac(projUV.y) * 0.0625 /* 1/16 */ + unity_LODFade.y; // quantized lod fade by 16 levels
    clip(tex2D(_DitherMaskLOD2D, projUV).a - 0.5);
}
As you can see, it samples a dithering texture in order to calculate the dithering -- something that is trivial to do in code instead. With the number of texture registers limited to 16 total, that actually hurts quite a bit. Although, to be fair, I'm guessing most games won't run into this particular limitation.

When working on my own LOD system, I decided to simply add LOD support to all of Tasharen's shaders. Sightseer doesn't use Unity's shaders due to some issues explained in a previous post, so adding dithering was a trivial matter -- but let's go over it step by step.

First, we need a function that will compute the screen coordinates for dithering. This is Unity's ComputeDitherScreenPos function from UnityCG.cginc:
half3 ComputeDitherScreenPos(float4 clipPos)
{
    half3 screenPos = ComputeScreenPos(clipPos).xyw;
    screenPos.xy *= _ScreenParams.xy * 0.25;
    return screenPos;
}
That function accepts the clip-space vertex position -- something everyone already calculates in the vertex shader:
o.vertex = UnityObjectToClipPos(v.vertex);
Simply save the coordinates, passing them to the fragment shader:
o.dc = ComputeDitherScreenPos(o.vertex);
The next step is to take these coordinates in the fragment shader, do some magic with them and clip() the result, achieving a dithering effect for fading in the geometry pixel by pixel.
void DitherCrossFade(half3 ditherScreenPos)
{
    half2 projUV = ditherScreenPos.xy / ditherScreenPos.z;
    projUV.xy = frac(projUV.xy + 0.001) + frac(projUV.xy * 2.0 + 0.001);
    half dither = _Dither - (projUV.y + projUV.x) * 0.25;
    clip(dither);
}
Instead of using an expensive texture sample like Unity does, I use the frac() function to achieve a similar-looking effect. The only notable part of the entire function is the "_Dither" value -- a uniform that's basically the fade alpha. In fact, you can use the main color's alpha instead, making it possible to fade out solid objects!

Here's the entire shader, for your convenience.
Shader "Unlit/Dither Test"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _Dither("Dither", Range(0, 1)) = 1.0
    }

    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD0;
                float3 dc : TEXCOORD1;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            fixed _Dither;

            half3 ComputeDitherScreenPos(float4 clipPos)
            {
                half3 screenPos = ComputeScreenPos(clipPos).xyw;
                screenPos.xy *= _ScreenParams.xy * 0.25;
                return screenPos;
            }

            void DitherCrossFade(half3 ditherScreenPos)
            {
                half2 projUV = ditherScreenPos.xy / ditherScreenPos.z;
                projUV.xy = frac(projUV.xy + 0.001) + frac(projUV.xy * 2.0 + 0.001);
                half dither = _Dither - (projUV.y + projUV.x) * 0.25;
                clip(dither);
            }

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.dc = ComputeDitherScreenPos(o.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                DitherCrossFade(i.dc);
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}
So how does the fading between two renderers happen, you may wonder? It's simple: both are drawn for the time it takes them to fade in/out. You may think "omg, but that's twice the draw calls!", and while that's true, it's only for a short time, and it doesn't affect the fill rate thanks to the clip(): the pixels drawn by one renderer get clipped by the other. Here is the modified version of the shader with an additional property, "Dither Side":
Shader "Unlit/Dither Test"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _Dither("Dither", Range(0, 1)) = 1.0
        _DitherSide("Dither Side", Range(0, 1)) = 0.0
    }

    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD0;
                float3 dc : TEXCOORD1;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            fixed _Dither;
            fixed _DitherSide;

            inline half3 ComputeDitherScreenPos(float4 clipPos)
            {
                half3 screenPos = ComputeScreenPos(clipPos).xyw;
                screenPos.xy *= _ScreenParams.xy * 0.25;
                return screenPos;
            }

            inline void DitherCrossFade(half3 ditherScreenPos)
            {
                half2 projUV = ditherScreenPos.xy / ditherScreenPos.z;
                projUV.xy = frac(projUV.xy + 0.001) + frac(projUV.xy * 2.0 + 0.001);
                half dither = _Dither - (projUV.y + projUV.x) * 0.25;
                // Flip which side of the dither pattern survives the clip
                clip(lerp(dither, -dither, _DitherSide));
            }

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.dc = ComputeDitherScreenPos(o.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                DitherCrossFade(i.dc);
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}
For the renderer that's fading in, pass the dither amount and leave the _DitherSide at 0. For the renderer that's fading out, pass (1.0 - dither amount), and 1.0 for the _DitherSide. I recommend using Material Property Blocks for this. In fact, in Sightseer I wrote an extension that lets me do renderer.AddOnRender(func), where "func" receives a MaterialPropertyBlock to modify:
using UnityEngine;

/// <summary>
/// Simple per-renderer material block that can be altered from multiple sources.
/// </summary>

public class CustomMaterialBlock : MonoBehaviour
{
	Renderer mRen;
	MaterialPropertyBlock mBlock;

	public OnWillRenderCallback onWillRender;
	public delegate void OnWillRenderCallback (MaterialPropertyBlock block);

	void Awake ()
	{
		mRen = GetComponent<Renderer>();
		if (mRen == null) enabled = false;
		else mBlock = new MaterialPropertyBlock();
	}

	void OnWillRenderObject ()
	{
		if (mBlock != null)
		{
			mBlock.Clear();
			if (onWillRender != null) onWillRender(mBlock);
			mRen.SetPropertyBlock(mBlock);
		}
	}
}

/// <summary>
/// Allows for renderer.AddOnRender convenience functionality.
/// </summary>

static public class CustomMaterialBlockExtensions
{
	static public CustomMaterialBlock AddOnRender (this Renderer ren, CustomMaterialBlock.OnWillRenderCallback callback)
	{
		UnityEngine.Profiling.Profiler.BeginSample("Add OnRender");
		var mb = ren.GetComponent<CustomMaterialBlock>();
		if (mb == null) mb = ren.gameObject.AddComponent<CustomMaterialBlock>();
		mb.onWillRender += callback;
		UnityEngine.Profiling.Profiler.EndSample();
		return mb;
	}

	static public void RemoveOnRender (this Renderer ren, CustomMaterialBlock.OnWillRenderCallback callback)
	{
		UnityEngine.Profiling.Profiler.BeginSample("Remove OnRender");
		var mb = ren.GetComponent<CustomMaterialBlock>();
		if (mb != null) mb.onWillRender -= callback;
		UnityEngine.Profiling.Profiler.EndSample();
	}
}
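To tie it together, here's what driving the cross-fade might look like -- a minimal sketch under my own assumptions (two renderers and a normalized "fade" value animated elsewhere; class and field names are mine), not the actual Sightseer code:

	using UnityEngine;

	public class CrossFadeExample : MonoBehaviour
	{
		public Renderer fadingIn;
		public Renderer fadingOut;

		[Range(0f, 1f)] public float fade = 0f;

		void OnEnable ()
		{
			fadingIn.AddOnRender(OnRenderIn);
			fadingOut.AddOnRender(OnRenderOut);
		}

		void OnDisable ()
		{
			fadingIn.RemoveOnRender(OnRenderIn);
			fadingOut.RemoveOnRender(OnRenderOut);
		}

		// Fading in: pass the dither amount and leave _DitherSide at 0
		void OnRenderIn (MaterialPropertyBlock block)
		{
			block.SetFloat("_Dither", fade);
			block.SetFloat("_DitherSide", 0f);
		}

		// Fading out: pass (1 - dither amount) and a _DitherSide of 1
		void OnRenderOut (MaterialPropertyBlock block)
		{
			block.SetFloat("_Dither", 1f - fade);
			block.SetFloat("_DitherSide", 1f);
		}
	}

Once the fade reaches 1, the outgoing renderer can simply be disabled, and the draw call count goes back to normal.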
In the end, while Sightseer's LOD system ended up being a lot more advanced, supporting colliders as well as renderers (after all, expensive concave mesh colliders don't need to be active unless the player is near), at its core the most confusing part was figuring out how to manually fade out renderers. I hope this helps someone else in the future!

8
Dev Blog / Feb 22, 2017 - Seamless Audio
« on: February 22, 2017, 09:54:44 AM »
This is less of a blog post and more of an instruction tutorial on how to make seamless audio properly, as I seem to find myself explaining this often...  :-\

In games it's often necessary to make audio perfectly loopable -- whether it's a combat music track or simply the hum of a vehicle's engine -- and making any sound loopable is actually pretty easy in Audacity, a free audio editing tool.

1. Start by opening the track in Audacity, selecting all of it (CTRL+A), and copying it (CTRL+C).



2. Click on the end of your track and paste the copy (CTRL+V) so that it's effectively duplicated at the end.



3. Hold Shift and use the scroll wheel with the mouse over the time slice to zoom in for a closer look. What you can hear, you can also see -- and if there is a break in the smoothness of the audio waves, there will be a noticeable discontinuity when the track loops. In my case there is one, so onto the next step!



4. Choose the Tracks -> Add New -> Stereo Track (or just Audio Track if you're working with a mono sound). This adds a second layer, just like in Photoshop.

5. You can now choose the Time Shift Tool and drag the duplicated section down onto the new track so that it overlaps the end of the original a little. When working with music, try to match the waves so that they align. Ideally you want to overlap a few seconds of audio if possible. With music it's often easier to align by whole seconds, as it generally has a consistent beat. In my case I overlapped exactly 2 seconds at the end.



6. Select the overlapped section by choosing the Selection Tool again and hit Space to listen to it. Does it sound proper, or is it all disjointed? If it sounds bad, then you didn't align the waves properly. Go back to step 5 and move the second layer around on its timeline until it matches and sounds better.



7. Now it's time to cross-fade the audio, making it blend. Select the top overlapped part and use the Effect -> Cross Fade Out. Repeat the process with the bottom track, but this time choose Cross Fade In. The idea is to make the audio of one track fade out while the audio of the second track fades in.



8. Time to combine the two tracks into a new one: CTRL+SHIFT+M, or choose the Tracks -> Mix and Render to New Track menu option.



9. We now have a track that blends nicely, but the blend happens right in the middle of this new track, and we want it at the beginning and the end! Select the blended section and copy it (CTRL+C), zoom out with the scroll wheel, then paste it at the end of the first layer by clicking there first -- and use the Time Shift Tool again to snap it into place.



10. Delete the second track. We no longer need it. Just click the "X" button in its top left corner.

11. Select the entire first track's length on both layers -- we don't need it anymore either. Choose the Selection Tool again, drag with the mouse, and press the DEL key on your keyboard.

12. Almost there! Zoom in on the end (Shift + scroll wheel) and select the more complete track's section right below the pasted segment and delete it as well (DEL).



13. CTRL+A to select everything, Tracks -> Mix and Render. And there you have it! A perfectly looping track. Export it via the File -> Export Audio menu option.

I hope this explanation helps someone else. I had to figure it out by experimenting, and there's probably a better way -- but this one works for me.

9
NGUI 3 Support / Website migrated to another host
« on: December 02, 2016, 11:40:05 PM »
I got tired of the intermittent slow speed and accessibility issues of my previous web host and moved to a new one. Seems to be faster so far...

10
Dev Blog / November 2nd, 2016
« on: November 02, 2016, 07:14:15 AM »
Since the last dev post, I've been hard at work on the follow-up game to Windward. So hard at work that I've once again neglected documenting what it is I was actually doing. Ahem. Well, the good news is the game is coming along very nicely and I've started letting a close pre-alpha test group of players have a go at it. The first play test was all about exploring the gigantic procedural world. The second involved building bases. Now a third play test is on the horizon with the functional stuff added in (resource gathering and processing).

The game does look quite nice now, I'll give it that.





The only issue is finding suitable art for what I have in mind. Unity's Asset Store is a fantastic place to find things, but every artist has their own distinct style, so simply taking different models and using them in the game is not an option. Plus, since I have a specific art style in mind (clean, futuristic) and I want players to be able to customize the colors of their own bases, I've had a couple of challenges to overcome.

With the need to let players customize the look of their bases, I can't simply use diffuse textures. I also can't just specify colors on an existing pre-colored material -- that simply won't look good. Instead, what I need is a mask texture: a special texture that defines which parts should be colored by which of the material's colors.

In Windward, my ship textures were also using masks. The Red channel was used to hold the grayscale texture. The Green channel was used as the mask -- white pixels meant color A was used, while black pixels meant color B. The Blue channel contained the AO mask using secondary UVs. In total, only 3 channels were used (and each ship only used a single 512x512 texture), which was one of the reasons why the entire game was only 120 megabytes in size. The final texture looked like this:



There were several downsides to this approach. First, having so much detail in two of the channels (red and blue) didn't play well with texture compression, resulting in visible artifacts. I could have made it better by moving one of them to the Alpha channel, but at the time I simply turned off texture compression for ship textures instead. The second downside was only having one channel for color masking. This meant I could only have 2 colors, which is obviously not much.

For this new game (which still doesn't have a name, by the way!), I wanted to raise the bar a bit. I am targeting PC and Linux with this game (sorry Mac users, but OSX still doesn't support half-a-decade-old features like compute shaders!), so all those mobile platform limitations I had to squeeze into with Windward are not an issue here.

First thing I did was split the mask information into a separate texture. To specify values for 4 distinct colors I only need 3 texture channels, with the 4th being calculated as saturate(1 - (r+g+b)), but I also wanted to make it possible to mark certain parts of the texture as not affected by any color, so in the end I used all 4 channels.

The actual mask texture is very easy to create by taking advantage of Photoshop's layers. Start with the background color, then add layers for the second and third channels (red and green). Set all 3 layers to have a Color Overlay modifier for Red, Green and Blue, respectively. This makes it trivial to mark regions, and, if necessary, to have additional layers for details. It's much easier to work with layers than with channels, that's for sure. For the remaining (4th) color I just add another layer set to have a Black color overlay -- black, because of the (1 - rgb) calculation used in the shader. This still leaves the alpha channel free, but as I mentioned, I use it to mark parts of the texture that should not be color tinted at all. Fine details such as mesh grates get masked like that, and so do any lit regions, if there are any.
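To make the blend math concrete, here's the per-pixel logic written out in C# -- a CPU-side sketch of the idea rather than the actual shader (the function and parameter names are mine, and it assumes alpha of 1 marks the untinted regions):

	using UnityEngine;

	public static class MaskTintExample
	{
		// 'mask' is the mask texture sample: R, G and B select colors 1-3, the implied
		// 4th channel is saturate(1 - (r + g + b)), and alpha marks untinted regions.
		public static Color Tint (Color diffuse, Color mask, Color c1, Color c2, Color c3, Color c4)
		{
			float w4 = Mathf.Clamp01(1f - (mask.r + mask.g + mask.b));
			Color tint = c1 * mask.r + c2 * mask.g + c3 * mask.b + c4 * w4;

			// mask.a of 1 means "leave this pixel alone" (grates, lit regions, etc)
			return Color.Lerp(diffuse * tint, diffuse, mask.a);
		}
	}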

So that leaves the diffuse texture... Remember how I said that color tinting an existing diffuse texture doesn't look good? This is why my diffuse textures are very bright. The brighter they are, the better they tint. Pure white, for example -- white looks great!



But wait, you might ask... that doesn't sound right. Where would the detail come from? Well, that's the thing... why have the detail baked into the diffuse texture when it can be separate? Not only can the details be kept at a higher resolution than the other textures, they can also be shared between different materials, and -- better still -- swapped in and out based on what the player made the object with. Take a building, for example. What's the difference between a building made out of concrete and one made out of bricks? Same exact shape, same exact ambient occlusion, same masks... the only difference is the base texture material, so why not make it come from a separate texture?

Better still, since I have 4 colors per material, why not have 4 distinct sets of material properties, such as metallic, smoothness, and the detail texture blend values? Well, in the end, that's exactly what I ended up doing:



This approach makes the objects highly player-customizable, and the final result looks excellent as well:



As an added bonus over Windward's approach, since the textures don't mix details in the RGB channels, texture compression can be used without any noticeable degradation in quality. In the screenshot above the AO actually comes from the diffuse texture's alpha channel, and the normal map was created using a custom normal map maker tool I wrote a few months ago, which uses multiple LOD/mipmap levels via downsampling to produce much better-looking normal maps than what Unity offers. Not quite Crazy Bump good, but close! I'll probably end up releasing it on the Asset Store at some point if there is interest.

11
TNet 3 Support / Starlink UI kit is now free -- pick up yours today!
« on: August 02, 2016, 10:18:32 AM »
For those who wanted a more extended lobby server example with a channel list and all that, the Starlink UI kit is now free, and it has full TNet integration that handles both LAN and internet server discovery, hosting, channel creation / channel lists, as well as in-game and lobby chat. It does use NGUI for its UI, though, so it assumes you have that.

12
TNet 3 Support / How to use the WorkerThread script
« on: August 01, 2016, 02:54:40 PM »
One of my most useful tools has always been the WorkerThread class, and although not really relevant to anything in TNet itself, I decided to include it in the package in case you too find it useful.

In simplest terms, it's a thread pool that's really easy to use.
WorkerThread.Create(delegate ()
{
	// Do something here that takes up a lot of time
},
delegate ()
{
	Debug.Log("Worker thread finished its long operation!");
});
In the code above, the first delegate is going to be executed on one of the worker threads created by the WorkerThread class. The class will automatically create several, and will reuse them for all of your future jobs. As such, there are no memory allocations happening at run-time. The second delegate is optional, and will execute on the main thread (in the Update() function) when the first delegate has completed its execution.

This dual delegate approach trivializes creation of complex jobs. To pass arguments, you can simply take advantage of how anonymous delegates work. For example this code will take the current terrain and flip it upside-down:
var td = Terrain.activeTerrain.terrainData;
var size = td.heightmapResolution;
var heightmap = td.GetHeights(0, 0, size, size);

WorkerThread.Create(delegate ()
{
	for (int y = 0; y < size; ++y)
	{
		for (int x = 0; x < size; ++x)
		{
			heightmap[y, x] = 1f - heightmap[y, x];
		}
	}
},
delegate ()
{
	td.SetHeights(0, 0, heightmap);
});
The WorkerThread class works both at run time and at edit time, though at edit time it simply executes both delegates right away. Currently the project I'm working on uses WorkerThread everywhere -- from ocean height sampling, to merging trees, to generating procedural terrain and textures.

Questions? Ask away.

13
Dev Blog / July 24, 2016 - Windy detour
« on: July 24, 2016, 10:12:18 AM »
June was an interesting month. I randomly wondered if I could add a dragon to Windward just for the fun of it. It took only a few minutes to find a suitable model on Unity's Asset Store and about half an hour to rig it up to animate based on movement vectors. I then grabbed a flying ship from Windward, replaced its mesh with a dragon and gave it a shot. It immediately looked fun, so somehow I ended up spending the next several weeks adding tough dragon boss fight encounters to Windward, complete with unique loot that changed the look of some key in-game effects based on what the player has equipped. The dragon fights themselves were a fun feature to add and made me think back to the days of raiding in WoW. Ah, memories.



With the odd detour out of the way, I had another look at the various prototypes I had working so far and decided to narrow the scope of the next game a bit. First, I'm not going to go with the whole planetary-scale orbit-to-surface stuff -- mainly because of the sheer size of it all. The difficulties of dealing with massive planetary scales aside, if a game world is the size of Earth, even at 1/10th scale, there's going to be a tremendous amount of emptiness out there. Think driving across the state for a few hours. Entertaining? To some, maybe. But in a game? Certainly not.

But anyway... game design decisions aren't worth talking about just yet. Once the game project is out of the prototype and design stage, maybe then.

The past two weeks I actually spent working on integrating a pair of useful packages together -- Ceto ocean and Time of Day. Both are quite excellent and easy to modify. The Ceto ocean kit in particular occupied most of my time -- from optimizations to tweaks. I integrated it with the custom BRDF I made earlier, fixed a variety of little issues, and wrote a much better and more robust buoyancy script, which is a far, far better way of doing ship mechanics than the weird invisible-wheel approach I was taking in Windward. I'll likely post a video about it later.

With my focus on optimizations, I've been keeping an eye on what it would take to have an endless terrain streamed in around the player and the results have been promising. In Windward, the trees were actually generated by instantiating a ton of individual trees in the right places, then subdividing the region into smaller squares, followed by merging all the trees in each square into a group. The process worked fine, but had two drawbacks.

First, it was using Unity's Mesh.CombineMeshes() function, which, while it works well, requires the objects to be present and doesn't allow per-vertex modifications for things like varying colors between trees. Second, with the merging process taking just over 100 milliseconds, it's really not suitable for streamed terrains. A 100 millisecond pause is very noticeable during gameplay. And so, I set out to optimize it.
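For reference, the built-in path being replaced looks roughly like this -- a minimal sketch of Mesh.CombineMeshes usage, not Windward's actual code:

	using UnityEngine;

	public static class CombineExample
	{
		// Each CombineInstance takes a mesh plus a transform, but the merged
		// result offers no per-vertex tweaks such as varied tree colors.
		public static Mesh Merge (MeshFilter[] filters)
		{
			var combine = new CombineInstance[filters.Length];

			for (int i = 0; i < filters.Length; ++i)
			{
				combine[i].mesh = filters[i].sharedMesh;
				combine[i].transform = filters[i].transform.localToWorldMatrix;
			}

			var mesh = new Mesh();
			mesh.CombineMeshes(combine);
			return mesh;
		}
	}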



The first approach I tried was using custom mesh merging to see the effect. It was predictably slower -- almost 170 ms.



Next I moved the actual code that combines the mesh data into a separate thread:



While spread out across multiple frames, 95 ms on the first frame was still way too much. Thinking about it, I focused on the small stuff first: I replaced mesh.colors with mesh.colors32, and moved the matrix creation code into the part that runs on the separate thread instead of the main one. With a few other minor changes, such as replacing Generic.List with TNet.List, the update was down to below 70 ms:



Getting closer. The next step was to eliminate the interim instantiation of objects. After all, if all I want is the final merged mesh, why instantiate game objects first only to merge and remove them? It makes a lot more sense to skip the instantiation altogether and go straight to merging, doesn't it? The mesh data can be retrieved from the original prefabs. This also fixed another issue I noticed: I was apparently pulling the mesh data from every object individually by calling mesh.vertices and other functions on each of the MeshFilters' meshes. Putting a cache system in place would save a ton of memory. Perhaps you've noticed those 15 MB+ memory allocations in the profiler snapshots above -- this was the reason.
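A cache like that can be as simple as this sketch (my own illustration, not the actual code; the names are hypothetical) -- sample each unique mesh once and reuse the arrays for every merge:

	using System.Collections.Generic;
	using UnityEngine;

	public static class MeshDataCache
	{
		public class Data
		{
			public Vector3[] vertices;
			public Vector3[] normals;
			public Vector2[] uvs;
			public int[] triangles;
		}

		static Dictionary<Mesh, Data> mCache = new Dictionary<Mesh, Data>();

		// mesh.vertices and friends allocate a new array on every call,
		// so cache the result the first time each mesh is encountered.
		public static Data Get (Mesh mesh)
		{
			Data data;

			if (!mCache.TryGetValue(mesh, out data))
			{
				data = new Data
				{
					vertices = mesh.vertices,
					normals = mesh.normals,
					uvs = mesh.uv,
					triangles = mesh.triangles,
				};
				mCache[mesh] = data;
			}
			return data;
		}
	}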

With the changes in place, the cost of merging 2205 trees was down to 16.9 milliseconds with memory usage down below half a meg:



In this case the trees themselves are just an example, as they are very simple and I will likely replace them with something that looks better. Still, for the sake of a test they were perfect. Who knows what I may end up using this script for? Random vegetation? Rocks? Even just debris in city ruins -- either way, this multi-threaded optimized merging script should now be up to the task, and the extra variation in color hues makes this approach look much better than Unity's built-in mesh merging. All in all, another handy tool.


14
Dev Blog / May 23, 2016 - The ugly shores
« on: May 23, 2016, 08:04:32 PM »
The worst part about using textures is that there is always a limit to how much detail they can hold. Even with an 8k texture for the Earth's surface map, the planet still looked quite ugly at the orbit height of the International Space Station (~400 km), let alone closer to the ground. Take a screenshot from a 20 km altitude, for example:



That blurry mess is supposed to be the shoreline. Needless to say, improvements were direly needed. First I tried the most trivial and naive approach: increasing the resolution of the textures. It occurred to me that the 8192x4096 cylindrical projection texture I was using could be split up into 6 separate ones -- one per quad sphere face. Not only would this give me better resolution to work with, it would also make it possible to greatly increase the detail around the poles while reducing the memory usage. Better still, I could also skew the pixels the same way I was skewing the vertices of the generated quad sphere mesh -- doing so improves the pixel density in the center while reducing it in the corners. This is useful because by default the corners end up with a higher than normal pixel density and the center ends up with a lower one -- my code simply balances them out.

I quickly wrote a tool capable of splitting a single cylindrical projection texture into 6 square ones and immediately noticed that with six 2048x2048 textures I get equal or better pixel density around the equator, and much, much improved quality around the poles. A single 8k x 4k texture takes 21.3 MB in DXT1 format, while six 2k textures take a combined 16.2 MB. Better quality and reduced size? Yes please!
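The core of such a split is just mapping each cube-face texel to a direction, and that direction to equirectangular UVs. A sketch of that mapping (my own reconstruction, without the vertex-style skewing):

	using UnityEngine;

	public static class CubeFaceSampler
	{
		// For a texel at (u, v) in [0, 1] on one cube face, compute the direction
		// it represents, then the matching UV in the 2:1 cylindrical source texture.
		public static Vector2 ToEquirectangularUV (Vector3 faceU, Vector3 faceV, Vector3 faceNormal, float u, float v)
		{
			Vector3 dir = (faceNormal + faceU * (u * 2f - 1f) + faceV * (v * 2f - 1f)).normalized;

			// Longitude and latitude of that direction...
			float lon = Mathf.Atan2(dir.x, dir.z);                 // -PI..PI
			float lat = Mathf.Asin(Mathf.Clamp(dir.y, -1f, 1f));   // -PI/2..PI/2

			// ...mapped back into the cylindrical projection's UV space
			return new Vector2(lon / (2f * Mathf.PI) + 0.5f, lat / Mathf.PI + 0.5f);
		}
	}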

Unfortunately, increasing the pixel density, even raising the 6 textures to 8k size, always failed to produce better results past a certain zoom level. The 8k textures still started to look pretty bad below 20 km altitude, which made perfect sense -- it's only 4x4 pixels where there was 1x1 before. If it looked awful at 20 km altitude, it would look exactly as awful at 5 km with the larger textures. Considering that I was looking for something that could go all the way down to ground level, more work was clearly needed.

The next thing I tried was to add a visible water mesh that would be drawn on top of the terrain. The problem with that was that the terrain was actually extremely flat in many places of the world -- indeed, the vast majority of the heightmap's values reside in the 0-5 range, with the rest spread across 5-255. Worse still, the heightmap wasn't providing any values below sea level. NASA actually has two separate sets of heightmaps for that: one for above sea level, the other for below it. Unfortunately the underwater heightmaps lacked resolution and were quite blurry, but just out of curiosity I merged them into a single heightmap to see the effect. This effectively cut the above-ground heightmap resolution in half and still suffered from the same issue of large parts of the world being so close to sea level that the heightmap didn't indicate them as being above ground.

At this point I asked myself: why am I bothering with the detail under the sea? Why have a heightmap for that at all? The game isn't set underwater, so why bother?

Grumbling at myself for wasting time, I grabbed a different texture from NASA -- a simple black-and-white representation of the continents, with white representing the landmasses and black representing the water. I simply assumed that a value of 0.5 means sea level and modified the sampled vertices' heights so that they were affected not only by the heightmap, but by this landmass mask as well. Everything below 0.5 would get smoothly lowered, and everything above 0.5 would get smoothly raised, resulting in a visible shoreline. All that was left was to slap a water sphere on top, giving me this:



Better. Next step was to get rid of the blue from the terrain texture. This was a bit more annoying and involved repeated Photoshop layer modifications, but the end result was better still:



Now there was an evident shoreline, crisp and clear all the way from orbit to the surface. Unfortunately, the polygon-based approach suffered from a few... drawbacks. The first was Z-fighting, which I fully expected. It was manageable by adjusting the near clip plane or by using multiple cameras to improve the near-to-far clip precision. The other problem was less straightforward. Take the following screenshot of the texture-based approach, for example:



While a bit on the blurry side even from such a high altitude, it still looks better than the polygon-based approach:



Why is that? Two reasons. First, the polygon resolution decreases as the camera gets farther from the planet, resulting in bigger triangles, which in turn lead to less defined shores. Second, the triangulation is constant, and due to the way vertex interpolation works, edges that follow the triangulation look different from edges that are perpendicular to it. This is why the north-east and south-west parts of the Arabian peninsula look all jagged while the south-east part looks fine.

Fortunately triangulation is easy enough to fix by simply adding code that ensures that the triangulation follows the shores.



The bottom-left part of the texture still looks jagged, but this is because of inadequate mesh resolution at large distances from the planet. Solving that is a simple matter of lowering the observer transform so that it's closer to the ground while remaining underneath the camera:



This approach looks sharp both up high in orbit and close to the ground:



Here's a side-by-side comparison shot of the North American Great Lakes from an altitude of 300 km:



And another one a little bit more zoomed in:



Now, up to this point the water was simply rendered using a solid color shader that was writing to depth. The fun part came when I decided to add some transparency to the shallow water in order to soften the edges a bit when close to the ground. While the transparency itself was easily achievable by comparing the water pixel's depth against the sampled depth buffer value, I quickly ran into issues with other effects that required depth, such as post-processed fog. Since the transparent water wasn't writing to depth, I was suddenly faced with the underwater terrain being shaded like this:



The most obvious way to fix this would be to draw the terrain and other opaque geometry, draw the transparent water, then draw the water again -- this time filling only the depth buffer -- followed by the remaining transparent objects. Unfortunately, as far as I can tell, this is not possible in Unity. All opaque objects are always drawn before all transparent objects, regardless of their render queue, and it doesn't seem possible to insert a transparently-rendered object into the opaque geometry pass, so I had to resort to less-than-ideal hacks.

I tried various solutions to address it, from modifying the water shader to modifying the fog shader itself, but in the end I settled on the simplest approach: ray-sphere intersection. I made the fragment shader do a ray intersection with the planet's sphere to determine the near intersection point and the point on the ray closest to the sphere's center. If the closest point lies below the water level and the near intersection point lies in front of the sampled depth value, then I move the sampled depth back to the near intersection point.
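The ray-sphere math itself is standard; here's the depth fix-up expressed as a C# sketch (the variable names are mine, and the real version lives in the fragment shader):

	using UnityEngine;

	public static class WaterDepthFixExample
	{
		// Returns the (possibly adjusted) distance along the view ray.
		public static float FixDepth (Vector3 rayOrigin, Vector3 rayDir, Vector3 planetCenter, float waterRadius, float sampledDepth)
		{
			Vector3 toCenter = planetCenter - rayOrigin;

			// Distance along the ray to the point closest to the sphere's center
			float tClosest = Vector3.Dot(toCenter, rayDir);
			float closestDistSq = toCenter.sqrMagnitude - tClosest * tClosest;
			float rSq = waterRadius * waterRadius;

			// The closest point lies above the water level -- nothing to fix
			if (closestDistSq > rSq) return sampledDepth;

			// Near intersection point with the water sphere
			float tNear = tClosest - Mathf.Sqrt(rSq - closestDistSq);

			// If the near intersection is in front of the sampled depth,
			// move the depth back to the water's surface
			if (tNear > 0f && tNear < sampledDepth) return tNear;
			return sampledDepth;
		}
	}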



While this approach works fine for now, I can imagine it breaking as the planet gets larger and floating point values start losing precision... I'll just have to keep my mind open to other potential ways of addressing this issue in the future.

15
Dev Blog / May 9th - Your Own Reflection
« on: May 09, 2016, 12:00:41 AM »
A few months ago I was working on the ship builder functionality of the upcoming game, and around the same time I was playing around with reflection -- the ability to automatically find functions on all classes with specific attributes on them, to be precise. I needed this particular feature for TNet 3: I wanted to eliminate the need to register RCCs (custom object instantiation functions). I added that feature without any difficulty: get all assemblies, run through each class and then the functions of that class, and keep a list of the ones that have a specific attribute. Implementing it got me thinking though... what if I were to expand on this idea a bit? Why not use the same approach to add certain game functionality? Wouldn't it be cool if I could right-click an object in game, and have the game code automatically gather all flagged custom functionality on that object and display it somehow? Or better yet, make it interactable?

Picture this: a modder adds a new part to the game. For example some kind of sensor. Upon right-clicking on this part, a window can be brought up that shows that part's properties: a toggle for whether the part is active, a slider for its condition, a label showing how much power it's currently consuming, etc. There aren't that many types of data that can be shown. There's the toggle, slider, label... Other types may include a button (for a function instead of a property), or maybe an input field for an editable property. So how can this be done? Well, quite easily, as it turns out.

First, there needs to be a custom attribute that can be used to flag functionality that should be displayed via UI components. I called it simply "GameOption":
[AttributeUsage(AttributeTargets.Field | AttributeTargets.Property, AllowMultiple = false)]
public class GameOption : Attribute
{
	public MonoBehaviour target;
	public FieldOrProperty property; // custom wrapper around FieldInfo / PropertyInfo
	public bool isReadOnly; // read-only options are displayed, but never modified

	public virtual object value { get { return Get(target); } set { Set(target, value); } }

	public object Get (object target)
	{
		if (target != null && property != null) return property.GetValue(target);
		return null;
	}

	public T Get<T> () { return Get<T>(target); }

	public T Get<T> (object target)
	{
		if (target != null && property != null) return property.GetValue<T>(target);
		return default(T);
	}

	public virtual void Set (object target, object val)
	{
		if (isReadOnly || target == null) return;
		if (property != null) property.SetValue(target, val);
	}

	// Shallow copy -- the attribute instances returned by reflection are cached,
	// so GetOptions() clones them before assigning a per-component target
	public GameOption Clone () { return (GameOption)MemberwiseClone(); }
}
Next, there needs to be a function that can be used to retrieve all game options on the desired type:
// Caching the result is always a good idea!
static Dictionary<Type, List<GameOption>> mOptions = new Dictionary<Type, List<GameOption>>();

static public List<GameOption> GetOptions (this Type type)
{
	List<GameOption> list = null;

	if (!mOptions.TryGetValue(type, out list))
	{
		list = new List<GameOption>();
		mOptions[type] = list;

		var flags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;
		var fields = type.GetFields(flags);

		for (int b = 0, bmax = fields.Length; b < bmax; ++b)
		{
			var field = fields[b];

			if (field.IsDefined(typeof(GameOption), true))
			{
				GameOption opt = (GameOption)field.GetCustomAttributes(typeof(GameOption), true)[0];
				opt.property = FieldOrProperty.Create(type, field);
				list.Add(opt);
			}
		}

		var props = type.GetProperties(flags);

		for (int b = 0, bmax = props.Length; b < bmax; ++b)
		{
			var prop = props[b];
			if (!prop.CanRead) continue;

			if (prop.IsDefined(typeof(GameOption), true))
			{
				GameOption opt = (GameOption)prop.GetCustomAttributes(typeof(GameOption), true)[0];
				opt.property = FieldOrProperty.Create(type, prop);
				list.Add(opt);
			}
		}
	}
	return list;
}
Of course it's even more handy to have this on the GameObject itself:
static public List<GameOption> GetOptions (this GameObject go)
{
	return go.GetOptions<GameOption>();
}

static public List<T> GetOptions<T> (this GameObject go) where T : GameOption
{
	List<T> options = new List<T>();
	MonoBehaviour[] mbs = go.GetComponents<MonoBehaviour>();

	for (int i = 0, imax = mbs.Length; i < imax; ++i)
	{
		MonoBehaviour mb = mbs[i];
		List<GameOption> list = mb.GetType().GetOptions();

		// TNet.List exposes 'size'; with a standard generic List this would be 'Count'
		for (int b = 0; b < list.size; ++b)
		{
			T opt = list[b] as T;

			if (opt != null)
			{
				// Clone the cached attribute so each component gets its own copy
				opt = (T)opt.Clone();
				opt.target = mb;
				options.Add(opt);
			}
		}
	}
	return options;
}
So now I can have a property like this in a custom class:
public class CustomClass : MonoBehaviour
{
	[GameOption]
	public float someValue { get; set; }
}
...and I can do this:
var options = gameObject.GetOptions<GameOption>();
foreach (var opt in options)
{
	opt.value = 123.45f;
	Debug.Log(opt.value);
}
Better still, I can inherit a custom attribute from GameOption and have custom code handle both the getter and the setter, then filter exactly what kind of custom attribute is retrieved using the gameObject.GetOptions<DesiredAttributeType>() call. With the retrieval of custom properties sorted out, all that's left is to draw them automatically in response to some action.
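For instance, a derived attribute might expose a 0-1 float as a percentage -- a hypothetical sketch using the Get/Set helpers shown earlier:

	using System;

	// Hypothetical GameOption subclass with custom get/set logic:
	// the underlying field stores 0-1, but the UI sees and edits 0-100
	[AttributeUsage(AttributeTargets.Field | AttributeTargets.Property, AllowMultiple = false)]
	public class PercentOption : GameOption
	{
		public override object value
		{
			get { return Get<float>() * 100f; }
			set { Set(target, (float)value * 0.01f); }
		}
	}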

That is actually quite trivial using NGUI. I simply registered a generic UICamera.onClick delegate; inside it I collect the options using gameObject.GetOptions, then display them using an appropriate prefab. For example:
if (opt.value is float) // draw it as a slider
I also register an event listener on the appropriate UI element itself (in the case above, a slider), so that when the value changes, I simply set opt.value to the new one. So there -- the mod content maker no longer needs to worry about creating custom UI elements at all. All they need to do is mark the desired fields or properties as [GameOption], and they will show up via right-click. Simple!
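Fleshed out a little, the type-based dispatch might look like this -- a sketch with stubbed-out prefab helpers, since the actual NGUI wiring depends on your widgets:

	using UnityEngine;

	public class OptionWindowExample : MonoBehaviour
	{
		public void Show (GameObject target)
		{
			var options = target.GetOptions<GameOption>();

			foreach (var opt in options)
			{
				var val = opt.value;

				if (val is bool) CreateToggle(opt);       // on/off checkbox
				else if (val is float) CreateSlider(opt); // draggable slider
				else CreateLabel(opt);                    // read-only text
			}
		}

		// Stand-ins for instantiating the appropriate NGUI prefab and
		// hooking its change event back up to opt.value
		void CreateToggle (GameOption opt) { }
		void CreateSlider (GameOption opt) { }
		void CreateLabel (GameOption opt) { }
	}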

Of course I then went on to make it more advanced than that -- adding an optional sorting index and category values (so that the order in which properties show up can be controlled via the index, and filtered using the category). I also added support for buttons -- that is, I simply expanded the attribute to include methods:
AttributeTargets.Field | AttributeTargets.Property | AttributeTargets.Method
...and added a MethodInfo to go with the FieldOrProperty, as well as an Invoke() function to trigger it. I also added support for a Range(min, max) property for sliders, and popup lists for multiple-selection drop-downs... I could go on, but there is no need to complicate the explanation further. The point is -- this approach is highly customizable and very powerful:

C# reflection is fun!

Pages: [1] 2 3 ... 10