Author Topic: Building Material Design UI with NGUI  (Read 42680 times)

bac9

  • Full Member
  • ***
  • Thank You
  • -Given: 2
  • -Receive: 4
  • Posts: 113
    • View Profile
Building Material Design UI with NGUI
« on: October 14, 2014, 03:23:26 AM »


Thought I should post about that work outside of support threads. Exhibit A:


Long story short, I got a bit tired of implementing controls from scratch in every project and overall from using unstructured UI workflows. Seriously, why am I still using awkward half-assed window managers that are created anew every time, why do I have to deal with setting up tweens and sprite references when adding a spinner and why do I need custom switches, buttons and show-hide sequences every time? I shouldn't be doing that.

So I started working on a coherent MVC based foundation that will allow me to create interfaces that are quick to set up, easy to maintain and easy to expand.

While I was at it, I thought to myself - wouldn't it be wonderful if I had not just nice code providing reusable elements, but also those beautifully implemented controls from Material Design by Google that native Android developers enjoy? Wouldn't it be nice to have Unity applications that can fool a user into believing they are native? Anyway, how hard would implementing the controls from the Material Design guidelines be?

________________

Turns out they are quite complex, but every single one of them can be implemented without atrocious hacks or performance-hungry workarounds like frame-based animations. For example, those radio buttons are just three overlaid dots that require no custom masking - just the proper order of color and scale tweens.



The most complex things here are touch effects and shadows. Those were a complete mystery to me - for all I knew, Google implemented them with magic. Check these animations:


The only idea I had at first was using NGUI panel clipping in every element, but that was unacceptable from a performance standpoint and would have cluttered the hierarchy. It would also only cover the radial ripples, without addressing the even more mysterious second part of the animation - the inverse erasing of the expanding ripple sprite, which can't be achieved through traditional rect-based clipping at all. But as it turns out, both can be implemented, at almost no performance cost - even that double clipping from within.

You set up a separate UIPanel set to hard edge clipping, with a component that can set its dimensions and position, a singleton to access it, and a child ripple sprite that can tween through the touch animation. Any touchable widget can call the singleton on touch and invoke a method (using itself as an argument), which repositions the UIPanel clip area to the dimensions of the widget passed in the argument and starts the ripple animation at the clicked point of the screen.
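A minimal sketch of that setup. The class and member names here are mine, not from a shipped package; the NGUI calls (UIPanel.baseClipRegion, TweenScale.Begin, TweenAlpha.Begin) follow the NGUI 3.x API.

```csharp
using UnityEngine;

public class TouchRippleOverlay : MonoBehaviour
{
    public static TouchRippleOverlay instance;

    public UIPanel panel;    // set to hard-edge clipping in the inspector
    public UISprite ripple;  // child dot tweened through the touch animation

    void Awake () { instance = this; }

    // Called by any touchable widget on touch, passing itself as the argument
    public void Play (UIWidget target, Vector3 touchWorldPos)
    {
        // Reposition the clipped panel over the touched widget
        Vector3[] corners = target.worldCorners;
        panel.transform.position = (corners[0] + corners[2]) * 0.5f;
        panel.baseClipRegion = new Vector4 (0f, 0f, target.width, target.height);

        // Start the ripple from the touched point
        ripple.transform.position = touchWorldPos;
        ripple.transform.localScale = Vector3.zero;
        ripple.alpha = 1f;
        TweenScale.Begin (ripple.gameObject, 0.4f, Vector3.one * target.width);
        TweenAlpha.Begin (ripple.gameObject, 0.4f, 0f);
    }
}
```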



Now the only thing left is the second clipping - erasing the ripple sprite from within. That is achieved by creating a clipping-compatible depth cutout shader (no need to modify the example, just give NGUI a properly named duplicate) and applying it to a UITexture with the circle texture, then moving that object outside of the UI plane to allow depth testing to kill pixels within the panel. Once that is set up, all you need to do is tween the ripple sprite first and the eraser sprite second, and you get that sexy impossible ring required for every clickable element in Material Design.
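A hedged sketch of that two-stage sequence (names and timings are mine, not from the described implementation):

```csharp
using System.Collections;
using UnityEngine;

public class RippleRing : MonoBehaviour
{
    public UISprite ripple;    // the visible expanding circle
    public UITexture eraser;   // circle texture using the depth-cutout shader,
                               // placed off the UI plane so depth testing
                               // kills the ripple pixels it covers
    public float duration = 0.4f;
    public float eraserDelay = 0.1f;

    public IEnumerator Play (float targetScale)
    {
        ripple.transform.localScale = Vector3.zero;
        eraser.transform.localScale = Vector3.zero;

        // Ripple expands first...
        TweenScale.Begin (ripple.gameObject, duration, Vector3.one * targetScale);
        yield return new WaitForSeconds (eraserDelay);

        // ...the eraser trails behind, hollowing the circle into a ring
        TweenScale.Begin (eraser.gameObject, duration, Vector3.one * targetScale);
    }
}
```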



________________

Another area where Material Design presents a huge challenge is dynamic shadows. They are not in any way similar to the standard static drop shadows that people bake into sprites.


They are dynamic, with every rectangular element capable of lifting back and forth through multiple depth levels, casting a very smooth shadow of variable feathering radius. That's extremely problematic. But as it turns out, it can be implemented too, with some clever trickery. Take a look:


To do this, I prepare a sliced sprite with a rectangular shadow and anchor it, without any offsets, to my card. There is no need to do it manually - I just add a "sprite shadow" component to a UISprite object and everything is set up automatically (and cleaned up when that component is removed).

The desired look, with a variable feathering radius, is impossible to achieve with standard sliced sprite behaviour and anchoring in NGUI. So the custom component subscribes to the fill event of the sliced shadow and directly modifies the positions of its 36 vertices: the central quad, rather than all quads, is controlled by the sprite dimensions and anchoring, the outer quads are pushed outward by an offset calculated from the depth, and finally a curve is sampled to get the proper shadow intensity. Ah, and the sprite is offset downward a bit, depending on the depth.
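Treat the following as pseudocode for that vertex pass: the exact fill callback signature (UIWidget.onPostFill here) varies between NGUI versions, and the real component distinguishes the central quad from the border quads rather than pushing every vertex.

```csharp
using UnityEngine;

[RequireComponent (typeof (UISprite))]
public class SpriteShadowSketch : MonoBehaviour
{
    public float shadowDepth = 1f;        // perceived elevation
    public AnimationCurve intensity;      // shadow alpha sampled per depth

    UISprite sprite;

    void OnEnable ()
    {
        sprite = GetComponent<UISprite> ();
        sprite.onPostFill += OnPostFill;
    }

    void OnDisable () { sprite.onPostFill -= OnPostFill; }

    void OnPostFill (UIWidget w, int offset, BetterList<Vector3> verts,
        BetterList<Vector2> uvs, BetterList<Color> cols)
    {
        float spread = shadowDepth * 4f;  // hypothetical depth-to-pixel mapping
        for (int i = offset; i < verts.size; ++i)
        {
            Vector3 v = verts[i];
            // Simplified: push every vertex away from the center; the real
            // version keeps the central quad pinned to the sprite dimensions
            v.x += Mathf.Sign (v.x) * spread;
            v.y += Mathf.Sign (v.y) * spread - shadowDepth * 2f; // downward shift
            verts[i] = v;
        }
        // The sampled intensity curve would be written into cols here
    }
}
```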

________________

Not sure if that webm hosting has limits on the traffic (sites accepting 15s+ files are hard to come by), so just in case, here is a mirror (379kb):


________________

P.S.: To the inevitable question of "why not uGUI" - well, I really prefer NGUI for a number of reasons.

  • First, uGUI will never have the sort of personal tech support that NGUI had for years. Unity forums and issue tracker are nice and dandy, but not really comparable to the developer himself answering every single question.
  • Second, I prefer depth-based draw order to hierarchy-based draw order and dislike uGUI's dependency on hierarchy sorting.
  • And third, simply by virtue of existing for a very, very long time, NGUI has thousands of threads, posts, docs, tutorials and other materials that help you learn faster and let you find solutions online for most problems you might encounter. uGUI will get there over time, but hasn't accumulated that amount of material around itself yet.
« Last Edit: October 14, 2014, 06:44:22 AM by bac9 »

hexaust

  • Newbie
  • *
  • Thank You
  • -Given: 14
  • -Receive: 1
  • Posts: 35
    • View Profile
Re: Building Material Design UI with NGUI
« Reply #1 on: October 14, 2014, 08:25:51 AM »
Wow! Holy **** dude, this is impressive! Big thank you for sharing. Great work!

ArenMook

  • Administrator
  • Hero Member
  • *****
  • Thank You
  • -Given: 337
  • -Receive: 1171
  • Posts: 22,128
  • Toronto, Canada
    • View Profile
Re: Building Material Design UI with NGUI
« Reply #2 on: October 14, 2014, 03:35:33 PM »
You sir, are a wizard.

bac9

  • Full Member
  • ***
  • Thank You
  • -Given: 2
  • -Receive: 4
  • Posts: 113
    • View Profile
Re: Building Material Design UI with NGUI
« Reply #3 on: October 17, 2014, 07:43:14 AM »
Input fields in two varieties (yet to make a multiline one):



Works with any content text size, automatically adapts the control widget size to make guidelines-compliant spacing easy, and can work on any background.
« Last Edit: October 17, 2014, 07:51:11 AM by bac9 »

bac9

  • Full Member
  • ***
  • Thank You
  • -Given: 2
  • -Receive: 4
  • Posts: 113
    • View Profile
Re: Building Material Design UI with NGUI
« Reply #4 on: October 17, 2014, 05:13:15 PM »
Spinner: controlled through a method that takes a 0-1 float argument - a value that can be fed with the progress of some operation, or simply produced from delta time in an update loop.
The method constructs the color by sliding through HSB, while rotation and fill completion are evaluated from two relatively simple custom AnimationCurves that intersect to provide the catch-up impression.
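A minimal sketch of such an update method (the curve contents and constants are assumptions, and Color.HSVToRGB requires a newer Unity - older versions need a manual HSB conversion):

```csharp
using UnityEngine;

public class SpinnerSketch : MonoBehaviour
{
    public UISprite circle;            // radial-filled sprite
    public AnimationCurve rotation;    // evaluated against progress
    public AnimationCurve fill;        // intersects rotation for the catch-up feel

    // t is the 0-1 progress of some operation, or accumulated from deltaTime
    public void SetProgress (float t)
    {
        // Slide through hue while keeping saturation/brightness fixed
        circle.color = Color.HSVToRGB (Mathf.Repeat (t, 1f), 0.8f, 0.9f);
        circle.transform.localRotation =
            Quaternion.Euler (0f, 0f, -360f * rotation.Evaluate (t));
        circle.fillAmount = fill.Evaluate (t);
    }
}
```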



Multiline input:

« Last Edit: October 17, 2014, 05:24:38 PM by bac9 »

ArenMook

  • Administrator
  • Hero Member
  • *****
  • Thank You
  • -Given: 337
  • -Receive: 1171
  • Posts: 22,128
  • Toronto, Canada
    • View Profile
Re: Building Material Design UI with NGUI
« Reply #5 on: October 18, 2014, 07:21:15 AM »
Very, very sexy. Are you going to put this up on the Asset Store at any point?

bac9

  • Full Member
  • ***
  • Thank You
  • -Given: 2
  • -Receive: 4
  • Posts: 113
    • View Profile
Re: Building Material Design UI with NGUI
« Reply #6 on: October 18, 2014, 07:55:00 AM »
After actually making a reasonably complex project with it, yeah, probably.

I just need to do that to ensure I'm not making some impractical monstrosity with awful architecture that drives you insane after a minute of attempting to make a UI with it. So far I'm starting with simple stuff (say, a codex app with a simple screen hierarchy, articles, settings, etc.), checking what drives me insane in the process and solving the issues that do.

  • Dislike redoing the work to create every dialog box? Abstract that object into a component that sets everything up for you, exposing just the size, actions and text to configure.
  • Dislike how much you have to hardcode while setting up buttons for that dialog? Create a better abstracted button class that can set up what you need in just one line.
  • Dislike setting up icon buttons, FAB buttons, rect buttons and sidebar buttons separately with different objects? Come up with a way to combine them all into one button class that can switch between every type.
  • Dislike how that makes the button object bloated with children that are frequently disabled and unused? Improve the code so that only the objects necessary for the current type are maintained.
  • Dislike having to recheck referenced objects and recreate them with proper configuration using kilometer-long code? Write a utility class that checks your references and replaces them if they are missing, creating the sprites, labels, control widgets, textures, tables and so on with one line, dropping boilerplate code from all abstract components.
  • Dislike setting up guideline-compliant colors through Color for every single object? Create a library that can be referenced instead.
  • Dislike having to open a calculator to recheck how DP size values from the guidelines scale into pixels in XXHDPI space? Create an in-editor tool that gives you the values directly and provides grid info.

And so on.

So far it's going nicely, but there is still a lot of work to do. No blocking issues though, unlike in the beginning when I had no idea whether it was even possible to replicate the required effects.

P.S.: By the way, it would be nice to add onSelect and onDeselect event delegate lists to UIInput in addition to the existing two. All the methods already exist; it's just a matter of declaring the lists and adding .Execute in two new places, nothing else required. Having them enables a lot of interesting stuff, including the hint behavior in the gifs above, so it would be nice to have them by default - so far it's the only change I had to make to NGUI code. I would obviously prefer to stay away from changes like that where possible, to keep updating painless.
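Roughly, the suggested change amounts to this - a patch sketch against UIInput, not a drop-in file; exactly which methods the Execute calls belong in depends on the NGUI version:

```csharp
// New public fields on UIInput, mirroring the existing delegate lists:
public List<EventDelegate> onSelect = new List<EventDelegate> ();
public List<EventDelegate> onDeselect = new List<EventDelegate> ();

// ...then, inside UIInput's existing focus-gain path:
EventDelegate.Execute (onSelect);

// ...and inside its focus-loss path:
EventDelegate.Execute (onDeselect);
```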
« Last Edit: October 18, 2014, 08:23:21 AM by bac9 »

ArenMook

  • Administrator
  • Hero Member
  • *****
  • Thank You
  • -Given: 337
  • -Receive: 1171
  • Posts: 22,128
  • Toronto, Canada
    • View Profile
Re: Building Material Design UI with NGUI
« Reply #7 on: October 18, 2014, 09:03:12 AM »
Regarding your PS: just attach UIEventTrigger. It has the OnSelect/OnDeselect. No need to code.

I'm sure if you release this on the Asset Store, it will sell like hotcakes because of how awesome it all looks.

P.S. of my own: I recommend turning each type of a control into a prefab, and setting up their previews properly so that they show up in the Prefab Toolbar.

bac9

  • Full Member
  • ***
  • Thank You
  • -Given: 2
  • -Receive: 4
  • Posts: 113
    • View Profile
Re: Building Material Design UI with NGUI
« Reply #8 on: October 18, 2014, 10:40:05 AM »
Oh, that's a nice solution.

As for the prefabs, to be honest I'm not much of a fan of them in UI. They are extremely useful for things like:

  • Resolving the problem of scene editing in version control environment: do not edit Unity scene files, split scene into prefabbed sections, let environment artists update the prefabs, voila
  • Serving as blueprints for instantiation of objects like level props

But I'm skeptical about using them for reusable components, because usually you want a lot more than an object properly instantiated from a blueprint: you want an object that updates itself to stay compliant with the latest reference design and repairs itself if something is missing. You want every single button and checkbox in your project to update (in the editor, not at runtime, of course) the instant you update their design. And importantly, you want all that for objects combined into an intricate nested hierarchy where some instances control others but must be updated independently. Unity prefabs can't offer that. So I construct stuff without them, and doing so directly gives me buttons, fields, etc. that are harder to break, that use the very latest configuration and look no matter how and when you create them, and that don't require you to hand-check every scene wondering if your changes to the reference were properly propagated to the copies.

It's a somewhat more rigid approach, one that won't let users slap ten effects on a button and distribute that with one click of "Apply" at the top of the inspector, but when the appeal of the system is replication of a rigid design framework in the first place, I guess that's not exactly a problem :) And if they really want to, it's simple to do through code.

P.S.: Did some work on screen control.



At the moment it works like this:

  • UI is split into overlays and screens, each controlled by their managers

  • Overlays include the touch ripple panel, the focus ripple panel, and things like the full-screen fills that appear behind a sidebar or dialog on opening. Overlays are not concerned with each other, and their manager only exposes methods to call each of them in isolation: for example, to create a ripple somewhere, or to block the screen for something. Overlays don't care whether they are opened or otherwise used simultaneously - they each fulfill a different role. That whole part is created automatically.

  • Screens are containers that define what the user sees when interacting with a certain distinct area of an application. In various situations they might be called tabs or windows, but those words just describe the look. Screens themselves have a bare minimum of functionality: their manager keeps track of them all and exposes methods to show a certain screen, either by direct reference or by ID number. The manager takes care of closing previously opened screens and other mundane stuff like that.

  • Screens are parents to areas: every screen can have one or multiple areas. An easy way to think about them is as the paper sheets from the Google guidelines - you drop them onto the canvas to create layouts, use them to house your content, and slide them in and out of the screen. Areas implement the show/hide functionality called by their parent screen (optionally exposing slide-out direction and distance so you can create the complex transitions Google heavily employs) and are the first entities to actually create anything visual: areas optionally control the creation of the "paper" sprites along with their perceived depth (through the shadow effect described above). Each area houses a UIPanel, so it's also the first and last entity allowing you to control relative depth between parts of the UI layout.

  • After that, you can do whatever you want within the bounds of an area. You can of course jump directly into adding labels, buttons and sprites, but there are a few abstracted entities to make common use cases easy: for example, a Content entity that creates a widget and sets up a few of the most common content types anchored within it (say, text with a header and proper margins, or a dialog layout). Another example is a ScrollView entity that sets up a panel, scroll view, table, scroll bar and some other stuff for you, allowing control through a simple parent widget.

So, the simplest UI design goes like this:

  • Create screen manager object
  • Create a screen underneath it
  • Create an area in that screen
  • Create a content entity in that area, matching its dimensions and set to one of the predefined types
  • Interact with content entity (view presenter) from your controller

Of course, no one is stopping you from setting up whatever internal layout you want in every area using labels, buttons, switches and separators (all of those are parent entities wrapping and constructing a certain NGUI design, of course - you don't have to deal with setting up the tweens of a switch animation or the shadows of a button).

Creating any entity type is a matter of adding a component to an empty GameObject. The component checks what objects it requires and, if they are not present, creates them following built-in presets (for a flat rectangular button, that means creating one sprite, one label, a control widget and a collider). Presets can vary widely and can be switched on the fly - a switch can transform itself into a radio, checkbox or toggle slider with one enum selection in its inspector. The component also provides a method (and a context menu option) to destroy itself along with all connected objects and components, which is handy when you don't want to clean up that stuff yourself.
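A hedged sketch of that self-constructing pattern (all entity and sprite names are hypothetical; NGUITools.AddSprite, NGUITools.AddWidgetCollider and NGUITools.Destroy are real NGUI 3.x helpers):

```csharp
using UnityEngine;

public enum SwitchType { Toggle, Radio, Checkbox }

public class SwitchEntity : MonoBehaviour
{
    public SwitchType type = SwitchType.Checkbox;
    public UIAtlas atlas;

    UISprite background;
    UISprite knob;

    // Reset is called by Unity when the component is first added
    void Reset () { RebuildIfNeeded (); }

    public void RebuildIfNeeded ()
    {
        if (background == null) background = NGUITools.AddSprite (gameObject, atlas, "switch_bg");
        if (knob == null) knob = NGUITools.AddSprite (gameObject, atlas, "switch_knob");
        NGUITools.AddWidgetCollider (gameObject);
        // Preset-specific layout (sizes, anchors, tweens) would be applied
        // here depending on the selected type
    }

    [ContextMenu ("Destroy with children")]
    public void DestroyEntity ()
    {
        if (background != null) NGUITools.Destroy (background.gameObject);
        if (knob != null) NGUITools.Destroy (knob.gameObject);
        NGUITools.Destroy (gameObject);
    }
}
```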
« Last Edit: October 18, 2014, 01:24:07 PM by bac9 »

badawe

  • Jr. Member
  • **
  • Thank You
  • -Given: 8
  • -Receive: 7
  • Posts: 70
    • View Profile
Re: Building Material Design UI with NGUI
« Reply #9 on: October 22, 2014, 01:20:58 PM »
Amazing WORK!

Any chance of you sharing this "clipping-compatible depth cutout shader"?

bac9

  • Full Member
  • ***
  • Thank You
  • -Given: 2
  • -Receive: 4
  • Posts: 113
    • View Profile
Re: Building Material Design UI with NGUI
« Reply #10 on: October 22, 2014, 05:52:33 PM »
It can be a simple duplicate of an existing shader with a space and "1" added at the end of the name - since it's a subtractive shader that can only operate against the content of a panel anyway, there is no need to actually clip what it does. Just make sure NGUI finds the separate version of the shader, which is why you need the new file.

Nicki

  • Global Moderator
  • Hero Member
  • *****
  • Thank You
  • -Given: 33
  • -Receive: 141
  • Posts: 1,768
    • View Profile
Re: Building Material Design UI with NGUI
« Reply #11 on: October 23, 2014, 06:29:12 AM »
It's looking damn sexy. I'll pick this up when you go live.

bac9

  • Full Member
  • ***
  • Thank You
  • -Given: 2
  • -Receive: 4
  • Posts: 113
    • View Profile
Re: Building Material Design UI with NGUI
« Reply #12 on: October 25, 2014, 09:17:30 AM »
Had some progress. I'm navigating with a mouse in the following gifs, but GifCam is not capturing the pointer for some reason.



This time, I tried to create a simple note-keeping application with a few features (list, editor, tagging, removal, etc.). It can easily be expanded into a quest log or, say, a Mass Effect style codex.
Along the way, as usual, I spent time working on workflow problems that drove me insane, and added stuff which could speed up the workflow. The results are pretty neat.



The scene hierarchy goes like this:

VPScreenManager
└  VPScreen
   └  VPArea
      └  Various content

There are the following basic entities:

  • Screen manager: Keeps track of all screens and provides methods to open/close/toggle them if you have screen ID or screen object reference. Those methods request appropriate action from screens themselves, nothing complex happens in the manager.

  • Screen: Holds a part of the UI you want to show or hide from the user in its entirety. A header bar, sidebar, window, photo gallery, contact window, browser tab - all of those and many more are screens. Screens come in two distinct types: isolated and unified. Only one unified screen can be visible at a time: a browser tab or an app section like settings, for example, belongs to that type. The screen manager ensures that whenever a unified screen is requested from anywhere, every other unified screen stays hidden. Isolated screens, on the other hand, have no connection to anything else: things like sidebars and pop-up overlays belong to that type. They don't care what else is open at the time, and the screen manager makes no attempt to hide other screens when an isolated one is called. Screens also expose onEntry and onExit event delegate lists, allowing very easy control over presentation: for example, a sidebar screen can call the overlay manager on entry and exit to get the dark overlay obscuring the screens below, as you see in the gifs. Another example is a page with a list of notes that can request a refresh of the list from a controller before it is presented - again, you can see this in a gif above. Together, the isolation toggle and the delegate lists remove any need to maintain inconvenient piles of event delegates on entities like screen switch buttons - you no longer need to explicitly set up what should be shown, hidden and called from every single navigation button.

  • Area: The foundation of a screen - every screen contains at least one area. The area view presenter wraps an NGUI UIPanel and performs the actual alpha/position changes when show/hide/toggle is requested by its parent screen. You can customize the entry direction and position shift. For example, the sidebar in the gif above is a screen with one area, a Left entry direction and a position shift equal to its width. A screen can contain multiple areas to allow complex depth setups or complex entry animations - like a mosaic that flies into the screen at different speeds and from different directions. Aside from panel control and entry/exit animation, an area does little, and it's usually not accessed directly by any entity but its parent screen.
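A sketch of that screen logic; VPScreenManager and VPScreen match the hierarchy above, but all members here are my assumptions about the described behavior, not code from the project:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class VPScreen : MonoBehaviour
{
    public int id;
    public bool isolated;  // sidebars, pop-ups: ignore other screens entirely
    public List<EventDelegate> onEntry = new List<EventDelegate> ();
    public List<EventDelegate> onExit = new List<EventDelegate> ();

    public void Show () { EventDelegate.Execute (onEntry); /* area entry animations */ }
    public void Hide () { EventDelegate.Execute (onExit);  /* area exit animations */ }
}

public class VPScreenManager : MonoBehaviour
{
    public List<VPScreen> screens = new List<VPScreen> ();

    public void ShowScreen (VPScreen target)
    {
        // Unified screens are mutually exclusive; isolated ones coexist freely
        if (!target.isolated)
            foreach (VPScreen s in screens)
                if (s != target && !s.isolated) s.Hide ();

        target.Show ();
    }

    public void ShowScreen (int id)
    {
        foreach (VPScreen s in screens)
            if (s.id == id) { ShowScreen (s); return; }
    }
}
```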

None of those three creates anything visual - they don't control a single sprite. Once they are set up, though, under any area you can drop a wide variety of view presenters, including your own, to actually create a visible layout and allow interaction. A few examples:

  • Sheet: Creates a simple shadow-casting paper sheet and exposes some properties like the types from the guidelines (card vs. tile) or shadow depth. Can be used for initial layout setup.

  • Separator: Creates a separator line across the selected widget in a selected direction, with optional margins. Useful when your layout consists of multiple docked paper sheets that never travel independently - you can just draw one area and split it with lightweight separators.

  • Button: Well, creates a button. A very rich element - it lets you select one of the five button types from the Google guidelines (flat rectangular, raised rectangular, icon-supported flat rectangular, round floating action, and flat icon buttons), provides all imaginable properties depending on the type (icon reference, colors, text, and so on) and exposes an event delegate list for subscription. Optionally, it can also pass a bool, int, float, string or object argument on click. That's how the auto-generated buttons in the note list depicted in the gif above work - they have no custom components on top; they simply send an int with a document ID so the controller can open the right one. Again, that means less clutter and less time spent on manual setup.

  • Switch: Similar to the button, but provides one of the three switch types from the Google guidelines (toggle, radio or checkbox). Keeps track of the required objects, provides an event delegate list, and so on.

  • Content template: A simple entity that creates one of the few most widespread content types within the bounds of a widget (uniform text, titled text, dialog, etc.). Keeps track of proper per-type anchoring, clamps the widget size to prohibit inappropriate rescaling, and so on. Useful when you need to whip up a layout filled with simple text quickly: just create a paper sheet and anchor a content template to it.

  • Input field: Creates an input field with all the fancy line control, hint handling and other niceties dictated by Google guidelines. You can see it both in the gifs above and in the previous post.

It's also extremely easy to create your own view presenter entities if your application has unusual elements not covered by the existing types. I added two:
  • Screen announcer: Subscribes to the screen manager and feeds the name of the currently active screen to a label, with a fancy animation. You can see it in the upper left corner in the gifs above.

  • Scroll list card: A simple entity that mostly wraps existing types. Creates a paper sheet, covers it with a flat rectangular icon button, adds a scroll view drag component, a text preview label and some other minor things. Lets the sample controller from the gifs above set up that scrollable document list more easily.

The whole application depicted in the gifs takes a few hours to set up at most: the controller and data models are pretty short, and the UI work is mostly dragging ready-made entities around the scene view. It's not a static demo - it's a data-driven UI that loads documents from files and saves them back. Pretty neat.

Next time I'll try something more complex, maybe an inventory with tabs, dropdowns and previews.
« Last Edit: October 25, 2014, 09:39:36 AM by bac9 »

ArenMook

  • Administrator
  • Hero Member
  • *****
  • Thank You
  • -Given: 337
  • -Receive: 1171
  • Posts: 22,128
  • Toronto, Canada
    • View Profile
Re: Building Material Design UI with NGUI
« Reply #13 on: October 25, 2014, 01:53:57 PM »
Wizard, I say.

bac9

  • Full Member
  • ***
  • Thank You
  • -Given: 2
  • -Receive: 4
  • Posts: 113
    • View Profile
Re: Building Material Design UI with NGUI
« Reply #14 on: October 27, 2014, 04:38:49 AM »
No fancy gifs this time, but nevertheless, nice progress. The one thing I was extremely worried about was screen density handling.

Usually, you think about your UI design in pixels. That button is 48 pixels tall, that anchor offsets a widget by 16 pixels, and so on. That's a great way of doing it in traditional desktop games, where resolutions and screen densities are pretty predictable and no one bats an eye at the size of the elements unless they get a small 4K screen. Mobile platforms, on the other hand...

You usually want your buttons, text fields and other controls to be around 0.7-1 cm high for a good balance between usability and the amount of content on a mobile screen. Tiny problem: there are smartphones with 480 px 4" screens, there are smartphones with 2160 px 5" screens, there are tablets, and there is an insane wealth of other size and density combinations. So using pixel-driven control sizes, you will easily get this:



Ugh. What are your existing options?

  • Use a Constrained UIRoot: Nice, but you say goodbye to pixel-perfect elements. Do you really want your UI scaled to x2.1723454 on some device? Not really - you'd probably prefer x2.
  • Use Adjust by DPI on a Flexible UIRoot: A very nice option - it retains a pixel-perfect UI on 96 dpi screens and fools UIRoot into thinking it has a different resolution at a different DPI, allowing you, for example, to use absolutely identical values for non-Retina and Retina iPads while getting a pixel-perfect UI on both. Except many devices do not properly report their DPI, so you'll be stuck with a one-size-fits-all platform-dependent assumption like 160 dpi. Except most devices do not have a DPI that is a multiple of 96, so say goodbye to nice proportional scaling. Except some devices combine relatively high DPI with relatively low resolution, so you might want to use a lower-DPI layout there instead of unbiased scaling, to let your layout fit at all.

Okay, that's not a very attractive situation. You want this on every single device, no matter how nice its resolution is and how weird its screen size is:



Well, that problem is already solved in native UIs. Android has a very nice approach to solving that very issue:
https://developer.android.com/guide/practices/screens_support.html

There are a few important ideas:

  • Do not use pixels, set up everything using virtual units (Android calls them DP, density independent pixels)
  • Translate virtual units into physical pixels depending on the DPI of the screen
  • Do not actually use the true DPI of the device, because the calculated scaling multiplier will most likely be absolutely awful - instead, sort all devices into broad groups by their DPI and force those groups to use unified, preferably integer scaling multipliers like 3x

Nice, but how can we do that in NGUI? Turns out it's pretty simple. I mentioned the experimental Adjust by DPI mode above, and it actually does a very similar thing, except that it relies on the true DPI, creating multipliers by dividing 96 by that DPI. So I just replaced the NGUIMath.AdjustByDPI call in UIRoot with my own and created this simple component:



Replacing atlases aside, it does this:

  • You can assign screen density bucket (like HDPI) in the editor to directly preview how your UI will look on a device belonging to that density
  • When you change that option, the component checks the screen resolution and, if necessary, drops the DPI bucket selection down until it fits the guidelines. It's important to remember that, for example, even if a device belongs to the XXHDPI (480 dpi) density, unless it has a screen bigger than 960 pixels it simply cannot fit enough content on the screen and should be forced to use a lower bucket
  • After the check passes, the component updates the multiplier (1x for MDPI, 2x for XHDPI, 3x for XXHDPI and so on) in a static class that returns the adjusted height for UIRoot
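The steps above can be sketched roughly like this (the bucket thresholds follow Android's density classes; the minimum-height clamp value and all names are my assumptions, not the actual component):

```csharp
using UnityEngine;

public static class DensityScale
{
    // Multiplier consumed by the replacement for NGUIMath.AdjustByDPI
    public static float multiplier = 1f;

    public static void Apply (float reportedDpi, int screenHeightPixels)
    {
        // Map raw DPI into a broad bucket with a clean multiplier
        float m;
        if      (reportedDpi >= 480f) m = 3f;    // XXHDPI
        else if (reportedDpi >= 320f) m = 2f;    // XHDPI
        else if (reportedDpi >= 240f) m = 1.5f;  // HDPI
        else                          m = 1f;    // MDPI

        // Drop to lower buckets while the screen can't fit enough virtual
        // pixels, e.g. a 480 dpi phone with an 800 px screen can't afford 3x
        while (m > 1f && screenHeightPixels / m < 320f)
            m = (m > 2f) ? 2f : (m > 1.5f) ? 1.5f : 1f;

        multiplier = m;
    }

    // What the modified UIRoot would use as its adjusted height
    public static int AdjustedHeight (int heightPixels)
    {
        return Mathf.RoundToInt (heightPixels / multiplier);
    }
}
```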

With this slight change, the Adjust by DPI UIRoot mode is perfectly usable! Well, ahem, for editor work, that is. I still need to ensure that builds automatically force the proper density depending on the device. NGUIMath.AdjustByDPI fetches the DPI, but I wonder if any improvements are possible there - for example, maybe it's possible to ask an Android device for its real density bucket directly, the way native apps do.

P.S.: Looks like there is one issue: there is no way to tell the TrueType labels to rasterize at the true resolution instead of the virtual resolution, so they stay blurry no matter how high you go. Not sure how to counteract this yet.
« Last Edit: October 27, 2014, 06:05:36 AM by bac9 »