Messages - bac9

46
NGUI 3 Support / Interaction of UI root resolution vs game resolution
« on: October 25, 2014, 01:24:10 PM »
Can someone comment on how resolutions of the UI and of the Unity viewport interact? Let's consider three cases:

  • Viewport resolution of 1280x720, UI resolution of 1280x720
  • Viewport resolution of 1280x720, UI resolution of 3840x2160
  • Viewport resolution of 3840x2160, UI resolution of 3840x2160

Provided we follow the proper auto-layout practices advocated by Google, Apple, NGUI and everyone else who has developed for multiple screens, all three cases should look absolutely identical when rendered into a 1280x720 screenshot. Now, what I'm interested in is not how the end result looks, but how difficult it was to render from a UI standpoint. I'm not interested in the actual per-pixel shader cost; it's obvious that as viewport resolution goes up, your GPU has more work to do. I'm interested in the performance hit of the UI resolution.

From what I'm seeing so far, UI resolution affects only the scaling of objects inside the UI root, and consequently controls the scale of the pixel unit in relation to the real pixels of your screen. That alone is extremely unlikely to cause any difference in performance, I would guess: you don't get performance hits from changing transform scales and saving higher values into your int variables. Okay, what else... labels, textures, sprites. From what I am seeing, true type labels in NGUI are rasterized at the viewport resolution, never at the UI resolution, so it's reasonable to guess that the performance hit of labels is no different between cases 1 and 2. Sprites and textures, as well as bitmap font labels, are already rasterized; all that happens at runtime is your GPU sampling them, and between cases 1 and 2 that has an identical flat cost no matter how high or low the UI resolution is.

With that in mind, am I correct in assuming that cases 1 and 2 will have absolutely identical performance on every device? Essentially, I'm asking whether I overlooked any per-pixel calculations NGUI might perform in UI resolution space with a performance impact proportional to that resolution. So far I'm seeing none.

The reason I'm asking is simple - I would like to adopt DP instead of pixels, keep one high-res atlas, and use root scaling to control the size of the elements in proportion to the screen, in line with how native UIs on Android and iOS do it depending on the DPI of the device. Knowing I can run a 4K UI root without any performance impact would take a great burden off my mind. :)
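
For the record, a minimal sketch of the kind of root scaling I have in mind, assuming a UIRoot set to fixed-size scaling driven by manualHeight (the 160 baseline mirrors Android's mdpi bucket; names are illustrative):

    using UnityEngine;

    // Minimal sketch: drive UIRoot.manualHeight from device DPI so one UI unit maps to one dp.
    [RequireComponent (typeof (UIRoot))]
    public class DensityRootScaler : MonoBehaviour
    {
        void Start ()
        {
            UIRoot root = GetComponent<UIRoot> ();
            float dpi = Screen.dpi > 0f ? Screen.dpi : 160f; // Screen.dpi reports 0 on some platforms
            float density = dpi / 160f;                      // dp scale factor, as on Android
            // Widgets are scaled by screenHeight / manualHeight, so setting manualHeight
            // to the screen height measured in dp makes widget sizes behave like dp values.
            root.manualHeight = Mathf.RoundToInt (Screen.height / density);
        }
    }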

47
Misc Archive / Re: Building Material Design UI with NGUI
« on: October 25, 2014, 09:17:30 AM »
Made some progress. I'm navigating with a mouse in the following gifs, but GifCam isn't capturing the pointer for some reason.



This time, I tried to create a simple note-keeping application with a few features (list, editor, tagging, removal, etc.). It can easily be expanded into a quest log or, say, a Mass Effect style codex.
Along the way, as usual, I spent time working on workflow problems that drove me insane, and added things to speed the workflow up. The results are pretty neat.



The scene hierarchy goes like this:

  VPScreenManager
  └   VPScreen
      └   VPArea
          └   Various content

There are three basic entities:

  • Screen manager: Keeps track of all screens and provides methods to open/close/toggle them given a screen ID or a screen object reference. Those methods request the appropriate action from the screens themselves; nothing complex happens in the manager.

  • Screen: Holds a part of the UI you want to show or hide from the user in its entirety. A header bar, sidebar, window, photo gallery, contact window, browser tab - all of those and many more are screens. Screens are split into two distinct types: isolated and unified. Only one unified screen can be visible at a time: a browser tab or an app section like settings, for example, belongs to that type. The screen manager ensures that whenever a unified screen is asked to show, every other unified screen is hidden. Isolated screens, on the other hand, have no connection to anything else: things like sidebars and pop-up overlays belong to that type. They do not care what else is open at the time, and the screen manager makes no attempt to hide other screens when an isolated one is called. Screens also expose onEntry and onExit event delegate lists, allowing very easy control over presentation: for example, a sidebar screen can call the overlay manager on entry and exit to get the dark overlay obscuring the screens below, as you see in the gifs. Another example is a page with a list of notes that requests a refresh of the list from a controller before it is presented - again, you can see this in a gif above. Both the isolation toggle and the delegate lists remove any need to maintain inconvenient piles of event delegates on entities like screen switch buttons - you no longer need to explicitly set up what should be shown, hidden, and called from every single navigation button. (See the sketch after this list.)

  • Area: The foundation of a screen; every screen contains at least one area. The area view presenter wraps an NGUI UIPanel and performs the actual alpha/position changes when show/hide/toggle is requested by its parent screen. You can customize the entry direction and position shift. For example, the sidebar in the gif above is a screen with one area, a Left entry direction and a position shift equal to its width. A screen can contain multiple areas to allow complex depth setups or complex entry animations - like a mosaic that flies into the screen at different speeds and from different directions. Aside from panel control and entry/exit animation, an area does little, and it's usually not accessed directly by any entity but its parent screen.
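
For illustration, here is a condensed sketch of the manager/screen contract described above - names and details are simplified, plain Action lists stand in for the actual delegate lists, and the real entities also drive their areas:

    using System;
    using System.Collections.Generic;
    using UnityEngine;

    // Condensed sketch of the unified/isolated screen logic.
    public class VPScreen : MonoBehaviour
    {
        public int screenId;
        public bool isolated;                              // isolated screens ignore everything else
        public List<Action> onEntry = new List<Action> (); // e.g. call the overlay manager
        public List<Action> onExit = new List<Action> ();

        public void Show () { foreach (Action a in onEntry) a (); /* ask child areas to animate in */ }
        public void Hide () { foreach (Action a in onExit) a ();  /* ask child areas to animate out */ }
    }

    public class VPScreenManager : MonoBehaviour
    {
        public List<VPScreen> screens = new List<VPScreen> ();

        public void ShowScreen (int id)
        {
            VPScreen target = screens.Find (s => s.screenId == id);
            if (target == null) return;
            // a unified screen hides every other unified screen; isolated ones touch nothing
            if (!target.isolated)
                foreach (VPScreen s in screens)
                    if (s != target && !s.isolated) s.Hide ();
            target.Show ();
        }
    }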

None of those three create anything visual - they do not control a single sprite. Once they are set up, though, you can drop a wide variety of view presenters under any area, including your own, to actually create a visible layout and allow interaction. A few examples:

  • Sheet: Creates a simple shadow-casting paper sheet and provides some exposed properties like the types from the guidelines (card vs tile) or shadow depth. Can be used for initial layout setup.

  • Separator: Creates a separator line across the selected widget in a selected direction with optional margins - useful when your layout consists of multiple docked paper sheets that never travel independently - you can then just draw one area and split it with lightweight separators.

  • Button: Well, creates a button. A very rich element - it allows you to select one of the five button types from the Google guidelines (flat rectangular, raised rectangular, icon-supported flat rectangular, round floating action, and flat icon buttons), provides all imaginable properties depending on the type (icon reference, colors, text, and so on) and provides an event delegate list for subscription. Optionally, it can also pass a bool, int, float, string or object argument on click. That's how the auto-generated buttons in the note list depicted in the gif above work - they have no custom components on top; they simply send an int with a document ID to allow the controller to open the right one. Again, that means less clutter and less time spent on manual setup.

  • Switch: Similar to the button, but provides one of the three switch types from the Google guidelines (toggle, radio or checkbox). Keeps track of required objects, provides an event delegate list, and so on.

  • Content template: A simple entity that creates one of the few very widespread content types within the bounds of a widget (uniform text, titled text, dialog, etc.). Keeps track of proper per-type anchoring, clamps the widget size to prohibit inappropriate rescaling, and so on. Useful when you need to quickly whip up a layout filled with simple text: just create a paper sheet and anchor a content template to it.

  • Input field: Creates an input field with all the fancy line control, hint handling and other niceties dictated by Google guidelines. You can see it both in the gifs above and in the previous post.

It's also extremely easy to create your own view presenter entities if your application has some unusual elements not covered by the existing types. I added two:
  • Screen announcer: Subscribes to the screen manager and feeds the name of the currently active screen to a label, with a fancy animation. You can see it in the upper left corner of the gifs above.

  • Scroll list card: A simple entity that mostly wraps existing types. Creates a paper sheet, covers it with a flat rectangular icon button, adds a scroll view drag component, a text preview label and some other minor things. It allows the sample controller from the gifs above to set up that scrollable document list more easily.

The whole application depicted in the gifs takes a few hours to set up at most: the controller and data models are pretty short, and the UI work is mostly dragging ready-made entities around the scene view. It's not a static demo - it's a data-driven UI that loads documents from files and saves them back. Pretty neat.

Next time I'll try something more complex, maybe an inventory with tabs, dropdowns and previews.

48
Misc Archive / Re: Building Material Design UI with NGUI
« on: October 22, 2014, 05:52:33 PM »
It can be a simple duplicate of an existing shader with a space and "1" appended to the name - as it's a subtractive shader that can only operate against the content of a panel anyway, there is no need to actually clip what it does. Just make sure NGUI finds the separate version of the shader, which is why you need that new file.

49
I'm trying to make clean UI setup a bit easier, and one of the common tools for that is a grid with snapping. NGUI already features very nice snapping of widgets to other widgets, so naturally, I'm wondering if it's possible to exploit that.

I have tried a brute force solution: writing a very basic grid manager that just calculates how many rows and columns it needs at a set step size to cover the screen, and creates a managed set of widgets aligned to the resulting grid. Roughly like the sketch below (widget setup simplified):
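
    using UnityEngine;

    // Brute-force grid sketch: tile the screen with snap-target widgets at a fixed step.
    public class SnapGridManager : MonoBehaviour
    {
        public int step = 64;

        [ContextMenu ("Rebuild grid")]
        void Rebuild ()
        {
            int columns = Mathf.CeilToInt ((float) Screen.width / step);
            int rows = Mathf.CeilToInt ((float) Screen.height / step);
            // 1280x720 at a 64 step: 20 x 12 cells = 240 widgets, hence the scene view slowdown
            for (int x = 0; x < columns; ++x)
            {
                for (int y = 0; y < rows; ++y)
                {
                    GameObject go = new GameObject ("SnapCell " + x + "x" + y);
                    go.transform.parent = transform;
                    go.transform.localPosition = new Vector3 (x * step, y * step, 0f);
                    go.transform.localScale = Vector3.one;
                    UIWidget w = go.AddComponent<UIWidget> ();
                    w.width = step;
                    w.height = step;
                }
            }
        }
    }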



But it's obviously not a very nice thing to do, because even a tiny 1280x720 screen requires a whopping 240 widgets to cover at the commonly used 64dp grid step. It works on my PC, but it noticeably slows down the scene view and interferes with on-click selection of the widgets I actually want to drag. It could probably be optimized into a checkerboard pattern, because half of the widget sides end up unused by snapping anyway, but it's still not a very convenient approach.

So, it would be neat if there were a way to trigger widget snapping with something more elegant and less intrusive. Maybe not a grid at first; I think the idea can be distilled to a more basic entity: a guide line, similar to the ones Photoshop and Illustrator provide, that snaps widget borders in the direction perpendicular to itself. A reference grid is, after all, just a set of such guides.

So, is it possible to implement such a line tool? I would guess it won't be able to inherit from UIWidget due to the required dimensions (no width) and due to corners/sides being overkill. I'm also unsure how to bring such an entity into consideration in the UIWidget snapping code - at a glance, I was not able to fully wrap my head around the way snapping is implemented there.

P.S.: To clarify, by snapping I mean the in-editor nudging of NGUI gizmos when the user moves them near a suggested position. Nothing related to anchoring.

50
Misc Archive / Re: Building Material Design UI with NGUI
« on: October 18, 2014, 10:40:05 AM »
Oh, that's a nice solution.

As for the prefabs, to be honest, I'm not much of a fan of them in UI. They are extremely useful for things like:

  • Resolving the problem of scene editing in a version control environment: don't edit Unity scene files, split the scene into prefabbed sections, let environment artists update the prefabs, voila
  • Serving as blueprints for instantiation of objects like level props

But I'm skeptical about using them for reusable components, because usually you want a lot more than an object properly instantiated from a blueprint: you want an object that updates itself to stay compliant with the latest reference design, an object that repairs itself if something is missing. You want every single button and checkbox in your project to update themselves (talking about the editor environment, not runtime, of course) the instant you update their design. And importantly, you want all that for objects combined into an intricate nested hierarchy, where some instances control others but must be updated independently. Unity prefabs can't offer that. So I construct stuff without them, and doing so directly gives me buttons, fields, etc. that are harder to break, that use the very latest configuration and look no matter how and when you create them, and that don't require you to hand-check every scene wondering whether your changes to the reference were properly distributed to the copies.

It's a bit more rigid approach - one that won't let users slap ten effects onto a button and distribute that with one click on "Apply" at the top of the inspector - but when the appeal of the system is the replication of a rigid design framework in the first place, I guess that's not exactly a problem :) And if they really want to, it's simple to do so through code.

P.S.: Did some work on screen control.



At the moment it works like this:

  • The UI is split into overlays and screens, each controlled by its own manager

  • Overlays include the touch ripple panel, the focus ripple panel, and things like the full-screen fills that appear behind a sidebar or dialog when it opens. Overlays are not concerned with each other, and their manager only exposes methods to call them in isolation: for example, to create a ripple somewhere, or to block the screen for something. Overlays don't care if they are opened or otherwise used simultaneously; they each fulfill a different role. That whole part is created automatically.

  • Screens are containers that define what the user sees when interacting with a certain distinct area of an application. In various situations they could be called tabs or windows, but those words just describe the look. Screens themselves have a bare minimum of functionality: their manager keeps track of them all and provides exposed methods to show a certain screen, either by direct reference or by ID number. The manager takes care of closing previously opened screens and other mundane stuff like that.

  • Screens are parents to areas: every screen can have one or multiple areas. An easy way to think about them is as the paper sheets from the Google guidelines - you drop them onto the canvas to create layouts, you use them to house your content, you slide them in and out of the screen. Areas implement the show/hide functionality called by their parent screen (optionally exposing slide-out direction and distance, so you can create the complex transitions Google heavily employs) and are the first entities to actually create anything visual: areas optionally control the creation of the "paper" sprites along with their perceived depth (through the shadow effect described above). Each area houses a UIPanel, so it's also the first and last entity that lets you control relative depth between parts of the UI layout.

  • After that, you can do whatever you want within the bounds of an area. You can of course jump directly into adding labels, buttons and sprites, but there are a few abstracted entities to make common use cases easy: for example, a Content entity that creates a widget and sets up a few of the most common content types anchored within it (for example, text with a header and proper margins, or a dialog layout). Another example is a ScrollView entity that sets up a panel, scroll view, table, scroll bar and some other things for you, allowing control through a simple parent widget.

So, the simplest UI setup goes like this:

  • Create screen manager object
  • Create a screen underneath it
  • Create an area in that screen
  • Create a content entity in that area, matching its dimensions and set to one of the predefined types
  • Interact with content entity (view presenter) from your controller

Of course, no one is stopping you from setting up whatever internal layout you want in every area using labels, buttons, switches and separators (all of those are parent entities wrapping and constructing a certain NGUI setup, of course - you don't have to deal with setting up the tweens in a switch animation or the shadows in a button).

Creating any entity type is a matter of adding a component to an empty GameObject. The component checks which objects it requires and, if they are not present, creates them following built-in presets (for a flat rectangular button, that means creating one sprite, one label, a control widget and a collider for it). Presets can be very varied and can be switched on the fly - a switch can transform itself into a radio, checkbox or toggle slider with a single enum selection in its inspector. The component also provides a method (and a context menu option) to destroy itself along with all connected objects and components, which is handy when you don't want to clean that stuff up yourself.
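
A sketch of that self-constructing pattern, assuming NGUI's NGUITools helpers (the class name and preset contents are illustrative):

    using UnityEngine;

    // Sketch: check required objects on enable, build missing ones from a preset,
    // and offer a context menu option that removes everything the component constructed.
    [ExecuteInEditMode]
    public class ButtonPresenter : MonoBehaviour
    {
        public enum Preset { FlatRectangular, RaisedRectangular, FloatingAction }
        public Preset preset = Preset.FlatRectangular;

        [SerializeField] UISprite background;
        [SerializeField] UILabel label;

        void OnEnable ()
        {
            // flat rectangular preset: one sprite, one label, a collider on the control widget
            if (background == null) background = NGUITools.AddChild<UISprite> (gameObject);
            if (label == null) label = NGUITools.AddChild<UILabel> (gameObject);
            if (GetComponent<Collider> () == null) NGUITools.AddWidgetCollider (gameObject);
        }

        [ContextMenu ("Destroy with constructed objects")]
        void DestroyEntity ()
        {
            if (background != null) NGUITools.Destroy (background.gameObject);
            if (label != null) NGUITools.Destroy (label.gameObject);
            NGUITools.Destroy (this);
        }
    }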

51
Misc Archive / Re: Building Material Design UI with NGUI
« on: October 18, 2014, 07:55:00 AM »
After actually making a reasonably complex project with it, yeah, probably.

I just need that to ensure I'm not making some impractical monstrosity with awful architecture that drives you insane within a minute of attempting to make a UI with it. So far I'm starting with simple stuff (say, a codex app with a simple screen hierarchy, articles, settings, etc.), checking what drives me insane in the process, and solving the issues that do. For example:

  • Dislike redoing the work to create every dialog box? Abstract that object into a component that sets everything up for you, exposing just the size, actions and text to configure.
  • Dislike how much you have to hardcode while setting up buttons for that dialog? Create a better abstracted button class that can set up what you need with just one line.
  • Dislike having to set up icon buttons, FAB buttons, rect buttons and sidebar buttons separately with different objects? Come up with a way to combine them all into one button class that can switch between every type.
  • Dislike how that makes the button object bloated with children that are frequently disabled and unused? Improve your code so that only the objects necessary for the current type are maintained.
  • Dislike having to recheck the referenced objects and recreate them with proper configuration using kilometer-long code? Write a utility class that can check your references and replace them if they are missing, creating the sprites, labels, control widgets, textures, tables and so on with one line, letting you drop boilerplate code from all abstract components.
  • Dislike setting up guideline-compliant colors through Color for every single object? Create a library that can be referenced instead.
  • Dislike having to open a calculator to recheck how DP size values from the guidelines scale into pixels in XXHDPI space? Create an in-editor tool that gives you the values directly and provides grid info.

And so on.

So far it's going nicely, but there is still a lot of work to do. No blocking issues though, unlike in the beginning, when I had no idea whether it was even possible to replicate the required effects.

P.S.: By the way, it would be nice to add onSelect and onDeselect event delegate lists to UIInput in addition to the existing two. All the methods already exist; it's a matter of declaring the lists and adding .Execute in two new places, nothing else required. Having them enables a lot of interesting things, including the hint behavior in the gifs above, so it would be nice to have them by default - so far it's the only change I had to make to NGUI code, and I would obviously prefer to stay away from changes like that where possible, to keep updating easy.
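
For reference, the change looks roughly like this (paraphrased from memory, not a verbatim diff of UIInput.cs):

    // Inside UIInput: declare the two new lists next to the existing onSubmit/onChange...
    public List<EventDelegate> onSelect = new List<EventDelegate> ();
    public List<EventDelegate> onDeselect = new List<EventDelegate> ();

    // ...and add the two Execute calls where the input reacts to selection changing:
    protected virtual void OnSelect (bool isSelected)
    {
        if (isSelected) EventDelegate.Execute (onSelect);
        else EventDelegate.Execute (onDeselect);
        // existing selection handling continues below
    }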

52
Misc Archive / Re: Building Material Design UI with NGUI
« on: October 17, 2014, 05:13:15 PM »
Spinner: controlled through a method taking a 0-1 float argument, which can be fed the progress of some operation, or alternatively just driven from delta time in Update.
The method constructs the color by sliding through HSB, while rotation and fill completion are evaluated from two relatively simple custom AnimationCurves that intersect to provide the catch-up impression. A sketch of the control side (curve shapes and the HSB sweep are illustrative; Color.HSVToRGB stands in for whatever HSB conversion is handy):
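
    using UnityEngine;

    // Sketch of the spinner: one 0-1 progress value drives hue, fill and rotation,
    // with two intersecting curves producing the catch-up impression.
    public class SpinnerPresenter : MonoBehaviour
    {
        public UISprite fillSprite;           // a Filled-type sprite with radial fill
        public AnimationCurve rotationCurve;  // the two curves intersect for the catch-up feel
        public AnimationCurve fillCurve;

        public void SetProgress (float t)     // 0-1; operation progress, or driven from delta time
        {
            t = Mathf.Repeat (t, 1f);
            fillSprite.color = Color.HSVToRGB (t, 0.8f, 0.9f);  // slide through hue
            fillSprite.fillAmount = fillCurve.Evaluate (t);
            transform.localRotation = Quaternion.Euler (0f, 0f, -360f * rotationCurve.Evaluate (t));
        }
    }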



Multiline input:


53
Misc Archive / Re: Building Material Design UI with NGUI
« on: October 17, 2014, 07:43:14 AM »
Input fields in two varieties (yet to make a multiline one):



Works with any content text size, automatically adapts the control widget size to make guideline-compliant spacing easy, and can work on any background.

54
NGUI 3 Support / Re: 9-Sliced sprite as a mask for texture
« on: October 16, 2014, 07:06:09 AM »
Occam's razor, guys.

You have something that can be achieved easily using two clipped regions: one regular, and another tilted 45 degrees, creating a second set of sliced corners.

So do just that :) - two clipped panels affecting your tiled sprite.

It's implied they want every single UI element to be styled like that, which makes that setup (two panels per rect element) practically impossible :P

55
NGUI 3 Support / Re: 9-Sliced sprite as a mask for texture
« on: October 16, 2014, 06:17:07 AM »
First of all, if you absolutely need to keep the background a separate sprite, give it an identical alpha and slicing setup. Computing actual clipping to add it behind the frame is overkill.

Now, for the tiled texture overlay - yeah, that's not possible using exclusively the existing UI elements. There are two ways out of this:

A: Rethink the design to remove the need for tiled areas in all border quads, then use the Advanced sprite type, which allows you to set the fill type individually on a per-quad basis, and use the Tiled type for the central quad. Or maybe create an inward transparency fade in the sliced sprite, disable the fill of the central quad and insert an anchored tiled sprite underneath - that works too and would let you get the tiled pattern closer to the corners (although you'll still have to use the inward halo with a flat color).

B: Use the approach practically every game with heavy effects on UI elements uses: post-processing through shaders. Create a shader and a component following the Image Effect pattern in Unity that overlays your UI camera with a scrolling texture masked by the alpha of the UI camera output.

I'd recommend the latter: it's clean hierarchy-wise, and it gives you enormous freedom to do things like glitches and varied scrolling control at a completely flat cost.
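
A minimal sketch of option B, with a hypothetical overlay shader doing the actual masking:

    using UnityEngine;

    // Image Effect sketch: blit the UI camera output through a material whose shader
    // (hypothetical here) blends a scrolling tile texture on top, masked by the
    // alpha of the UI camera output.
    [RequireComponent (typeof (Camera))]
    public class TiledOverlayEffect : MonoBehaviour
    {
        public Material overlayMaterial;  // material using the hypothetical overlay shader
        public Vector2 scrollSpeed = new Vector2 (0.1f, 0f);

        void OnRenderImage (RenderTexture source, RenderTexture destination)
        {
            overlayMaterial.SetVector ("_Scroll", Time.time * scrollSpeed);
            Graphics.Blit (source, destination, overlayMaterial);  // shader samples source alpha
        }
    }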

56
I'm making a few abstract elements that use UIWidget for the width, height and depth control and other niceties it provides. It would be really handy if I could disable and/or clamp the scene view handles that set the UIWidget width/height, but I'm not sure I can implement that without intruding into NGUI editor code with edits that would be purged by a framework update. A few use cases, to provide context:

  • A horizontal separator element that has a fixed height and can only be dragged by the corner gizmos (with only the width being altered while the height is clamped to a certain value) or by the side gizmos
  • A card element that can assume different layouts (empty, filled with uniform text, filled with uniform text and header text, filled with uniform text, header text and dialog buttons), which will actively enforce certain limits on minimum width and height depending on the selected layout (to prohibit setting it to a size that would, for example, make all text lines disappear)

So, what would be the best way to clamp width/height of a UIWidget in editor?

I can obviously do that by directly accessing those properties, but:

  • Doing so in the inspector update seems to be too late, as the clamping takes effect only after you let go of a gizmo, and sometimes gets lost altogether if the gizmos sent their last update after the inspector applied its last one. There is probably a cleaner way to override the size.
  • Doing so creates drift if the transform position was recalculated by the widget during the rescale (and that's usually the case when you rescale widgets, with the exception of rescaling opposite sides with side-aligned pivots)
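
For reference, the naive direct clamp I mean looks roughly like this - a sketch that suffers from exactly the problems above:

    using UnityEngine;

    // Naive in-editor clamp: force the widget back into range every update.
    [ExecuteInEditMode]
    [RequireComponent (typeof (UIWidget))]
    public class WidgetSizeClamp : MonoBehaviour
    {
        public int minWidth = 64;
        public int maxWidth = 4096;
        public int fixedHeight = 48;  // e.g. the separator case: height is always forced

        void Update ()
        {
            UIWidget w = GetComponent<UIWidget> ();
            w.width = Mathf.Clamp (w.width, minWidth, maxWidth);
            w.height = fixedHeight;
        }
    }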

57
Misc Archive / Building Material Design UI with NGUI
« on: October 14, 2014, 03:23:26 AM »


Thought I should post about this work outside of the support threads. Exhibit A:


Long story short, I got a bit tired of implementing controls from scratch in every project, and of unstructured UI workflows in general. Seriously, why am I still using awkward, half-assed window managers that are created anew every time, why do I have to deal with setting up tweens and sprite references when adding a spinner, and why do I need custom switches, buttons and show-hide sequences every time? I shouldn't be doing that.

So I started working on a coherent MVC-based foundation that will allow me to create interfaces that are quick to set up, easy to maintain and easy to expand.

While at it, I thought to myself: wouldn't it be wonderful to have not just nice code providing reusable elements, but also those beautifully implemented controls from Material Design by Google that native Android developers enjoy? Wouldn't it be nice to have Unity applications that can fool a user into believing they are native? And anyway, how hard would implementing the controls from the Material Design guidelines be?

________________

Turns out they are quite complex, but every single one of them can be implemented without atrocious hacks or performance-hungry workarounds like frame-based animations. For example, those radio buttons are just three overlaid dots that require no custom masking - just the proper order of color and scale tweens.



The most complex things here are the touch effects and shadows. Those were a complete mystery to me - for all I knew, Google implemented them with magic. Check these animations:


The only idea I had at first was using NGUI panel clipping in every element, but that was unacceptable from a performance standpoint and would have cluttered the hierarchy - and it would only allow the radial ripples, without addressing the even more mysterious second part of the animation: the inverse erasing of the expanding ripple sprite, which can't be achieved through traditional rect-based clipping at all. But as it turns out, it can be implemented, at almost no performance cost, and with that double clipping from within.

You set up a separate UIPanel set to hard edge clipping, with a component that can set its dimensions and position, a singleton to access it, and a child ripple sprite that can tween through the touch animation. Any touchable widget can call the singleton and invoke a method (using itself as the argument) on touch, which repositions the UIPanel clip area to the dimensions of the widget passed in and starts the ripple animation at the clicked point of the screen.
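
In sketch form (names illustrative, pivot handling and tween details omitted):

    using UnityEngine;

    // Sketch of the ripple overlay: one hard-clipped UIPanel with a child ripple sprite,
    // exposed through a singleton that any touchable widget can invoke with itself as argument.
    public class RippleOverlay : MonoBehaviour
    {
        public static RippleOverlay instance;

        public UIPanel clipPanel;   // set to hard edge clipping
        public UISprite ripple;

        void Awake () { instance = this; }

        public void Play (UIWidget target, Vector3 worldTouchPoint)
        {
            // fit the clip region to the touched widget...
            clipPanel.transform.position = target.transform.position;
            clipPanel.baseClipRegion = new Vector4 (0f, 0f, target.width, target.height);
            // ...then start the ripple from the touched point
            ripple.transform.position = worldTouchPoint;
            ripple.transform.localScale = Vector3.zero;
            TweenScale.Begin (ripple.gameObject, 0.35f, Vector3.one * 4f);
        }
    }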



Now the only thing left is the second clipping: erasing the ripple sprite from within. That one is achieved by creating a clipping-compatible depth cutout shader (no need to modify the example, just give NGUI a properly named duplicate) and applying it to a UITexture with the circle texture, then moving that object outside of the UI plane to let depth testing kill pixels within the panel. Once that is set up, all you need to do is tween the ripple sprite first and the eraser sprite second, and you get yourself that sexy impossible ring that is required for every clickable element in Material Design.
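
The ordering, in sketch form (timings illustrative):

    using System.Collections;
    using UnityEngine;

    // Sketch of the double animation: the ripple sprite expands first, then the depth-cutout
    // eraser follows from the same point, hollowing the ripple out into the expanding ring.
    public class RippleRing : MonoBehaviour
    {
        public UISprite ripple;   // child of the hard-clipped panel
        public UITexture eraser;  // circle texture with the duplicated depth cutout shader

        public IEnumerator Play (Vector3 touchPoint)
        {
            ripple.transform.position = touchPoint;
            Vector3 p = touchPoint;
            p.z = eraser.transform.position.z;  // keep the eraser off the UI plane for the depth test
            eraser.transform.position = p;
            ripple.transform.localScale = Vector3.zero;
            eraser.transform.localScale = Vector3.zero;
            TweenScale.Begin (ripple.gameObject, 0.35f, Vector3.one * 4f);
            yield return new WaitForSeconds (0.1f);  // the lag between the two forms the ring
            TweenScale.Begin (eraser.gameObject, 0.35f, Vector3.one * 4f);
        }
    }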



________________

Another area where Material Design presents a huge challenge is the dynamic shadows. They are not in any way similar to the standard static drop shadows that people bake into sprites.


They are dynamic, with every rectangular element capable of lifting back and forth through multiple depth levels, casting a very smooth shadow of variable feathering radius. That's extremely problematic. But as it turns out, it can be implemented too, with some clever trickery. Take a look:


To do this, I prepare a sliced sprite with a rectangular shadow and assign it to a sprite anchored, without any offsets, to my card. There is no need to do this manually - I just add a "sprite shadow" component to a UISprite object and everything is set up automatically (and cleaned up when the component is removed).

The desired look, with a variable feathering radius, is impossible to achieve with the standard sliced sprite behaviour and anchoring in NGUI. So, with that custom component, I subscribe to the fill event of the sliced shadow and directly modify the positions of its 36 vertices: only the central quad (instead of all the quads) is controlled by the sprite dimensions and anchoring, while the other quads are pushed outward by an offset calculated from the depth; finally, a certain curve is sampled to get the proper shadow intensity. Ah, and the sprite is offset downward a bit, depending on the depth.
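
In sketch form, assuming the post-fill callback NGUI exposes on widgets (signature paraphrased from memory; the offsets and intensity handling are illustrative):

    using UnityEngine;

    // Sketch of the shadow component: subscribe to the sliced shadow sprite's fill callback
    // and push the outermost vertices outward by a depth-derived offset, so the central quad
    // stays glued to the card while the border quads provide the variable feathering.
    public class SpriteShadow : MonoBehaviour
    {
        public UISprite shadowSprite;     // the sliced shadow, anchored to the card without offsets
        public float depth = 2f;          // perceived elevation level
        public AnimationCurve intensity;  // shadow alpha sampled per depth

        void OnEnable ()
        {
            shadowSprite.onPostFill += PushBorderQuads;
            shadowSprite.alpha = intensity.Evaluate (depth);
        }

        void OnDisable () { shadowSprite.onPostFill -= PushBorderQuads; }

        void PushBorderQuads (UIWidget w, int bufferOffset, BetterList<Vector3> verts,
            BetterList<Vector2> uvs, BetterList<Color32> cols)
        {
            float feather = depth * 4f;
            Vector3 min = verts[bufferOffset], max = verts[bufferOffset];
            for (int i = bufferOffset; i < verts.size; ++i)
            {
                min = Vector3.Min (min, verts[i]);
                max = Vector3.Max (max, verts[i]);
            }
            for (int i = bufferOffset; i < verts.size; ++i)
            {
                Vector3 v = verts[i];
                if (v.x == min.x) v.x -= feather; else if (v.x == max.x) v.x += feather;
                if (v.y == min.y) v.y -= feather; else if (v.y == max.y) v.y += feather;
                v.y -= depth;  // slight downward offset, growing with the depth
                verts[i] = v;
            }
        }
    }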

________________

Not sure if that webm hosting has traffic limits (sites accepting 15s+ files are hard to come by), so just in case, here is a mirror (379kb):


________________

P.S.: To the inevitable question of "why not uGUI" - well, I really prefer NGUI for a number of reasons.

  • First, uGUI will never have the sort of personal tech support that NGUI has had for years. The Unity forums and issue tracker are fine and dandy, but not really comparable to the developer himself answering every single question.
  • Second, I prefer depth-based draw order to hierarchy-based draw order and dislike uGUI's dependency on hierarchy sorting.
  • And third, simply by virtue of existing for a very, very long time, NGUI has thousands of threads, posts, docs, tutorials and other things that help you learn faster and successfully find solutions on the net for most of the problems you might encounter. uGUI will get there over time, but it has not accumulated that amount of material around itself yet.

58
There is a flat subtype too; the shadow is just a hover effect for rarer buttons that could otherwise be lost in busy content. :)
Same sort of deal as separating content with flat seams vs separating content into floating cards with shadows.


59
Disregard that, I'm an idiot. :)
It's a subtractive shader and it's already limited to the contents of the panel, so there is no need to rewrite anything at all - a simple duplicate will do. Time for even more exotic touch effects!


60
I have to admit I'm not terribly familiar with the abstracted methods Unity exposes for shaders - I'm more comfortable with vertex/fragment code that doesn't obfuscate anything - so I'm having a bit of trouble understanding what is happening in the lowest LOD of the NGUI shaders. In particular, I'm having trouble creating a panel-clipped subtype of the Depth Cutout shader that was provided as a sample with NGUI. Since, in contrast with the standard colored shader, the depth cutout shader has no high-LOD vertex/fragment section at all, I'm not really sure what I should alter in the LOD 100 section to make it compliant with panel clipping.

How can I approach this?

