Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - bac9

Pages: [1] 2 3
1
Simple to reproduce: make a label, type something ("lorem ipsum"), set the label mode to Resize Freely, then drag the X spacing around. Previously, the calculated widget size accounted for spacing; now it stays exactly the same no matter the spacing, so labels break onto more and more lines as spacing grows:



It's probably related to the recent changes in NGUIText and UILabel; I'm checking how simple it is to fix now.

2
NGUI 3 Support / Looks like Unity 5.6 broke atlas generation
« on: May 05, 2017, 01:33:22 AM »
Fairly easy to reproduce: regenerate the atlas over and over (e.g. by making small changes to one of the bitmaps included in it and invoking the add-to-atlas context option). Random sprites have a chance of getting corrupted boundaries and dimensions - for example, a number of 48x48px .png files which were previously imported into the atlas correctly turned into 30x30px and 23x16px sprites in some cases. Not sure what triggers it; maybe 5.6 changed something NGUI relied on to determine image dimensions?

3
Simple example: let's say you have a GameObject in your DontDestroyOnLoad hierarchy which you move to various world-space positions you want a UI widget to be anchored to. Internally, that case works like this: a scene-rendering camera is retrieved and used to convert the world position of that GameObject to the screen position of the anchored widget. Example in motion (the line endpoints and the big white marker are all done this way).


[WebM demonstration]

There is one case where the current implementation fails, though: when your game has an "immortal" (DontDestroyOnLoad) UI hierarchy travelling through multiple scenes, each scene with its own scene camera. The scene camera used for anchor conversions is fetched once, in the widget's Awake, so that camera reference goes null immediately after you load another scene, permanently disabling world-space anchoring on the widget.

It's pretty simple to fix, though. Just add a null check to the top of the UIRect.GetLocalPos method:

  protected Vector3 GetLocalPos (AnchorPoint ac, Transform trans)
  {
      if (ac.targetCam == null) ac.targetCam = Camera.main;
      ...

With this, UIRect will recover the target camera once it becomes null. Maybe there is a more elegant way of doing that, but this fixed our issues with world-space anchored widgets becoming inert on every scene load. Might be worth adding to the next patch. :)

4
Let's say I have a label which grows vertically to fit texts of different lengths - for example, item descriptions. Anchored to the label there is a widget offset by, say, -24 on the left, 24 on the right, 24 on the top and -24 on the bottom. In turn, anchored to the top, left and right (never the bottom) of that widget are various secondary child elements: a clamped header with an item name, item preview images, fixed-size lines with item stats and so on. To illustrate, here is how this looks:

Game camera view:


Scene view of the primary label and sole widget directly parented to it:


Scene view of the secondary elements anchored to the boundary widget:


There is a small problem with that sort of setup. While it works like you see on the gifs in Edit mode, Play mode is a different story. Unless you set every single anchor to the OnUpdate refresh mode (which you definitely don't want to do if you want to avoid the GC pressure from anchoring operations running every frame), there is no way to actually force a refresh on all those elements after you change the text on that label. I've tried a lot of methods exposed by UIPanel and looked at how on-anchor refreshing is done in UIRect/UIWidget/UIPanel, and I think there is no standard way to do that for OnEnable/OnStart anchors.

Please correct me if I'm wrong and actually missed some method on UIPanel or another NGUI class meant to force a refresh of all anchored widgets. I went on with the assumption that no such method exists and tried to force it manually, with my own methods. I focused on devising a method that would force a correctly ordered anchor refresh on every single widget under a certain panel - while forcing an anchoring update of an individual widget is easy, it's not a useful approach, since it would require every anchoring-influencing object to keep explicit references to all of its anchored dependents. I just wanted to call something like panel.ForceAnchorRefresh () from my label, which keeps the UI nicely decoupled. Here is what I got:

  public void ForceAnchorRefresh ()
  {
      ForceAnchorRefreshRecursive (transform);
  }

  private void ForceAnchorRefreshRecursive (Transform parent)
  {
      UIRect rect = parent.GetComponent<UIRect> ();
      if (rect != null)
          rect.UpdateAnchors ();

      for (int i = 0; i < parent.childCount; ++i)
          ForceAnchorRefreshRecursive (parent.GetChild (i));
  }

This works great, with the sole exception of cases where widgets that influence other widgets without being their parents aren't guaranteed to update first. But that's easy to prevent if you structure your UI with strict parenting reflecting anchoring dependencies. Is there anything wrong with this approach, or is it a sound way to force a refresh of OnStart/OnEnable anchored widgets after their dependency was resized?
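For completeness, here is roughly how I wire it up on the label side. Treat this as a sketch: PanelAnchorRefresher is my own name for a small component hosting the ForceAnchorRefresh () method above, not an NGUI class.

```csharp
using UnityEngine;

// Hypothetical wiring on the label side: after changing the text, walk up
// to the enclosing panel and trigger the recursive refresh defined above.
public class ItemDescription : MonoBehaviour
{
    public UILabel label;

    public void SetText (string text)
    {
        label.text = text; // the label resizes itself to fit the new text

        var refresher = label.panel != null
            ? label.panel.GetComponent<PanelAnchorRefresher> ()
            : null;
        if (refresher != null) refresher.ForceAnchorRefresh ();
    }
}
```

This keeps the label unaware of which widgets depend on it - it only needs to know its own panel.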

5
To be specific, I'm referring to this piece of code in UIPanel:

  void LateUpdate ()
  {
      #if UNITY_EDITOR
      if (mUpdateFrame != Time.frameCount || !Application.isPlaying)
      #else
      if (mUpdateFrame != Time.frameCount)
      #endif
      {
          ...
I am getting enormous, 300-400ms interruptions in Editor window redraws every frame whenever I edit a label or move a UI widget, seemingly caused by the snippet above starting the redraw process for every single panel in the whole UI hierarchy when running in the Editor (the !Application.isPlaying case). Not sure why this is necessary - commenting out the !Application.isPlaying check and only letting the method continue if the first condition is satisfied doesn't seem to cause any issues whatsoever: all panels are correctly rebuilt when necessary, but widget editing no longer triggers lag from the whole UI rebuilding. Am I missing something? Is there a reason to redraw the whole UI every LateUpdate in the Editor without any additional conditions?
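For reference, the change I'm experimenting with boils down to this (a sketch; the surrounding method body is from memory and stays unchanged):

```csharp
void LateUpdate ()
{
    // Proposed change: use the same once-per-frame guard in the Editor
    // as in builds, instead of rebuilding on every editor repaint.
    if (mUpdateFrame != Time.frameCount)
    {
        mUpdateFrame = Time.frameCount;
        // ... rest of the original panel update logic, unchanged
    }
}
```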

6
NGUI 3 Support / PSA: WorldToViewportPoint used by NGUI seems to be broken
« on: November 05, 2016, 05:15:30 AM »
Just wanted to give a heads up: Camera.WorldToViewportPoint seems to return incorrect results whenever you hold a mouse button and call it from certain points in the execution order, e.g. from LateUpdate on a script with execution position 0. This might be linked to the bizarre OSX issue you have already encountered and worked around in Unity 5.4 (reported screen size going wrong whenever the mouse is pressed). So far we have reproduced it on two Windows PCs with Unity 5.4.2f1. I'm not yet sure which Unity 5.4 versions beyond 5.4.2f1 it affects; I haven't tested the latest patches yet.

The interesting detail is that only LateUpdate calls to Camera.WorldToViewportPoint seem to return incorrect coordinates - FixedUpdate and Update usually return proper ones (unless we're checking Update on late-executing scripts), even when the mouse is pressed. Might be worth going through the NGUI code to check whether there are any calls to Camera.WorldToViewportPoint from LateUpdate.
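A minimal repro script along these lines shows the discrepancy (attach it to any object, assign a camera and a world-space transform; what counts as "incorrect" will vary per setup):

```csharp
using UnityEngine;

// Minimal repro: log the same conversion from Update and LateUpdate so the
// two results can be compared while a mouse button is held down.
public class ViewportRepro : MonoBehaviour
{
    public Camera cam;
    public Transform target;

    void Update ()
    {
        if (Input.GetMouseButton (0))
            Debug.Log ("Update: " + cam.WorldToViewportPoint (target.position));
    }

    void LateUpdate ()
    {
        if (Input.GetMouseButton (0))
            Debug.Log ("LateUpdate: " + cam.WorldToViewportPoint (target.position));
    }
}
```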

7
Pretty weird editor-side error; I'm not sure what the cause could be. It usually happens when the Editor loses focus, for example when you switch to your IDE or browser and come back later - at that point, there is a chance that UI widgets anchored to panels (for example, some top-level UIWidget holder attached to the top-right corner of its panel) disappear from the view of the UI camera and go into some extreme position like (-23223485, -6451236). This can only be fixed by reloading the scene or by pressing Apply on a prefab enclosing your UI hierarchy (which probably calls some method like OnEnable that reruns the parts of the UI code responsible for the issue).

I was very puzzled about the possible reasons behind this until, one day, I moved my main gameplay camera around the area where the UI camera and elements are situated. I've left the UI layer rendered by the gameplay camera so that you can see the situation clearly:


[WebM demonstration]

And here is how things are supposed to work, properly anchored to UI camera:


[WebM demonstration]

What might be causing this temporary use of a completely unrelated camera?

I'm using Unity 5.4.1f1 and an up-to-date version of NGUI. Nothing exotic about my setup, just one 2D UI in the scene, arranged into a traditional NGUI hierarchy (parent -> camera, root -> panels -> widgets).

8
NGUI 3 Support / Pixel-perfect bitmap fonts in NGUI - 2015
« on: August 20, 2015, 04:50:33 AM »
Most of the tutorials I can find on the net date back to 2012-2013, and NGUI has undergone many changes since, which makes me doubt whether I'm doing everything right. So I'll try to make a short tutorial on the subject - please correct me if anything I'm writing here is wrong.

Let's say I have a .ttf source font. A popular example is the 11px GohuFont:



How do I get it into NGUI with the same pixel-perfect crispness? Obviously, dynamic Unity fonts are out; I should be using an atlased bitmap font. As far as I can see, there is a Font Maker tool now.
Unfortunately, I was unable to get good results out of its Generated Bitmap mode: unless you leave the default size value of 16 unchanged, Font Maker creates a mess like this:



And setting glyph size to 16 is obviously incorrect for this font:



So I tried another approach - digging out BMFont and importing my .ttf there. I used the following settings:



I then used the resulting .txt definition file and .tga atlas file, plugging them into the NGUI Font Maker switched to Imported Bitmap mode.
The resulting font prefab looked more like it:



Except, obviously, the alpha was missing. I tried to change the BMFont export options with no luck; it insisted on exporting an RGB or grayscale image with no alpha. So I simply created a new image with a white fill in RGB and the BMFont output in A, and reimported the font using the NGUI Font Maker with that new texture. Now that's better - I'm getting an almost perfect replication of the results I wanted in the game view, especially with pixel-perfect snapping of widgets:



The only doubt left is texture filtering. No matter how well pixel-perfect alignment lines up your labels and sprites, wouldn't bilinear/trilinear filtering unavoidably dilute and dim the edges, especially pixel-thin lines like the ones used in such a font? If you need a 1:1 translation of pixels with no smoothing, why not set the filtering of your atlas to Point mode? Well, after trying it out, I'm not sure.



I'm unable to pick up any differences in label pixel colors between Point and Trilinear filtered atlases. It's not even subjective; Photoshop's Difference layer mode literally returns black everywhere.
So I guess Point filtering only makes sense in some exotic cases - for example, when you perfectly scale up your UI by 200 or 300 percent on XXXHDPI Android devices. In that case, crisp pixels on magnified sprites can only be achieved that way.
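If you do want to try Point filtering without touching the importer, it can be forced from code. This is a sketch - `fontAtlas` stands in for however you reference your UIAtlas, and I'm assuming the atlas exposes its texture directly:

```csharp
// Sketch: force nearest-neighbour sampling on the font atlas texture.
Texture2D atlasTexture = fontAtlas.texture as Texture2D;
if (atlasTexture != null)
    atlasTexture.filterMode = FilterMode.Point;
```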

Did I miss anything, or does that pretty much cover it all?

9
I'm having a bit of trouble with an exotic case: for a certain screen transition type, I parent an object with a panel hosting some widgets to another object with a texture-clipped panel, play some animations with the higher-level texture-clipped panel, then restore the hierarchy. Trouble is, I cannot figure out how to disable the moved panel for the duration of that scene without killing the rendering of all its child widgets.

To explain, here is the standard hierarchy:

Clipped panel (enabled)
Normal panel (enabled)
- Widget


And here is what happens for the duration of screen transition:

Clipped panel (enabled)
- Normal panel (disabled)
- - Widget


Problem is, the moment I disable the normal panel, every single widget stops rendering, probably because it's supposed to do that due to the UIPanel OnDisable method. But I don't want that: I want the widgets to continue being rendered, just using the higher panel in the hierarchy, the clipped one. And I'm unable to do that. So far I have tried lots of combinations and orders of setting .enabled on the normal UIPanel component, widget.ParentHasChanged () calls, direct widget.panel reassignment, panel Refresh () calls and some other methods, but nothing seems to work - absolutely invariably, once I disable the normal panel, every single widget linked to it disappears.

The only way to get the desired behavior I have found is to destroy the UIPanel manually from the inspector. That's obviously a very undesirable way of doing things: I'd prefer not to have the hassle of caching all panel properties to recreate it later, and I'd prefer not to risk GC rushing in during a one-second-long buttery-smooth animation. I just need to replicate what happens to child widgets when a panel is removed, without actually removing (only disabling) it. But I'm not seeing any OnDestroy behavior in UIPanel, so I'm not sure how exactly the widgets move under the command of a higher panel. It reliably happens and it's exactly what I need, but I'm unable to track how it's done. Can someone shed some light on that?

10
I'm having a bit of trouble with root rescaling and anchors. In short, when I change the root resolution (both in edit mode and in play mode), positions of anchored widgets are not updated correctly.

Here is a simple setup:

  • Anchor a sprite to a panel (say, a background snapped to edges)
  • Anchor another sprite to that first sprite (say, a window covering half of that background)
  • Anchor another sprite to that second sprite (say, a shadow of that window)

When you change the UIRoot resolution, widgets are properly updated, but anchor-based positions are not propagated, because those positions depend on other positions that have not been updated yet.

The only way I can get my UI in order right now is to manually enable and disable the UI root object three times, with every toggle fixing anchors one level deeper. I'd love to know if there is a way to instantly invalidate all anchors AND force their update in hierarchical order, so that the UI snaps itself together in one update instead of taking an unpredictable number of passes. Ah, and a few things:

  • Performance is not a concern; I only ever need that operation to run in edit mode (when I change screen density or rescale the window) and at the very beginning of a play session (when the root gets scaled appropriately for the detected device)
  • Every single anchor in the project is set to OnEnable, as I don't need any runtime anchor-based updates - but as far as I remember from the NGUI code, that has zero effect in edit mode, where anchors always update
  • I have tried panel.SetDirty (), panel.RebuildDrawCalls (), broadcasting the CreatePanel message and everything else I found in the documentation, but nothing provides the desired effect - lower-hierarchy elements only get their positions updated after manual game object toggling.

_____________________________

Edit: Ah nope, disregard that, I forgot about panel.Refresh (), which seems to work great here :)
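In case anyone hits the same wall, the working approach boils down to something like this (a sketch - `uiRoot` is whatever reference you keep to your UIRoot object):

```csharp
// After changing the UIRoot resolution, one Refresh () per panel re-runs
// anchoring in place - no game object toggling needed.
foreach (UIPanel panel in uiRoot.GetComponentsInChildren<UIPanel> ())
    panel.Refresh ();
```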

11
NGUI 3 Support / Interaction of UI root resolution vs game resolution
« on: October 25, 2014, 01:24:10 PM »
Can someone comment on how resolutions of the UI and of the Unity viewport interact? Let's consider three cases:

  • Viewport resolution of 1280x720, UI resolution of 1280x720
  • Viewport resolution of 1280x720, UI resolution of 3840x2160
  • Viewport resolution of 3840x2160, UI resolution of 3840x2160

Provided we follow the proper auto-layout practices advocated by Google, Apple, NGUI and everyone else who has tried to develop for multiple screens, all three cases should look absolutely identical when rendered into a 1280x720 screenshot. Now, what I'm interested in is not how the end result looks, but how difficult it was to render from the UI standpoint. I'm not interested in the actual per-pixel shader cost; it's obvious that as viewport resolution goes up, your GPU has more work to do. I'm interested in the performance hit of UI resolution.

From what I'm seeing so far, UI resolution affects only the scaling of objects inside the UI root, and consequently controls the scale of a pixel unit in relation to the real pixels of your screen. That alone is extremely unlikely to cause any difference in performance, I would guess - you don't get performance hits from changing transform scales and saving higher values into your int variables. Okay, what else... labels, textures, sprites. From what I'm seeing, true-type labels in NGUI are rasterized at the viewport resolution, never at the UI resolution, so it's reasonable to guess that the performance hit of labels is not at all different between cases 1 and 2. Sprites and textures, as well as bitmap font labels, are already rasterized; all that happens at runtime is your GPU sampling them, and between cases 1 and 2 that has an identical flat cost no matter how high or low the UI resolution is.

With that in mind, am I correct in assuming that cases 1 and 2 will have absolutely identical performance on every device? Essentially, I'm asking whether I overlooked any per-pixel calculations NGUI might perform in UI resolution space with a performance impact proportional to that resolution. So far I'm seeing none.

The reason I'm asking is simple - I would like to adopt DP instead of pixels, keep one high-res atlas and use root scaling to control the size of the elements in proportion to the screen, in line with how native UIs on Android and iOS do it depending on the DPI of the device. Knowing I can run a 4K UI root without any performance impact would relieve a great burden from my mind. :)

12
I'm trying to make clean UI setup a bit easier, and one of the common tools for that is a grid with snapping. NGUI already features very nice snapping of widgets to other widgets, so naturally I'm wondering if it's possible to exploit that.

I have tried out a brute-force solution: writing a very basic grid manager that just calculates how many rows and columns it needs at a given step size to cover the screen, and creates a managed set of widgets aligned with the resulting grid.
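The manager is nothing fancy - roughly this (a sketch with my own names; widget configuration details are simplified):

```csharp
using UnityEngine;

// Hypothetical sketch of the brute-force grid manager: one invisible
// UIWidget per cell, so standard NGUI widget-to-widget snapping kicks in.
[ExecuteInEditMode]
public class SnapGrid : MonoBehaviour
{
    public int step = 64;

    void BuildGrid (int screenWidth, int screenHeight)
    {
        int columns = Mathf.CeilToInt ((float)screenWidth / step);
        int rows = Mathf.CeilToInt ((float)screenHeight / step);

        for (int y = 0; y < rows; ++y)
        {
            for (int x = 0; x < columns; ++x)
            {
                var go = new GameObject ("GridCell_" + x + "_" + y);
                go.transform.parent = transform;
                go.transform.localPosition = new Vector3 (x * step, y * step, 0f);
                go.layer = gameObject.layer;

                UIWidget w = go.AddComponent<UIWidget> ();
                w.width = step;
                w.height = step;
            }
        }
    }
}
```

At 1280x720 with a 64dp step that's 20x12 = 240 widgets, which is exactly the problem described below.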



But it's obviously not a very nice thing to do, because even a tiny 1280x720 screen requires a whopping 240 widgets to cover at a commonly used 64dp grid step. It works on my PC, but it noticeably slows down the scene view and interferes with on-click selection of the widgets I actually want to drag. It can probably be optimized into a checkerboard pattern, because half of the widget sides end up unused by snapping anyway, but still, not a very convenient approach.

So, it would be neat if there was a way to trigger widget snapping with something more elegant and less intrusive. Maybe not a grid at first - I think the idea can be distilled to a more basic entity: a guide line, similar to the ones Photoshop and Illustrator provide, that snaps widget borders in the direction perpendicular to itself. A reference grid is, after all, just a set of such guides.

So, is it possible to implement such a line tool? I would guess it won't be able to inherit from UIWidget, due to the required dimensions (no width) and because corners/sides would be overkill. I'm also unsure how to take such an entity into consideration in the UIWidget snapping code - at a glance, I was not able to fully wrap my head around the way snapping is implemented there.

P.S.: To clarify, by snapping I mean in-editor nudging of NGUI gizmos when user moves them near a suggested position. Nothing related to anchoring.

13
I'm making a few abstract elements that use UIWidget for width, height and depth control and the other niceties it provides. It would be really handy if I could disable and/or clamp the scene view handles that set UIWidget width/height, but I'm not sure I can implement that without intruding into the NGUI editor code with edits that could be purged by a framework update. A few use cases, to provide context:

  • Horizontal separator element that has a fixed height and can only be dragged by the corner gizmos (with only the width being altered while the height is clamped to a certain value) or by the side gizmos
  • Card element that can assume different layouts (empty; uniform text; uniform text and header text; uniform text, header text and dialog buttons), which will actively enforce certain limits on minimal width and height depending on the selected layout (to prohibit setting it to a size that would, for example, make all text lines disappear)

So, what would be the best way to clamp width/height of a UIWidget in editor?

I can obviously do that by directly accessing those properties, but:

  • Doing so in the inspector update seems to be too late, as the clamping takes effect only after you let go of a gizmo, and sometimes gets lost altogether if the gizmo sent its last update after the inspector did. There is probably a cleaner way to override the size.
  • Doing so creates drift if the transform position was recalculated by the widget during rescaling (and that's usually the case when you rescale widgets, with the exception of rescaling opposite sides with side-aligned pivots)
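What I have right now is essentially a clamp applied from a companion component, roughly like this. Consider it a sketch of the problem rather than a solution - it suffers from exactly the late-update issues described above, and all names are my own:

```csharp
using UnityEngine;

// Hypothetical clamp component for the separator case: enforce a fixed
// height and a minimal width on the UIWidget on every editor update.
[ExecuteInEditMode]
public class WidgetSizeClamp : MonoBehaviour
{
    public int fixedHeight = 2;
    public int minWidth = 64;

    void Update ()
    {
        UIWidget widget = GetComponent<UIWidget> ();
        if (widget == null) return;

        widget.height = fixedHeight;
        if (widget.width < minWidth) widget.width = minWidth;
    }
}
```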

14
Misc Archive / Building Material Design UI with NGUI
« on: October 14, 2014, 03:23:26 AM »


Thought I should post about that work outside of support threads. Exhibit A:


Long story short, I got a bit tired of implementing controls from scratch in every project, and of unstructured UI workflows in general. Seriously, why am I still using awkward, half-assed window managers that are created anew every time, why do I have to deal with setting up tweens and sprite references when adding a spinner, and why do I need custom switches, buttons and show-hide sequences every time? I shouldn't be doing that.

So I started working on a coherent MVC based foundation that will allow me to create interfaces that are quick to set up, easy to maintain and easy to expand.

While at it, I thought to myself: wouldn't it be wonderful if I had not just nice code providing reusable elements, but also those beautifully designed controls from Google's Material Design that native Android developers enjoy? Wouldn't it be nice to have Unity applications that can fool a user into believing they are native? And anyway, how hard would implementing the controls from the Material Design guidelines be?

________________

Turns out they are quite complex, but every single one of them can be implemented without atrocious hacks or performance-hungry workarounds like frame-based animations. For example, those radio buttons are just three overlaid dots that require no custom masking - just the proper ordering of color and scale tweens.



The most complex things here are the touch effects and shadows. Those were a complete mystery to me - for all I knew, Google implemented them with magic. Check these animations:


The only idea I had at first was using NGUI panel clipping in every element, but that was unacceptable from a performance standpoint and would have cluttered the hierarchy - and it would only allow the radial ripples, without addressing the even more mysterious second part of the animation: the inverse erasing of the expanding ripple sprite, which can't be achieved through traditional rect-based clipping at all. But as it turns out, it can be implemented, at almost no performance cost, and with that double clipping from within.

You set up a separate UIPanel set to hard edge clipping, with a component that can set its dimensions and position, a singleton to access it, and a child ripple sprite that can tween through the touch animation. Any touchable widget can call the singleton and invoke a method (using itself as an argument) on touch, which repositions the UIPanel clip area to the dimensions of the widget passed in the argument and starts the ripple animation at the clicked point of the screen.
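Stripped down, the singleton looks something like this. All names here are mine, not NGUI's, the tween code is omitted, and I'm assuming direct access to the panel's clip region:

```csharp
using UnityEngine;

// Hypothetical sketch of the shared ripple setup: one hard-clipped UIPanel,
// accessed through a singleton and repositioned over the touched widget.
public class RippleClipPanel : MonoBehaviour
{
    public static RippleClipPanel instance;

    public UIPanel clipPanel;      // set to hard edge clipping in the inspector
    public UISprite rippleSprite;  // child sprite tweened through the animation

    void Awake () { instance = this; }

    public void PlayRipple (UIWidget target, Vector3 worldTouchPos)
    {
        // Match the clip area to the touched widget's position and size.
        clipPanel.transform.position = target.transform.position;
        clipPanel.baseClipRegion = new Vector4 (0f, 0f, target.width, target.height);

        // Start the ripple from the touched point (scale/alpha tweens omitted).
        rippleSprite.transform.position = worldTouchPos;
    }
}
```

A touchable widget then just calls RippleClipPanel.instance.PlayRipple (this, touchPos) from its press handler.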



Now the only thing left is the second clipping - erasing the ripple sprite from within. That one is achieved by creating a clipping-compatible depth cutout shader (no need to modify the example, just give NGUI a properly named duplicate) and applying it to a UITexture with the circle texture, then moving the object with it outside of the UI plane to let depth testing kill pixels within the panel. All you need to do when that is set up is tween the ripple sprite first and the eraser sprite second, and you get yourself that sexy impossible ring that is required for every clickable element in Material Design.



________________

Another area where Material Design presents a huge challenge is dynamic shadows. They are not in any way similar to the standard static drop shadows that people bake into sprites.


They are dynamic, with every rectangular element capable of lifting back and forth through multiple depth levels, casting a very smooth shadow of variable feathering radius. That's extremely problematic. But as it turns out, this can be implemented too, with some clever trickery. Take a look:


To do this, I prepare a sliced sprite with a rectangular shadow and assign it to a sprite anchored without any offsets to my card. There is no need to do it manually - I just add a "sprite shadow" component to a UISprite object and everything is set up automatically (and cleaned up when that component is removed).

The desired look, with a variable feathering radius, is impossible to achieve with standard sliced sprite behaviour and anchoring in NGUI. Using that custom component, I subscribe to the fill event of the sliced shadow and directly modify the positions of its 36 vertices: instead of all the quads being controlled by the sprite dimensions and anchoring, only the central quad is, and the other quads are pushed outward by an offset calculated from the depth, with a curve sampled at the end to get the proper shadow intensity. Ah, and the sprite is offset downward a bit, depending on the depth.
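In sketch form, the fill-event hook looks roughly like this. `onPostFill` is the NGUI callback I subscribe to; the offset math here is purely illustrative - the real component moves only the border quads, samples an intensity curve and applies the downward offset:

```csharp
using UnityEngine;

// Hypothetical sketch of the "sprite shadow" vertex hook: push the shadow
// sprite's vertices outward by a depth-based offset after NGUI fills them.
public class SpriteShadow : MonoBehaviour
{
    public float shadowDepth = 2f;
    public float featherPerDepthUnit = 4f;

    void OnEnable ()  { GetComponent<UISprite> ().onPostFill += PushVertices; }
    void OnDisable () { GetComponent<UISprite> ().onPostFill -= PushVertices; }

    void PushVertices (UIWidget widget, int bufferOffset,
        BetterList<Vector3> verts, BetterList<Vector2> uvs, BetterList<Color32> cols)
    {
        float offset = shadowDepth * featherPerDepthUnit;

        for (int i = bufferOffset; i < verts.size; ++i)
        {
            Vector3 v = verts[i];
            v.x += Mathf.Sign (v.x) * offset; // push away from the widget center
            v.y += Mathf.Sign (v.y) * offset;
            verts[i] = v;
        }
    }
}
```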

________________

Not sure if that webm hosting has traffic limits (sites accepting 15s+ files are hard to come by), so just in case, here is a mirror (379kb):


________________

P.S.: To the inevitable question of "why not uGUI" - well, I really prefer NGUI for a number of reasons.

  • First, uGUI will never have the sort of personal tech support that NGUI has had for years. The Unity forums and issue tracker are nice and dandy, but not really comparable to the developer himself answering every single question.
  • Second, I prefer depth-based draw order to hierarchy-based draw order and dislike uGUI's dependency on hierarchy sorting.
  • And third, simply by virtue of existing for a very, very long time, NGUI has thousands of threads, posts, docs, tutorials and other things that let you learn faster and successfully find solutions on the net to most of the problems you might encounter. uGUI will get there over time, but has not accumulated that amount of material around itself yet.

15
I have to admit I'm not terribly familiar with the abstracted methods Unity exposes for shaders, and I'm more comfortable with vertex/fragment code that doesn't obfuscate anything, so I'm having a bit of trouble understanding what is happening in the lowest LOD of the NGUI shaders. In particular, I'm having trouble creating a panel-clipped subtype of the Depth Cutout shader that was provided as a sample with NGUI. In contrast with the standard colored shader, the depth cutout shader has no high-LOD vertex/fragment section at all, so I'm not really sure what I should alter in the LOD 100 section to make it compliant with panel clipping.

How can I approach this?

