I'm trying to integrate a project's input processing with the work NGUI is already doing in UICamera. In the past I've had a separate class that iterated over Unity's Input.touches and raycast each one to see whether it hit the UI, ignoring it if so, but that seems wasteful when I should just be able to ask UICamera. onCustomInput looks like a handy place to hook in after UICamera has done its work, but getting access to its MouseOrTouch instances is a little wonky.
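For context, the wasteful approach I'm describing looks roughly like this (the class name, the "UI" layer, and the uiCamera field are just placeholders for this sketch, assuming the UI widgets live on their own layer with colliders and are drawn by a dedicated camera):

```csharp
using UnityEngine;

public class TouchFilter : MonoBehaviour
{
    public Camera uiCamera;   // assumption: the camera rendering the NGUI widgets
    int uiMask;

    void Awake ()
    {
        uiMask = 1 << LayerMask.NameToLayer("UI");
    }

    void Update ()
    {
        foreach (Touch touch in Input.touches)
        {
            // If the touch hits a UI collider, leave it for NGUI to handle.
            if (IsOverUI(touch.position)) continue;

            // ...game-side handling of the touch goes here...
        }
    }

    bool IsOverUI (Vector2 screenPos)
    {
        Ray ray = uiCamera.ScreenPointToRay(screenPos);
        return Physics.Raycast(ray, float.MaxValue, uiMask);
    }
}
```

This duplicates the raycasting UICamera is already doing every frame, which is exactly what I'd like to avoid.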
There's no public access to mMouse, so if I want to know whether a mouse click landed on a UI object, I have to work it out myself, despite the fact that mMouse is included in the touchCount and dragCount calculations. I do have access to the touches through GetTouch, but that can dirty the touch list if I ask for a touch ID that has already been removed. And if I want to account for controller input there's a further wrinkle: onCustomInput is called between mouse/touch processing and controller processing, so controller state isn't final yet at that point.
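What I'd like to do from onCustomInput is something like the sketch below. This is hedged: I'm going off my reading of the UICamera source, where MouseOrTouch exposes current and pressed GameObject fields, and the concern is that GetTouch may create a fresh entry when handed a fingerId that's no longer tracked:

```csharp
using UnityEngine;

public class CustomInputHook : MonoBehaviour
{
    void Start ()
    {
        // Runs after UICamera's mouse/touch pass (but before its controller pass).
        UICamera.onCustomInput = ProcessGameInput;
    }

    void ProcessGameInput ()
    {
        foreach (Touch unityTouch in Input.touches)
        {
            // Caveat: if this fingerId was just removed, GetTouch may
            // add a new entry, which is the "dirtying" I'm worried about.
            UICamera.MouseOrTouch t = UICamera.GetTouch(unityTouch.fingerId);

            bool onUI = (t != null && (t.pressed != null || t.current != null));
            if (!onUI)
            {
                // ...forward the touch to game-side input handling...
            }
        }
    }
}
```

This is the shape of the integration I'm after; the question is whether there's a supported way to get the same answer without these caveats.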
Of course I can modify the source, but for now I'm trying to avoid any custom changes to NGUI so that updating stays easy. Am I over-thinking/over-complicating this? Is there an easier way to decide whether or not to process a touch?