Perhaps I should elaborate a bit more.

First, I should note that removing the rigidbody (and the code that adds it automatically) did work: the UI responds as expected even with rapid position changes.
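For reference, here is a minimal sketch of doing that removal at runtime instead of editing the NGUI source (the script name and placement are my own, not part of NGUI):

```csharp
using UnityEngine;

// Strips the Rigidbody that NGUI adds automatically to the UI root, so the UI
// can follow a moving parent without lagging behind. Attach this to the object
// the Rigidbody ends up on. (Assumes the Rigidbody serves no other purpose.)
public class StripUIRigidbody : MonoBehaviour
{
    void Start()
    {
        Rigidbody rb = GetComponent<Rigidbody>();
        if (rb != null)
            Destroy(rb);
    }
}
```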
Regarding having NGUI-generated interfaces/displays attached to moving objects: this becomes necessary as soon as the GUI needs to be rendered by stereoscopic cameras. The workaround cited in a few places around the net is to have a UI camera render to a RenderTexture and attach that texture to your moving object (so it ends up being rendered by the Rift cameras). However, this only works if you have a single (or perhaps two) flat, menu-like display. In cases where you would like NGUI to drive cockpit displays, indicator lights, etc., such an approach does not work.
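For completeness, the workaround I mean looks roughly like this (names such as uiCamera and displayQuad are placeholders, not NGUI API):

```csharp
using UnityEngine;

// Rough sketch of the RenderTexture workaround: a separate camera that sees
// only the UI layer draws into a RenderTexture, and that texture is shown on
// a quad parented to the moving object, so the Rift cameras render the quad.
public class UIToRenderTexture : MonoBehaviour
{
    public Camera uiCamera;       // camera restricted to the NGUI layer
    public Renderer displayQuad;  // quad/plane parented to the moving object

    void Start()
    {
        RenderTexture rt = new RenderTexture(1024, 512, 0);
        uiCamera.targetTexture = rt;            // UI camera draws into the texture
        displayQuad.material.mainTexture = rt;  // quad shows the UI in world space
    }
}
```

As noted above, this scales poorly once you need more than one or two such displays.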
This might seem like a situation NGUI was not "intended for", but it is very efficient to use it for displays compared to traditional textures and UV mapping, especially once you incorporate text into the mix.
With Unity supporting the Rift and many studios (both independent and traditional) investing in VR, I would argue it is valuable for both NGUI and the upcoming Unity GUI to take these situations into account and make sure they work as well as possible.
Again, thanks!
