After the Vision Pro’s announcement I saw a number of VR devs post about the potential of eye tracking input (like how great Pac-Man would be to control with it). I’ve done some eye tracking tests in the past, but only for animation / gaze detection, so I wanted to test how bad it would feel to actually try using it as an input.
So, using the Varjo XR-3’s eye tracking, I made a blackboard you can draw on with your eyes (pinch to draw):
Not only was this incredibly fatiguing (this is attempt number three, so you can actually see my eyes watering at this point), but it was also incredibly hard to draw correctly. My eyes would naturally jump to the NEXT place I was going to draw, rather than staying on the line I needed to finish.
What it really showed was something a lot of developers who’ve tried eye tracking already knew: it’s not an input device at all like touch/voice/body. We didn’t evolve as humans to use our eyes as tools, so it’s unnatural to use them to express intent. Even having something react to your eye movement feels bad and unnatural – this line-drawing exercise might be the worst case of that.
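For reference, the mechanic behind the blackboard is tiny – just a gaze raycast plus a pinch check. Here’s a minimal Unity sketch of that idea; the GetGazeRay()/IsPinching() stubs and the canvas setup are placeholders for whatever the eye tracking and hand tracking SDKs actually give you, not their real APIs:

```csharp
using UnityEngine;

// Minimal sketch of the "draw with your eyes" blackboard. The gaze ray and the
// pinch state are assumed to come from the headset/hand tracking SDKs; the stubs
// below are hypothetical and would need to be wired up to the real APIs.
public class GazeBlackboard : MonoBehaviour
{
    public Collider blackboard;   // needs a MeshCollider so hit.textureCoord is valid
    public Texture2D canvas;      // writable (read/write enabled) texture on the blackboard material
    public int brushRadius = 2;   // brush size in pixels

    void Update()
    {
        if (!IsPinching()) return;                 // only draw while pinching

        Ray gazeRay = GetGazeRay();
        if (blackboard.Raycast(gazeRay, out RaycastHit hit, 10f))
        {
            // Convert the UV of the gaze hit point into pixel coordinates and stamp a dot.
            int x = (int)(hit.textureCoord.x * canvas.width);
            int y = (int)(hit.textureCoord.y * canvas.height);
            for (int dx = -brushRadius; dx <= brushRadius; dx++)
                for (int dy = -brushRadius; dy <= brushRadius; dy++)
                    canvas.SetPixel(Mathf.Clamp(x + dx, 0, canvas.width - 1),
                                    Mathf.Clamp(y + dy, 0, canvas.height - 1),
                                    Color.white);
            canvas.Apply();
        }
    }

    // Hypothetical: replace with the combined gaze ray from your eye tracking SDK.
    Ray GetGazeRay() { return new Ray(Camera.main.transform.position, Camera.main.transform.forward); }

    // Hypothetical: replace with a pinch check from your hand tracking SDK.
    bool IsPinching() { return false; }
}
```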
I haven’t tried the Vision Pro UI yet, but from what I’ve heard of its eye tracking, it makes the UI feel like ‘magic’. I think what’s meant by this is that it uses your eye tracking alongside other input (your hands) as one factor in understanding intent, which I think could work.
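As a rough sketch of the difference, the pattern I imagine is that gaze is only sampled at the moment of a deliberate hand gesture, rather than the UI reacting to your eyes continuously. Something like this – again with hypothetical GetGazeRay()/WasPinchedThisFrame() stubs standing in for the real SDK calls, and a made-up OnGazeSelected message on the target:

```csharp
using UnityEngine;

// Sketch of the gaze-plus-pinch pattern described above: gaze alone does nothing
// visible, but the moment you pinch, the object you were looking at gets the action.
public class GazePinchSelector : MonoBehaviour
{
    public LayerMask selectableLayers;

    void Update()
    {
        if (!WasPinchedThisFrame()) return;   // gaze is only read at the moment of intent

        if (Physics.Raycast(GetGazeRay(), out RaycastHit hit, 5f, selectableLayers))
        {
            // Hypothetical handler name; the point is that the hand gesture, not the
            // gaze itself, triggers the response.
            hit.collider.SendMessage("OnGazeSelected", SendMessageOptions.DontRequireReceiver);
        }
    }

    Ray GetGazeRay() { /* hypothetical eye tracking query */ return new Ray(Camera.main.transform.position, Camera.main.transform.forward); }
    bool WasPinchedThisFrame() { /* hypothetical hand tracking query */ return false; }
}
```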
Another prototype I wanted to try was making hand grasping feel better in AR.
I wanted to explore a physics building game in AR. The trouble is that the current way most AR hand interaction works felt terrible for that: the standard interface is to put your hand over a virtual item, pinch to grab and move it, then release the pinch to place it.
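For context, that standard interface boils down to something like the sketch below: the item simply follows the pinch point while the pinch is held, with no physical contact involved (the hand tracking calls are hypothetical stubs):

```csharp
using UnityEngine;

// Rough sketch of the standard pinch-to-move interface: hover, pinch to grab,
// drag, release to place. The object is moved kinematically, so there is no
// sense of actually touching it.
public class PinchDragMove : MonoBehaviour
{
    public float grabRadius = 0.05f;   // how close the pinch must be to pick an item up
    Transform held;
    Vector3 holdOffset;

    void Update()
    {
        Vector3 pinch = GetPinchPoint();

        if (IsPinching())
        {
            if (held == null)
            {
                // Start holding the first item within reach of the pinch point.
                Collider[] hits = Physics.OverlapSphere(pinch, grabRadius);
                if (hits.Length > 0)
                {
                    held = hits[0].transform;
                    holdOffset = held.position - pinch;
                }
            }
            else
            {
                held.position = pinch + holdOffset;   // item just teleports along with the hand
            }
        }
        else
        {
            held = null;   // release the pinch to place the item where it is
        }
    }

    Vector3 GetPinchPoint() { /* hypothetical hand tracking query */ return Vector3.zero; }
    bool IsPinching() { /* hypothetical hand tracking query */ return false; }
}
```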
When looking for an alternative I found the UltraLeap Physics Hand Package, which allows for grasping virtual items with your hands by physically simulating the fingers and joints. It uses Unity’s newer Articulation Body physics to be more stable than regular rigidbodies.
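To give a sense of how that works (this is my own sketch, not the package’s actual code): each simulated finger bone is an ArticulationBody whose drive target is steered toward the tracked finger’s bend angle every physics step, so collisions can stop the simulated finger where the real one would pass straight through:

```csharp
using UnityEngine;

// Sketch of driving one simulated finger joint toward its tracked counterpart.
// The bend-angle measurement here is a simplification for illustration.
public class FingerJointDriver : MonoBehaviour
{
    public ArticulationBody joint;      // simulated finger bone set up as a revolute joint
    public Transform trackedBone;       // the tracked hand bone this joint should follow
    public Transform trackedParent;     // its parent bone, used to measure the bend angle
    public float stiffness = 1000f;
    public float damping = 50f;

    void FixedUpdate()
    {
        // Measure how far the tracked finger segment is bent relative to its parent.
        float targetAngle = Vector3.SignedAngle(
            trackedParent.forward, trackedBone.forward, trackedParent.right);

        // ArticulationDrive is a struct, so copy it out, modify it, and assign it back.
        ArticulationDrive drive = joint.xDrive;
        drive.stiffness = stiffness;
        drive.damping = damping;
        drive.target = targetAngle;     // degrees for a revolute joint
        joint.xDrive = drive;
    }
}
```

The stability win, as I understand it, comes from articulation bodies solving the whole joint chain in reduced coordinates, so the finger links can’t drift apart the way chained rigidbody joints tend to under load.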
The physics hands worked great, but the main problem was being able to tell how you were actually grasping the virtual item. So I wanted to show that to the user:
I wrote a shader that fades in the hand mesh based on collision – so when you’re grasping a virtual item, you can see how your physically simulated fingers are grasping it. This felt absolutely necessary for AR physics interaction to feel good.
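Since the fade itself lives in the shader, here’s just the C# side of the idea as I’d sketch it: count contacts on the hand colliders and drive a fade value on the material. The `_Fade` property name and the single whole-hand fade value are my own placeholders; the original effect fades the mesh based on where the contact happens.

```csharp
using UnityEngine;

// Attach alongside a hand collider + ArticulationBody. Counts active contacts and
// eases a material fade value toward visible while anything is being touched.
public class HandContactFade : MonoBehaviour
{
    public Renderer handRenderer;      // renderer using the fading hand material
    public float fadeSpeed = 6f;
    int contactCount;
    float fade;

    void OnCollisionEnter(Collision c) { contactCount++; }
    void OnCollisionExit(Collision c)  { contactCount = Mathf.Max(0, contactCount - 1); }

    void Update()
    {
        // Fully visible while touching something, fading back out otherwise.
        float target = contactCount > 0 ? 1f : 0f;
        fade = Mathf.MoveTowards(fade, target, fadeSpeed * Time.deltaTime);
        handRenderer.material.SetFloat("_Fade", fade);   // hypothetical shader property
    }
}
```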
Lastly, another area I wanted to explore was point clouds for communication. I wanted to see how it would look/feel to talk to a point cloud representation of someone. So I dug out my old ZedMini camera and used that for the point cloud, with the Varjo XR-3 for AR. Here’s a quick test of using it on myself – it felt more cool than actually useful:
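For the rendering side, the core of a point cloud like this is just back-projecting each depth pixel into 3D and drawing it as a point mesh. The sketch below is generic (not ZED SDK code): it assumes you already have depth/color arrays and camera intrinsics from the camera SDK, plus a material that renders vertex colors as points.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Back-project a depth image + color image into a point cloud mesh.
// fx/fy/cx/cy are assumed pinhole intrinsics supplied by the camera SDK.
public class DepthToPointCloud : MonoBehaviour
{
    public MeshFilter meshFilter;
    public float fx = 700f, fy = 700f, cx = 640f, cy = 360f;

    public void Rebuild(float[] depth, Color32[] colors, int width, int height)
    {
        var points = new List<Vector3>();
        var pointColors = new List<Color32>();

        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                float z = depth[y * width + x];
                if (z <= 0f || float.IsNaN(z)) continue;   // skip invalid depth

                // Pinhole back-projection from pixel (x, y) and depth z to camera space.
                points.Add(new Vector3((x - cx) * z / fx, (y - cy) * z / fy, z));
                pointColors.Add(colors[y * width + x]);
            }
        }

        // One index per point, rendered with point topology.
        var indices = new int[points.Count];
        for (int i = 0; i < indices.Length; i++) indices[i] = i;

        var mesh = new Mesh { indexFormat = UnityEngine.Rendering.IndexFormat.UInt32 };
        mesh.SetVertices(points);
        mesh.SetColors(pointColors);
        mesh.SetIndices(indices, MeshTopology.Points, 0);
        meshFilter.mesh = mesh;
    }
}
```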