📍 Project Context
Traditional VR controllers, while intuitive for some interactions, lack the precision required for detailed manipulation tasks. Users frequently need to switch between VR and desktop environments, creating workflow disruptions and usability barriers in professional VR applications.
How might we leverage users' existing familiarity with the mouse to create more precise, seamless VR experiences, particularly for seated, precision-focused tasks? In response, I designed and validated a novel mouse-based VR interaction system that transitions fluidly between 2D and 3D interaction, combining the precision of traditional desktop interfaces with VR's immersive benefits.
🔍 User Research & Behavioral Insights
Through preliminary interviews and observation studies, I identified key pain points:
Precision Gap: VR controller input is degraded by hand tremor and tracking jitter, making fine manipulation difficult
Workflow Disruption: Constant switching between VR and desktop breaks immersion
Learning Curve: New VR users struggle with 3D controller paradigms
Professional Needs: CAD, design, and data analysis require pixel-perfect accuracy
These insights motivated three research questions:
Can mouse-based interaction outperform VR controllers in precision tasks?
How do users perceive the usability of hybrid 2D/3D interaction?
What are the optimal mappings for mouse actions in 3D space?
💡 UX Design Principles
From these insights, we established three core principles to guide all spatial interactions:
Precision without friction: Enable fine-grained control directly inside VR, without falling back to the desktop.
Continuous cognitive flow: Users should move seamlessly between broad exploration and micro-adjustments.
Familiar yet immersive: Leverage familiar desktop skills (mouse precision) in a spatial context to minimize learning curves.
🔬 Behavioral UX Solution
Instead of static screens, we designed dynamic behaviors and adaptive rules that formed the true spatial interface, described below.
🚀 What We Delivered
We delivered an intuitive 3D interaction system that builds on familiar 2D mouse behaviors and extends them seamlessly into spatial environments. Core features include ray-casting controlled by mouse movement with visual feedback and collision detection, smooth 2D/3D transitions using sticky edges and visual cues, and adaptive interaction techniques like velocity-based scaling and scroll-based depth control.
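To make the adaptive behavior concrete, here is a minimal, framework-agnostic C++ sketch of one such rule, velocity-based scaling: slow mouse movement gets a low gain for precise ray control, fast movement a high gain for broad traversal. The struct name, gain curve, and thresholds are illustrative assumptions, not the values used in the actual system.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Sketch of velocity-based scaling: mouse deltas (in pixels) rotate a
// selection ray, with a gain that grows with pointer speed so slow motion
// yields fine control and fast motion covers large angles.
// All constants are assumed values for illustration.
struct RayController {
    double yawDeg   = 0.0;   // ray yaw, degrees
    double pitchDeg = 0.0;   // ray pitch, degrees

    // Gain curve: minGain below slowSpeed, maxGain above fastSpeed,
    // linear interpolation in between (a simple pointer-acceleration model).
    static constexpr double minGain   = 0.02;  // deg per pixel, precise mode
    static constexpr double maxGain   = 0.15;  // deg per pixel, traversal mode
    static constexpr double slowSpeed = 50.0;  // pixels per second
    static constexpr double fastSpeed = 800.0; // pixels per second

    void onMouseMove(double dxPix, double dyPix, double dtSec) {
        const double speed = std::hypot(dxPix, dyPix) / dtSec;  // px/s
        const double t = std::clamp(
            (speed - slowSpeed) / (fastSpeed - slowSpeed), 0.0, 1.0);
        const double gain = minGain + t * (maxGain - minGain);
        yawDeg  += dxPix * gain;
        pitchDeg = std::clamp(pitchDeg - dyPix * gain, -89.0, 89.0);
    }
};

int main() {
    RayController ray;
    ray.onMouseMove(4.0, 0.0, 0.016);   // slow drag: tiny, precise rotation
    std::printf("slow: yaw=%.3f deg\n", ray.yawDeg);
    ray.onMouseMove(40.0, 0.0, 0.016);  // fast flick: large rotation
    std::printf("fast: yaw=%.3f deg\n", ray.yawDeg);
}
```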
Technically, the system was built in Unreal Engine with OpenXR for VR support, optimized for the Meta Quest 3, and included custom mouse-to-3D mapping and real-time hand tracking. The interaction model supports object selection, translation, rotation, depth adjustment, and teleportation, all with simple, familiar inputs.
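As one concrete example of the mouse-to-3D mapping, here is a hedged sketch of scroll-based depth control: each wheel notch slides a grabbed object along the selection ray, with a step proportional to current distance so near objects move finely and far ones quickly. The types and constants are stand-ins for illustration (a plain `Vec3` instead of Unreal's `FVector`), not the shipped implementation.

```cpp
#include <algorithm>
#include <cstdio>

// Minimal 3D vector for the sketch (stand-in for Unreal's FVector).
struct Vec3 { double x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 v, double s) { return {v.x * s, v.y * s, v.z * s}; }

// Scroll-based depth control: the wheel moves the grabbed object along the
// ray. The step is proportional to the current distance, so nearby objects
// move in fine increments and distant ones move quickly.
// stepFraction and the distance limits are assumed values.
struct DepthController {
    double distance = 150.0;                    // cm along the ray
    static constexpr double stepFraction = 0.10;
    static constexpr double minDist = 20.0, maxDist = 2000.0;

    Vec3 onScroll(double wheelDelta, Vec3 rayOrigin, Vec3 rayDir /* unit */) {
        distance *= 1.0 + stepFraction * wheelDelta;   // +1 notch = +10%
        distance  = std::clamp(distance, minDist, maxDist);
        return add(rayOrigin, mul(rayDir, distance));  // new object position
    }
};

int main() {
    DepthController depth;
    const Vec3 origin{0, 0, 170}, dir{0, 1, 0};  // eye-height ray, forward
    Vec3 p = depth.onScroll(+1.0, origin, dir);  // push the object away
    std::printf("after scroll up:   y=%.1f cm\n", p.y);
    p = depth.onScroll(-1.0, origin, dir);       // pull it back
    std::printf("after scroll down: y=%.1f cm\n", p.y);
}
```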
🔬 User Testing & Results
We validated the system in a two-phase user study:
Phase 1: A Fitts' Law target-selection experiment comparing the mouse-based technique against standard VR controllers. The primary metrics were movement time, accuracy, and throughput (defined below).
Phase 2: A more comprehensive evaluation of object manipulation. The primary metrics were completion time, rotation difference, and position difference.
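This write-up does not state which throughput formulation was used; a common choice, and the one assumed here, is the ISO 9241-9 (MacKenzie) effective throughput:

$$\mathrm{TP} = \frac{ID_e}{MT}, \qquad ID_e = \log_2\!\left(\frac{D}{W_e} + 1\right), \qquad W_e = 4.133\,\sigma_x$$

where $D$ is the nominal movement distance, $MT$ the mean movement time per trial, and $\sigma_x$ the standard deviation of selection endpoints along the movement axis. For example, with $D = 40$ cm and $\sigma_x = 1$ cm, $W_e \approx 4.13$ cm and $ID_e \approx 3.42$ bits; at $MT = 0.9$ s, $\mathrm{TP} \approx 3.8$ bits/s.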
For more detailed results, please see: https://summit.sfu.ca/item/39151