Scene Responsiveness for Visuotactile Illusions in Mixed Reality

¹Meta Reality Labs Research, ²University of Duisburg-Essen
ACM Symposium on User Interface Software and Technology (UIST 2023). San Francisco.
Honorable Mention Award.

Scene Responsiveness allows virtual actions to seemingly affect the physical scene.

Abstract

Today's mixed reality systems enable us to situate virtual content in the physical scene, but fall short of extending the visual illusion to believable manipulations of the environment itself.

In this paper, we present the concept and system of Scene Responsiveness, the visual illusion that virtual actions affect the physical scene. Using co-aligned digital twins for coherence-preserving just-in-time virtualization of physical objects in the environment, Scene Responsiveness allows actors to seemingly manipulate physical objects as if they were virtual.

Based on Scene Responsiveness, we propose two general types of end-to-end illusionary experiences that ensure visuotactile consistency through the presented techniques of object elusiveness and object rephysicalization. We demonstrate how our Daydreaming illusion enables virtual characters to enter the scene through a physically closed door and vandalize the physical scene, or lets users enchant and summon far-away physical objects.

Responding to the user

The scene can also respond to user actions, in this case a telekinetic gesture, to summon the plant to the user's hand.

To prevent disillusion from visual collisions (the user sees their physical hand collide with the virtualized object but does not feel it), we introduce object elusiveness. To prevent disillusion from tactile collisions (running into a visually hidden physical object), we introduce object rephysicalization.
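The choice between the two techniques can be framed as a simple per-frame decision. Below is a minimal sketch, not the paper's implementation: the function name, distance threshold, and the rule of checking rephysicalization first (preventing a felt-but-unseen bump before a seen-but-unfelt one) are our assumptions for illustration.

```python
import math

def illusion_response(hand, twin_pose, physical_pose, r=0.15):
    """Hypothetical per-frame rule for preserving the visuotactile illusion.

    hand, twin_pose, physical_pose: (x, y, z) world positions.
    r: proximity radius in meters (assumed value).

    - If the user approaches the visually hidden physical object while its
      twin is displaced, return 'rephysicalize' (restore the twin to the
      object's true pose before a tactile collision).
    - If the hand approaches the displaced twin, return 'elude' (the twin
      dodges, avoiding a visual collision with no matching touch).
    """
    displaced = twin_pose != physical_pose
    if displaced and math.dist(hand, physical_pose) < r:
        return "rephysicalize"
    if displaced and math.dist(hand, twin_pose) < r:
        return "elude"
    return "none"
```

If the twin sits at its true physical pose, vision and touch already agree, so no response is needed.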

RealityToggle: virtualizing physical objects

Scene Responsiveness is based on a technique for toggling the reality state of an object from physical to virtualized. Given a physical chair, we first remove it visually. Notice how the background fill lines up with the surrounding physical geometry and appearance. Then, we insert a digital object twin at the physically corresponding pose and shape. The digital object twin can be manipulated in all ways that virtuality affords.
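The two-step toggle described above can be sketched as a small state machine. This is a hedged sketch, not the paper's Unity implementation: the class name, the callback names, and the string states are assumptions; the actual system performs diminished-reality rendering and twin spawning inside the engine.

```python
class RealityToggle:
    """Toggles an object's reality state from 'physical' to 'virtualized'
    (hypothetical API for illustration)."""

    def __init__(self, render_background_fill, spawn_digital_twin):
        self._diminish = render_background_fill  # covers the physical object
        self._spawn = spawn_digital_twin         # inserts a co-aligned twin
        self.state = "physical"
        self.twin = None

    def virtualize(self, pose):
        """Step 1: visually remove the physical object.
        Step 2: insert its digital twin at the corresponding pose."""
        if self.state == "physical":
            self._diminish(pose)
            self.twin = self._spawn(pose)
            self.state = "virtualized"
        return self.twin
```

Once virtualized, the twin is an ordinary virtual object and can be grabbed, thrown, or animated; rephysicalization is the reverse toggle, restoring the twin to the object's true pose before revealing the physical original again.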

TwinBuilder: virtualizing physical space

After scanning the space and its objects, we use our custom Unity plugin to automatically import the scene's visual appearance, geometry, and walkable-area semantics. Next, we position the object twins and annotate receptive affordances (non-manipulative use of an object) and responsive affordances (manipulative use of an object) for characters and the user. We can then develop the game logic like a standard 3D game and the interaction logic like a standard VR game.
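The affordance annotations above might be represented roughly as follows. This is a minimal sketch of the data model, not the plugin's actual schema; the type names, the `kind` strings, and the door example are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Affordance:
    name: str
    kind: str  # "receptive" (non-manipulative use) or "responsive" (manipulative use)
    agents: tuple = ("character", "user")  # who may trigger it

@dataclass
class ObjectTwin:
    name: str
    affordances: list = field(default_factory=list)

# Hypothetical annotation of a door twin: a character may lean against it
# (receptive) or open it (responsive), as in the Daydreaming illusion.
door = ObjectTwin("door")
door.affordances.append(Affordance("lean_against", "receptive"))
door.affordances.append(Affordance("open", "responsive"))
```

Separating receptive from responsive affordances lets the game logic decide per interaction whether an object must first be virtualized (a responsive use changes the object) or can stay physical (a receptive use does not).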

For more details, including our Copperfield illusion, the spatial computing and shading architecture, many more applications such as scene-responsive gaming, telepresence, and television, and the results of our user study, check out the paper and stay tuned for the talk at the ACM Symposium on User Interface Software and Technology on Nov 1, 2023, in San Francisco!

BibTeX

@inproceedings{kari2023sceneresponsiveness,
  author    = {Kari, Mohamed and Sch{\"u}tte, Reinhard and Sodhi, Raj},
  title     = {Scene Responsiveness for Visuotactile Illusions in Mixed Reality},
  booktitle = {Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '23)},
  year      = {2023},
  doi       = {10.1145/3586183.3606825}
}