We set out to pioneer the technology needed to experience recordings of the real world with the sense of full immersion provided by VR goggles.
Motivated by the advent of mass-market head-mounted displays, fully immersive movie experiences have become an exciting possibility. While synthetic scenes can easily incorporate the necessary stereo vision and ego-motion properties in real time using standard GPU rendering, it is not straightforward to do the same for conventional video footage of real-world scenes. The purpose of this project is to enable the immersive experience of real-world recordings by exploiting properties of human perception, and to extend visual immersion beyond a single viewer.
Challenges & Highlights
In order to produce immersive videos from dynamic real-world scenes, several problems need to be solved: the scenes must be captured comprehensively, scene content must be represented suitably, and arbitrary new viewpoints must be rendered in real time, producing photo-realistic results with minimal latency. Moreover, the scene must be augmented with additional 3D graphics content, such as avatars, so that users can interact with each other. These avatars need to be rendered according to the scene's illumination and must mimic the users' body and eye movements via motion capture and eye tracking. To further increase immersion, perceptual effects are taken into account during rendering.
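A core step in rendering arbitrary new viewpoints from captured footage is warping recorded pixels into a virtual camera using per-pixel scene geometry. The sketch below is a minimal, illustrative example of such depth-based reprojection under a standard pinhole camera model; the function name, shared intrinsics, and the omission of occlusion handling, splatting, and hole filling are simplifying assumptions, not the project's actual pipeline.

```python
import numpy as np

def reproject(depth, K, T_new_from_old):
    """Warp source-view pixels into a new viewpoint using per-pixel depth.

    depth          : (H, W) depth map of the source view.
    K              : (3, 3) pinhole intrinsics (assumed shared by both views).
    T_new_from_old : (4, 4) rigid transform from source to target camera frame.
    Returns an (H, W, 2) array giving, for each source pixel, its projected
    (u, v) position in the target image.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates, one column per pixel (3 x N).
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix                  # back-project to unit-z rays
    pts = rays * depth.reshape(1, -1)              # scale by depth -> 3D points
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    pts_new = (T_new_from_old @ pts_h)[:3]         # move into the target frame
    proj = K @ pts_new                             # project with the intrinsics
    return (proj[:2] / proj[2]).T.reshape(H, W, 2) # perspective divide
```

With the identity transform, every pixel maps back onto itself regardless of depth, which makes a convenient sanity check; real free-viewpoint rendering additionally has to resolve occlusions and fill disoccluded regions exposed by the new viewpoint.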
Potential applications & future issues
Authentic visual realism constitutes the strongest cue for our sensation of reality. By enabling immersive visual realism, this project will open up exciting new application scenarios for immersive displays, not only in visual entertainment but also in areas like professional training, remote collaboration, trauma therapy, and fundamental perception research. The long-term goal is fully immersive dynamic real-world video capable of real-time multi-user interaction.