My team and I have just created two software applications that can run together on the same holographic pyramid and/or on an LCD screen.
The first is a six-degrees-of-freedom viewer that lets the user interact with a giant photorealistic render in real time using nothing but a simple webcam (no glasses or headsets!): as the user moves their head, the image reveals the depth of the space. Why should we keep showing only classic, static renders, when we can move our eyes inside the new spaces of a piece of architecture? We believe we are unique in the world in one respect: the shadows and lights of 3D models rendered with translucent materials change depending on the position of the observer. This also works for the 3D holograms!
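To make the head-coupled effect concrete, here is a minimal sketch of the general idea: track the viewer's face with a webcam and map its position to a virtual-camera offset that the renderer uses each frame. This is only an illustration of the technique under my own assumptions (OpenCV face detection, hypothetical `head_offset` and `camera_position` helpers), not the actual implementation described in the post.

```python
# Sketch of head-coupled perspective from a webcam (hypothetical names;
# illustrates the general technique, not the authors' software).
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def head_offset(frame):
    """Return the viewer's head position as (x, y) in [-1, 1], or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # track the largest face
    cx, cy = x + w / 2.0, y + h / 2.0
    fh, fw = gray.shape
    return 2.0 * cx / fw - 1.0, 2.0 * cy / fh - 1.0

def camera_position(offset, eye_range=0.3, distance=2.0):
    """Map the normalized head offset to a virtual-camera position so the
    scene appears to have depth behind the screen (assumed mapping)."""
    ox, oy = offset
    return np.array([-ox * eye_range, oy * eye_range, distance])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    offset = head_offset(frame)
    if offset is not None:
        eye = camera_position(offset)
        # A renderer would rebuild its view/projection from `eye` here, so
        # shadows and highlights shift with the observer, as described above.
        print("virtual camera at", eye)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```

In a real system the raw face position would be smoothed and converted into an off-axis projection rather than a simple look-at offset, but the loop above captures the core idea of coupling the render camera to the observer's head.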