Multi-Plane Stereoscopic 3D (mpS3D) VR/AR near-eye display architecture


  • PRODUCT SERIES: AR/VR near-eye display
  • PRODUCT NAME: White paper
  • RELEASE DATE: November 2018
  • STATUS: p2001 concept prototype
  • CATEGORY: Head-mounted mpS3D near-eye display

From S3D television to S3D VR headsets

A problem known as the vergence-accommodation conflict (VAC) remains unsolved; it limits comfortable viewing time and can cause adverse effects – most notably eye fatigue, vision disorders, blurred vision and difficulty concentrating on a task. This is highly undesirable for professional use of stereoscopic display technology, as well as for the consumer and entertainment segments, which require a safe and widely accepted 3D technology with high user-satisfaction rates.

In the human visual system, which is binocular, accommodation and eye-convergence angle are naturally linked depth cues. When stereoscopic 3D imagery is displayed on a single focal depth plane – the case for most VR/AR display products – eye convergence is free to change, while accommodation stays fixed on the only available focal plane (otherwise the image would be out of focus).
Consequently, these depth cues become decoupled and contradictory, causing the VAC and its attendant disadvantages. Typical products realizing such an optical architecture are stereoscopic 3D TVs and their more recent derivative – single-focal-plane VR/AR headsets (near-eye displays).

Optical principles of the multi-plane stereoscopic 3D (mpS3D) concept

From a scientific point of view, the multi-plane stereoscopic 3D (mpS3D) display concept has previously been investigated from different angles, using theoretical approaches and model systems in a laboratory environment. Experiments have been performed with moving (sweeping) displays, variable-focus lenses, electrically controllable optical path extenders, focal-surface generation and other techniques.

Nevertheless, for practical implementation in a head-mounted display, currently available technology has not yielded a satisfactory result capable of gaining wide acceptance. Studies by Rolland, Akeley, MacKenzie and others have shown that the accommodation of the human visual system can be driven continuously or pseudo-continuously with a quite limited number of focal planes, without obvious degradation in image quality.
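A small numerical sketch can make the "limited number of focal planes" idea concrete. The helper below is purely illustrative (the function name and the uniform-dioptric-spacing rule are assumptions, not LightSpace's design data); it places N focal planes evenly in diopters (1/distance), a spacing often used because the eye's accommodation tolerance is roughly constant in dioptric units.

```python
# Hypothetical illustration: positions of N focal planes spaced uniformly
# in diopters (1/distance). Not taken from the mpS3D specification.

def focal_plane_distances(near_m, far_m, n_planes):
    """Return plane distances in metres, spaced uniformly in diopters."""
    near_d = 1.0 / near_m              # nearest plane, in diopters
    far_d = 1.0 / far_m                # farthest plane, in diopters
    step = (near_d - far_d) / (n_planes - 1)
    return [1.0 / (far_d + i * step) for i in range(n_planes)]

# Example: 6 planes covering 0.3 m to 20 m (the depth range quoted later
# in this paper). Note how the planes crowd toward the viewer, where
# accommodation is most sensitive.
for d in focal_plane_distances(0.3, 20.0, 6):
    print(f"{d:.2f} m")
```

With only six planes the far half of the range (1.4 m to 20 m) is covered by just two planes, which is consistent with the cited finding that few planes suffice.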

Key enabling technology and operation principle

To implement the multi-plane S3D near-eye display architecture, LightSpace Technologies has developed fast-switching optical diffusers of superior optical quality, which are used to assemble multi-plane volumetric 3D image screens. These diffusers switch between a low-haze, fully transparent state and a high-resolution image-quality diffusing state.
The transparent state is characterized by very low haze, which allows the diffusers to be assembled into multi-plane stacks. To design mpS3D near-eye displays sized for head-mounted VR/AR applications, the Company has scaled down its production technology for multi-plane diffuser stacks from its large volumetric 3D screens.

Multi-plane diffusers are fabricated as stacks of miniature LCD panels in various form factors – from 0.5″ to 3.5″ (diagonal) – with 4 to 6 active switchable diffusing layers. An individual diffusing layer switches between the transparent and the diffusing state in below 1 ms. A complete near-eye mpS3D system requires a fast image projector and a synchronized diffuser-layer driver. The optimal image source for a fast image projector today is a high-refresh-rate MEMS spatial light modulator (for example, a Texas Instruments DLP of suitable resolution). Typically the light engine is built around RGB LEDs, but when the VR or AR headset design foresees a holographic image waveguide or other holographic optical elements, the light source has to be reconfigured to use small RGB laser diodes or integrated laser modules. The image stream can be fed to the multi-plane S3D near-eye display via DisplayPort 1.2–1.4 or DisplayPort over USB-C, as for a complex display device. The whole imaging system operates as follows: the multi-plane screen driver sets Plane 1 to the diffusing state; after a small delay, the image projector outputs the first image depth plane, comprised of sequentially displayed RGB sub-frames.
After the last color sub-frame has been shown, the screen driver switches Plane 1 to transparent mode and repeats the same pattern for Plane 2. After completion of the last image plane, the sequencing procedure restarts from the beginning. With high-speed DLP modulators it is possible to achieve a 60 to 90 Hz refresh rate for an image complexity of 4 to 6 image planes. Optimally, a multi-plane S3D system requires two separate image projectors.
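The timing budget implied by this sequencing can be sketched with simple arithmetic. The helper below is a hypothetical back-of-the-envelope calculation, not driver code; it uses only figures from the text (60–90 Hz refresh, 4–6 planes, three RGB sub-frames per plane, a roughly 1 ms switching delay per plane).

```python
# Back-of-the-envelope timing budget for the plane-sequential scheme
# described above. Hypothetical helper; figures come from the white paper,
# the function itself is illustrative.

def subframe_budget_us(refresh_hz, n_planes, n_colors=3, switch_us=1000):
    """Time available per RGB sub-frame, in microseconds, after
    subtracting a ~1 ms diffuser switching delay for each plane."""
    frame_us = 1_000_000 / refresh_hz          # one full multi-plane frame
    per_plane_us = frame_us / n_planes - switch_us
    return per_plane_us / n_colors

# 60 Hz with 6 planes: each plane gets ~2.78 ms, minus ~1 ms switching,
# leaving roughly 0.59 ms per colour sub-frame.
print(round(subframe_budget_us(60, 6)))
```

Sub-millisecond colour sub-frames of this order are what motivates the choice of fast MEMS (DLP) modulators over conventional microdisplays.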

The multi-plane switchable diffuser is the key enabling technology that allows building a real-time, superior stereoscopic 3D near-eye display which does not create the vergence-accommodation conflict (VAC) and has several advantages over single fixed- or variable-focus-plane S3D near-eye displays.



| Characteristic | Most sold VR/AR headsets **) | LightSpace Technologies "mpS3D" |
| --- | --- | --- |
| General characteristics | | |
| Employed optical architecture (short name of visualization technology) | single-focus-plane stereoscopic 3D | multi-plane volumetric stereoscopic 3D screen and image projector |
| 3D image optical focus depth range, m | single focus plane fixed at a distance of 4 m to 8 m | multiple simultaneous focus planes over a range of 0.3 m to 20 m (infinity) |
| Binocular vision convergence depth cue | present | present |
| Monocular accommodation depth cue | fixed at the single focus plane | present |
| Optical 3D depth perception characteristics | | |
| Accommodation-vergence conflict | creates visual conflict – eye fatigue, sickness | does not create conflict |
| Optical 3D depth observable by a single eye | no | yes |
| Image quality characteristics | | |
| X-, Y-plane resolution | up to 2 MPix per eye | up to 2 MPix per eye |
| Number of employed image planes | 1 | 4 to 6 |
| Overall image resolution over X, Y, Z | up to 2 MPix per eye | up to 12 MPix per eye |
| Pixel density of an image, ppi | up to 600, defined by the 2D display | up to 1000, scalable by projection ratio to achieve an optimal density/FOV relationship |
| Comfortable viewing maximum period | 30 min., limited by eye fatigue *) | not limited by eye fatigue |

*) According to Samsung: "…people can't stay inside Gear VR comfortably for more than 30 minutes …. the lenses cause too much fatigue to the eyes…"
**) Microsoft HoloLens; HTC Vive Pro; Oculus Rift and Go; Sony PlayStation VR; Samsung Gear VR; Lenovo Mirage Solo; Google Daydream and others

Optical architecture of an mpS3D near-eye head-mounted display for use in a VR headset

An mpS3D-based VR headset consists of very few components:

  • a multi-plane image-stream spatial demultiplexer (diffuser stack)
  • an eyepiece that optically expands the multi-plane image screen; it requires precise placement to project the virtual image planes at the required distances, which must be adjusted to the particular user's eye requirements
  • an image projector placed above the multi-plane screen, with a folding-mirror system to forward the projected image into the image-stream spatial demultiplexer (multi-plane diffuser device)

Birdbath architecture of an mpS3D near-eye head-mounted display for use in an AR headset

An mpS3D-based AR headset with a birdbath optical architecture consists of very few components:

  • a multi-plane image-stream spatial demultiplexer (diffuser stack)
  • an image projector placed above the multi-plane demultiplexer
  • a birdbath optical combiner that optically expands the multi-plane image stack; it requires precise placement to project the virtual image planes at the required distances, which must be adjusted to the particular user's eye requirements
  • an optional additional lens to adjust the optical distances to the user's eye requirements – not shown in the drawing

Holographic-waveguide-based architecture of an mpS3D near-eye head-mounted display for use in an AR headset

A holographic-waveguide-based mpS3D AR headset is a more complex assembly, but it allows a flat image combiner:

  • a multi-plane image-stream spatial demultiplexer (diffuser stack)
  • an optical element that optically expands the multi-plane image stack; it requires precise placement to project the virtual image planes at the required distances, which must be adjusted to the particular user's eye requirements
  • an image projector placed above the multi-plane screen
  • an RGB laser light source
  • a holographic surface- or volume-grating waveguide image combiner for RGB
  • an optional protective glass to block bright outside light from reaching the RGB gratings

Rendering software and performance aspects of the mpS3D near-eye head-mounted display

Image rendering for the multi-plane stereoscopic architecture is a two-pass process, in contrast to conventional stereoscopic displays. The first pass is identical to conventional methods, while the second pass attributes the data rendered in the first pass to their respective image depth planes – in other words, the second rendering pass is the "depth slicing".

In contrast to multi-view 3D display technologies (light-field and holographic), in which the processing load scales greatly with the number of views to be produced, the multi-plane architecture additionally requires only the depth data (depth map) of the scene, which for computer-generated content is readily available. Thus, the processing load of the second rendering pass responsible for the "depth slicing" is virtually negligible compared to conventional stereoscopic rendering and makes up a tiny part of the whole rendering process.
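The depth-slicing pass can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions: the function name, the list-based pixel representation, and the nearest-plane-in-diopters assignment rule are all hypothetical; a real renderer would perform this per-fragment on the GPU, typically blending between adjacent planes (the depth anti-aliasing mentioned below).

```python
# Minimal "depth slicing" sketch: assign each pixel of a rendered view
# to one of N focal planes using its depth-map value (in metres).
# Illustrative only; a production renderer does this on the GPU.

def slice_to_planes(depth_map, plane_depths_m):
    """Return, for each pixel depth, the index of the closest focal
    plane measured in diopters (1/distance), matching how the eye
    focuses rather than using linear metric distance."""
    plane_diopters = [1.0 / d for d in plane_depths_m]
    indices = []
    for depth in depth_map:
        diopter = 1.0 / depth
        # pick the plane with the smallest dioptric error
        idx = min(range(len(plane_diopters)),
                  key=lambda i: abs(plane_diopters[i] - diopter))
        indices.append(idx)
    return indices

planes = [0.3, 0.5, 1.0, 2.0, 5.0, 20.0]   # example 6-plane stack
print(slice_to_planes([0.35, 1.1, 18.0], planes))   # -> [0, 2, 5]
```

Because the pass is a single lookup per pixel against a handful of plane depths, its cost is tiny next to the geometry and shading work of the first pass, which is the point made above.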

A simplified evaluation using OpenGL, shown in the graphs, demonstrates the processing-performance differences between conventional stereoscopic rendering and multi-plane stereoscopic rendering with six depth planes. As can be seen, the performance differences are minimal even for a very simple 3D scene. Moreover, when additional processing in the form of depth anti-aliasing is enabled, the rendering performance remains comparable to basic stereoscopic rendering.