
StereoVR, reinventing 3D vision with anaglyph stereo vision

lovr lua opengl effects

We don’t see 3D anaglyph effects around much anymore, huh?

The concept is quite simple: to convey depth perception, we need to show different images for each eye, mimicking the normal experience of stereo vision. By printing these two images using separate colors, we can use traditional printing techniques and mediums to combine them into a single image, and then re-separate them at the last moment using colored glasses. These only need to filter colors, and can be made very cheaply with colored plastic and paper. If the images are captured correctly, the effect is quite noticeable, especially for scenes such as landscapes or architecture, where the depth perception truly adds a new dimension.
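
As a minimal sketch of the idea (my own illustration, not taken from any particular implementation): the crudest red-cyan anaglyph simply takes the red channel from the left view and the green and blue channels from the right one, so each lens filters out the other eye’s image.

-- Naive red-cyan anaglyph for a single pixel: red comes from the left
-- view, green and blue from the right view. The Dubois matrices used
-- later in this post are a carefully tuned refinement of this idea.
local function naiveAnaglyph(left, right)
    return { r = left.r, g = right.g, b = right.b }
end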

The technique is incredibly old, dating back to the 1850s, but my few memories of it are from old comic book ads or from some home releases of movies in the 00s (remember Spy Kids?).

The effect can also be emulated with a depth map of the image, reconstructing the two views from it. The result loses some detail, as occluded features and certain poses can be problematic, but it’s generally quite effective.
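
For intuition, here is a rough sketch of that reconstruction (hypothetical code, with made-up names): each pixel is shifted horizontally by a disparity that grows as depth shrinks, once per eye, and the holes left behind by occluded regions are exactly where detail is lost.

-- Hypothetical sketch: synthesize one eye's view from an image and its
-- depth map by shifting each pixel horizontally by its disparity.
-- direction is -1 or 1 (left or right eye), strength scales the effect.
local function synthesizeView(image, depth, direction, strength)
    local view = {}
    for y = 1, #image do
        view[y] = {}
        for x = 1, #image[y] do
            -- closer pixels (smaller depth) shift more
            local shift = math.floor(direction * strength / depth[y][x] + 0.5)
            local tx = x + shift
            if tx >= 1 and tx <= #image[y] then
                view[y][tx] = image[y][x]
            end
        end
    end
    -- nil holes remain where the shift uncovered hidden geometry; a real
    -- implementation has to inpaint them, losing detail
    return view
end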

Why stop doing it? It’s an easy, cheap and simple way to encode 3D images into any 2D medium! Well, it’s mostly because it’s pretty bad, actually.

While companies have worked on the issue, the most common solution of red-cyan glasses produces pretty poor results, with mediocre and non-uniform color rendering, while the underlying 2D image becomes oddly colored and harder to read. The printing and display steps are also imperfect, as are the glasses themselves (especially if you buy a 50-glasses pack for 11€ on AliExpress), and the resulting color leakage causes blurry images, confused depth cues, and headaches.

Moreover, better iterations were developed in the following decades, like the linearly and circularly polarized glasses used in theme parks, or the active shutter glasses used by expensive TVs and cinemas. Televisions and movie theaters kept at it for a long while, especially Sony in the PS3 era, but users and content makers never seemed to find ways to make the technology shine, leaving it feeling like a gimmick. These newer approaches also raised costs, required specialized screens, and were not applicable to print at all.

Beyond everything, the real issue is that 3D isn’t that great, as it only mimics the superficial aspect of depth perception.

So I decided to implement it in LOVR!

I thought this would be quick and easy, and it kinda wasn’t. I started thinking about it last year, got stuck, and abandoned it. A couple of months ago I updated the code to LOVR 0.17 and decided to finish it.

Building it

We need:

- two views of the scene, one per eye, rendered from slightly offset positions;
- a shader to combine them into a single red-cyan image.

The two views are achieved by creating a two-layer array texture:

function Stereo:init(mode, fov, ipd, focus_distance)
    ...
    self.views = 2
    self.width = lovr.system.getWindowWidth()
    self.height = lovr.system.getWindowHeight()
    -- Compute FOV on the X axis from the vertical FOV and the aspect ratio
    self.fovx = self.fovy * (self.width / self.height)
    -- Texture array on which the two views are rendered
    self.canvas = lovr.graphics.newTexture(self.width, self.height, self.views, {
        type = 'array',
        usage = { 'render', 'sample' },
        mipmaps = false
    })
    ...
end

And by rendering two separate views, slightly offset on the X axis. The 'render' pass type here means that these views are produced off-screen and saved to the self.canvas texture:

function Stereo:render(fn)
    -- Pass on which the results are rendered
    local pass = lovr.graphics.getPass('render', self.canvas)

    -- Compute the pose of each eye
    local offset = vec3(self.ipd * .5, 0, 0)
    pass:setViewPose(1, mat4(self.head):translate(-offset))
    pass:setViewPose(2, mat4(self.head):translate(offset))

    -- Use the same projection for both views
    local projection = mat4():fov(self.fovx, self.fovx, self.fovy, self.fovy, .01)
    pass:setProjection(1, projection)
    pass:setProjection(2, projection)

    -- Run the user's draw function, drawing the scene into both views at once
    fn(pass)

    return pass -- the pass is submitted later by the caller
end

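The pass returned here still has to be submitted every frame; a minimal sketch of how a frame could look (the cube and the call site are illustrative, since the library wires this up itself through :integrate()):

-- Illustrative frame step: render the scene into the two-layer canvas,
-- then hand the pass to the GPU.
local pass = Stereo:render(function(p)
    p:cube(0, 1.7, -2, .5) -- any scene drawing goes here
end)
lovr.graphics.submit(pass)
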
The combination is done later with a shader, so it doesn’t take up even more resources. The code is heavily inspired by ffmpeg’s implementation and by a relevant SO answer:

layout(set = 2, binding = 0) uniform texture2DArray canvas;

vec4 lovrmain() {
    // Matrix values from https://stackoverflow.com/questions/66404340/how-are-dubois-anaglyph-matrices-calculated
    vec3 RC_Dubois[3][2] =
    {{{0.45610004,  0.50048381,  0.17638087}, {-0.0434706,  -0.08793882, -0.00155529}},
    {{-0.04008216, -0.03782458, -0.01575895},  {0.37847603,  0.73363998, -0.01845032}},
    {{-0.01521607, -0.02059714, -0.00546856}, {-0.07215268, -0.11296065,  1.2263951}}};
    
    
    // Alternative integer version of the same matrix, with values scaled
    // to 0-65536; it would need rescaling before use:
    // {{{29891, 32800, 11559}, {-2849, -5763,  -102}}, // r
    //  {{-2627, -2479, -1033}, {24804, 48080, -1209}}, // g
    //  {{-997, -1350,  -358}, {-4729, -7403, 80373}}}; // b
    vec3 rEye = getPixel(canvas, UV, 0).rgb;
    vec3 lEye = getPixel(canvas, UV, 1).rgb;
    vec4 outColor = vec4(1);
    outColor.r = dot(lEye, RC_Dubois[0][0]) + dot(rEye, RC_Dubois[0][1]);  
    outColor.g = dot(lEye, RC_Dubois[1][0]) + dot(rEye, RC_Dubois[1][1]);  
    outColor.b = dot(lEye, RC_Dubois[2][0]) + dot(rEye, RC_Dubois[2][1]);  

    return Color * outColor;
}

Here the syntax is a bit more complex:

- lovrmain() is LOVR’s entry point for shader code, taking the place of main();
- UV and Color are built-in variables that LOVR provides to fragment shaders;
- getPixel(canvas, UV, layer) is a LOVR helper that samples the given layer of the texture array, one layer per eye;
- the layout(set = 2, binding = 0) declaration binds the canvas texture array so it can be sent from the Lua side.

The name Dubois refers to Eric Dubois, who optimized the color schemes and implemented the method in code in 2001. His website is still online!
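
For completeness, here is a hedged sketch of how the combination step could be wired up on the Lua side (the identifiers are illustrative, not the library’s actual names):

-- Pair the fragment shader above with LOVR's built-in 'fill' vertex
-- shader, then run it over the whole window to merge the two layers.
local combine = lovr.graphics.newShader('fill', 'anaglyph.frag')

function Stereo:combine(pass)
    pass:setShader(combine)
    pass:send('canvas', self.canvas) -- the two-layer texture array
    pass:fill()                      -- fullscreen pass running the shader
    pass:setShader()
end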

If a headset is present in PCVR mode, this is rendered separately from the normal view but with the same pose, allowing spectators to perceive the 3D effect without disturbing the VR user. If it’s used with the simulated headset, the effect is applied directly to the view. When running on an Android device, the entire process is skipped so as not to waste precious resources.
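
A rough sketch of that dispatch (the simulator check in particular is an assumption, not necessarily the library’s actual logic):

-- Decide how (and whether) the anaglyph pipeline should run.
function Stereo:getMode()
    if lovr.system.getOS() == 'Android' then
        return 'off'       -- standalone headset: skip to save resources
    elseif lovr.headset.getDriver() == 'simulator' then
        return 'view'      -- simulated headset: apply the effect to the view
    else
        return 'spectator' -- PCVR: render a separate anaglyph spectator view
    end
end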

Makes sense, right?

Almost

Set up like this, the eyes never converge. They’re staring straight ahead at infinity, and that breaks the magic. This took me months to figure out.

Add a little rotation to the mix, and the two views converge on a focus point:

    -- Slightly rotate viewpoints to focus at a distance
    local rotation_l = quat(-self.focus_angle, 0, 1, 0)
    local rotation_r = quat(self.focus_angle, 0, 1, 0)

    -- Compute the pose of each eye
    local offset = vec3(self.ipd * .5, 0, 0)
    pass:setViewPose(1, mat4(self.head):translate(-offset):rotate(rotation_l))
    pass:setViewPose(2, mat4(self.head):translate(offset):rotate(rotation_r))
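
The angle itself follows from a bit of trigonometry: each eye sits ipd/2 away from the center and has to aim at a point focus_distance straight ahead. A sketch of the calculation (the library may compute it differently):

-- Inward rotation needed for the two eyes to converge at focus_distance
self.focus_angle = math.atan(self.ipd * .5 / self.focus_distance)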

It looks a lot better! I wish I had a nicer environment to show it off, but I’m really not cut out for the art side of things.

Using it

A usable version of this project, as a self-contained library for LOVR, can be found on GitHub.

To use it, simply add the Stereo3D folder to your project, then add a few lines to your main.lua:

local Stereo = require("StereoVR.Stereo")

function lovr.load()
  Stereo:init('3d')
...
end

...
-- End of the code
Stereo:integrate()

The :integrate() call overrides the draw, load, and other callbacks as needed.

:init(mode, fov, ipd, focus_distance) has four parameters:

- mode: the rendering mode, e.g. '3d' for the anaglyph effect
- fov: the vertical field of view used for the two views
- ipd: the interpupillary distance, i.e. the separation between the two virtual eyes
- focus_distance: the distance at which the two views converge

All images courtesy of Wikimedia 1, 2, 3

