A funny thing happened to me at the Digital Cinema Summit this year. There was a 3ality tutorial on the basics of shooting 3D. When they demonstrated bad stuff, it looked bad to me. When they demonstrated good stuff, it looked good. But one time, when they demonstrated something that was supposed to look less than good, I thought it looked superb. This strange post is about that effect on me (and whether others might feel the same).
What 3ality demonstrated apparently had nothing to do with anaglyph (colored-glasses) 3D. There were no colored glasses involved and no colored filters on the stereoscopic camera rig.
For reasons that will soon be apparent, many who work in stereoscopic 3D don’t want either term (“stereoscopic” or “3D”) applied to anaglyph. But anaglyph is still being used, as can be seen by the glasses shown above from Comcast’s 3D offering of The Final Destination earlier this year.
There are strong reasons why anaglyph is still used in 2010. It works (to some extent) on any TV set. It passes through any color-video distribution system. And its glasses are inexpensive.
Any form of stereoscopic 3D requires a mechanism to ensure that the correct view gets to the correct eye. In anaglyph, that’s done with complementary color pairs, typically variations on red-cyan, green-magenta, and blue-yellow.
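As a rough sketch of how that encoding works (the function name and the use of NumPy are my own illustration, not anything from a real anaglyph pipeline), a red-cyan anaglyph frame can be composited from two RGB views in a few lines: the red channel carries the left-eye view, while green and blue (together, cyan) carry the right-eye view.

```python
import numpy as np

def red_cyan_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine left- and right-eye RGB views (H x W x 3 arrays)
    into a single red-cyan anaglyph frame.

    A red filter over the left eye passes only the red channel
    (the left view); a cyan filter over the right eye passes only
    green and blue (the right view).
    """
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]    # red   <- left-eye view
    anaglyph[..., 1] = right[..., 1]   # green <- right-eye view
    anaglyph[..., 2] = right[..., 2]   # blue  <- right-eye view
    return anaglyph
```

The Curtin study mentioned below is, in effect, about how well real glasses and real displays honor that clean channel separation; any red light leaking into the right eye (or cyan into the left) shows up as ghosting.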
A recent study of dozens of pairs of red-cyan anaglyph glasses of different types conducted at Curtin University of Technology found that the vast majority of them did a good job of passing the desired wavelengths of light and rejecting the undesired ones. Unfortunately, the displays they were used with, whether CRT, LCD, or DLP, direct-view or projection, did not match those filter characteristics, resulting in ghosting (visibility of the other eye’s view): http://cmst.curtin.edu.au/local/docs/pubs/2004-08.pdf
Ghosting is not the only complaint about anaglyph. There is also poor color rendition, eye rivalry (based on differing brightnesses), light loss, and a need for display adjustment to get even the limited quality it seems to offer.
Today, the term “stereo” seems most often used in association with sound, not pictures. That wasn’t always the case.
When Scientific American ran the diagram shown above in 1881, they said that the form of dual-microphone, dual-earpiece sound system depicted offered something called “binauricular audition,” but they noted its similarity (in the aural realm) to an older form of depth perception, that offered by the visual stereoscope. Today, of course, we know the aural system better as stereophonic sound or simply stereo.
Despite its origins in the 19th century, it took a long time for stereo sound to become widespread. In 1959, when ABC offered The Peter Tchaikovsky Story on TV with stereo sound, viewers needed two radios in addition to their TV sets to get the three audio feeds, as shown below.
Ensuring that viewers experienced something different for their efforts required the creation of a sensational, but unnatural, form of stereo. A sound might emerge from just the left speaker or just the right one. That bouncing back and forth between speakers became known as “ping-pong” stereo. It instantly conveyed a sensation of difference from single-speaker sound, even if it wasn’t natural.
Today, with stereo sound nearly ubiquitous (and surround sound rapidly catching up), listeners no longer need to be wooed by an abnormal sensation. Natural-sounding stereo is what’s desired.
Instead of using widely separated microphones, audio producers often use so-called single-source techniques, such as the two-microphone X-Y pickup shown above in Shure Notes on Stereo Miking Basics: http://www.shurenotes.com/issue25/article.asp. There are also single-piece stereo microphones and even single-source speaker systems, such as the ones from EmbracingSound shown below: http://www.embracingsound.com/
I hope it is not controversial to say that 3DTV presents some challenges. One is simply the currently low penetration of 3DTVs, which is why anaglyph is still sometimes used.
Whether for cinema or home, stereoscopic 3D normally requires two cameras and lenses per camera position, plus some kind of rig to hold them in position. In addition to usual shooting crews, there is typically a convergence operator for each pair of cameras and, perhaps, an overall stereographer. There are two eye views to be recorded, processed, and distributed.
Specific to 3DTV, there are issues of screen size and viewing distance. Above is one version of a diagram from Professor Martin Banks of the Visual Space Perception Laboratory at the University of California, Berkeley. It shows that audiences watching 3D in a cinema auditorium might suffer discomfort from the disconnect between eye focus (accommodation) and eye pointing (vergence) when something comes way out of the screen (upper left section of the diagram) but should otherwise be fine.
Home viewers, however (lower section of the chart), can have problems with both images that come out of the screen and those that go far behind the screen. An extended version of the chart appears to indicate that home viewers could be comfortable with great depths behind the screen if they sit at least 3.2 meters (10.5 feet) away from the screen, but Professor Banks says more experimentation is required to determine the exact comfort zone.
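To make that conflict concrete, here is a toy similar-triangles sketch (the 0.065 m eye separation and the dioptre measure are my own simplifying assumptions, not figures from Professor Banks): the eyes focus at the screen but converge where the depicted object appears to be, and the mismatch between the two shrinks as the viewer sits farther back.

```python
# Toy model of the vergence-accommodation mismatch.
# Disparity is measured in metres on the screen surface;
# positive disparity places the object behind the screen.

EYE_SEPARATION = 0.065  # metres; an assumed average interocular distance

def vergence_distance(viewing_distance: float, disparity: float) -> float:
    """Distance at which the two lines of sight cross.

    By similar triangles, d_v = e * D / (e - p), where e is the
    eye separation, D the viewing distance, and p the on-screen
    disparity (p < e for objects behind the screen).
    """
    return EYE_SEPARATION * viewing_distance / (EYE_SEPARATION - disparity)

def conflict_in_dioptres(viewing_distance: float, disparity: float) -> float:
    """Accommodation stays at the screen (1/D dioptres) while
    vergence follows the depicted depth (1/d_v); their difference
    is one simple measure of the conflict."""
    d_v = vergence_distance(viewing_distance, disparity)
    return abs(1.0 / viewing_distance - 1.0 / d_v)
```

In this toy model, zero disparity (the object at the screen plane) gives zero conflict, and the same disparity produces a smaller dioptre mismatch at a longer viewing distance, which is consistent with the chart’s suggestion that sitting farther back enlarges the comfort zone.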
To avoid the discomfort caused by that “vergence-accommodation” conflict, 3DTV images can be scaled down from cinema size. That can eliminate the discomfort, but it can introduce a different problem: scene miniaturization.
Because I do not know what stereoscopic 3D glasses you might have, I’ll demonstrate scene miniaturization using a different technique. Below is an image called “Elan Valley Miniature” uploaded to Flickr by Frosted Peppercorn: http://www.flickr.com/photos/frosted_peppercorn/481102393
It looks like a charming scale model, but it actually started as a photograph of a full-size building in a full-size valley. In this case, the “tilt-shift” technique was used to create the miniature effect, but scaled-down stereoscopic images can cause a similar sensation. Big, brawny football players can seem like tiny dolls when the stereoscopic depth is unnaturally shallow.
Other issues associated specifically with 3DTV are related to active-shutter glasses. They are currently relatively expensive. They can be confused by camera flashes, lightning, and other lighting. They have batteries that need replacement. And, sometimes, they still leak wrong-eye ghost images. Below, a ghost at left can be seen in an image shot through the lens of a pair of active-shutter glasses in a Gizmodo 3DTV review: http://gizmodo.com/5501900/the-best-3dtv-samsung-un55c7000-vs-panasonic-tc+p50vt20
Ghosts, scene miniaturization, glasses, and household penetration were not issues raised in two academic papers co-authored by Mel Siegel, senior research scientist at the Robotics Institute at Carnegie Mellon University. Instead, “simulator sickness” was of concern. “Kinder Gentler Stereo” was published in the Proceedings of the SPIE in May 1999: http://spie.org/x648.html?product_id=349388. “Just Enough Reality: Comfortable 3-D Viewing via Microstereopsis” appeared in August 1999: http://spie.org/x648.html?product_id=357617.
Both suggest that a reduction in the interaxial distance (the distance between the central lens axes of the two cameras in a stereoscopic rig) to near zero can deliver a 3D sensation without discomfort. Yoshihiko Kuroki and Tsuneo Hayashi of Sony’s Technology Development Group applied the principle to a demonstration camera, shown below.
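A back-of-the-envelope sketch of why a near-zero interaxial tames the depth (the formula is a standard thin-lens approximation; the function itself is my own illustration, not Sony’s or Dr. Siegel’s math): recorded parallax scales linearly with the interaxial separation, so shrinking it toward zero collapses the two views toward a single 2D image while a small residue of depth remains.

```python
def screen_parallax(interaxial: float, focal_length: float,
                    convergence_dist: float, object_dist: float) -> float:
    """Approximate horizontal parallax on the sensor for a
    stereoscopic rig, in the same units as focal_length and
    interaxial.

    Thin-lens approximation: p = f * b * (1/Zc - 1/Z), where b is
    the interaxial separation, Zc the convergence distance, and Z
    the object distance.  Parallax is proportional to b, so
    halving the interaxial halves the depth effect, and b -> 0
    yields an ordinary 2D image.
    """
    return focal_length * interaxial * (1.0 / convergence_dist
                                        - 1.0 / object_dist)
```

Objects at the convergence distance land at zero parallax regardless of the interaxial, which is why the microstereoscopic pictures remain watchable without glasses: the residual offsets everywhere else are tiny.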
Note that it has just one lens. Behind the lens, an optical system separates nearly (but not quite) identical left-eye and right-eye views to two cameras, as shown below.
The pictures were shown on a 3DTV display at the 2009 Consumer Electronics Show. Both of the displays shown below are 3DTVs, and both require glasses to display stereoscopic images. The right one has the typical double image that makes 3DTVs displaying 3D images incompatible with non-glasses viewing. The left one, however, fed a microstereoscopic signal, is fully 2D compatible and, therefore, cannot cause ghost images of the wrong-eye view.
That was the funny thing that happened to me at the Digital Cinema Summit. The demo rig used by 3ality was in a convention-center hallway not lit for shooting, so the light came from high-intensity ceiling lamps, as shown below in an image shot by Mark Forman: http://www.screeningroom.com/
Those high-intensity lights caused ghosting in many of the images I saw (while sitting in the front row, to the right of the screen at an acute angle) using the glasses and projection available at the event. When the 3ality rig was adjusted to minimal interaxial distance, I saw comfortable 3D images with no ghosting — a pleasure!
The Sony microstereoscopic demo rig, like the 3ality rig set to minimal interaxial distance, addresses some of the challenges of 3DTV. It eliminates visual discomfort caused by vergence-accommodation conflict and doesn’t replace it with scene miniaturization. It eliminates ghosting and allows 2D viewers to share displays with 3D viewers at the same time. It eliminates the need for a convergence operator (and, probably, a stereographer) and (Sony rig only) allows single lenses to be used per camera position. But it still requires two cameras per position; the storage, processing, and distribution of dual images; and the use of 3DTVs, with their associated active-shutter glasses issues. Is it possible to go one more step?
U.S. patent 3,712,199 was granted on Jan. 23, 1973 to Jimmie D. Songer, Jr. He’s also credited with the invention of “video-assist” systems for motion-picture film cameras. U.S. patent 4,290,675 was granted on Sep. 22, 1981 to Leo Beiser, one of the developers of the laser product-code scanner.
Songer suggested using anaglyph-type filters in the iris plane of a lens. Beiser suggested adding a horizontal-slit anaglyph-filtered iris (as shown below) to the existing iris, allowing the latter to be used to control depth of field and the former for 3D and light control.
Anything in focus passes through the clear center of the slit iris and is completely unaffected. Scene objects that are out of focus because they’re too close pick up one set of color fringes, but only within the defocus zone. Scene objects out of focus because they’re too far pick up the opposite fringes. The amount of fringe varies with range from the focus distance.
One camera picks up the images through one lens, the only modification being the slit iris. One video signal passes from the camera all the way to an unmodified color video display. Viewers without glasses see 2D, with a touch of color fringing in the out-of-focus areas.
As for glasses for 3D viewers, they’re anaglyph, but, because there is no ghosting, they do not need to use saturated-color filters. Light color tints are sufficient to direct the views but don’t seriously affect color rendition or cause eye rivalry.
At least that is my recollection from having seen the system demonstrated in 1979 in New York by Digital Optical Technology Systems (DOTS), a Netherlands company, at an exhibition called “New Advances in the Technology of Film.” The system was also broadcast on Channel Nine in Sydney, Australia, which, some 30 years later, is talking about a recent “first” terrestrial 3D broadcast.
If this is the most perfect 3D system, why isn’t anyone else talking about it? There are at least three possible reasons. One is that the DOTS system was, for many years, protected by those patents, now expired. Another is that my 31-year-old recollection might be faulty, and the gentle-anaglyph DOTS glasses might not be as gentle as my memory has them.
The last is that any type of microstereoscopic system provides less WOW! than other 3D systems. Even Avatar has sometimes been criticized as not being sufficiently 3D.
Something similar, of course, occurred in the early decades of stereo sound, but today we’ve moved from ping-pong to more-natural single-source techniques. Will 3DTV do the same? Should it? It’s something to ponder.