
How Different Is 3D?

November 11th, 2010 | No Comments | Posted in 3D Courses, Schubin Cafe

When you watch a televised advertisement for an automobile, do you fear there’s a moving car in the room with you? I didn’t think so. But more on that later.

This post is about human perception of 3D imagery. It’s also about how we see moving images in general and about color, sound, carsickness, and the idea of smashing open a TV set with a hammer to allow the tiny people inside to be seen more clearly.

That last suggestion probably first appeared in 1961 in an age-inappropriate alphabet tome called Uncle Shelby’s ABZ Book, written by Shel Silverstein. In it, T was for TV. The book indicated that small performing elves lived inside the television set and an adventurous child reader using a hammer to break open the tube “will see the funny little elves.”

That same year, Colin M. Turnbull of the American Museum of Natural History published “Some observations regarding the experiences and behavior of the BaMbuti Pygmies” in the American Journal of Psychology. One of the observations seems related to those little elves in the television set.

“As we turned to get back in the car, Kenge looked over the plains and down to where a herd of about a hundred buffalo were grazing some miles away. He asked me what kind of insects they were, and I told him they were buffalo, twice as big as the forest buffalo known to him. He laughed loudly and told me not to tell such stupid stories and asked me again what kind of insects they were. He then talked to himself, for want of more intelligent company, and tried to liken the buffalo to the various beetles and ants with which he was familiar.”

Those of us who grew up with television and open spaces might find both stories equally ludicrous. We know the people we see on a TV screen are full size (and don’t live inside the television set) and so are distant animals. But why do we know that?

Based on the angles their images form on our retinas, we should think the people we see on a small TV screen are tiny. We don’t only because we’ve learned what TV is. Kenge, a life-long forest dweller, had never been exposed to distant vision, so he’d never learned how small things might look when viewed from far away.
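The size-constancy trick Kenge never learned can be put in numbers. The visual angle an object subtends shrinks with distance, and a quick sketch shows just how small a televised person “should” look. The heights and distances here are my own illustrative choices, not measurements from the studies above:

```python
import math

def visual_angle_deg(height_m: float, distance_m: float) -> float:
    """Visual angle subtended by an object of a given height at a given distance."""
    return math.degrees(2 * math.atan(height_m / (2 * distance_m)))

# A 1.8 m person standing 2 m away from the viewer...
live = visual_angle_deg(1.8, 2.0)
# ...versus a 0.15 m image of that person on a small TV screen at the same distance.
on_screen = visual_angle_deg(0.15, 2.0)

# The live person subtends roughly ten times the angle of the televised one.
print(f"live: {live:.1f} deg, on screen: {on_screen:.1f} deg")
```

By retinal geometry alone, then, the televised person should register as elf-sized; only learning tells us otherwise.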

What does that have to do with 3D? Take a look at the diagram at the left. It was created by Professor Martin Banks of the Visual Space Perception Laboratory at the University of California, Berkeley. The vertical axis represents viewing distance from a movie or TV screen, the “accommodation” or eye’s-lens focusing distance. The horizontal axis represents the depth within a stereoscopic 3D image where something appears to be, the “vergence” or “convergence” distance, the distance to which the two eyes point (“vergence” is used because eyes can both converge and diverge).

The dark-colored area represents a comfortable viewing zone — a depth range where 3D viewing should not make viewers feel sick. The lighter-colored area represents a potentially uncomfortable “fusion” zone, where viewers can combine the two eye views into a single object or character, though they might not like doing so. Outside that zone, even fusing the two images into one can be a problem.

At viewing distances of at least 3.2 meters (easily achieved in cinema auditoriums; less common in homes), the comfort zone appears to extend to an infinite depth behind the screen, and only very close vergence depths are a problem. At shorter (home) viewing distances, significant depth either behind or in front of the screen can cause discomfort.

There’s an easy solution to the problem, one put forth in the white paper “3D in the Home.” It was previously available on the web site of the 3D company In-Three.

In accordance with the comfort zone plotted above, the In-Three white paper said depth could extend to an infinite distance behind the screen for movie-auditorium viewing, with restriction only for imagery extending in front of the screen. As shown in the diagram at right, however, for a home-theater viewing distance of six feet, the white paper suggested restricting depth behind the screen to just four feet and depth in front of the screen to less than two feet. That depth range, too, seems well within the vergence-accommodation comfort zone.
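Using the white paper’s figures for six-foot viewing, the suggested range amounts to a simple bounds check. This is a deliberately crude sketch; the real comfort zone varies continuously with viewing distance, as the Banks diagram shows:

```python
def within_suggested_depth(depth_ft: float,
                           max_behind_ft: float = 4.0,
                           max_in_front_ft: float = 2.0) -> bool:
    """True if an object's apparent depth falls inside the suggested range.

    depth_ft > 0 means behind the screen plane; depth_ft < 0 means in
    front of it.  Defaults are the six-foot-viewing figures quoted above.
    """
    return -max_in_front_ft < depth_ft <= max_behind_ft

print(within_suggested_depth(3.5))   # behind the screen, inside the range: True
print(within_suggested_depth(-2.5))  # too far in front of the screen: False
```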

It might be possible to restrict shooting to that depth range in a talking-heads-style public-affairs discussion. But that’s an extremely limited range.

It’s unlikely to be sufficient even for a variety or reality show, let alone for most sports. Two football players standing side-by-side perpendicularly to the camera might exceed the range all by themselves.

Another alternative, therefore, is to shoot the natural scene depth but adjust homologous points in the two eye views so that the depth presented on a home display does not stray beyond the comfort zone. Unfortunately, the shrunken depth might cause those football players to be perceived as being tiny, like the supposed buffalo insects or mythical TV-set elves.
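Adjusting homologous points amounts to scaling (and possibly shifting) each left-eye/right-eye horizontal disparity. The function below is my own simplification, using one global gain, whereas real systems warp disparities per region:

```python
def rescale_disparity(disparity_px: float, gain: float, shift_px: float = 0.0) -> float:
    """Compress on-screen depth by scaling a point-pair's horizontal disparity.

    A hypothetical, simplified model: one global gain shrinks the whole
    depth range toward the screen plane, which is exactly the operation
    that can make full-sized football players look tiny.
    """
    return disparity_px * gain + shift_px

# A 30-pixel disparity compressed to fit a small-screen comfort zone:
print(rescale_disparity(30.0, 0.4))
```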

Professor Banks is well qualified to discuss discomfort associated with viewing stereoscopic imagery. He designed an impeccable experiment that proved that a vergence-accommodation conflict could cause discomfort (one experimental subject even aborted the sequence due to extreme queasiness). At right a subject bites a bar to ensure accurate distance measurements. But Banks was by no means the first person to note the consequences of a vergence-accommodation (V-A) conflict.

The zone of comfort is often called Percival’s zone in honor of Archibald Percival, who published “The Relation of Convergence to Accommodation and Its Practical Bearing” in Ophthalmic Review in 1892 (and even in that paper, Percival attributed ideas to prior work published by Franciscus Donders in 1864). The reason eye doctors have been concerned about V-A conflict relates, in part, to eyeglasses. If you wear them, you might have noticed a queasy feeling when you put on your first pair or when there was a substantial change in the prescription. But that feeling probably faded as you became accustomed to the V-A conflict.

Another group that was interested in V-A conflict was the original National Television System Committee (NTSC), which began meeting in 1940, the year this off-screen photo was taken. WRGB was named in honor of Dr. Walter Ransom Gail Baker, the engineer who became the head of the NTSC (the initials also stand for white-red-green-blue color systems).

The first NTSC came up with the standard for American black-&-white television, but they were also concerned about color. One of their concerns was that simple lenses (like those in our eyes) cannot focus red and blue in the same place at the same time. The change in focus is a change in accommodation, potentially leading to a V-A conflict. In other words, color TV, in theory, could have made people sick.

In fact, the NTSC concluded that it wouldn’t, based on such work as a paper by Technicolor research director Leonard Troland published in the 1926 American Journal of Physiological Optics specifically related to color motion pictures and the V-A conflict.  But, even if color TV would have made viewers sick in 1926, would it always have done so?

Consider, for example, a short movie shot by the Lumiere brothers in 1895, L’arrivée d’un train en gare de La Ciotat (The Arrival of a Train at the Station of La Ciotat). The original looked a little better than what’s shown here, but it was black-&-white and silent. And it’s clear that the train is not heading straight towards the camera.

Nevertheless, here is a report (translated from the original French) from Henri de Parville, an audience member at an early screening. “One of my neighbors was so much captivated that she sprang to her feet… and waited until the car disappeared before she sat down again.” The same reaction was not reported from screenings of other movies, such as one of workers leaving the Lumiere factory. In other words, it seems as though the crude, silent, black-&-white movie made at least one audience member react as though there were a locomotive in the screening room.

About a quarter-century later, Thomas Edison conducted what he called “tone tests,” at which audience members were blindfolded or placed in a dark room and asked if they could tell the difference between a live opera singer and a mechanical phonograph recording of one. Here’s a contemporary account from the Pittsburgh Post in 1919 about a test conducted at a concert hall. “It did not seem difficult to determine in the dark when the singer sang and when she did not. The writer himself was pretty sure about it until the lights were turned on again and it was discovered that [the singer] was not on the stage at all and that the new Edison [phonograph] alone had been heard.”

It might seem ridiculous to readers today that a viewer could be scared by a silent, black-&-white movie of a train or that a listener couldn’t tell the difference between a live singer and a mechanical recording of one (in fairness, I should point out that one of the singers revealed, many years later, that she’d taught herself to sound like a phonograph recording). But that’s because we’ve learned to perceive the differences between those recordings and reality.

There are many examples of such perception education. You might have outgrown your childhood carsickness, for example, just as sailors get over seasickness.

In 3D, research into the amount of time it takes subjects to fuse stereoscopic images has found not only improvement with experience but also retention: subjects retested after a very long period of no exposure to stereoscopic images still fused them more rapidly.  3D perception, it seems, comes back, just like riding a bicycle. And some eye doctors specialize in training people with stereoscopic perception problems.

There are two pages of health warnings in the manuals of Samsung 3DTVs, and at least some of them may be very well justified by such issues as the vergence-accommodation conflict.  But that doesn’t mean viewers will always have problems watching 3DTV.


IBC 2010 – 2D, 3D, 4D, 5D

October 25th, 2010 | No Comments | Posted in 3D Courses, Schubin Cafe

There was plenty of 3D at the International Broadcasting Convention (IBC) in Amsterdam this year.  At the awards ceremony alone, the audience was frequently asked to don 3D glasses to see clips from the winners (before viewing a portion of the not-yet-released 3D movie Tron: Legacy).  But the first sentence of the first comment on the question “What did you see around at IBC2010?” posted on the LinkedIn Digital TV Professionals Group was “Lots of 3DTV Demos that nobody was looking at” (from Alticast senior vp Anthony Smith-Chaigneau), and two other group members quickly agreed.


In fact, some of the 3D demos were very much viewed, including the ones in Sony’s exhibit, based largely around their MPE200 processor.  Introduced at NAB in April, the MPE200 was then capable primarily of correcting stereoscopic camera-alignment errors, as shown above.  It has become so popular that one announcement of Sony 3D equipment sales at the show (to Presteigne Charter) included 13 MPE200 processors but only 10 HDC1500R cameras (with two required per 3D rig).

At IBC 2010, the MPE200 was joined in that correction function by Advanced 3D Systems’ The Stereographer’s Friend.  Whereas the MPE200 currently has a four-frame latency, The Stereographer’s Friend is said to do its corrections within just one frame and at lower cost.

Some stereoscopic camera rigs are said to be so precise that correction is not necessary.  Although some had seen it previously, 3ality’s small, relatively lightweight TS-5 rig (shown at left) was officially introduced at IBC 2010.  Zepar introduced an even-smaller stereoscopic lens system (shown at right) for a single camera, reducing the need for correction.  Such 3D-lens systems normally raise concerns of resolution and sensitivity loss, but Zepar’s is intended to be mounted on the Vision Research Phantom 65, which has plenty of each.

At IBC 2010, however, Sony’s MPE200 was no longer just a correction box; three more functions were introduced.  One is 2D-to-3D upconversion.  Sony was not alone in that area, either.  One new competitor is SterGen, an Israel-based company with a system intended specifically for sports.  According to their web site, they offer “better quality than real 3D shooting.”


Then there’s graphics insertion.  In that new function, the MPE200 was joined by Screen Subtitling’s 3DITOR, which analyzes not only the depth characteristics of the current frame but also the depth history.  Above, one of the depth-measurement tools is shown (based on an image from Wild Ocean, ©2010 Yes/No Productions Ltd and Giant Screen Films).  The company offers a white paper on the myriad issues of 3D text.

Another new MPE200 function is stitching, the ability to combine pictures from multiple cameras into one big panorama and then derive a stereoscopic camera image from a portion of the result.  The European research lab imec had shown a stereoscopic virtual camera at NAB in April (and brought it to IBC, too), and BBC R&D described a system even earlier.


Much of the interest in stitching at IBC 2010, however, was unrelated to 3D.  It was associated, instead, with the Hego OB1 system, which uses a package of six cameras in one location to create the panorama.  It won awards from Broadcast Engineering and TVBEurope magazines.  Certainly, the system uses interesting technology, but so does Sony’s MPE200.  Perhaps Hego’s winning the awards had something to do with how the OB1 was demonstrated, with bikini-clad beach-volleyball players on the IBC Beach, as shown above in a portion of a photo by Wes Plate.  That’s the camera array at the upper right.


In fact, there was plenty that was new at the show and not 3D.  But there was more 3D, of course.  In the area of distribution, Dolby pushed its version of 3D encoding and Sisvel brought a new “tile” format, shown above with an image from Maga Animation.  The left-eye view occupies a 1280 x 720 portion of the 1920 x 1080 frame, allowing it to be extracted for 2D HD viewing without necessarily changing existing decoders.
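Extracting the 2D view from such a tile frame is then just a crop.  A sketch with nested lists; placing the left-eye view at the top-left corner is my assumption, since the post only says it occupies a 1280 x 720 portion of the frame:

```python
def extract_2d_view(frame, width=1280, height=720):
    """Crop the left-eye view out of a 1920 x 1080 'tile' frame.

    frame is a list of rows; each row is a list of pixels.  The left-eye
    view is assumed (for illustration) to sit in the top-left corner.
    """
    return [row[:width] for row in frame[:height]]

# A dummy 1920 x 1080 frame whose pixels record their own (x, y) coordinates:
frame = [[(x, y) for x in range(1920)] for y in range(1080)]
view = extract_2d_view(frame)
print(len(view[0]), len(view))  # 1280 720
```

Because the crop touches only a rectangular region, a legacy 2D decoder need only be told which region to display.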

There were new 3D analyzers from Binocle (DisparityTagger) and Cel-Soft (Cel-Scope).  There was an iPhone/iPod app from Dashwood Cinema Solutions for stereoscopic calculations associated with Panasonic’s 3DA1 camcorder.  There was the Vision 3 camera with toe-in-free convergence that I wrote about just before the show.  There were glasses-free displays (one noting that its correct viewing distance was 4.4 meters).  There was a seven-camera 3D rig for capturing information for such displays.  There was a book about stereoscopic cinematography from 1905.  There was an eye-tracking 3D laser-display system.

That was just in the exhibits.  There were also plenty of 3D conference sessions.  IBC’s best-paper award went to a group from NDS for their paper “Does size matter? The challenges when scaling stereoscopic 3D content,” which showed that not only does apparent depth change with different screen sizes, but it also doesn’t scale.  And stereographer Kommer Kleijn punched holes in “religious” views of toe-in vs. parallel shooting in a presentation about stereoscopic shooting for people experienced in 2D.

Actually, in addition to 2D and 3D, IBC 2010 also had 4D.  It was in a small exhibit in a low-traffic hall.  The full title was Real-Sense 4D, from ETRI, the Korean Electronics and Telecommunications Research Institute.


As shown above, Real-Sense 4D involves more than just an image display.  I tried it out.  When the story involved a fire, I not only saw the flames and heard them crackling but also felt the heat and smelled the smoke.  During a segment on ice skating, I felt the air rushing past and then, in a moment out of Nancy Kerrigan’s career, felt a sudden WHAP! on my legs.


As at many recent professional equipment exhibitions, there was also 5D, specifically the Canon EOS 5D Mark II DSLR camera, capable of shooting HD.  But there was also something characterized by David Fox in the IBC Daily as “the HD DSLR Killer.”  It was Panasonic’s AG-AF100/101 (above), shown only in a display case at the NAB show earlier this year.  It combines the advantages of a large-size image sensor (Micro Four Thirds format) with the features of a video camcorder.  At IBC, there were many operating units, and the IBC press corps was wildly enthusiastic about them, much more so than about Panasonic’s 3DA1 camcorder.

Whereas at IBC 2009 some 25 new camera models were introduced, at IBC 2010, besides the V3i stereoscopic camera and the AF100/101, the main introductions were Canon’s XF100 and XF105 camcorders and IDT’s palm-sized, 2K, high-speed NR5.  There were also compact versions of NHK’s 8K Super Hi-Vision cameras from Hitachi and Ikegami.  But there were significant wide-angle lens introductions from Polecam (HRO69, at left) and Theia (MY125) for 1/3-inch-format cameras, offering horizontal acceptance angles of 69 and 125 degrees, respectively.

There were also introductions in the camera-mount area, such as Bradley Engineering’s multi-axis Gyro 350 (similar looking to the older Axsys V14 but said to be lower in cost), Vinten’s encoding Vector 750i pan head, and SiSLive’s Halibut underwater track.  Vaddio’s Reveal wall-mounted camera systems are invisible until used.  Broadcast Solutions showed a tiny two-seat Smart car equipped as a five-camera studio.  That’s not merely the control equipment; the five cameras were mounted in the car.

Other acquisition-related introductions at IBC included a video whiteboard system from Vaddio that does not require a computer, a version of Sennheiser’s MKE-1 lavalier microphone in which every part, from cable to connector to windscreen, is paintable to precisely match costume color, and a wireless tally system from Brick House Video.  Capable of dealing with up to eight cameras, the Tally Ho! handles both on-air and preview/iso tally, and the charger for the tally modules doubles as the system transmitter.

If IBC 2010 wasn’t about new cameras, it did offer many introductions in storage and distribution.  There was, for example, AJA’s new, small, lightweight, camera-mountable Ki Pro Mini (left).  Then there was the even smaller and lighter Atomos Ninja (right), intended specifically for use with certain types of cameras.  And Cinedeck Extreme v. 2.0 allows direct use of Avid’s DNxHD codec.  Sonnet’s Qio MR brings the ability to play essentially all popular types of camcorder flash cards (including Panasonic’s P2 and Sony’s SxS) to Windows-based tower computers.

Then there were transportable systems, bigger than those above but still usable in the field.  One was the Globalstor Extremestor Transport.  Comparably sized but serving a very different function was Marvin (left), from Marvin Technologies.  It accepts almost any form of field recording and then, according to preselected options, automatically makes copies, including archival tape cartridges and DVD screening copies.

The tiny storage devices introduced at IBC 2010 were joined by tiny encoders for distribution.  The ViewCast Niagara 4100 was small, the TV1.EU miniCaster smaller, and the Teradek Cube smaller still.  Clear-Com’s HelixNet intercom won an award from TV Technology Europe.  It’s a digital intercom system using microphone-type cables like older analog systems (but also very much like Riedel’s already existing digital Performer series).


There was much more at IBC.  Cloud-based editing (an Internet Explorer screen from Quantel’s QTube is shown above), a new acoustic summing algorithm, a multi-touch video wall — and those were just some of the items in the public exhibits.  In private rooms, one could find such items as TiVo’s integration of YouTube and Sony’s 24-inch OLED and terabyte memory card.


Then there was uWand, an unusual remote control from Philips.  Like so many others, it uses infra-red signals.  Unlike those others, it receives those infra-red signals rather than emitting them, so a user can, for example, move an image from a TV screen to a digital picture frame, just by aiming the remote control.

IBC clearly isn’t just about broadcasting anymore.  For more of my take on IBC 2010, see the PowerPoint from the Schubin Cafe IBC review on October 12.


3D Camera: Something Different

September 9th, 2010 | No Comments | Posted in 3D Courses, Today's Special

I’ve been writing about 3D image capture for more than 35 years.  I’ve covered side-by-side and beam-splitter rigs, parallel and toed-in lenses, integrated cameras, single-lens stereo, integral imaging, 3D illusions, and holography.  But I’ve not — until now — covered anything like Frontniche’s VC-3100 HD, made by V3i.


It’s an integrated 3D camera being introduced at the International Broadcasting Convention in Amsterdam tomorrow.  It uses dual 3-chip 2/3-inch-format 2.2-megapixel CCD sensors and has dual 18 x 7.6 mm lenses with synchronized zoom, focus, and iris functions.  It has a 7-inch viewfinder.  It even has a tally light.  In other words, ignoring its 3D aspect, it’s like a typical broadcast HD camera (with a twin attached).

It is, however, a 3D camera unlike any other.  Its image sensors (the prism optical blocks with chips attached) move horizontally.

Frontniche makes many claims for the camera, which it calls “the world’s first all-in-one ortho-stereoscopic broadcast camera.”  Among them are a “maximum 3D effect distance” of 360 meters, more than enough to shoot one football goalpost from behind the other.  It’s also said to comply with Japan’s “Stereoscopic Image Safe Standard” law.

You can read more about it in the product brochure, from which these images were taken.


The brochure includes links to sites covering the theory of moving-sensor 3D and issues of sports shooting.  I plan to give the unit a good look at IBC.


What’s Next?

September 7th, 2010 | No Comments | Posted in 3D Courses, Schubin Cafe

These were some of the things that could be seen at Canon Expo at New York’s Javits Convention Center last week: ice skaters pirouetting without ice, people viewing someone dressed as the Statue of Liberty from the moving deck of a fake boat, a machine that can squirt out a printed-and-bound book on demand, and a hand-holdable x-ray system.  Those weren’t directly related to the future of our business.  But what about image sensors with 120 million pixels, others (sensor chips) larger than paperback books, and yet others with more colors than merely red, green, and blue?

[The photo above, by the way, like the others in this post from Canon Expo, was shot by Mark Forman and is used here with his permission (all other rights reserved).]

We can extrapolate from the past to make certain predictions.  It’s extremely likely, for example, that the sun will rise tomorrow (or, for those of a less-poetic bent, that the rotation of the Earth will cause…).  Otherwise, we can’t predict the future, but we’re often put in a position of having to do so:  Will this stock go up?  Will it rain during an outdoor wedding ceremony?  Will there be a better, less-expensive camera/computer/etc. after a purchase?

That last is usually as assured as a daily sunrise, but how quick and how great the improvement will be are hard to know.  For help, there are blogs like this, publications, conferences, and trade shows.

The Internationale Funkausstellung (IFA) in Berlin is an example of the latter.  It’s an international consumer electronics show.

At the latest IFA, among other stereoscopic 3D offerings (including 58-inch, CinemaScope-shaped, 21:9 glasses-based 3D), Philips spinoff Dimenco showed an auto-stereoscopic (no-glasses) 3D display.  Here’s a portion of a photo of it that appeared on TechRadar’s site.

This is by no means the first time Philips has ventured into no-glasses 3D, but this one is different.  Autostereoscopic displays usually involve a number of views, and the display resolution gets divided by them.  The more views, the larger the viewing sweet spot and the better the 3D but the lower the resolution.  The new display has five views horizontally and three vertically, but it starts with twice as much resolution as “full 1080-line HD” both horizontally and vertically, so the 3D images end up with a respectable 768 x 720 for each of 15 views.
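The arithmetic behind that per-view figure is straightforward:

```python
panel_w, panel_h = 2 * 1920, 2 * 1080   # twice "full 1080-line HD" in each direction
views_h, views_v = 5, 3                 # five views across, three up and down

# Each view gets an equal slice of the panel's pixels in each direction.
per_view_w = panel_w // views_h
per_view_h = panel_h // views_v

print(per_view_w, per_view_h)           # 768 720
print(views_h * views_v)                # 15 views
```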

Perhaps such glasses-free 3D leads to a greater sensation of immersion, but there are other ways to create (or increase) an immersive sensation.  Consider, for example, the CAVE (Cave Automatic Virtual Environment), a room with stereoscopic projections on at least three walls and the floor (sometimes all surfaces).  The photo here is of a CAVE at the University of Illinois in 2001 (it was developed there roughly 10 years earlier).  SGI brought a CAVE to the National Association of Broadcasters convention shortly after it was developed.

Visitors who wore ordinary 3D glasses saw ordinary 3D — boring.  Visitors who got to wear a special pair of 3D glasses that could track their head movements, however, even though they saw exactly the same 3D as the others, were transported into a virtual world responsive to their every movement.  Unfortunately, only one viewer at a time could get the immersive experience.

At Canon Expo, however, there was “mixed reality.”  It’s based on head-mounted displays using two prisms per eye.  One, a special “free-form prism,” delivers images from a small display to the eye.  The other passes “real-world” images from in front of the viewer to both the eye and a video camera that can tell what the viewer is looking at.

The result is definitely mixed reality, a combination of stereoscopic imagery with unprocessed vision, with the 3D virtual images conforming to objects and views in the “real world.”  Virtual images can even be mapped onto real-world surfaces, with the cameras in the headgear telling the processors how to warp the virtual images appropriately.  This photo shows a complex version of the headgear; other mixed-reality viewers at Canon Expo looked little different from some 3D glasses.  Canon’s “interactive mixed reality” brochure showed people wearing the headgear walking around and collaboratively discussing an object that doesn’t exist.

Another form of immersion involves capturing 360-degree images.  At left is the Immersive Media Dodeca® 2360 camera system, combining the images from 11 different cameras and lenses into a seamless panorama.  At Canon Expo, a 360-degree view was achieved with a single lens, a single imaging chip (8984 x 5792, with 3.2 μm pixel pitch) and a mirror shaped like a cross between a donut and a cone that is, in the words of one high-ranking Canon employee, “the single most-precise optical component the company makes.”  The whole package forms a roughly fist-sized bump.

Of course, immersiveness is only one visual sensation.  There are also sharpness and color.

If you work out the math on that Canon 360-degree image sensor, it comes to about 50 million pixels, which is considerably more than even NHK’s Super Hi-Vision (also known as ultra high-definition television, with four times the detail of 1920 x 1080 HDTV in both the horizontal and vertical directions).  Across the room from Canon’s 360-degree system, however, was their version of ultra-high resolution, with roughly eight times the detail of 1080-line HDTV in both directions.

Four Super Hi-Vision pictures could fit into one from this hyper-resolution sensor.  Canon says its resolution is comparable to the number of human optic nerves.
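The pixel counts are easy to check:

```python
hd = 1920 * 1080                  # 1080-line HDTV
shv = (4 * 1920) * (4 * 1080)     # Super Hi-Vision: four times HD in each direction
canon_360 = 8984 * 5792           # the 360-degree system's sensor, quoted above
hyper = (8 * 1920) * (8 * 1080)   # roughly eight times HD in each direction

print(round(canon_360 / 1e6))     # about 52 million pixels
print(round(shv / 1e6))           # about 33 million pixels
print(hyper // shv)               # 4 Super Hi-Vision pictures per hyper frame
```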

The full detail of the chip can currently be captured at only about 1.4 frames per second, but while it is shooting hyper-detailed stills, it can (if I interpreted the information provided correctly) simultaneously capture two full-motion full-detail HDTV streams within the image.  The system uses a one-of-a-kind lens, and it’s a work in progress.

The hyper-resolution image sensor had a roughly full-frame 35mm format (comparable to that in the Canon EOS 5D Mark II DSLR), already roughly four-and-a-half times taller than a 2/3-inch-format image sensor.  A few feet away was another new sensor that was larger — much larger.  It was made from a semiconductor wafer the size of a dinner plate, and the sensor itself was the size of an old 8-inch-square floppy disk — huge!

What do you get from such a huge sensor?  Extraordinary sensitivity and dynamic range.  One scene (said to have been shot at 60 frames per second with an aperture of f/6.8) showed stars in the sky as seen through a forest canopy — and it was easy to see that the leaves and needles of the trees were green.  In another scene, a woman walks in front of a table lamp, so she is backlit, but every detail and shade of gray of her front was clearly visible.

Canon Expo demonstrated advances in both immersiveness (aside from the 360-degree and mixed-reality systems, there was also the 9-meter dome projection shown at right) and in spatial sharpness (the hyper-resolution and giant image sensors, the latter because it can deliver more contrast ratio, which affects sharpness).  There are also temporal sharpness (high frame rate) and spatio-temporal sharpness, both of which affect our perceptions of sharpness.  I found no demonstrations of increased temporal or dynamic resolution at Canon Expo, but that doesn’t mean they’re not being developed.

The images at left are portions taken from BBC R&D White Paper number 169 on “High Frame-Rate Television,” published in September 2008.  The upper picture shows a toy train shot at the equivalent of 50 frames per second; the lower picture shows the same train at 300 fps.  Note that the stationary tracks and ties are equally sharp in both pictures, but the higher frame rate makes the moving train sharper in the lower picture.

As this post shows, there is immersiveness, and there is sharpness (both spatial and temporal).  Is there anything else that future imaging might bring?  How about advances in color?

Ever since its earliest days, color video has been based on three color primaries.  As this chromaticity diagram shows, however, human vision encompasses a curved space of colors, whereas any three primaries within that space define a triangle, excluding many colors.
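Whether a given chromaticity can be reproduced by three primaries is just a point-in-triangle test.  Here is a sketch using the Rec. 709 primaries; the chromaticity values are the standard published ones, chosen only for illustration:

```python
def _sign(p, a, b):
    """Signed area term: which side of line a-b the point p falls on."""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def inside_gamut(p, primaries):
    """True if chromaticity p = (x, y) lies inside the triangle of three primaries."""
    a, b, c = primaries
    d1, d2, d3 = _sign(p, a, b), _sign(p, b, c), _sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

# Rec. 709 red, green, and blue primaries in CIE (x, y) coordinates.
REC709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]

print(inside_gamut((0.3127, 0.3290), REC709))  # D65 white point: True
print(inside_gamut((0.075, 0.834), REC709))    # spectral green near 520 nm: False
```

Every point on the curved spectral locus outside that triangle is a real, visible color that the three primaries simply cannot make.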

At Canon Expo, one portion of the new-technologies section was devoted to hand-held displays that could be tilted back and forth to show the iridescence of butterfly wings and other natural phenomena.  The demonstration was meant to highlight not the displays but a multi-band camera that captures six color ranges instead of three.

Then there was the Tsuzuri Project exhibit at Canon Expo.  It was a gorgeous reproduction of an ancient Japanese screen.  Advanced digital technology was used to capture and reproduce the detail of the original, but then a master gold-leaf artist used his talents to complete the copy.

I look forward to future tools based on what I saw at Canon Expo as well as the BBC’s high frame-rate viewing, Immersive Media’s camera system, and even the Philips autostereoscopic display.  And I’m glad that human artists are still needed to use them.


Good News: I See 3D

August 30th, 2010 | No Comments | Posted in 3D Courses, Today's Special

It has been a good 3D month for me.  A local cinema has been running a series of classic 3D movies from the 1950s, including Alfred Hitchcock’s Dial M for Murder, which takes place almost entirely in one room of an apartment (okay, a London flat), an ideal distance range for natural stereoscopic 3D.  The master director was also sure to fill the depth with furniture and other props.

Then I saw Step Up 3D, a younger director’s much more recent masterful use of the medium.  I highly recommend Jon Chu’s feature for 3D viewing.

I’ve been working on a 3D research project; I heard from inventor Jimmie D. Songer, a pioneer of single-lens, single-camera 3D; and then, today, ISee3D.  That last wasn’t a grammatical error or a typo.  Perhaps I should have said I saw ISee3D.

They, too, have been working on single-lens, single-camera 3D.  They’ve even done it at high speed for slow motion, and they showed me 3D endoscope footage shot in the interior of a red pepper.  But the best part of the demo was the little camera in a corner of a conference room, shooting live motion full-color 3D (displayed live on an ordinary 3D screen at the opposite corner of the room).

That was one camera with one lens, picking up 3D.  You can read more about it (including a technology white paper and their base patent) on the ISee3D web site.

No, you can’t rush out and buy one just yet, but, if all goes well, attendees at next April’s NAB convention will see something developed even further.


Can You Fix It in Post?

August 22nd, 2010 | No Comments | Posted in 3D Courses, Schubin Cafe

Can you fix it in post?

The simple answer is: Yes.

Consider Avatar.  Not only was an entire world created and populated in computers, but even a human actor’s legs were atrophied in post.  So, yes, anything can be fixed in post — given enough time and money.  In the worst case, artists would simply “paint” photorealistic images, pixel by pixel and frame by frame.

It wasn’t always so, especially in electronic imagery.  Before “paint” systems, post was extremely limited.  Cuts, dissolves, wipes, and keys were possible, but, in the days of analog recorders, even those often degraded images.  “We’ll fix it in post” became a laughter-inducing cliché.

Today, not even counting “painting,” there are many real-time processes that can replace production activities.  Rather than having an image specialist controlling camera parameters during shooting, the raw signals from the sensors can be recorded, with a post-production colorist dealing with them.  Instead of optical filters in front of or behind the lens, post-production filters can achieve much the same effects.  Instead of worrying about large, stable camera mounts or optical image stabilizers, producers can turn to post-production image stabilization.

And then there’s 3D.


The images above are taken from the brochure for Sony’s MPE200 stereoscopic image processor.  They show some of the post-shooting corrections the system can accomplish.  At top left there is correction of inter-camera image center as a lens zooms, at top right correction of inter-camera rotation, and, at bottom, from left to right, correction of interaxial spacing, inter-camera elevation, and even inter-camera distance from the scene.

Let’s start with the interaxial-spacing adjustment.  It can move homologous points in the two eye views closer together or farther apart.  Unfortunately, that’s not the only difference between the two camera views.


The image at top above is what a single camera might see when shooting an edge of a cube or building.  Below it are the views of separated eyes.  The left eye (or left camera) sees more of the left side of the object; the right sees more of the right side.  Depending on the exact positioning, shooting distance, and object, one camera might even see things that the other doesn’t.  There’s no way that Sony’s MPE200 — or anyone else’s post-production processor — can know how to put things into the picture that weren’t there in the original.


Now consider some of the other corrections, like that of inter-camera rotation.  An HDTV frame is a 16:9 rectangle.  If one camera’s rectangle is rotated with respect to another’s, as shown above in the blue and red rectangles, the only way to get them to line up is to trim the content of each, as shown in the green rectangle.  That changes the original framing.

It’s not just a 3D problem.  With a stable mount or optical image stabilization, what the shooter sees is (not counting overscan or intentional changes in post) what the viewer sees.  With post-production image stabilization, it can be very different.

Have a look at the second (Mounts — the problem), third (Mounts — fixed in post), and fourth (Mounts — not exactly fixed) files available on this site’s download page.  They were provided by Aseem Agarwala of Adobe Systems, and they demonstrate the tremendous power of post-production image stabilization.

The first clip is an example of a horribly unstable image, shot with a handheld camera.  The second clip shows the post-processed result — so smooth that it appears to have been shot by an experienced crew with a camera mounted on a dolly on track.

The third clip, however, shows the original and the stabilized versions together.  There’s no question that the image has been marvelously stabilized, but the framing is so different that the second story of the building in the background disappears completely in the corrected version.

It’s not just framing.  If there’s any process that can be perfectly duplicated in post, it’s the adjustment of the color parameters of the signal produced by a camera’s image sensors.  As long as all of the information is recorded, it makes no difference from a technical standpoint whether the adjustments are made at the camera or in a colorist’s suite.  Unfortunately, there are standpoints other than technical.

Vari MG .35

Vari MG .75

The two pictures shown above are taken from the book Goodman’s Guide to the Panasonic Varicam by Robert Goodman (AMGMedia Publishers, 2004).  The upper picture has the master gamma set to .35; in the lower picture, it’s .75.

Neither is necessarily better, and neither is necessarily “right.”  They are simply different.

Vari DLV 500

Vari DLV 200

Above is another pair of images from the same book.  The upper one has dynamic level set to 500; the lower is at 200.  Notice that all of the detail of the collar is easily seen in the 500 version; on the other hand, the face seems desaturated.  Again, neither is necessarily good or right.  But there are major differences among these four pictures (the book includes even more image pairs, demonstrating other parameters).

If the adjustments were made in production, the director might have liked some characteristics of the image (say, the collar detail) but not others (say, the desaturated face) and changed things to compensate (different lighting, makeup, or clothing, for example).  In post, the video parameters can be changed at will, but the lighting, makeup, and clothing remain the same, unless, of course, pixel-by-pixel and frame-by-frame an artist (or, more likely, a team of artists) repaints the images as they might have been captured in the first place.

If you have enough money and time, you can do anything in post.  For the rest of us, it’s a good idea to try to achieve desired looks in production.


3DTV Today: One Step Back

July 26th, 2010 | 3 Comments | Posted in 3D Courses, Schubin Cafe

The Society of Motion Picture and Television Engineers (SMPTE) just completed an International Conference on Stereoscopic 3D for Media and Entertainment, what it calls “the only scientific gathering focused exclusively on 3D.”  Like many SMPTE conferences, it gazed into the future, with one presentation introducing “the need for additional image processing to get pixel-level geometry matching,” another discussing “Spatial Phase Imaging technology” that “can be made to work with any existing sensor-optics combination, making it amenable to a plethora of applications,” and yet another asking “What Is Holographic Television, and Will It Ever Be in My Living Room?”

By the accounts of those who attended the event, it was jam-packed with terrific information.  This post, however, is not about the future of 3DTV but about its present.  And it won’t even delve into issues of 3D vision.


The image above is a portion of a television image of the face of Sir James Wilson Vincent Savile, better known as the entertainer Jimmy Savile.  It is clearly a black-&-white image, and it comes from a fascinating web site about experiments to recover color from black-&-white video recordings.

The reason the color can be recovered is the same reason the image looks so bad.  When “compatible” color was created, it added a supposedly invisible color subcarrier to the video signal.  But it wasn’t invisible.  People who had been watching pristine images on their black-&-white TVs suddenly got extra dot and line patterns in their pictures.  Those with color TVs got less detail.

When sound was added to movies, something similar happened.  At the left is the four-units-wide by three-units-high (4:3, or 1.33:1, aspect-ratio) frame of silent movies.  When a soundtrack was added, it impinged on the frame, giving the picture a much squarer 1.16:1 aspect ratio.  The so-called Academy aperture shrank the picture (losing resolution) to return to a somewhat wider shape (1.375:1).

Now, in this age of digital (no more dot pattern) high-definition (widescreen with no resolution loss) TV, 3DTV sets are being sold.  They can deliver an added sensation of depth (assuming all goes well).  Do they cause anything to be lost as a result?

3D 1

The picture above is a portion of one eye’s view of a 3D pair of photos shot by Pete Fasciano for a presentation he gave on July 21 about stereoscopic 3D.  I’ve trimmed it to an HDTV-like 16:9 aspect ratio.  Had it been a TV show shot in HDTV, it might have looked like the above.

3D 2

Someday, we might have a form of 3DTV that will deliver left-eye and right-eye images separately, with full spatial and temporal resolution.  Today, what we have is the HDMI 1.4a standard.  It calls for the above for 1080i HDTV 3D.  The left- and right-eye views are placed side by side and squeezed into the HDTV frame.  Instead of 1080i HDTV’s 1920 pixels across, there are just 960 for each eye’s view.

For today’s 3DTVs that use active-shutter glasses, that’s the only resolution loss.  But some 3DTVs use passive glasses, with different eye views on different scanning lines.  If that’s the case, the side-by-side configuration drops the horizontal resolution from 1920 to 960, and the passive-glasses system drops the vertical from 1080 to 540.
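The side-by-side packing described above can be sketched in a few lines. This toy version simply drops every other column of each eye’s view (a crude stand-in for the resampling a real encoder would perform); the frame contents and names are assumptions for illustration only.

```python
def pack_side_by_side(left, right):
    """Squeeze two full-width eye views into one HDTV frame by halving
    each view's horizontal sample count and placing the halves side by side."""
    half = lambda row: row[::2]          # 1920 samples -> 960 per eye
    return [half(l) + half(r) for l, r in zip(left, right)]

# Hypothetical 1080-line, 1920-sample-per-line eye views:
left  = [["L"] * 1920 for _ in range(1080)]
right = [["R"] * 1920 for _ in range(1080)]
frame = pack_side_by_side(left, right)

print(len(frame), len(frame[0]))  # still a 1080 x 1920 frame overall...
print(frame[0].count("L"))        # ...but only 960 samples per eye's view
```

On a passive-glasses display, alternate lines of this packed frame then go to different eyes, so the vertical resolution per eye drops from 1080 to 540 on top of the horizontal loss.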

3D 3

HDMI 1.4a calls for the above for 720p HDTV 3D.  In this case the left-eye image is placed above the right-eye image.  Of course, that drops the vertical resolution from 720 lines to 360.  Now let’s add closed captioning.

3D 4

Above is a simulated HDTV image with a closed caption.  Were this post about stereoscopic 3D and human vision, I might have pointed out that the caption is in the screen plane, so there will be a visual conflict between it and anything it is occluding that is set to come forward from the screen plane.  But this post is not about that.

3D 5

Above is the same closed caption, generated, perhaps, by a cable, satellite, or telephone-company set-top box.  The caption appears where the box, not knowing about 3D, thinks it should appear.

3D 6

Finally, when the captioned 1080i HDTV 3D signal from the set-top box enters the 3DTV, this is the result.  The caption is split in two, is elongated, and appears only half the time.  I’ve illustrated that last by making the caption appear transparent, but it might be a little worse than that.  The portion on the right will appear when the left-eye shutter of active glasses is open; the portion on the left will appear when the right-eye shutter is open.

From a technology standpoint, it’s relatively easy to design a system in which none of these problems will exist.  The presenters and attendees of the SMPTE International Conference on Stereoscopic 3D for Media and Entertainment might be doing that right now.  Unfortunately, 3DTVs are being sold today.


The Elephant in the Room: 3D at NAB 2010

April 30th, 2010 | No Comments | Posted in 3D Courses, Schubin Cafe
implicit range of 3D eyewear at NAB 2010


As I roamed the exhibits at the NAB show this month, I kept wondering what other year it seemed most like.  And I was not alone.

There were plenty of important issues covered at the show, from citizen journalism to internet-connected TV.  And then there was the elephant in the room.

It would be a lie to say that 3D technologies could be found at every booth on the show floor.  But it was probably the case that there was 3D in at least every aisle.  There was so much 3D that it tended to diminish all other news.

In acquisition technology, for example, LED lighting was nearly ubiquitous, with focusable instruments, such as the Litepanels Sola, sometimes painfully bright.  Panasonic and Sony both showed models of future inexpensive video cameras with large-format imagers, and Aaton joined the ranks of those offering “digital magazines” for film cameras.  In small formats, GoPro’s Hero is a complete HD camcorder weighing just three ounces.

In storage technology, Cache-A, For-A, IBM, and Sony all showed new offerings proving that tape is not dead.  Meanwhile, iVDR removable-hard-drive storage could be seen in several new products, and Canon introduced new camcorders based on CompactFlash cards.

Cinedeck looks like a viewfinder but includes built-in storage and editing capability. NextoDI’s NVS 2525 can copy either P2 or SxS cards.

In processing, Dan Carew’s Indie 2.0 blog said of Blackmagic Design’s DaVinci Resolve 7.0, “this best-in-class color correction software was formerly US$250,000 (for software and hardware) and is now available in a Mac software-only version for US$995.”  Immersive Media’s 11-camera spherical views can now be stitched and streamed live.  NewTek’s TriCaster TCXD850 can deal with 22 inputs and virtual sets.  And, though you might not yet be able to figure out why you’d want this capability, Snell’s Kahuna 360 production switcher can deal with up to 16 shows at once.

In wireless distribution, there was VµbIQ’s 60 GHz uncompressed transmitter on a chip and Streambox’s Avenir for bonding up to four cellular modems to create a 20 Mbps channel.  In wired, there was Pleora’s EtherCast palm-sized bidirectional ASI-IP gateways.  And, in technologies that could be applied to either, there were Fraunhofer’s codec with a latency of just one macroblock line and a Harris-LG/Zenith proposal for expanding ATSC mobile transmission to full-channel use.

In presentation, there was a reference picture monitor from Dolby (seen in almost its final form at the HPA Tech Retreat).  Several booths had OLED monitors, from 7-inch at Sony to 15-inch at TVLogic.  Wohler’s Presto router has an LCD video display on each button.  And Ostendo’s CDM43 is a curved monitor with a 30:9 aspect ratio.

That barely scratches the surface of the non-3D news from NAB.  And then there was 3D.

Even All-Mobile Video’s Epic 3D production truck, parked in Sony’s exhibit, wore 3D glasses.  But it was the glasses on visitors to the truck that proved more instructive.

Sony provided RealD circularly polarized glasses to visitors for looking at everything from relatively small monitors to a giant outdoor-type LED display.  As soon as those visitors entered the control room of AMV’s Epic 3D truck and donned their glasses, however, they saw ghosting — crosstalk between the two eye views.  AMV staff were prepared for the shocked looks.  “Sit down,” they said.  “There’s a narrow vertical angle, and you have to be head-on to the monitors.”  Sure enough, that solved the problem — at least for those who could sit.

Another potential 3D problem was mentioned in the two-day 3D Digital Cinema Summit before the show opened.  If 3D is shot for a small screen and blown up to cinema size, it can cause eye divergence.  3ality’s camera rigs indicate when this might happen, but it happened anyway on at least one cinema-sized screen at NAB, leading to some audience queasiness.
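The blow-up problem is straightforward to quantify: on-screen parallax grows in proportion to screen width, and once the uncrossed parallax of a background object exceeds the spacing of the viewer’s eyes, the eyes must diverge to fuse it. Here is a hedged sketch; the screen widths and parallax figure are illustrative assumptions, and the ~65 mm interpupillary distance is a typical adult value.

```python
IPD_M = 0.065  # typical adult interpupillary distance, about 65 mm

def scaled_parallax(parallax_m, small_width_m, big_width_m):
    """On-screen parallax grows in proportion to screen width."""
    return parallax_m * (big_width_m / small_width_m)

def diverges(parallax_m):
    """Uncrossed parallax wider than the eye spacing forces divergence."""
    return parallax_m > IPD_M

# 5 mm of background parallax, comfortable on a 1 m-wide monitor,
# blown up to a hypothetical 15 m-wide cinema screen:
p_small = 0.005
p_cinema = scaled_parallax(p_small, small_width_m=1.0, big_width_m=15.0)
print(diverges(p_small))   # fine on the monitor
print(diverges(p_cinema))  # 75 mm exceeds the eye spacing: divergence
```

This is why rigs like 3ality’s flag shots whose background parallax would become dangerous at cinema scale, even though the same material looks fine on a monitor.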

Buzz Hays of the Sony 3D Technology Center says making 3D is easy, but making good 3D is hard.  There was a lot of 3D at NAB, including both easy and hard, good and bad.

It was hard to count the number of side-by-side and beam-splitter dual-camera rigs at the show, but, in addition to those, there were integrated (one-piece) 3D cameras and camcorders, in various stages of readiness, from 17 different brands, both on and off the show floor.  It seems that all of them were said to be “the first.”


Much could be learned about 3D at the two-day Digital Cinema Summit before the show opened.  It began with Sony’s Pete Lude showing that an ordinary 2D picture can seem 3D when viewed with just one eye, leading a later speaker (me) to quip that watching with an eye patch, therefore, is an inexpensive way to get 3DTV.

3ality’s Steve Schklair followed Lude with an on-screen, live demonstration-tutorial on the effects of different 3D rig settings: height, rotation, lens interaxial, convergence, etc.  He was followed by directors, stereographers, and trainers of 3D-convergence operators, among others.

Although 3D would seem to require more equipment (two cameras and lenses plus a stereo rig at each location) and more personnel (a convergence operator per camera in addition to a stereographer), there is seemingly one saving grace.  According to Schklair and others, 3D can get away with fewer cameras and less cutting than 2D.

The same thing was said of HD, however, in its early days.  Sure enough, when I worked on one show in 1989, we used just four HD cameras feeding the HD truck and twice as many non-HD cameras feeding the non-HD truck.  In the early days, it was common practice to do separate HD and SD productions.  Today, of course, one HD production feeds all, and it typically uses as many cameras and as rapid cutting as an SD show.

Atop a tower of Fujinon’s NAB booth, Pace showed something that recognizes the current economics of 3D.  With virtually no 3DTV audience, it’s hard to justify separate 3D productions, but, with such major players as ESPN, DirecTV, Discovery, and Sky involved in 3D, the elephant cannot be ignored, either.  So the Pace Shadow system places a 3D rig atop the long lens of a typical 2D sports camera.  Furthermore, it interconnects the controls (in a variety of selectable ways) so that the operator of the 2D camera need not be concerned about shooting 3D: one camera position, one operator, different 2D and 3D outputs.

Screen Subtitling came up with similarly clever solutions to the problem of 3D graphics.  Unless text is closer to the viewer (in 3D depth) than the portion of the image that it is obscuring, it can be uncomfortable to read.

Traditionally, subtitles are at the bottom of a screen, where 3D objects are closest to the viewer.  Raise the graphics to the top, and they might work in the screen plane.

Then there’s the issue of putting the graphics on the screen.  With left- and right-eye views, it might seem that two keying systems are required.  But with much 3D being distributed in a side-by-side format, a single keyer can place 3D graphics directly into the side-by-side feed.

Screen Subtitling small

copyright 2010 Inition | Niche | Pacific

There was much more 3D at the show, in every field of video technology (and perhaps even audio).  In acquisition, for example, aside from integrated cameras, 3D mounts, and even individual cameras designed specifically for 3D (like Sony’s HDC-P1), there were also 3D lens adaptors, precision-matched lenses, precision lens controls, and even relay optics intended to allow wider cameras to be placed closer together, as in this picture shot by Eric Cheng.

At the other end of the 3D chain, there were both plasma and LCD autostereoscopic (no-glasses) displays using both lenticular and parallax-barrier technology, small OLED displays with active-shutter glasses, and giant LED screens with passive circularly polarized glasses.  There were LCD and plasma screens (up to 152-inch at Panasonic) and DLP rear-projectors using active-shutter glasses, and both LCD and laser projection using passive polarized glasses.

There were dual-panel displays with beam splitters, and displays intended to be viewed through long strips of fixed polarized materials (to accommodate all viewers’ heights).  There were many anaglyph displays in the three different primary-and-complement color combinations.  There were 3D viewfinders using glasses and others with displays for each eye.

Japan’s Burton showed a laser-plasma display that creates 3D images in mid-air.  Normally, they’re viewed through laser-protection goggles, as in the image at the right at the top of this post.  But as a safety measure, at NAB they were shown instead inside an amber tube.

In storage, it seems that everyone who had anything that could record images had a version that could do so in 3D.  Even Convergent Design’s tiny Nano was available in a 3D version.  The Abekas Mira is an eight-channel digital production server — or it’s a four-channel 3D digital production server.  Want an uncompressed 3D field recorder?  Keisoku Giken’s UDR-D100 was just one such product at the show.

In processing, just about every form of editing and processing had a 3D version.  Monogram showed a touch-screen 3D “truck-in-a-box” production system.  Belgium’s Imec research lab even showed licensable technology for stereoscopic virtual cameras.

There was a range of equipment and services for converting 2D to 3D either in real time or not, automatically and with human assistance.  And there was a large range of processing equipment designed to fix 3D problems, such as camera rotation and height variation.

Sony’s MPE200 is one such device, with a U.S. list price of $38,000.  The MPES3D01/01 software to run it, however, is another $22,500.  With the least-expensive 3D camera at the show (Minoru 3D) retailing for under $60, it might be said that 3D is cheap, but good 3D costs.

There was 3D test equipment from many manufacturers.  There was high-speed 3D (Antelope/Vision Research).  There was 3D coax (Belden 1694D, complete with anaglyph color coding).  Ryerson University is doing eye-tracking research on what viewers look at in 3D and whether it’s different from HD and 4K.

So why was I wondering what year it was?  At NAB shows there have been many technologies shown that never went anywhere.  We still await voice-recognition production switchers, for example, and also voice-recognition captioning.  But those have generally been shown by only one company or a small number of exhibitors.

Digital video effects were among the fastest technologies to penetrate the industry.  First shown at NAB in 1973, they were commonly seen in homes by the end of the decade.

Then there was HDTV.  Its penetration after NAB introduction took much longer, even if dated only from 1989, when an entire exhibition hall was devoted to the subject (there were many earlier NAB displays).  Estimates vary, but U.S. household penetration of HDTV 21 years later seems to be in the vicinity of half.

At least HDTV did eventually penetrate U.S. households.  Visitors to NAB conventions in the early 1980s could see aisle after aisle of exhibits claiming compatibility with one or both competing standards for teletext.  One standard was being broadcast on CBS and NBC; the other on TBS.  There were professional and consumer equipment manufacturers and services offering support.  Based on the quantity and diversity of promotion at NAB, it was hard to imagine that teletext would not take off in the U.S.

So, will 3DTV emulate digital effects, HDTV, U.S. teletext, or none of the above?  Time will tell.


How Old Is Your Stereographer?

April 12th, 2010 | 1 Comment | Posted in 3D Courses, Schubin Snacks

Speaking at the Sports Video Group Chairman’s Forum in Las Vegas Saturday night, Professor Martin Banks of the Visual Space Perception Laboratory at the University of California, Berkeley, raised an interesting issue regarding 3D comfort.  Stereographers (directors of 3D cinematography and videography) are responsible for, among other things, the visual comfort of the audience.  One factor in that comfort is vergence-accommodation conflict, the difference between the focal distance to the screen and the “distance” to which the eyes are pointing.

As people age, unfortunately, they become less able to focus at different distances, a normally occurring condition called “presbyopia.”  And that means that a stereographer with presbyopia can’t properly judge vergence-accommodation conflict.
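The conflict can be put in rough numbers.  The eyes accommodate (focus) at the screen, but they converge where the homologous points appear to be, a distance that follows from similar triangles; vision researchers commonly express the mismatch in diopters (reciprocal meters).  The sketch below uses assumed viewing figures for illustration, not anything from Banks’s talk.

```python
IPD = 0.065  # interpupillary distance in metres (typical assumed value)

def vergence_distance(screen_m, parallax_m):
    """Distance at which the eyes' lines of sight cross.  By similar
    triangles, positive (uncrossed) parallax pushes depth behind the screen."""
    return IPD * screen_m / (IPD - parallax_m)

def conflict_diopters(screen_m, parallax_m):
    """Accommodation stays at the screen; vergence goes to the 3D point."""
    return abs(1 / screen_m - 1 / vergence_distance(screen_m, parallax_m))

# Viewer 2 m from the screen, 10 mm of uncrossed on-screen parallax:
z = vergence_distance(2.0, 0.010)            # ~2.36 m, behind the screen
print(round(z, 2), round(conflict_diopters(2.0, 0.010), 3))
```

Note that as the parallax approaches the full interpupillary distance, the vergence distance runs off to infinity — the divergence limit — which is one reason a stereographer needs a reliable sense of these distances.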


3D Brings Science to Showbiz

April 5th, 2010 | No Comments | Posted in 3D Courses, Schubin Snacks

The Woods Hole Oceanographic Institution is respected worldwide as the largest non-profit ocean research, engineering, and education organization.  Now, through its Advanced Imaging and Visualization Laboratory, it’s also involved in for-hire 3D production and post, offering complete 3D rigs that weigh as little as four pounds as well as systems that will operate from 14,000 feet below sea level to outer space.

There’s more on the Woods Hole web site.
