
NAB 2013 Wrap Up at the SMPTE DC chapter, May 23, 2013

June 2nd, 2013 | No Comments | Posted in Download, Today's Special

Mark Schubin’s NAB 2013 wrap up.

Presented to the SMPTE DC chapter, May 23, 2013.

Video (TRT 40:02)


Introduction and Technology Year in Review (HPA, Feb. 20, 2013)

February 20th, 2013 | No Comments | Posted in Download, Today's Special

Introduction and Technology Year in Review

HPA, February 20, 2013

Video (17:46 TRT)


4K* from 40,000 Feet, CCW, 4K Acquisition: The Possibilities & Challenges (Nov. 15, 2012)

November 20th, 2012 | No Comments | Posted in Download, Today's Special

4K* from 40,000 Feet (Nov. 15, 2012)

4K Acquisition: The Possibilities & Challenges session
Content Creation World
New York, NY

Video (10:29 TRT)



August 31st, 2011 | No Comments | Posted in 3D Courses, Schubin Cafe


What should come after HDTV? There’s certainly a lot of buzz about 3D TV. Such directors as James Cameron and Douglas Trumbull are pushing for higher frame rates. Several manufacturers have introduced TVs with a 21:9 (“CinemaScope”) aspect ratio instead of HDTV’s 16:9. Some think we should increase dynamic range (the range from dark to light). Some think it should be a greater range of colors. Japan’s Super Hi-Vision offers 22.2-channel surround sound. And then there’s 4K.

In simple terms, 4K has approximately twice as much detail as HDTV in both the horizontal and vertical directions. If the orange rectangle above is HDTV, the blue one is roughly 4K. It’s called 4K because there are 4096 picture elements (pixels) per line.

This post will not get much more involved with what 4K is. The definition of 4096 pixels per line says nothing about capture or display.  Even at lower resolutions, some cameras use a complete image sensor for each primary color; others use some sort of color filtering on a single image sensor. At left is Colin Burnett’s depiction of the popular Bayer filter design. Clearly, if such a filtered image sensor were shooting another Bayer filter offset by one color element, the result would be nothing like the original.

Optical filtering and “demosaicking” algorithms can reduce color problems, but the filtering also reduces resolution. Some say a single color-filtered image sensor with 4096 pixels per line is 4K; others say it isn’t. That’s an argument for a different post.  This one is about why 4K might be considered useful.

An obvious answer is for more detail resolution. But maybe that’s not quite as obvious as it seems at first glance. The history of video technology certainly shows ever-increasing resolutions, from eight scanning lines per frame in the 1920s to HDTV’s….

As can be seen above, in 1935, a British Parliamentary Report declared that HDTV should have no fewer than 240 lines per frame. Today’s HDTV has 720 or 1080 “active” (picture-carrying) lines per frame, and 4K has a nominal 2160, but even ordinary 525-line (~480 active) TV was considered HDTV when it was first introduced.

Human visual acuity is often measured with a common Snellen eye chart, as shown at left above. On the line for “normal” vision (20/20 in the U.S., 6/6 in other parts of the world), each portion of the “optotype” character occupies one arcminute (1′, a sixtieth of a degree) of retinal angle, so there are 30 “cycles” of black and white lines per degree.

Bernard Lechner, a researcher at RCA Laboratories at the time, studied television viewing distances in the U.S. and determined they were about nine feet (Richard Jackson, a researcher at Philips Laboratories in the UK at the same time, came up with a similar three meters). As shown above, a 25-inch 4:3 TV screen provides just about a perfect match to “normal” vision’s 30 cycles per degree when “525-line” television is viewed at the Lechner Distance — roughly seven times the picture height.

HDTV should, under the same theory, be viewed from a smaller multiple of the screen height (h). For 1080 active lines, it should be 7.15 x 480/1080, or about 3.2h. Looked at another way, at a nine-foot viewing distance, the height should be about 34 inches, which for a 16:9 screen works out to a diagonal of nearly 69 inches, and, indeed, 60-inch (and larger) HDTV screens are not uncommon (and so are closer viewing distances).
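That arithmetic can be sketched in a few lines of Python. This is only a back-of-the-envelope calculation: the 7.15 multiple for roughly 480 active lines and the nine-foot Lechner Distance come from the text above, and the 16:9 diagonal line is my addition.

```python
import math

LECHNER_IN = 9 * 12    # the nine-foot Lechner Distance, in inches
SD_MULTIPLE = 7.15     # optimum viewing distance, in picture heights, for ~480 lines

def optimum_multiple(active_lines):
    """Viewing distance in picture heights, scaled from the 480-line case."""
    return SD_MULTIPLE * 480 / active_lines

def height_at_lechner(active_lines):
    """Screen height (inches) that matches 30 cycles/degree at nine feet."""
    return LECHNER_IN / optimum_multiple(active_lines)

for lines in (1080, 2160):
    h = height_at_lechner(lines)
    diag = h * math.hypot(16, 9) / 9    # 16:9 diagonal from the height
    print(f"{lines} lines: view at {optimum_multiple(lines):.1f}h, "
          f"height {h:.0f} in, diagonal {diag:.0f} in")
```

Running it reproduces the roughly 34-inch height for 1080 lines and the roughly 68-inch height for 2160 lines.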

For 4K (again, using the same theory), the screen height should be about 68 inches. Add a few inches for a screen bezel and stand, mount it on a table, and suddenly the viewer needs a minimum ceiling height of nine feet!

Of course, cinema auditoriums don’t have domestic ceiling heights. Above is an elevation of a typical old-style auditorium, courtesy of Warner Bros. Technical Operations. The scale is in picture heights. Back near the projection booth, standard-definition resolution seems adequate. Even in the fifth row, HD resolution seems adequate. Below, however, is a modern, stadium-seating cinema auditorium (courtesy of the same source).

This time, even a viewer with “normal” vision in the last row could see greater-than-HD detail, and 4K could well serve most of the auditorium. That’s one reason why there’s interest in 4K for cinema distribution.

Another is questions about that theory of “normal” vision. First of all, there are lines on the Snellen eye chart (which dates back to 1862) below the “normal” line, meaning some viewers can see more resolution.

Then there are the sharp lines of the optotypes. A wave cycle would have gently shaded transitions between white and black, which might make the optotype more difficult to identify on an eye chart. Adding in higher frequencies, as shown below, makes the edges sharper, and 4K offers higher frequencies than does HD.

Then there’s sharpness, which is different from resolution. Words that end in -ness (brightness, loudness, sharpness, etc.) tend to be human psychophysical sensations (psychological responses to physical stimuli) rather than simple machine-measurable characteristics (luminance, sound level, resolution, contrast, etc.). Another RCA Labs researcher, Otto Schade, showed that sharpness is proportional to the square of the area under a modulation-transfer function (MTF) curve, a curve plotting contrast ratio against resolution.

One of the factors affecting an MTF curve is the filtering inherent in sampling, as is done in imaging. An ideal filter might use a sine of x divided by x function, also called a SINC function. Above is a SINC function for an arbitrary image sensor and its filters. It might be called a 2K sensor, but the contrast ratio at 2K is zero, as shown by the red arrow at the left.

Above is the same SINC function. All that has changed is a doubling of the number of pixels (in each direction). Now the contrast ratio at 2K is 64%, a dramatic increase (again, as shown by the red arrow at the left). Of course, if the original sensor offered 64% at 2K, the improvement offered by 4K would be much less dramatic, a reason why the question of what 4K is is not trivial.
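As a numerical check on those two figures, here is the idealized sinc-filter case in Python. NumPy's np.sinc is the normalized sinc, sin(πx)/(πx); real sensor-plus-filter MTFs will differ, so this only reproduces the 0% versus roughly 64% comparison.

```python
import numpy as np

def mtf(f_cycles, sample_rate):
    """Contrast ratio of an ideal sinc filter at spatial frequency f_cycles."""
    return abs(np.sinc(f_cycles / sample_rate))  # np.sinc(x) = sin(pi*x)/(pi*x)

# A "2K" sensor evaluated at 2K detail: contrast falls to zero.
print(f"2K sensor at 2K: {mtf(2048, 2048):.0%}")
# Double the pixels in each direction (a "4K" sensor): contrast at 2K detail.
print(f"4K sensor at 2K: {mtf(2048, 4096):.0%}")
```

The doubled sensor puts 2K detail at half its sampling frequency, where the normalized sinc is 2/π, or about 64%.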

Then there’s 3D.  Some of the issues associated with 3D shooting relate to the use of two cameras with different image sensors and processing. One camera might deliver different gray scale, color, or even geometry from the other.

Above is an alternative, two HD images (one for each eye’s view) on a single 4K image sensor. A Zepar stereoscopic lens system on a Vision Research Phantom 65 camera serves that purpose. It’s even available for rent.

There are other reasons one might want to shoot HD-sized images on a 4K sensor. One is image stabilization. The solid orange rectangle above represents an HD image that has been jiggled out of its appropriate position, the lighter orange rectangle behind it with the dotted border. There are many image-stabilization systems available that can straighten out a subject in the center, but they do so by trimming away what doesn’t fit, resulting in the smaller, green rectangle. If a 4K sensor is used, however, the complete image can be stabilized.

It’s not just stabilization. An HD-sized image shot on a 4K sensor can be reframed in post production. The image can be moved left or right, up or down, rotated, or even zoomed out.
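For a rough sense of the headroom involved, assuming nominal 1920x1080 HD and 4096x2160 4K pixel counts (the exact figures vary by format):

```python
HD_W, HD_H = 1920, 1080        # nominal HD raster
FOURK_W, FOURK_H = 4096, 2160  # nominal 4K raster

margin_x = (FOURK_W - HD_W) // 2   # spare pixels on each side of the HD frame
margin_y = (FOURK_H - HD_H) // 2   # spare pixels above and below it

print(f"repositioning margin: +/-{margin_x} px horizontally, "
      f"+/-{margin_y} px vertically")
```

That is more than a thousand pixels of sideways slack and more than five hundred vertically, which is why stabilization and reframing without cropping into the picture become possible.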

So 4K offers much even to people not intending to display 4K. But it comes at a cost. Cameras and displays for 4K are more expensive, and an uncompressed 4K signal has more than four times as much data as HD. If the 1080p60 (1080 active lines, progressively scanned, at roughly 60 frames per second) version of HD uses 3G (three-gigabit-per-second) connections, 4K might require four of those.

When getting 4K to cinemas or homes, however, compression is likely to be used, and, as can be seen by the MTF curves, the highest-resolution portion of the image has the least contrast ratio. It has been suggested that, in real-world images, it might take as little as an extra 5% of data rate to encode the extra detail of 4K over HD.

So, is 4K the future? The aforementioned Super Hi-Vision is already effectively 8K, and it’s scheduled to be used in next year’s Olympic Games.


The Other Three Dimensions of 3DTV

March 14th, 2010 | No Comments | Posted in 3D Courses, Schubin Cafe

3DTV suggests that the three Ds are the dimensions of height, width, and depth.  But there are three other linear dimensions that might be worth considering: pupillary distance, viewing distance, and screen size.

Here’s a generic diagram of binocular vision, looking down.  The observer’s eyes are at either end of the short base of the triangle.  The distance between the centers of the pupils of those eyes is the pupillary distance (PD, or interpupillary distance or, sometimes, interocular distance), and the distance from the object being looked at to the eyes is the object distance (OD).  Each eye’s lens focuses (or tries to focus) on the object at the point of the triangle, a process called accommodation.  The two eyes also point to that object, a process called vergence or convergence.  And each eye sees a slightly different view, a process called stereopsis or disparity.

The diagram at the left might be called the plain geometry of 3D.  In typical 3D shooting, two cameras replace the eyes, and the PD is replaced by a distance between lens centers, which might be the same, larger, or smaller, depending on lens magnification and the desires of the stereographer.

If the lenses are separated more than is called for in normal vision, the result is something called hyperstereo, a sensation of viewing the scene through the eyes of a giant.  Distant objects that normally wouldn’t provide much of a sensation of stereopsis do, but everything seems to be closer to the observer.  Hypostereo is the opposite: The lenses are closer than called for in normal vision, objects lose their stereopsis sensation at closer distances, and the overall scene depth seems greater.

That’s shooting.  Now consider viewing.

Start with the PD.  It’s normally considered a fixed number, and, for almost any adult human, it is.  That is to say it is fixed for that one particular adult human.  It varies from person to person based on age, sex, ethnicity, location, and other factors.  That’s why your optician needs to measure your PD when you get a new pair of glasses, to ensure that the optical centers are where they’re supposed to be.

When I looked up “Interpupillary Distance” on Wikipedia recently, I got one range of PD figures and references; when I looked up “Pupillary Distance,” I got a somewhat different set.  A 2004 paper called “Variation and extrema of human interpupillary distance” recommends that the range to be considered for adults be 45 to 80 mm.  For children down to age five, the author recommends reducing the bottom end to 40 mm (and notes a 15-year-old female with a 43-mm PD).  Children younger than age two have even smaller PDs.  For the record, in September I was measured to have a PD of 68 mm.

The variation in PD poses a problem for a stereographer.  To provide the sort of stereopsis and convergence I get in real life, the only PD to be considered should be 68 mm.  But suppose something were shot and presented that way, and that 15-year-old female with the 43-mm PD were to view it.  For objects at an infinite distance, my eyes would point straight ahead, with no convergence at all, but her eyes would have to diverge, an unnatural condition.

Suppose, then, that the stereographer chooses the low end of the PD range, something good enough for even five-year-olds, 40 mm.  Then, when I view something at an infinite distance, instead of having my eyes point straight ahead, they’ll converge.

The diagram at the right shows that situation (but at an exaggerated scale).  Instead of an isosceles triangle, this time the geometric figure is an isosceles trapezoid (assuming I’m pointed directly at the screen, which is a different 3D issue).  In this case, the complete base is the viewer’s PD, VD is the viewing distance, and SID is the screen infinity distance, how far apart the two eye views are for the stereographer’s selected-observer PD.

At the left, I’ve extended the sides of the trapezoid to the original triangle to show where convergence-muscle feedback puts “infinity.”  Parallel lines are never supposed to meet, but the trapezoid sides aren’t parallel.

If you work out the math for an SID of 40 mm, my PD of 68 mm, and a viewing distance in a movie theater of 50 feet from the screen, the result is that the feedback that my convergence muscles send to my brain when looking at something that’s supposed to be at an infinite distance is that infinity is around 121 feet.
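Extending the trapezoid's sides until they meet puts "infinity" at VD × PD / (PD − SID), by similar triangles. A minimal Python sketch of that calculation (the function name is mine):

```python
def perceived_infinity(pd_mm, sid_mm, viewing_distance_ft):
    """Distance (ft) at which convergence feedback places "infinity":
    extend the trapezoid's sides until they meet."""
    return viewing_distance_ft * pd_mm / (pd_mm - sid_mm)

# 68-mm PD, 40-mm on-screen infinity separation, 50-ft cinema seat:
print(f"cinema: {perceived_infinity(68, 40, 50):.0f} ft")
# The same disparity viewed from nine feet at home:
print(f"home:   {perceived_infinity(68, 40, 9):.1f} ft")
```

The 50-foot seat gives about 121 feet, and the nine-foot seat just under 22 feet, matching the figures in the text.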

That might seem awfully close for a number you’re not supposed to be able to count to, but, in the grand scheme of vision, it’s not.  One medical web site dealing with vision issues defines “infinity,” in medical visual terms, as simply greater than 20 feet.

Different depth cues have different strengths at different distances.  Convergence and accommodation offer strong depth cues up close, but they become relatively insignificant as viewing distances approach that medical definition of infinity, greater than 20 feet.

Viewing distance is another of those under-considered dimensions of 3DTV.  In the example I gave above, I chose a viewing distance of 50 feet, not unusual for a cinema auditorium.  But 3DTV is viewed at closer distances.  If I were to change my viewing distance to the television-viewing Lechner Distance of nine feet, my convergence-based sensation of infinity drops to less than 22 feet — still beyond that medical definition, though I’d now be well within the range where convergence counts, leading to a stimulus conflict between stereopsis and convergence.  But there’s still that third dimension, screen size.

To this point, I haven’t mentioned the screen size, because it hasn’t mattered.  I’ve simply stated that the five-year-olds and I were watching whatever screen size the stereographer intended.  The left- and right-eye views for something at infinite distance are separated on the screen by the 40-mm bottom of the PD range.  But suppose the five-year-olds and I go to see a movie and then later bring home a 3D DVD or Blu-ray disk of the same content.

How large is the largest screen stereographers should consider?  Is it 100 feet?  If there’s a 40-mm SID on the screen, all of the numbers above still hold.  But, if the same material, unmodified, gets put on the consumer playback medium, the ratio between the largest intended screen and the home screen becomes important.

If the largest intended screen is 100 feet and the home screen is 32 inches, then 40 mm on the giant screen becomes just over 1 mm on the home screen.  If I were to sit nine feet away, my convergence-based sensation of something at infinity would put it at about nine feet away, the same as my viewing distance; it would hardly be behind the screen at all.  And it’s not just objects at “infinity” that matter.
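The screen-size scaling works the same way (this sketch repeats the trapezoid formula from earlier in the post; whether the 100-foot and 32-inch figures are widths or diagonals doesn't matter here, since only their ratio is used):

```python
def perceived_infinity(pd_mm, sid_mm, viewing_distance_ft):
    """Distance (ft) at which convergence feedback places "infinity"."""
    return viewing_distance_ft * pd_mm / (pd_mm - sid_mm)

BIG_SCREEN_IN = 100 * 12   # largest intended screen, 100 ft, in inches
HOME_SCREEN_IN = 32        # home screen, 32 in

# The 40-mm separation shrinks with the screen:
home_sid = 40 * HOME_SCREEN_IN / BIG_SCREEN_IN
print(f"home SID: {home_sid:.2f} mm")
print(f"'infinity': {perceived_infinity(68, home_sid, 9):.1f} ft")
```

The 40 mm becomes about 1.07 mm, and the perceived "infinity" lands at roughly 9.1 feet, barely behind the nine-foot viewing distance.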

At right is yet another diagram.  This time, suppose that there is a negative parallax on the screen matching the viewer’s PD.  Negative parallax indicates that the right-eye view is to the left of the left-eye view.  It’s easy to see from the diagram that the object appears to come out of the screen by half of the viewing distance.  But what is that distance?

Suppose what’s coming out of the screen is an arm, from shoulder to hand.  If the viewer is sitting four feet from a TV screen, the arm is a reasonable two feet long.  If the viewer is sitting 50 feet away from a cinema screen with the same negative parallax, the same arm becomes 25 feet long.
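The geometry in a short sketch: with crossed parallax p, the sight lines meet at VD × PD / (PD + p) from the viewer, which is exactly half the viewing distance when p equals the PD (the function name is mine):

```python
def apparent_distance(pd_mm, crossed_parallax_mm, viewing_distance_ft):
    """Distance (ft) from viewer to an object with crossed (negative) parallax."""
    return viewing_distance_ft * pd_mm / (pd_mm + crossed_parallax_mm)

for vd in (4, 50):   # TV seat vs. cinema seat
    d = apparent_distance(68, 68, vd)   # crossed parallax equal to a 68-mm PD
    print(f"at {vd} ft: hand appears {d:.0f} ft away, "
          f"arm spans {vd - d:.0f} ft out of the screen")
```

The same on-screen parallax yields a two-foot arm at four feet and a 25-foot arm at fifty: the pop-out scales with the seat, not with the content.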

Perhaps other visual scaling factors come into play in such situations.  After all, when we see a close up on a 2D movie screen, we don’t suddenly think the character is a gigantic monster.  But the multiple dimensions of 3DTV seem to complicate matters.

Some have proposed an alternative.  In one paper, it has been called “Just Enough Reality.”  In another, it’s called “Kinder Gentler Stereo.”

The technical term is microstereopsis.  The “bad” news about it is that it doesn’t exactly duplicate the top triangle in this post.  But, thanks to variation in human PD as well as varying viewing distances and screen sizes, it’s unlikely that any real-world 3DTV system will match that triangle.

The good news is that microstereopsis doesn’t necessarily require two lenses per 3D position.  Some of the 3D that Sony showed at the 2009 Consumer Electronics Show was shot with a single-lens microstereopsis camera (the one that drew much interest at the CEATEC show last year).  But an older system (shown in the 1970s by Digital Optical Technology Systems) didn’t require even a camera with dual image sensors.

Microstereopsis might turn out to be a bad idea — or it might be a good one.  Despite the 82 years since the first 3DTV broadcast, it’s still a young field.
