
The Schubin Talks: Introduction to Next-Generation-Imaging by Mark Schubin

August 11th, 2015 | No Comments | Posted in Download, Schubin Cafe, Today's Special


This series of video presentations by Mark Schubin is designed to help broadcast and media professionals better understand three key concepts that are changing the way content is created and delivered.

This introduction looks at the technical enhancements that can make video look better. It includes a brief overview of the three topics to be covered in the series:

The Schubin Talks: Introduction to Next-Generation-Imaging is presented by SVG, the Sports Video Group, advancing the creation, production and distribution of sports content, at

Direct Link (264 MB / TRT 22:56):
The Schubin Talks: Introduction to Next-Generation-Imaging



IBC-ing the Future

September 23rd, 2012 | 1 Comment | Posted in 3D Courses, Schubin Cafe

This is what happened on Monday night, September 10, at the International Broadcasting Convention (IBC) in Amsterdam: A crowd crammed into the large (1750-seat) auditorium to see the future–well, a future. They saw Hugo in stereoscopic 3D.

The movie, itself, is hardly futuristic. It was released in 2011, and it takes place almost a century ago.

So was it, perhaps, astoundingly, glasses-free? No. And it wasn’t the first 3D movie screened at IBC. It wasn’t even the first of IBC 2012; Prometheus was shown two nights earlier. But it was a special event. According to one participant who had previously seen Hugo stereoscopically, “It was awesome–like a different movie!”

The big deal? The perceived screen brightness was that of a well-projected 2D movie, perhaps four to five times greater than that of typical stereoscopic 3D movie projection.

It was said to be the world’s first laser-projected screening of a full-length movie (and in stereoscopic 3D), and it used an astoundingly bright, 63,000-lumen Christie Digital projector. Above left is a picture of Christie’s Dr. Don Shaw discussing it before the screening. You can read more about it in Christie’s press release here:

Can you buy that projector? Not today and maybe never. That’s why the audience saw only a possible future, but IBC has a pretty terrific track record of predicting the future. Today, for example, television is digital–whether via broadcast, cable, satellite, internet, or physical media–and virtual sets and virtual graphics are common; both digital TV and virtual video could be found at IBC in 1990, part of which is shown below.

As can be seen from Sony’s giant white “beach cabana” above, IBC had outgrown the convention facilities in Brighton, England, where it was located that year. The following convention (in 1992, because it was held every two years at the time) moved to Amsterdam’s RAI convention center, which has been adding new exhibit halls seemingly each year to try to keep up (there are now 14).

After the move, IBC became an annual event, show-goers could relax on a sand beach (left) and eat raw herring served by people in klederdracht (right), and finding the futures became easier. They were stuck into a Future Zone.

It’s not that everything new was put into the Future Zone. At IBC 2011, in a regular exhibit hall, Sony introduced its HDC-2500, one of the most-advanced CCD cameras ever made; at IBC 2012, Grass Valley introduced the LDX series, based on their CMOS Xensium sensor, perhaps one of the most-advanced lines of CMOS cameras ever made. And they’re supposed to be upgradable–someday–to the high-dynamic-range mode I showed in my coverage of IBC 2009 here:

ARRI makes cameras that, in theory, at least, compete with those Grass Valley and Sony ones. ARRI also makes lighting instruments. The future of lighting (and much of the present) seems to be LED-based. But ARRI demonstrated in its booth how some LED lights can produce wildly different looking colors on different cameras, including (labeled by brand) Grass Valley and Sony.  Their point was that ARRI’s new L7-T (tungsten color-temperature) LED lighting (left) looks pretty much the same on those different cameras.

Grass Valley’s LDX line drew crowds, but whether they came for the engineering of the cameras or to look at the leggy model in hot pants shouldering one was not entirely clear. IBC 2012 set a new official attendance record, but, with 14 exhibit halls (well, 13 plus one devoted to meeting rooms), not every exhibitor had crowds all the time, even if they were deserved. Consider Silentair’s mobile unit (right). It’s rated at a noise level of NR15 (comparable to the U.S. NC15); concert halls and recording studios reportedly often use the much looser NR25 rating. The unit comes in multiple colors and can be installed in about half an hour by unskilled labor.

Where might it be used? Perhaps in or near Dreamtek’s Broadcastpod (left). About the size of an old amusement-park photo booth (near left), it comes complete with HD camera, prompter glass (far left), media management, and behind-the-talent graphics. It’s also well lit, acoustically treated and ventilated.

There were many other delights in the exhibit halls, from a Boeing 737 simulator with real-time, near-photorealistic graphics 17,920 pixels wide from Professional Show (right) to theatrically presented 810-frame-per-second (fps) 4K images shot by the For-A FT-One. There were Geniatech’s USB-stick pay-TV tuner with replaceable security card (left) and the Fraunhofer Institut’s work on computational photography (like using the masked-pixels resolution-expansion system described at this year’s HPA Tech Retreat to increase dynamic range, too) and light-field capture for motion video.

The largest concentration of light-field equipment at the show could be found at the European Union’s 3D VIVANT project booth in the Future Zone. Hungary’s Holografika showed its Holovizio light-field display. In perhaps their most-amazing demo, they depicted five playing cards, edge-on to the screen. From the front, they looked like five vertical lines, but the photos above, shot at their IBC location (complete with extraneous reflections), show (not quite as well as being there) what they looked like from left or right.

One could walk from one edge of the display to the other, and the view was always appropriate to the angle and offered 1280 x 768 resolution to each eye at each location. Unfortunately, that meant that the whole display was close to 80 megapixels, and no camera system at IBC could provide matching pictures.
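The pixel budget can be checked with quick arithmetic; the implied view count below is my own back-of-envelope estimate, not a Holografika specification:

```python
# Rough arithmetic behind the "close to 80 megapixels" figure.
view_w, view_h = 1280, 768            # resolution offered to each eye per position
pixels_per_view = view_w * view_h     # 983,040 pixels
total_pixels = 80_000_000             # "close to 80 megapixels"
views = total_pixels / pixels_per_view
print(f"{pixels_per_view:,} pixels per view -> roughly {views:.0f} distinct views")
```

Dozens of simultaneous views is a plausible reading of those numbers, and no camera system at the show could feed anything like that.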

The top award-winning IBC conference paper was “Fully Automatic Conversion of Stereo to Multiview for Autostereoscopic Displays” by Christian Riechert and four other authors from Fraunhofer’s image processing department.  The process is shown at right.

Holografika showed some upconversions from stereoscopic views, but those didn’t fully utilize the capability of their display. In fact, none of the autostereoscopic displays at IBC could (in my opinion) match the glasses-required versions. One of the best-looking of the latter was a Sony 4K LCD using alternate-line polarization; with passive glasses, it offered 3840 x 1080 simultaneously to each eye.

Right behind Holografika in the 3D VIVANT booth, however, was Brunel University, and they had some camera systems that might, someday, properly stimulate something like the Holovizio display. At left is one of their holoscopic lens adaptors on an ARRI Alexa camera. The long tube is just for relaying the image, and, by the end of the show, they added a small, easily hand-holdable prototype without the relay tubes. The Brunel University area also featured a crude-resolution glasses-free display made from an LCD computer monitor and not much thicker than the unmodified original.

Across the aisle from the 3D VIVANT booth, DeCS Media was showing another way to capture 3D for autostereoscopic display with a single lens–that is, a single image-capturing lens (any lens on any camera) and a DeCS Media module to capture depth information (as shown at right). Even Fraunhofer’s Christian Riechert, in the Q&A session following the presentation of the award-winning paper, pointed out that, if separate depth information is available, the process of multi-view generation is simplified. DeCS Media says their process works live (though disocclusion would require additional processing).

There was something else of interest  in the 3D VIVANT booth: the Institut für Rundfunktechnik’s MARVIN (Microphone Array for Realtime and Versatile INterpolation), a ball (left), about the size of a soccer ball, containing microphones that capture sound in 3D and can be configured in many different ways. The IRT demoed MARVIN with position-sensing headphones; as the listener moved, the sound vectors changed appropriately.

Looking even more like a soccer ball was Panospective’s ball camera (shown at right in an image by Jonas Pfeil). It can be thrown (and, perhaps, even kicked), and, when it reaches its maximum height, its multiple cameras (36 of them!) capture images covering 360 degrees spherically. Viewers holding a tablet can see any part of the image, seamlessly, by moving the tablet around.

The Panospective ball’s images might be spatial, but they are neither stereoscopic nor light field. The same might be said of Mediapro Research’s Project FINE demonstrations. Using a few cameras–not necessarily shooting from every direction–they can reconstruct the space in which an event is captured and place virtual cameras anywhere within it (even “shooting” straight down from a non-existent aircraft). In just the few months since their demo at the NAB convention, they seem to have advanced considerably.

Another stereoscopic-3D revelation in the Future Zone related to lip-sync. It was printed on a couple of posters from St. Petersburg State University of Film and Television in Russia. The researchers, A. Fedina, E. Grinenko, K. Glasman, and E. Zakharova, shot a typical news-anchor setup in 3D and then tested sensitivity to lip sync in both 2D and shutter-glasses stereoscopic 3D. One poster is shown at left. Their experimental results show that 3D viewers are almost twice as sensitive as 2D viewers to audio-leading-video lip-sync error (27 ms vs. 50).

The IBC 2012 Future Zone was by no means limited to 3D. Other posters covered such topics as integrating social media into media asset management and using crowdsourcing to add metadata to archives.

Social media and crowdsourcing suggest personal computers and hand-held devices–the legendary second and third screens. But viewers appear to over-report new-media use and under-report plain, non-DVR, television viewing. How can we know what viewers actually do?

One exhibitor at the IBC 2012 Future Zone was Actual Customer Behaviour. With permission, they spy on actual viewers as they actually use various screens. Then experts in advertising, anthropology, behavior, ethnography, marketing, and psychology figure out what’s going on, including engagement. Their 1-3-9 Media Lab, for example, is named for the nominal viewing distances (in feet) of handheld devices, computer screens, and TV screens. But lab head Sarah Pearson notes that TV viewing distance can vary significantly just from leaning back when relaxing or leaning in with excitement.

There were other technology demonstrations. Japan’s National Institute of Information and Communications Technology, which, in the past, has shown such amazing technologies as holographic video and tactile feedback (with aroma!), had a possibly more practical but no less amazing compact free-space optical link with autotracking and a current capacity of 1.28 terabits per second (enough to carry more than 860 uncompressed HD-SDI channels).

There were still more Future Zone exhibitors, such as the BBC, Korea’s Electronics and Telecommunications Research Institute, and Nippon Telegraph and Telephone. And, outside the Future Zone, one could find such exhibitors as the European SAVAS project for live, automated subtitling. Then there was NHK, the Japan Broadcasting Corporation, whose Science and Technology Research Laboratories (STRL) won IBC’s highest award this year, the International Honour for Excellence. NHK’s STRL is where modern HDTV originated and where its possible replacement, ultra HDTV, with 16 times more pixels than normal 1920 x 1080 HDTV, is still being perfected.

Part of NHK’s exhibit at the IBC 2012 Future Zone was two 85-inch UHDTV LCD screens showing material shot at the Olympic Games in London this summer. NHK has previously shown UHDTV via projection on a large screen. The 1-3-9 powers-of-three viewing-distance progression might continue to 27 feet for a multiplex cinema screen and 81 feet for IMAX, but NHK’s Super Hi-Vision (their term for UHDTV) was always viewed from closer distances. The 85-inch direct-view screens were attractive in a literal sense. They attracted viewers to get closer and closer to the screens to see fine detail.

Another NHK Super Hi-Vision (SHV) demo involved shooting and displaying at 120 frames per second (fps) instead of 60. At far right above is the camera used. Just above the lens is a display showing 120-fps images and to its left one showing 60-fps. The difference in sharpness was dramatic. But to the right of the 120-fps images and to the left of the 60-fps were static portions of the image, and they looked sharper than either moving version. At the left in the picture above is the moving belt the SHV camera was shooting, and it looked sharper than even the 120-fps images.

So maybe 120-fps isn’t the limit. Maybe it should be more like 300-fps. Might that appear at some future IBC? Actually, it was described (and demonstrated) at IBC 2008:



3D: The Next Big Thing?

December 31st, 2011 | No Comments | Posted in 3D Courses, Schubin Cafe

SR Memory at the February 2010 HPA Tech Retreat; photo by Adam Wilt

The annual Tech Retreat of the Hollywood Post Alliance (HPA) is where many new technologies get introduced. Sony reportedly “introduced” its F65 camera and SR-memory technologies at this year’s exhibition of the National Association of Broadcasters (NAB) in Las Vegas in April; more than a year earlier, both were described for HPA Tech Retreat attendees. Panasonic’s Varicam and Sony’s HDCAM SR are just two of the other technologies that were introduced at previous HPA Tech Retreats.

Stereoscopic 3D (S3D) is no exception. The 2011 retreat, last February, saw the introduction of the SRI stereoscopic test pattern (below) and a SoliDDD multiview autostereoscopic display, among many other demos, and a presentation from Germany’s RheinMain University of Applied Sciences showed actual measured crosstalk (ghosting) for many commercial S3D systems, with names named. The 2012 retreat coming up in February is expected to feature an S3D lens adapter for use in almost any PL-mount system and binocular-vision Royal Society Research Fellow Jenny Read, who has degrees in astrophysics, neuroscience, and psychology.

So, is S3D the next big thing in home entertainment? Here’s what appeared in The New York Times: “…this week, a special study group of experts on stereoscopic television is meeting in Washington to try to decide which system should be adopted. Should the group reach agreement, the system it endorses would be proposed to the International Telecommunications Union, which is considering adopting a global standard for 3-D television.”

Yes, that appeared in The New York Times, in an article headlined “3-D TV Thrives Outside the U.S.”

It appeared on April 22.

It appeared on April 22 of the year 1980, more than 30 years ago. And, roughly 30 years before that, on May 3, 1953, Business Week ran the headline “3-D Invades TV,” describing ongoing S3D broadcasts that began that year.

One might think those broadcasts were simply of movies that used color filters (anaglyph) to separate the left- and right-eye views. They weren’t. Color TV was almost nonexistent at the time. Instead, the S3D TV broadcasts that began in 1953 used side-by-side images with a polarizing screen placed over the picture-tube faceplate and prismatic polarized glasses (right) for viewing. And even those weren’t the first S3D television broadcasts.

The 1930 book Fundamentals of Television, by Thomas Benson, begins its section on S3D with the following sentence: “There are, of course, several possible methods of accomplishing stereoscopic television.” The author could be so definitive not only because John Logie Baird had already broadcast S3D television in 1928 (the receiver, with stereoscope viewing device, is shown at left) but also because of the many patents that covered it, such as Georges Valensi’s French patent 577,762 of 1922.

If S3D television was first broadcast more than 80 years ago (and was discussed even earlier), why should it be considered the next big thing now? There are some good reasons.

Tiny image sensors now allow side-by-side stereoscopic video acquisition (and that lens adapter at the upcoming HPA Tech Retreat could expand that capability to even more cameras). Digital correction processing now allows differences between image pairs to be changed in production or post.  There are now systems for automatic stereoscopic alignment.  And entropy-based bit-rate-reduction (digital compression) systems now allow two eye views to be recorded or transmitted in much less than twice the rate of a single view.

Then there are display systems. Most modern S3D cinemas use a system involving circularly polarized viewing glasses, with an optical “plate” in front of the projector to switch polarization as appropriate between the alternating left-eye and right-eye views. The system is being suggested for home TVs, too.

Above is a figure from a U.S. patent that covers the polarization-rotation plate system for such S3D viewing. The patent is number 4,541,691, issued to Thomas S. Buzak of Beaverton, Oregon. Some might recognize that location as the headquarters of the test-&-measurement company Tektronix, and, indeed, the patent was assigned to them. It was applied for in 1983 and issued in 1985, at a time when Tektronix was in the video-image display business, largely using picture tubes, as can be seen at the left side of another figure (below) from the patent. Tektronix described and demonstrated the system with both direct-view (home TV-type) and projected (cinema-type) displays starting in 1984.

Perhaps the most-advanced form of S3D eyewear is individual goggles with built-in picture displays. They’re not exactly a new idea, either, as this portion of an image (left) from the March 1949 issue of Radio-Electronics magazine shows. The diagonal lines are “rabbit-ears” antennas. In this case, “idea” is an appropriate description.

Some say any form of glasses is the bane of S3D, especially in homes. They prefer some form of autostereoscopic display, an S3D display that can be viewed without glasses (or other intervention between viewer and screen).

There have been some major developments in this technology recently. If you’ve seen Mission Impossible – Ghost Protocol, you saw a theoretical eye-tracking autostereoscopic display screen intended to fool a guard. The illusion, unfortunately, gets destroyed when more guards show up, and the system can’t figure out whose eyes to track, causing shifting images.

If, however, you had attended Ian Sexton’s presentation on advanced autostereoscopic displays in the panel “Tomorrow’s 3D: A Glimpse from Today” at 3D World in New York in October, you’d have seen that tracking the eyes of multiple viewers is not really a problem. The prototypical light engine of the European HELIUM3D (High-Efficiency Laser-based Multi-User Multi-modal 3D Display) project he described is shown above right.

Of course, HELIUM3D was by no means the first multi-viewer autostereoscopic display. At left (click for a larger view) is the parallax-barrier grid being applied to the screen of a cinema in Moscow prior to its showing of a glasses-free 3D movie, Concert, in February 1941 (right). Glasses-free “Stereo Kino” auditoriums later opened in other cities in and influenced by the Soviet Union.

All S3D viewing-control mechanisms (the ones used to ensure that the appropriate view goes to the correct eye) have historic origins. In the photo of the 1928 S3D TV receiver above, the viewing-control mechanism can be seen to be a Holmes-type prismatic-lensed stereoscope, dating to the mid-19th century. The use of the word anaglyph to describe colored glasses dates to a French S3D-movie system in 1893. Projection of S3D images onto a metallic screen so as to allow the use of polarized glasses dates back at least to 1891.

Perhaps the most popular current form of home S3D TV uses shutter glasses that allow the eyes to see the screen alternately as the different views are displayed. Such shutters require synchronization to the display, usually accomplished through infra-red signaling. Are they, at least, a recent innovation?

The Teleview system (above) premiered at a New York cinema in 1922 (showing an S3D science-fiction movie with special effects). As can be seen from the illustration, each audience member had an individual viewing device. The device was a rapid, synchronized, view-alternating shutter, as shown at left (click on the image for a larger view). But even that wasn’t the earliest active-shutter 3D-viewing system. The recent SMPTE book 3D Cinema and Television Technology: The First 100 Years, edited by Michael D. Smith, Peter Lude, and Bill Hogan, with introductions by Ray Zone, begins with a 1919 paper by the Society’s founder indicating that shutter-based viewing systems were already well known by that date.

Indeed they were! Above is a portion of a drawing from British patent 711, issued March 23, 1853 to Antoine Claudet. The mechanism at the top right shuttled the sliding shutter bar at the top left back and forth so that a viewer looking into the eyepieces shown at the bottom would see the appropriate view in the appropriate eye.

That was probably the earliest form of S3D shuttering, but it wasn’t the earliest S3D photographic motion-picture patent. The latter (but earlier) achievement belongs to Jules Duboscq, an instrument maker who was head of special effects at the Paris Opera. He got that post by creating an electric-light sunrise effect there in 1849, thirty years before Edison’s light-bulb demonstration. To achieve the effect, he had to create not only the illumination system but also a power source for it. Nature magazine later praised his development of a means of precipitating the toxic fumes from the batteries used, so as not to poison the patrons of the opera.

On November 12, 1852, in an addendum to his French patent 13,069, he described a “stéréofantascope” or “bioscope,” an S3D movie system. Coincidentally, the patent addendum describes the first photographic movie system, years before even Eadweard Muybridge’s work.

There is a surviving Duboscq Bioscope S3D motion-picture disc (shown at left, click to enlarge) at the Museum of the History of Science at the University of Ghent, Belgium. It has 12 stereo pairs of sequential albumen photographic prints of a steam engine.

Given that we are rapidly approaching the 160th anniversary of S3D moving-image viewing, it might be hard to think of S3D as the next big thing. On the other hand, tomorrow is another year.


World Opera Project Needs Help

October 10th, 2011 | 1 Comment | Posted in 3D Courses, Schubin Snacks

Those of you who follow my activities know that one of them is discussing how, over the course of the last four centuries, opera helped create the modern media world: electronic home entertainment, stereo-sound transmission, pay-cable, headphones, movies, and more. Here’s a press release about a recent lecture I did on that subject at the Library of Congress:

Opera is still pushing the limits of media technology. The Metropolitan Opera’s new production of Siegfried, for example, utilizes advanced computer graphics, controlled by image and positional sensors, projected in multiple depth planes, through the action of complex warp engines. It has been described as providing glasses-free 3D to an entire opera-house audience. And then there’s the World Opera Project (WOP), based north of the Arctic Circle in Tromsø, Norway.

The brainchild of Professor Niels Windfeld Lund, the WOP is working to create a future in which performers anywhere in the world can join together to form a complete opera presentation anywhere in the world. It involves high-speed data transmission (the project has utilized lines normally associated with CERN, the European Organization for Nuclear Research, home of the Large Hadron Collider), telepresence technologies, performer cueing, and more.

Beginning in 2006, Professor Lund assembled an amazing team involving government and academic laboratories, performing-arts institutions, and even manufacturers around the world. The project has already created several demonstrations of what might be achieved. You can read a bit about it at the WOP site here:

Unfortunately, the project is now in danger of running out of funds. I say “unfortunately” not only because I would like to see the World Opera Project continue but also because of what it might mean for the future of our industry. The many labs that have been working on the project might develop image and sound acquisition, processing, distribution, and presentation technologies that could be used in the movies and television of the future.

The next level of funding is not very large. If you think you can help plant the seeds of tomorrow’s technology, please contact Professor Lund: niels.windfeld.lund at




August 31st, 2011 | No Comments | Posted in 3D Courses, Schubin Cafe


What should come after HDTV? There’s certainly a lot of buzz about 3D TV. Such directors as James Cameron and Douglas Trumbull are pushing for higher frame rates. Several manufacturers have introduced TVs with a 21:9 (“CinemaScope”) aspect ratio instead of HDTV’s 16:9. Some think we should increase dynamic range (the range from dark to light). Some think it should be a greater range of colors. Japan’s Super Hi-Vision offers 22.2-channel surround sound. And then there’s 4K.

In simple terms, 4K has approximately twice as much detail as HDTV in both the horizontal and vertical directions. If the orange rectangle above is HDTV, the blue one is roughly 4K. It’s called 4K because there are 4096 picture elements (pixels) per line.

This post will not get much more involved with what 4K is. The definition of 4096 pixels per line says nothing about capture or display.  Even at lower resolutions, some cameras use a complete image sensor for each primary color; others use some sort of color filtering on a single image sensor. At left is Colin Burnett’s depiction of the popular Bayer filter design. Clearly, if such a filtered image sensor were shooting another Bayer filter offset by one color element, the result would be nothing like the original.
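That thought experiment is easy to model. The sketch below is a deliberately toy version (a bare 2x2 Bayer tile, nothing like a real sensor pipeline): a Bayer-filtered sensor shoots a target that is the same pattern offset by one element, and every photosite ends up seeing a color its filter rejects.

```python
# Toy model: a Bayer-filtered sensor shooting an offset Bayer target.
BAYER = [["R", "G"], ["G", "B"]]   # the repeating 2x2 Bayer tile

def bayer(x, y):
    """Color of the Bayer element at integer position (x, y)."""
    return BAYER[y % 2][x % 2]

# Compare each photosite's filter color with the target color it sees
# when the target is shifted by one element horizontally.
matches = sum(bayer(x, y) == bayer(x + 1, y) for y in range(4) for x in range(4))
print(f"{matches}/16 photosites see the color their filter passes")   # 0/16
```

With zero matches, the captured result would indeed be nothing like the original.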

Optical filtering and “demosaicking” algorithms can reduce color problems, but the filtering also reduces resolution. Some say a single color-filtered image sensor with 4096 pixels per line is 4K; others say it isn’t. That’s an argument for a different post.  This one is about why 4K might be considered useful.

An obvious answer is for more detail resolution. But maybe that’s not quite as obvious as it seems at first glance. The history of video technology certainly shows ever-increasing resolutions, from eight scanning lines per frame in the 1920s to HDTV’s….

As can be seen above, in 1935, a British Parliamentary Report declared that HDTV should have no fewer than 240 lines per frame. Today’s HDTV has 720 or 1080 “active” (picture-carrying) lines per frame, and 4K has a nominal 2160, but even ordinary 525-line (~480 active) TV was considered HDTV when it was first introduced.

Human visual acuity is often measured with a common Snellen eye chart, as shown at left above. On the line for “normal” vision (20/20 in the U.S., 6/6 in other parts of the world), each portion of the “optotype” character occupies one arcminute (1′, a sixtieth of a degree) of retinal angle, so there are 30 “cycles” of black and white lines per degree.

Bernard Lechner, a researcher at RCA Laboratories at the time, studied television viewing distances in the U.S. and determined they were about nine feet (Richard Jackson, a researcher at Philips Laboratories in the UK at the same time, came up with a similar three meters). As shown above, a 25-inch 4:3 TV screen provides just about a perfect match to “normal” vision’s 30 cycles per degree when “525-line” television is viewed at the Lechner Distance — roughly seven times the picture height.
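The match is easy to verify with the numbers in that paragraph (assuming a 4:3 screen and counting one cycle per pair of scanning lines):

```python
import math

# A 25-inch 4:3 screen (15-inch picture height), ~480 active lines,
# viewed from the nine-foot Lechner Distance.
height = 25 * 3 / 5            # 4:3 geometry: height is 3/5 of the diagonal
distance = 9 * 12              # nine feet, in inches
cycles = 480 / 2               # one cycle = one black plus one white line

heights = distance / height
angle = math.degrees(2 * math.atan(height / (2 * distance)))  # vertical angle subtended
print(f"viewing distance: {heights:.1f} picture heights")     # ~7.2 h
print(f"{cycles / angle:.0f} cycles per degree")              # ~30, the 20/20 figure
```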

HDTV should, under the same theory, be viewed from a smaller multiple of the screen height (h). For 1080 active lines, it should be 7.15 x 480/1080, or about 3.2h. Looked at another way, at a nine-foot viewing distance, the height should be about 34 inches, a 16:9 diagonal of nearly 70 inches, and, indeed, 60-inch (and larger) HDTV screens are not uncommon (and so are closer viewing distances).

For 4K (again, using the same theory), it should be a screen height of about 68 inches. Add a few inches for a screen bezel and stand, and mount it on a table, and suddenly the viewer needs a minimum ceiling height of nine feet!
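The scaling in the last two paragraphs is just one line of arithmetic, restated here:

```python
# Viewing-distance multiples and screen heights at a fixed nine-foot
# (108-inch) distance, scaled from the ~7.15h figure for 480 active lines.
distance = 108.0
for lines in (1080, 2160):
    multiple = 7.15 * 480 / lines      # viewing distance in picture heights
    height = distance / multiple       # screen height needed at nine feet
    print(f"{lines} lines: about {multiple:.1f}h -> {height:.0f}-inch picture height")
```

That reproduces the ~3.2h/34-inch figure for HD and the ~1.6h/68-inch figure for 4K.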

Of course, cinema auditoriums don’t have domestic ceiling heights. Above is an elevation of a typical old-style auditorium, courtesy of Warner Bros. Technical Operations. The scale is in picture heights. Back near the projection booth, standard-definition resolution seems adequate. Even in the fifth row, HD resolution seems adequate. Below, however, is a modern, stadium-seating cinema auditorium (courtesy of the same source).

This time, even a viewer with “normal” vision in the last row could see greater-than-HD detail, and 4K could well serve most of the auditorium. That’s one reason why there’s interest in 4K for cinema distribution.

Another is that the theory of “normal” vision is itself open to question. First of all, there are lines on the Snellen eye chart (which dates back to 1862) below the “normal” line, meaning some viewers can see more resolution.

Then there are the sharp lines of the optotypes. A wave cycle would have gently shaded transitions between white and black, which might make the optotype more difficult to identify on an eye chart. Adding in higher frequencies, as shown below, makes the edges sharper, and 4K offers higher frequencies than does HD.
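That edge-sharpening effect is the classic Fourier square-wave construction: a gently shaded sine fundamental plus progressively higher odd harmonics. The sketch below is purely illustrative (the numbers come from the construction itself, not from any eye-chart study):

```python
import math

def edge(x, harmonics):
    """Partial Fourier sum of a square wave at position x (period 1):
    the fundamental plus the next (harmonics - 1) odd harmonics."""
    return sum(math.sin(2 * math.pi * (2 * k + 1) * x) / (2 * k + 1)
               for k in range(harmonics)) * 4 / math.pi

# Sample just past the black-to-white transition at x = 0:
# the more harmonics included, the steeper (sharper) the rise.
for n in (1, 4, 16):
    print(f"{n:2d} harmonic(s): value at x = 0.01 is {edge(0.01, n):.2f}")
```

The rise toward full white gets dramatically steeper as harmonics are added, which is exactly why 4K's extra high frequencies can make edges look sharper than HD's.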

Then there’s sharpness, which is different from resolution. Words that end in -ness (brightness, loudness, sharpness, etc.) tend to be human psychophysical sensations (psychological responses to physical stimuli) rather than simple machine-measurable characteristics (luminance, sound level, resolution, contrast, etc.). Another RCA Labs researcher, Otto Schade, showed that sharpness is proportional to the square of the area under a modulation-transfer function (MTF) curve, a curve plotting contrast ratio against resolution.

One of the factors affecting an MTF curve is the filtering inherent in sampling, as is done in imaging. An ideal filter might use a sine-of-x-divided-by-x, or sinc, function. Above is a sinc function for an arbitrary image sensor and its filters. It might be called a 2K sensor, but the contrast ratio at 2K is zero, as shown by the red arrow at the left.

Above is the same sinc function. All that has changed is a doubling of the number of pixels (in each direction). Now the contrast ratio at 2K is 64%, a dramatic increase (again, as shown by the red arrow at the left). Of course, if the original sensor offered 64% at 2K, the improvement offered by 4K would be much less dramatic, which is one reason why defining 4K is not trivial.
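Those two red-arrow figures can be reproduced with a simplified model (an idealized sampled sensor whose aperture MTF is sin(pi*x)/(pi*x), which is an assumption of this sketch, not a property of any particular camera):

```python
import math

def sinc_mtf(x):
    """Aperture MTF of an idealized sampled sensor: sin(pi*x)/(pi*x),
    where x is spatial frequency as a fraction of the pixel rate."""
    return 1.0 if x == 0 else abs(math.sin(math.pi * x) / (math.pi * x))

# On a "2K" sensor, 2K detail sits at the pixel rate itself: zero contrast.
print(f"2K detail on a 2K sensor: {sinc_mtf(1.0):.0%}")
# Double the pixels in each direction and 2K is only half the pixel rate.
print(f"2K detail on a 4K sensor: {sinc_mtf(0.5):.0%}")   # 2/pi, about 64%
```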

Then there’s 3D.  Some of the issues associated with 3D shooting relate to the use of two cameras with different image sensors and processing. One camera might deliver different gray scale, color, or even geometry from the other.

Above is an alternative, two HD images (one for each eye’s view) on a single 4K image sensor. A Zepar stereoscopic lens system on a Vision Research Phantom 65 camera serves that purpose. It’s even available for rent.

There are other reasons one might want to shoot HD-sized images on a 4K sensor. One is image stabilization. The solid orange rectangle above represents an HD image that has been jiggled out of its appropriate position, the lighter orange rectangle behind it with the dotted border. There are many image-stabilization systems available that can straighten out a subject in the center, but they do so by trimming away what doesn’t fit, resulting in the smaller, green rectangle. If a 4K sensor is used, however, the complete image can be stabilized.

It’s not just stabilization. An HD-sized image shot on a 4K sensor can be reframed in post production. The image can be moved left or right, up or down, rotated, or even zoomed out.

So 4K offers much even to people not intending to display 4K. But it comes at a cost. Cameras and displays for 4K are more expensive, and an uncompressed 4K signal has more than four times as much data as HD. If the 1080p60 (1080 active lines, progressively scanned, at roughly 60 frames per second) version of HD uses 3G (three-gigabit-per-second) connections, 4K might require four of those.
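The four-links arithmetic checks out on the back of an envelope. In the sketch below, the 20-bits-per-pixel figure (10-bit 4:2:2) is an assumption chosen to match the payload of a 3G-SDI 1080p60 link:

```python
def payload_gbps(width, height, fps, bits_per_pixel=20):
    """Uncompressed video payload in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

hd = payload_gbps(1920, 1080, 60)    # ~2.49 Gbps: fits one 3 Gbps link
uhd = payload_gbps(3840, 2160, 60)   # ~9.95 Gbps: four such links
print(round(uhd / hd))  # 4
```

Quadrupling the pixel count at the same frame rate and bit depth quadruples the data rate, regardless of the interface used to carry it.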

When getting 4K to cinemas or homes, however, compression is likely to be used, and, as can be seen by the MTF curves, the highest-resolution portion of the image has the least contrast ratio. It has been suggested that, in real-world images, it might take as little as an extra 5% of data rate to encode the extra detail of 4K over HD.

So, is 4K the future? The aforementioned Super Hi-Vision is already effectively 8K, and it’s scheduled to be used in next year’s Olympic Games.


The Best 3D Conference

July 7th, 2011 | No Comments | Posted in 3D Courses, Schubin Cafe

Do you think you know something about stereoscopic 3D? Test yourself with the six basic journalistic questions: who, what, when, where, how, and why.

Who wrote the first article on holographic television to appear in the SMPTE Journal? What is cyclovergence? When is a seven-second stereo delay appropriate? Where can lens centering be corrected instantly in any stereoscopic camera rig without digital processing? How does viewing distance affect scene depth? And why doesn’t pseudostereo destroy depth perception?

All of those questions, and many more, were answered last month at the Society of Motion Picture and Television Engineers’ (SMPTE’s) 2nd-annual International Conference on Stereoscopic 3D for Media & Entertainment, held in New York’s Hudson Theater (left, photo by Ken Carroza). From my point of view, it was the best stereoscopic-3D event that has ever taken place anywhere.

Full disclosure: As a member of the trade press, I was admitted free (and got some free food). The same is true at most conferences I attend. And the complimentary entry and food has never stopped me from panning events I didn’t like. This one I loved!

The thrills started with the very first presentation, “Getting the Geometry Right,” by Jenny Read, a research fellow at the Institute of Neuroscience at Newcastle University. Read’s Oxford doctorate was in theoretical astrophysics before she moved into visual neuroscience, spending four years at the U.S. National Institutes of Health.

I’ve provided that snippet of her biographical info here as a small taste of the caliber of the presenters at the conference. There were many other vision scientists, but other presenters were associated with movie studios, manufacturers, shooters, and service providers. And the audience gamut also ran from engineering executives at television networks and movie studios to New York public-access-cable legend Ugly George (right; click picture to expand).

Back to Dr. Read’s presentation: I cannot confirm that everyone in the audience did the same, but, as far as I could see, attendees were taking notes frantically almost as soon as she started speaking. Consider, for example, cyclovergence. Everyone knows our eyes can pivot up & down (around the x-axis) and left & right (around the y-axis), but did you know they can also rotate clockwise & counterclockwise (around the z-axis)? That’s cyclovergence, and it’s actually common.

The main topic of Read’s presentation was vertical disparities between the two eye views. Those caused by camera or lens misalignments are typically processed out, but ordinary vision includes vertical disparities introduced by eye pointing.

At right is an illustration from Peter Wilson and Kommer Kleijn’s presentation last year at the International Broadcasting Convention (IBC), “Stereoscopic Capture for 2D Practitioners.” If our eyes converge on something, the theoretical rectangular plane of convergence becomes two trapezoids with vertical disparities. So vertical disparities are not necessarily problematic for human vision.

Read showed other ways vertical disparities can get introduced. At left is what two eyes would see if looking towards the right (as might be the case when sitting to the left of a stereoscopic display screen). Then she explained how our brains convert vertical disparity information into depth information, so an oblique screen view can change the appearance of people into something seeming like living cut-out puppets.

That was clearly not the only mechanism for changing apparent depth. Below is a graph from “Effect of Scene, Camera, and Viewing Parameters on the Perception of 3D Imagery,” presented by Brad Collar of Warner Bros. and Michael D. Smith. They showed depth in a scene, what it looks like when seen on a cinema-sized screen, and what it looks like on other screens, such as those used for home viewing. The depth collapsed from its cinema look to its home look, effectively going from normal character roundness to the appearance of cardboard cutouts. But the graph below shows what happened after processing to restore the depth. The roundness came back, but everything was pushed behind the screen (the red vertical line).

Some viewers have complained about miniaturization in 3D TV, such as burly football players looking like little dolls. But our visual systems can perform amazing feats of depth correction.

In a presentation titled “Depth Cue Interactions in Stereoscopic 3D Media,” Robert Allison of York University noted a few of the cues viewers use for depth perception, including occlusion and perspective. Then he showed a real-world 3D scene. It appeared to be in 3D, and it offered a stereoscopic sensation, though something seemed to be wrong. In fact, it was intentionally pseudostereoscopic, with left- and right-eye views reversed. But the non-stereoscopic depth cues kept the apparent depth correct. Later he showed how even just contrasty lighting can increase apparent depth sensation.

Of course, lost roundness (and associated miniaturization) aren’t the only perceptual issues associated with stereoscopic 3D. In “Focusing and Fixating on Stereoscopic Images: What We Know and Need to Know,” Simon Watt of Bangor University showed some of the latest information on viewer discomfort caused by a conflict between vergence (the distance derived from where the eyes point) and accommodation (the distance derived from what the eyes are focused on, which is the screen).

The chart at right is based on some of the latest work from the Visual Space Perception Laboratory at the University of California – Berkeley. It shows that the approximate comfort zone is based, as might be expected, only on viewing distance. In a cinema, viewers should be comfortable with depth going away from them to infinity and coming out of the screen almost to hit them on their faces. At a TV-viewing distance, even far depth can be uncomfortable. At a computer-viewing distance, the comfort range is smaller still.

Viewing distances for handheld devices should result in even narrower comfort zones, but that’s only if they’re stereoscopic. There are other options that were discussed at the SMPTE conference. Ichiro Kawakami of Japan’s National Institute of Information and Communications Technology (NICT) described a glasses-free 200-projector-based autostereoscopic display. Douglas Lanman of the MIT Media Lab actually brought a demonstration of a layered light-field system that attendees could hold. As shown below (click for a larger view), they have come up with a mechanism for reproducing the original light field.

On the same day that The New York Times reported on a camera that allows focus to be controlled after a picture is taken, Professor Marc Levoy of Stanford University explained to attendees at the SMPTE conference how it’s done and the application of “computational cinematography” to 3D. And then there’s holography.

Mark Lucente of Zebra Imaging gave a presentation titled “The First 20 Years of Holographic Video — and the Next 20.” But he was sort of contradicted (at least in terms of the earliest date) by the next presentation, from V. Michael Bove of MIT, titled “Live Holographic TV: from Misconceptions to Engineering.”

In 1962, Emmett Leith and Juris Upatnieks of the University of Michigan created what is generally considered the first 3D hologram. In 1965, they published a paper (left) in the SMPTE Journal about what would be required for holographic 3D TV (Lucente was referring to actual, not theoretical displays; see his comment below).

Perhaps live entertainment holography is still not quite around the corner. That’s okay. The SMPTE conference offered plenty of practical information that can be used today.

Consider “New Techniques to Compensate Mistracking within Stereoscopic Acquisition Systems,” by Canon’s Larry Thorpe. Dual-camera rigs can be adjusted so that optical centers are exactly where they should be, which is not necessarily where the camera bodies would suggest. There are tolerances in lens mounts on both the lens and camera portions that can add up to significant errors. And once zooming is added, all bets are off.

Two groups of lens elements move to effect the magnification change and focus compensation, and a third group moves to adjust focus. It’s a mess! But lens manufacturers have introduced optical image stabilization systems, one version of which is shown at right. With the appropriate controls, those stabilizing elements can be used to keep the images stereoscopically centered throughout the zoom and focus ranges.

Then there was “S3D Shooting Guides: Needed Tools for Stereo 3D Shooting” by Panasonic’s Michael Bergeron. The presentation compared indicators used by videographers to achieve appropriate gray scale and color to indicators they might use to achieve appropriate depth.

In a similar vein, Bergeron extended the concept of the “seven-second” obscenity delay (which allows producers of live programming to cut from inappropriate material to “safe” pictures and sounds) to a “seven-second stereo delay” that would allow stereographers to cut away from, say, window violations in live programming.

There was much more at the conference. Instead of stereoscopic TVs limited to either active glasses or spatial-resolution-reducing patterns for passive glasses, a RealD presentation described a passive-glasses system with full resolution. Martin Banks, of the Berkeley lab, described the temporal effects of different frame rates and image-presentation systems (as shown above).

Unlike this post, which must be viewed without benefit of 3D glasses, “I Can See Clearly Now — in 3D,” by Norm Hurst of SRI/Sarnoff, was presented entirely in stereoscopic 3D. That helped audience members see how a test pattern can be used to determine 3D characteristics with nothing more than a stereoscopic 3D display. It was previewed at the HPA Tech Retreat in February.

I’ve mentioned only about half of the presentations at the conference, and I’ve offered only a tiny fraction of the content of even those. SMPTE used dual 2K images on a 4K Sony projector to allow stereoscopic content examples to be viewed with RealD glasses but without view alternation (though even that arrangement introduced an interesting artifact identified by Hurst’s test pattern). As many of the speakers pointed out, we still have a lot to learn about 3D. And, if you didn’t attend SMPTE’s conference, you’ll need to learn more still. Better at least join SMPTE so you can read the full papers that get published in the Journal.

The following comment was received from Mark Lucente:

“As I described in my talk, holographic video had been theorized and discussed for decades (going back to Dennis Gabor in the 1950s!). However, researchers at the MIT Media Lab (Prof. Stephen Benton, myself, and one other graduate student) were the first to ever BUILD a working real-time 3D holographic display system — in 1990. The title of my talk ‘The First 20 Years of Holographic Video…’ refers to 20 years of the existence of working displays, rather than theoretical.

“On a related note, just to be sure, Emmett Leith and Juris Upatnieks made the first laser-based 3D holograms. These were photographic — not real-time video. In other words, they were permanent recordings of 3D imagery, not real-time display of moving images. They were both brilliant, and contributed to the theoretical foundations of what eventually (in 1990) became the first-ever actual working holographic video system at MIT.”


NAB 2011 Wrapup, Washington, DC SMPTE Section, May 19, 2011

June 1st, 2011 | No Comments | Posted in Download, Today's Special

NAB 2011 Wrapup
Washington, DC SMPTE Section
May 19, 2011

(38 slides / 43 minutes)



Ex uno plures

March 27th, 2011 | No Comments | Posted in 3D Courses, Schubin Cafe

HPA breakfast roundtable - copyright @morningstar productions 2011

There were many wonders at the 17th-annual HPA Tech Retreat in February in the California desert. And many of the more-than-500 attendees at the Hollywood Post Alliance event were left wondering. One thing they wondered about was how to accommodate all viewers from a single master or feed.

As usual, many manufacturers introduced new products at the event (it’s where, in the past, Panasonic first showed its Varicam and Sony first showed HDCAM SR). But this year even the best products gave one pause.

Consider, for example, the Kernercam 3D rig, shown at left. It is transportable from set to set in three relatively small packing cases (far left). It takes just a few minutes to go from those cases to shooting. Each individual camera subassembly (bottom right of the image at left, shown with a Sony P1 camera) is pre-adjusted to the desired stereoscopic alignment parameters. After that, the two camera modules (with almost any desired cameras) just snap into the overall rig, with no readjustment necessary. The mounts are so rugged that repeatedly snapping cameras in and out or even hitting them does not change the 3D alignment.

That’s great, right? For many purposes, it probably is. But some stereoscopic camera-rig manufacturers, such as 3ality, are justifiably proud that their rigs do not use fixed alignment and can, therefore, be adjusted even during shots.

The choice of a super-rugged, fixed mount or a less-rugged, remotely adjustable mount is just that, a choice, and directors, cinematographers, & videographers have been making choices all their professional lives. The result of those choices adds up to a desired effect. Or does it?

Sony also introduced new products at this year’s HPA Tech Retreat. One, SR Memory, with the ability to store up to a terabyte of data on a solid-state memory “card” and a transfer rate allowing four live uncompressed HD streams simultaneously, falls into that category of choice. It’s also a wonder of new technology (though retreat attendees were given a preview in 2010, as shown in the picture at right, from Adam Wilt’s excellent coverage of last year’s HPA Tech Retreat).

Another new Sony introduction, OLED reference monitors, might have introduced a different kind of wonder. Some in attendance were delighted by what seemed like perfect image reproduction in something that (in one size, at least) will fit in a standard equipment rack. Others thought that existing larger devices already offer sufficiently good reference monitoring.

copyright @morningstar productions 2011

The way Sony conducted its demonstration, the new monitor was placed between Sony’s own reference-grade LCD and CRT monitors. With 24-frame-per-second source material, the CRT image flickered perceptibly. In black image areas, the LCD was noticeably lighter. The OLED suffered from neither problem. But is that necessarily good?

Many home viewers still watch TV on picture tubes. Many others watch on LCD displays. Others watch plasma or DLP. Some view images roughly 60 times a second, others 120, 240, or even 480 times a second. Some watch TV in dimly lit living rooms. Others watch on mobile devices outdoors in the sun. Still others watch content shot with the same cameras on giant projection screens in cinema auditoriums or even bigger LED screens in sports stadiums. The problem is that we are no longer shafted.

We were originally shafted in 1925 — literally! In that year, John Logie Baird was probably the first person to achieve a recognizable video image of a human face. A picture of the apparatus he used is shown at right. At far right is the original subject, a ventriloquist’s dummy’s head called Stooky Bill. The spinning disks on the shaft were used for image scanning. But the shaft extended from the camera section to a display section in the next room. It was impossible to be out of sync.

Another television pioneer was Philo Taylor Farnsworth, probably the first person to achieve all-electronic television (television in which neither the camera nor the display use mechanical scanning). His first image, in 1927, was a stationary dollar sign.

Although Farnsworth deserves credit for achieving all-electronic television, he was not the first to conceive it. Boris Rosing came up with the picture tube in 1907 in Russia, and the following year Alan Archibald Campbell Swinton came up with the concept of all-electronic television in Britain. His diagram (left) was published a few years later. Although the idea of tube-based cameras might seem strange today, the first video camera to be shown at an NAB exhibit that did not use a tube didn’t appear until 1980 (and then only in prototype form), and tubeless HD cameras didn’t begin to appear until 1992.

Tube-based cameras and TVs with picture tubes didn’t have the physical shaft of Baird’s first apparatus, but they were still effectively shafted. When the electron beam in the camera’s tube(s) was at the upper left, the electron beam in the viewer’s picture tube was in the same position. Tape could delay the whole program, but it didn’t change the relationship.

The introduction of solid-state imaging devices changed things. An image might be captured all at once but displayed a line at a time, resulting in “rubbery” table legs as a camera panned past them. Camera tubes and solid-state imaging devices also had other differences. We’ve learned to work with those differences as well as the ones between different display technologies.

Now there’s 3D. I’ve written before about 3D’s other three dimensions, and their effect on depth perception: pupillary distance (between the eyes, especially different between adults and children), screen size, and viewing distance. There are other issues associated with individual viewers, who might be blind in one eye, stereo blind, have limited fusion ranges (depths at which the two stereoscopic images can fuse into one), long acquisition times (until fusion occurs), etc.
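The pupillary-distance effect can be illustrated with the standard similar-triangles model of stereoscopic viewing (the formula and the numbers below are a textbook sketch, not measurements): a fused point appears at distance Z = e·D/(e − d), for eye separation e, viewing distance D, and on-screen disparity d (positive meaning behind the screen).

```python
def perceived_distance_mm(eye_sep_mm, view_dist_mm, disparity_mm):
    """Similar-triangles estimate of the distance at which a fused point appears."""
    return eye_sep_mm * view_dist_mm / (eye_sep_mm - disparity_mm)

# Same 10 mm positive disparity at 2 m, adult (65 mm) vs. child (50 mm) eyes:
print(perceived_distance_mm(65, 2000, 10))  # ~2364 mm behind the viewer's eyes
print(perceived_distance_mm(50, 2000, 10))  # 2500 mm: smaller eye separation, more depth
```

The same picture, in other words, carries different depth for different viewers, which is one reason a single stereoscopic master cannot be perfect for everyone.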

There are also display-technology issues. One is ghosting. A presentation in the HPA Tech Retreat’s main program was called “Measurement of the Ghosting Performance of Stereo 3D Systems for Digital Cinema and 3DTV,” presented by Wolfgang Ruppel of RheinMain University of Applied Sciences in Germany. Ruppel presented test charts used to measure various types of ghosting for commonly used cinema and TV display systems. A trimmed version of one of his slides appears at left. It’s taken (with permission) from Adam Wilt’s once-again excellent coverage of the 2011 HPA Tech Retreat (which includes the full slides and the names of the stereoscopic display systems).

Ruppel’s paper also looked at the effects of ghosting suppression systems and noted color shifting. Some systems shifted colors towards yellow, others towards blue, and at least two systems shifted the colors differently for the two eyes! Can one master recording deliver accurate color results to cinemas when one auditorium might use one 3D display system and another a different one?

In one of the demo rooms, SRI (Sarnoff Labs) demonstrated a different test pattern for checking stereoscopic 3D parameters. It is shown above with the left- and right-eye views side by side. The crosstalk (ghosting) scale is shown at right in a demonstration of the way it would look with 4% crosstalk. The pattern can also be used to check synchronization between eye views, using the small, moving white rectangles shown just to the right of center below the eye-view identification.

There were other Sarnoff demonstrations, however, that indicated that synchronization of eye views is not as simple as making them appear when they are supposed to. Consider, for example, the current debate about the use of active glasses vs. passive glasses in 3DTVs.

Active glasses shutter the right eye during the left eye’s view and then shutter the left eye during the right eye’s view. Passive glasses usually involve a pattern of polarizers on the screen sending portions of the image (typically every other row) to one eye and the rest to the other (although there are also passive-glasses systems that use a full-image optical-retarder plate to alternate between left-eye and right-eye images).

Above are side-by-side right-eye and left-eye random-dot-type images used in another of the SRI demos. If you cross your eyes so they form a single image, you should see a circular disc, slightly to the right of the center, floating above the background.
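A Julesz-style random-dot pair like that one can be generated in a few lines. This sketch is simplified (it shifts the dots inside the disc rather than shifting the whole disc and refilling the uncovered strip with fresh dots), and all the parameter values are arbitrary:

```python
import random

def random_dot_pair(w=80, h=40, disparity=2, radius=10, seed=0):
    """Left/right dot fields identical except inside a central disc,
    whose dots are horizontally shifted in the right-eye view."""
    random.seed(seed)
    base = [[random.randint(0, 1) for _ in range(w)] for _ in range(h)]
    left = [row[:] for row in base]
    right = [row[:] for row in base]
    cx, cy = w // 2, h // 2
    for y in range(h):
        for x in range(w):
            if (x - cx) ** 2 + (y - cy) ** 2 < radius ** 2:
                right[y][x] = base[y][(x + disparity) % w]
    return left, right

left, right = random_dot_pair()
print(left[0] == right[0])  # True: outside the disc, the eye views match exactly
```

Neither view alone shows any disc at all; the shape exists only in the disparity between the two, which is what makes such patterns useful for isolating stereoscopic depth from every other cue.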

That’s a still image.  SRI’s demo had multiple displays of moving images.  One used active glasses and another simultaneous-image passive glasses.

When the sequence was set for the left- and right-eye views to move the disc simultaneously side to side, that’s exactly what viewers looking at the passive display saw. But, with the exact same signal feeding the active-glasses display, viewers of that one saw the disc moving in an elliptical path into and out of the screen as well as back and forth. With the selection of a different playback file, the Sarnoff demonstrators could make the active-glasses view be side to side and the passive-glasses view be elliptical.
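One way to think about the active-glasses result (an explanatory sketch, not SRI's analysis): if the two eye views are presented half a frame period apart, an object's horizontal motion during that interval acts like extra disparity, and the sign of that extra disparity flips when the motion reverses, which would trace a path into and out of the screen.

```python
def spurious_disparity_px(velocity_px_per_s, frame_rate_hz):
    """Extra apparent disparity if one eye's view lags the other
    by half a frame period (time-sequential presentation)."""
    eye_delay_s = 1.0 / (2.0 * frame_rate_hz)
    return velocity_px_per_s * eye_delay_s

# A disc sweeping 600 px/s on a display showing 60 views per second per eye:
print(spurious_disparity_px(600, 60))  # 5.0 px of direction-dependent disparity
```

A simultaneous-presentation passive display has no such inter-eye delay, which is consistent with its viewers seeing purely side-to-side motion.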

copyright @morningstar productions 2011

The random-dot nature of the image assured that no other real-world depth cues could interfere. But how significant would the elliptical change be in real-world images?

That’s one thing SRI wants to figure out, so they can come up with a mechanism to rate the quality of stereoscopic images in the same way that their JND (just-noticeable differences) technology has been used to evaluate the quality of non-stereoscopic imagery in the era of bit-rate-reduced (“compressed”) recording and distribution.

It’s not easy to figure out. One SRI sequence of slowly changing depth caused one researcher to get queasy. As can be seen at left, however, it didn’t bother another viewer at all.

We’re just beginning to learn about the many factors that can affect both 2D (consider those CRT, OLED, and LCD displays at the Sony demo, as well as others not shown) and 3D viewing. But there’s no turning back.

The motto carried in the beak of the eagle on the Great Seal of the United States is often translated as “Out of Many, One.” The title of this post means “Out of One, Many,” the problem faced by those creating moving-image programming in the post-shafted era.

That’s the front of the Great Seal. The back has two more mottoes: One, Novus Ordo Seclorum, emphasizes the impossibility of returning to the shaft. We’re in “A New Order of the Ages.” The other, Annuit Coeptis, I choose to translate as “Might As Well Smile About These Undertakings.”




Everything Is First!

February 4th, 2011 | No Comments | Posted in 3D Courses, Schubin Snacks

The laugh-getting tag line for Garrison Keillor’s weekly Lake Wobegon stories is that “all of the children are above average.” It’s funny because it’s impossible. So, too, with 3D; not everything can be a first.

I happen to have studied in depth the intersecting histories of media technology and the art form known as opera. Included in those histories is stereoscopic 3D.

When Philip Glass’s opera Monsters of Grace premiered in 1998, it featured a stereoscopic backdrop, and the audience wore 3D glasses to view it. At left is what appears to be the classic image of a cinema audience watching a 3D movie. But look closely at that audience.

The reason they look more like an opera audience than a movie audience is that they are an opera audience; they are watching Monsters of Grace in Los Angeles. The picture at left appears in this review.

This year, there was a similar 3D production of the opera The Magic Flute at the University of Houston. Coverage, as in this example, where the photo at right appeared, appropriately mentioned the earlier Monsters of Grace. Neither opera was transmitted anywhere in 3D.

In 2009, before Avatar opened, however, the opera Don Giovanni was transmitted live in 3D, from Opéra de Rennes in France to multiple cinemas and viewing rooms. One of the latter is shown at left. It was, as best I know at the moment, the first opera transmitted live in 3D.

Last November, the opera Faust was transmitted live in 3D from Stockholm’s Folkoperan. It, too, went to multiple cinemas.

Later this month, the English National Opera’s production of Lucrezia Borgia will be transmitted live in 3D. And, next month, a Carmen previously shot in 3D at London’s Royal Opera House will be shown in movie theaters. Both are being promoted as “the first 3D opera.”

Oh, well.


3D and Not 3D: The Knowledge Returns

January 30th, 2011 | No Comments | Posted in 3D Courses, Schubin Cafe

Last year was a wonderful one for 3D.  In terms of worldwide and domestic box-office grosses, six of the top-10 movies released in 2010 were in 3D. And by year’s end there were almost two dozen models of integrated 3D cameras and camcorders and literally dozens of models of two-camera 3D rigs.

There’s just one problem: None of those 3D cameras or camera rigs — not a single one of them — was used to create any of those six top-10 3D movies. Four of the movies were animated, and the other two, including the second-highest grosser of the year, Alice in Wonderland, were converted from 2D to 3D in post production.

That’s not a fact that is frequently mentioned. But it will be mentioned next month at the 17th annual HPA Tech Retreat® in the (perhaps appropriately named) community of Rancho Mirage, California.

The first retreat predates even its sponsoring organization, the Hollywood Post Alliance. And, although it might seem natural that post-production processing of 3D is an appropriate topic for HPA, the retreat is limited to neither post nor Hollywood.

It has featured presenters from locations ranging from New Zealand to Norway and Argentina to Australia and from organizations ranging from broadcast networks to manufacturers, the military, and movie exhibitors. If someone there is from NATO, that could stand for the National Association of Theater Owners or the North Atlantic Treaty Organization (both have made presentations in the past). You’ll find more on the retreat in an earlier post.

Stereoscopic 3D has been a prominent feature of the retreat for many years. Presenters on the topic have included Professor Martin Banks of the Visual Space Perception Laboratory at the University of California-Berkeley. Topics have included the BBC’s research on virtual stereoscopic cameras. And then there are the demonstrations.

For the 2008 retreat, HPA arranged to convert an auditorium at a local multiplex to 3D so participants could judge for themselves everything from the 3D Hannah Montana movie to different forms of 2D-to-3D conversions prepared by In-Three. And JVC demonstrated the technology behind its 2D-to-3D converter at the 2009 retreat, long before it turned into a product.

At that same retreat, RabbitHoles Media showed multiple versions of full-motion, full-color, high-detail holography (one is shown above right in a shot taken from Jeff Heusser’s coverage of the 2009 retreat). At last year’s retreat, Dolby demonstrated 3D HD encoded at roughly 7 Mbps.

Virtual 3D and 2D-to-3D conversion are just two forms that will be discussed in a presentation called “Alternatives to Two-Lens 3D.” And here are some of the other 3D sessions that will be on this year’s program: 3D Digital Workflow, Avid 3D Stereoscopic Workflow, Live 3D: Current Workarounds and Needed Tools, 3D Image Quality Metrics, Subtitling for Stereographic Media, Will 3D Become Mainstream?, Single-Lens Stereoscopy, Home 3D a Year Later, Storage Systems for 3D Post, Measurement of the Ghosting Performance of Stereo 3D systems for Digital Cinema and 3DTV, and Photorealistic 3D Models via Camera-Array Capture. Participants will range from 3D equipment manufacturers to 3D distributors to the 3D@Home Coalition.

If the 2011 HPA Tech Retreat seems like a great 3D event, that’s probably because it is. But it’s a lot more, too. If you’re interested in advanced broadcast technology, for example, here are some of the sessions on that topic: ATSC Next-Generation Broadcast Television, Information Theory for Terrestrial DTV Broadcasting, Near-Capacity BICM-ID-SSD for Future DTTB, DVB-T2 in Relation to the DVB-x2 Family, the Application of MIMO in DVB, Hybrid MIMO for Next-Generation ATSC, 3D Audio Transmission, Next-Generation Handheld & Mobile, High-Efficiency Video Coding, Convergence in the UHF Band, Global Content Repositories for Distributed Workflows, Content Protection, Pool Feeds & Shared Origination, Multi-Language Video Description, Consumer Delivery Mayhem, Networked Television Sets, Interoperable Media, FCC Taking Back Spectrum, the CALM Act, Making ATSC Loudness Easy, Media Fingerprinting, Embracing Over-the-Top TV, and Image Quality for the Era of Digital Delivery.

Broadcast-tech presenters will come from, among others: ABC, CBS, Fox, NBC, PBS, Sinclair Broadcast Group, and NAB; ATSC, BBC, Canada’s CRC, China’s Tsinghua University, the European Broadcasting Union, Germany’s Technische Universität Braunschweig, Korea’s Kyungpook National University, and Japan’s NHK Science & Technology Research Labs; AmberFin, DTS, Linear Acoustic, Microsoft, Rohde & Schwarz, Roundbox, Rovi, and Verance; Comcast, Starz, and TiVo.

Not interested in 3D or broadcast? How about reference monitoring, with presentations on LCD, OLED, and plasma, new research results from Gamma Guru Charles Poynton, and an expected major new product introduction from a major manufacturer?

What about workflow? Warner Bros. will present their evaluation of 13 different workflows at a “supersession” on the subject. The supersession will feature major studios and post facilities and is expected to cover everything from scene to screen. If that’s not enough, there will be other sessions on interoperable mastering and interoperable media, file-based workflows, and “Hollywood in the Cloud.”

Interested in archiving? Merrill Weiss and Karl Paulsen will be presenting an update on the Archive Exchange Format, a large panel will discuss (and possibly argue about) the many aspects of LTO-5, and there will even be a session on new technology for archiving on, yes, film. At left are some images from Point.360 Digital Film Labs (left is the original and right is their film-archived version).

There will be much more: hybrid routing, consumer electronics update, Washington update, global content repositories and other storage networks, shooting with HD SLRs, movie restoration (including a full screening of a masterpiece), standards update, new audio technologies for automating digital pre-distribution processes — even surprises about cable bend radius. The full program may be found on the HPA website.

In short, whatever you might want to know about motion-image production and distribution and related fields, there will probably be somebody there who knows the answer. Is this information available elsewhere, at, say, a SMPTE conference?  Perhaps it is.  But next month, SMPTE’s executive vice president, engineering vice president, and director of engineering will all be at the HPA Tech Retreat.
