
Where Are We Going, & How Did We Get Here (The Past) by Mark Schubin

June 1st, 2015 | No Comments | Posted in Download, Schubin Cafe

Recorded during “An Evening with Mark Schubin” at the SMPTE New England Section, Dedham Holiday Inn on May 14, 2015.

Learn the extraordinary history of the technology of motion pictures and television. Did you know that the first live video images and the first projected photographic motion pictures both appeared in the same year, and that year was 1879? That horizontal scanning lines, pixels, and transmitter/receiver synchronization were patented in 1843? That photographic motion pictures were patented in 1852 (and were stereoscopic)? If that’s not enough, Mark promises to show some older moving images — much older. Much, much, much older.

Direct Link (114 MB / TRT 48:00):
Where Are We Going, & How Did We Get Here (The Past) by Mark Schubin



Ex uno plures

March 27th, 2011 | No Comments | Posted in 3D Courses, Schubin Cafe

HPA breakfast roundtable - copyright @morningstar productions 2011

There were many wonders at the 17th-annual HPA Tech Retreat in February in the California desert, and many of the more than 500 attendees at the Hollywood Post Alliance event were left wondering. One thing they wondered about was how to accommodate all viewers from a single master or feed.

As usual, many manufacturers introduced new products at the event (it’s where, in the past, Panasonic first showed its Varicam and Sony first showed HDCAM SR). But this year even the best products gave one pause.

Consider, for example, the Kernercam 3D rig, shown at left. It is transportable from set to set in three relatively small packing cases (far left). It takes just a few minutes to go from those cases to shooting. Each individual camera subassembly (bottom right of the image at left, shown with a Sony P1 camera) is pre-adjusted to the desired stereoscopic alignment parameters. After that, the two camera modules (with almost any desired cameras) just snap into the overall rig, with no readjustment necessary. The mounts are so rugged that repeatedly snapping cameras in and out or even hitting them does not change the 3D alignment.

That’s great, right? For many purposes, it probably is. But some stereoscopic camera-rig manufacturers, such as 3ality, are justifiably proud that their rigs do not use fixed alignment and can, therefore, be adjusted even during shots.

The choice of a super-rugged, fixed mount or a less-rugged, remotely adjustable mount is just that, a choice, and directors, cinematographers, & videographers have been making choices all their professional lives. The result of those choices adds up to a desired effect. Or does it?

Sony also introduced new products at this year’s HPA Tech Retreat. One, SR Memory, with the ability to store up to a terabyte of data on a solid-state memory “card” and a transfer rate allowing four live uncompressed HD streams simultaneously, falls into that category of choice. It’s also a wonder of new technology (though retreat attendees were given a preview in 2010, as shown in the picture at right, from Adam Wilt’s excellent coverage of last year’s HPA Tech Retreat).

Another new Sony introduction, OLED reference monitors, might have introduced a different kind of wonder. Some in attendance were delighted by what seemed like perfect image reproduction in something that (in one size, at least) will fit in a standard equipment rack. Others thought that existing larger devices already offer sufficiently good reference monitoring.


The way Sony conducted its demonstration, the new monitor was placed between Sony’s own reference-grade LCD and CRT monitors. With 24-frame-per-second source material, the CRT image flickered perceptibly. In black image areas, the LCD was noticeably lighter. The OLED suffered from neither problem. But is that necessarily good?

Many home viewers still watch TV on picture tubes. Many others watch on LCD displays. Others watch plasma or DLP. Some view images roughly 60 times a second, others 120, 240, or even 480 times a second. Some watch TV in dimly lit living rooms. Others watch on mobile devices outdoors in the sun. Still others watch content shot with the same cameras on giant projection screens in cinema auditoriums or even bigger LED screens in sports stadiums. The problem is that we are no longer shafted.

We were originally shafted in 1925 — literally! In that year, John Logie Baird was probably the first person to achieve a recognizable video image of a human face. A picture of the apparatus he used is shown at right. At far right is the original subject, a ventriloquist’s dummy’s head called Stooky Bill. The spinning disks on the shaft were used for image scanning. But the shaft extended from the camera section to a display section in the next room. It was impossible to be out of sync.

Another television pioneer was Philo Taylor Farnsworth, probably the first person to achieve all-electronic television (television in which neither the camera nor the display uses mechanical scanning). His first image, in 1927, was a stationary dollar sign.

Although Farnsworth deserves credit for achieving all-electronic television, he was not the first to conceive it. Boris Rosing came up with the picture tube in 1907 in Russia, and the following year Alan Archibald Campbell Swinton came up with the concept of all-electronic television in Britain. His diagram (left) was published a few years later. Although the idea of tube-based cameras might seem strange today, the first video camera to be shown at an NAB exhibit that did not use a tube didn’t appear until 1980 (and then only in prototype form), and tubeless HD cameras didn’t begin to appear until 1992.

Tube-based cameras and TVs with picture tubes didn’t have the physical shaft of Baird’s first apparatus, but they were still effectively shafted. When the electron beam in the camera’s tube(s) was at the upper left, the electron beam in the viewer’s picture tube was in the same position. Tape could delay the whole program, but it didn’t change the relationship.

The introduction of solid-state imaging devices changed things. An image might be captured all at once but displayed a line at a time, resulting in “rubbery” table legs as a camera panned past them. Camera tubes and solid-state imaging devices also had other differences. We’ve learned to work with those differences as well as the ones between different display technologies.

Now there’s 3D. I’ve written before about 3D’s other three dimensions and their effect on depth perception: pupillary distance (between the eyes, notably different between adults and children), screen size, and viewing distance. There are other issues associated with individual viewers, who might be blind in one eye, stereo blind, have limited fusion ranges (depths at which the two stereoscopic images can fuse into one), long acquisition times (until fusion occurs), etc.
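The way those three dimensions interact can be sketched with similar triangles. The function below is purely illustrative — the parameter names and the 63 mm adult interocular figure are my assumptions for the sketch, not anything presented at the retreat:

```python
def perceived_depth(parallax_m, viewing_distance_m, interocular_m=0.063):
    """Depth of the fused point relative to the screen plane, by similar
    triangles: positive = behind the screen (uncrossed parallax),
    negative = in front of it (crossed parallax).

    parallax_m: separation of the two eye views on the screen, in meters
                (the same content on a bigger screen has more parallax).
    interocular_m: pupillary distance; roughly 0.063 m for a typical
                   adult, smaller for a child.
    """
    return viewing_distance_m * parallax_m / (interocular_m - parallax_m)

# Same 10 mm on-screen parallax, viewed from 2 m:
adult = perceived_depth(0.010, 2.0, interocular_m=0.063)  # ~0.38 m behind screen
child = perceived_depth(0.010, 2.0, interocular_m=0.050)  # 0.50 m behind screen
```

Note how the smaller interocular distance yields more perceived depth from the same image, which is why the same master plays differently for adults and children, and why parallax that fuses comfortably on a living-room set can become unfusable when scaled up to a cinema screen.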

There are also display-technology issues. One is ghosting. A presentation in the HPA Tech Retreat’s main program was called “Measurement of the Ghosting Performance of Stereo 3D Systems for Digital Cinema and 3DTV,” presented by Wolfgang Ruppel of RheinMain University of Applied Sciences in Germany. Ruppel presented test charts used to measure various types of ghosting for commonly used cinema and TV display systems. A trimmed version of one of his slides appears at left. It’s taken (with permission) from Adam Wilt’s once-again excellent coverage of the 2011 HPA Tech Retreat (which includes the full slides and the names of the stereoscopic display systems).

Ruppel’s paper also looked at the effects of ghosting suppression systems and noted color shifting. Some systems shifted colors towards yellow, others towards blue, and at least two systems shifted the colors differently for the two eyes! Can one master recording deliver accurate color results to cinemas when one auditorium might use one 3D display system and another a different one?

In one of the demo rooms, SRI (Sarnoff Labs) demonstrated a different test pattern for checking stereoscopic 3D parameters. It is shown above with the left- and right-eye views side by side. The crosstalk (ghosting) scale is shown at right in a demonstration of the way it would look with 4% crosstalk. The pattern can also be used to check synchronization between eye views, using the small, moving white rectangles shown just to the right of center below the eye-view identification.
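Crosstalk figures like that 4% are commonly computed as a leakage ratio. The sketch below uses the widely cited definition — luminance leaking from the unintended eye’s image, measured above the display’s black level, divided by the intended signal above black — but the names are mine, and Ruppel’s paper may use a different formulation:

```python
def crosstalk_percent(leak_lum, signal_lum, black_lum):
    """Percentage of the unintended eye's image that leaks through.
    Both leakage and intended signal (e.g. in cd/m^2) are measured
    above the display's black level before taking the ratio."""
    return 100.0 * (leak_lum - black_lum) / (signal_lum - black_lum)

# E.g. black at 2 cd/m^2, full white at 102, leakage measured at 6:
crosstalk_percent(6.0, 102.0, 2.0)  # -> 4.0, the level of the demo scale
```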

There were other Sarnoff demonstrations, however, that indicated that synchronization of eye views is not as simple as making them appear when they are supposed to. Consider, for example, the current debate about the use of active glasses vs. passive glasses in 3DTVs.

Active glasses shutter the right eye during the left eye’s view and then shutter the left eye during the right eye’s view. Passive glasses usually involve a pattern of polarizers on the screen sending portions of the image (typically every other row) to one eye and the rest to the other (although there are also passive-glasses systems that use a full-image optical-retarder plate to alternate between left-eye and right-eye images).

Above are side-by-side right-eye and left-eye random-dot-type images used in another of the SRI demos. If you cross your eyes so they form a single image, you should see a circular disc, slightly to the right of the center, floating above the background.

That’s a still image. SRI’s demo had multiple displays of moving images: one used active glasses, another used passive glasses with simultaneously presented eye views.

When the sequence was set for the left- and right-eye views to move the disc simultaneously side to side, that’s exactly what viewers looking at the passive display saw. But, with the exact same signal feeding the active-glasses display, viewers of that one saw the disc moving in an elliptical path into and out of the screen as well as back and forth. With the selection of a different playback file, the Sarnoff demonstrators could make the active-glasses view be side to side and the passive-glasses view be elliptical.
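One plausible account of the effect (my reading, not an explanation SRI stated): a time-sequential display presents the two eye views a field apart, and for a laterally moving object that delay reads as extra disparity. Because its sign flips when the disc reverses direction, side-to-side motion acquires an in-and-out depth component, hence the elliptical path. A minimal sketch, with assumed figures:

```python
def spurious_disparity_px(velocity_px_per_s, eye_delay_s):
    """Extra disparity introduced when one eye's view is delayed
    relative to the other: lateral velocity times the inter-eye delay.
    The sign follows the direction of motion, so oscillating lateral
    motion picks up an oscillating (in-and-out) depth error on a
    time-sequential display."""
    return velocity_px_per_s * eye_delay_s

# A disc sweeping at 960 px/s on a 120 Hz active-glasses display,
# whose eye views are one field (1/120 s) apart:
spurious_disparity_px(960.0, 1.0 / 120.0)  # -> 8.0 px of false disparity
```

A simultaneous-presentation passive display has zero inter-eye delay for a given frame, which would fit the observation that the same file looked flat on one display and elliptical on the other.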


The random-dot nature of the image assured that no other real-world depth cues could interfere. But how significant would the elliptical change be in real-world images?

That’s one thing SRI wants to figure out, so they can come up with a mechanism to rate the quality of stereoscopic images in the same way that their JND (just-noticeable differences) technology has been used to evaluate the quality of non-stereoscopic imagery in the era of bit-rate-reduced (“compressed”) recording and distribution.

It’s not easy to figure out. One SRI sequence of slowly changing depth caused one researcher to get queasy. As can be seen at left, however, it didn’t bother another viewer at all.

We’re just beginning to learn about the many factors that can affect both 2D (consider those CRT, OLED, and LCD displays at the Sony demo, as well as others not shown) and 3D viewing. But there’s no turning back.

The motto carried in the beak of the eagle on the Great Seal of the United States is often translated as “Out of Many, One.” The title of this post means “Out of One, Many,” the problem faced by those creating moving-image programming in the post-shafted era.

That’s the front of the Great Seal. The back has two more mottoes: One, Novus Ordo Seclorum, emphasizes the impossibility of returning to the shaft. We’re in “A New Order of the Ages.” The other, Annuit Coeptis, I choose to translate as “Might As Well Smile About These Undertakings.”


