
NAB 2015 Wrap-up by Mark Schubin

June 13th, 2015 | No Comments | Posted in Download, Schubin Cafe

Recorded May 20, 2015
SMPTE DC Bits-by-the-Bay, Chesapeake Beach Resort

Direct Link (44 MB / TRT 34:01):
NAB 2015 Wrap-up by Mark Schubin



B4 Long

April 22nd, 2015 | No Comments | Posted in Schubin Cafe


Something extraordinary happened at this month’s annual convention of the National Association of Broadcasters in Las Vegas. Actually, it was more a number of product introductions — from seven different manufacturers — adding up to something extraordinary: the continuation of the B4 lens mount into the next era of video production.

Perhaps it’s best to start at the beginning. The first person to publish an account of a working solid-state television camera knew a lot about lens mounts. His name was Denis Daniel Redmond, his account of “An Electric Telescope” was published in English Mechanic and World of Science on February 7, 1879, and the reason he knew about lens mounts was that, when he wasn’t devising new technologies, he was an ophthalmic surgeon.

It would be almost half a century longer before the first recognizable video image of a human face could be captured and displayed, an event that kicked off the so-called mechanical-television era, one in which some form of moving component scanned the image in both the camera and the display system. At left above, inventor John Logie Baird posed next to the apparatus he used. The dummy head (A) was scanned by a spiral of lenses in a rotating disk.

A mechanical-television camera designed by SMPTE founder Charles Francis Jenkins, shown at right, used a more conventional single lens, but it, too, had a spinning scanning disk. There was so much mechanical technology that the lens mount didn’t need to be made pretty.

The mechanical-television era lasted only about one decade, from the mid-1920s to the mid-1930s. It was followed by the era of cathode-ray-tube (CRT) based television: camera tubes and picture tubes. Those cameras also needed lenses.

The 1936 Olympic Games in Berlin might have been the first time that really long television lenses were used — long both in focal length and in physical length. They were so big (left) that the camera-lens combos were called Fernsehkanone, literally “television cannon.” The mount was whatever was able to support something that large and keep it connected to the camera.

In that particular case, the lens mount was bigger than the camera. With the advent of color television and its need to separate light into its component colors, cameras grew.

At right is an RCA TK-41 camera, sometimes described as being comparable in size and weight to a pregnant horse; its viewfinder, alone, weighed 45 lbs. At its front, a turret (controlled from the rear) carried a selection of lenses of different focal lengths, from wide angle to telephoto. Behind the lens, a beam splitter fed separate red, green, and blue images to three image-orthicon camera tubes.

The idea of hand-holding a TK-41 was preposterous, even for a weight lifter. But camera tubes got smaller and, with them, cameras.

RCA’s TK-44, with smaller camera tubes, was adapted into a “carryable” camera by Toronto station CFTO, but it was so heavy that the backpack section was sometimes worn by a second person, as shown at the left. The next generation actually had an intentionally carryable version, the TKP-45, but, even with that smaller model, it was useful for a camera person to be a weightlifter, too.

At about the same time as the two-person adapted RCA TK-44, Ikegami introduced the HL-33, a relatively small and lightweight color camera. The HL stood for “Handy-Looky.” It was soon followed by the truly shoulder-mountable HL-35, shown at right.

The HL-35 achieved its small form factor through the use of 2/3-inch camera tubes. The outside diameter of the tubes was, indeed, 2/3 of an inch, about 17 mm, but, due to the thickness of the tube’s glass and other factors, the size of the image was necessarily smaller, just 11 mm in diagonal.

Many 2/3-inch-tubed cameras followed the HL-35. As with cameras that used larger tubes, the lens mount wasn’t critical. Each tube could be moved slightly into the best position, and its scanning size and geometry could also be adjusted. Color-registration errors were common, but they could be dealt with by shooting a registration chart and making adjustments.

The CRT era was followed by the era of solid-state image sensors. They were glued onto color-separation prisms, so the ability to adjust individual tubes and scanning was lost. NHK, the Japan Broadcasting Corporation, organized discussions of a standardized lens-camera interface dealing with the physical mount, optical parameters, and electrical connections. Participants included Canon, Fuji, and Nikon on the lens side and Hitachi, Ikegami, JVC, Matsushita (Panasonic), Sony, and Toshiba on the camera side.

To allow the use of 2/3-inch-format lenses from the tube era, even though they weren’t designed for fixed-geometry sensors, the B4 mount (above left) was adopted. But there was more to the new mount than just the old mechanical connection. There were also specifications of different planes for the three color sensors, types of glass to be used in the color-separation prism and optical filters, and electrical signal connections for iris, focus, zoom, and more.

When HDTV began to replace standard definition, there was a trend toward larger image sensors, again — initially camera tubes. After all, more pixels should take up more space. Sony’s solid-state HDC-500 HD camera used one-inch-format image sensors instead of 2/3-inch. But existing 2/3-inch lenses couldn’t be used on the new camera. So, even though those existing lenses were standard-definition, the B4 mount continued, newly standardized in 1992 as Japan’s Broadcast Technology Association S-1005.

The first 4K camera also sized up — way up. Lockheed Martin built a 4K camera prototype using three solid-state sensors (called Blue Herring CCDs, shown at left), and the image area on each sensor was larger than that of a frame of IMAX film.

As described in a paper in the March 2001 SMPTE Journal, “High-Performance Electro-Optic Camera Prototype” by Stephen A. Stough and William A. Hill, that meant a large prism. And a large prism meant a return to a camera size not easily shouldered (shown above at right).

That was a prototype. The first cameras actually to be sold that were called 4K took a different approach, a single large-format (35 mm movie-film-sized) sensor covered with a patterned color filter.

An 8×8 Bayer pattern is shown at right, as drawn by Colin M. L. Burnett. The single sensor and its size suggested a movie-camera lens mount, the ARRI-developed positive-lock or PL mount.

One issue associated with the color-patterned sensors is the difference in spatial resolution between the colors. As seen at left, the red and blue have half the linear spatial resolution of the sensor (and of the green). Using an optical low-pass filter to prevent red and blue aliases would eliminate the extra green resolution; conversely, a filter that works for green would allow red and blue aliases. And, whether it’s called de-Bayering, demosaicking, uprezzing, or upconversion, changing the resolution of the red and blue sites to that of the overall sensor requires some processing.
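The processing amounts to estimating each missing color sample from its neighbors. Here is a minimal bilinear sketch, assuming an RGGB tile layout; real cameras use far more sophisticated, edge-aware demosaicking, so treat this only as an illustration of the principle.

```python
import numpy as np

def convolve2d_same(a, k):
    """Tiny 'same'-size 2-D convolution with a 3x3 kernel, zero-padded."""
    p = np.pad(a.astype(float), 1)
    h, w = a.shape
    res = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            res += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return res

def bilinear_demosaic(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic: (H, W) -> (H, W, 3).

    Each missing color sample is the average of the nearest sampled
    sites of that color -- the simplest possible reconstruction.
    """
    h, w = raw.shape
    out = np.zeros((h, w, 3))
    # Masks marking where each color was actually sampled (RGGB tiling)
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    k = np.ones((3, 3))
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        num = convolve2d_same(np.where(mask, raw, 0.0), k)  # sum of sampled values
        den = convolve2d_same(mask.astype(float), k)        # count of sampled sites
        out[..., ch] = num / np.maximum(den, 1e-9)
    return out

raw = np.full((6, 6), 0.5)   # a flat gray mosaic
rgb = bilinear_demosaic(raw)
print(rgb.shape)             # (6, 6, 3)
```

Note that the green plane is reconstructed from twice as many samples as red or blue, which is exactly the resolution imbalance described above.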

Another issue is related to the range of image-sensor sizes that use PL mounts. At right is a portion of a guide created by AbelCine showing shot sizes for the same focal-length lens used on different cameras. In each case, the yellowish image is what would be captured on a 35-mm film frame, and the blueish image is what the particular camera captures from the same lens. The windmill at the left, prominent in the Canon 5D shot, is not in the Blackmagic Design Cinema Camera shot.

Whatever their issues, thanks to their elimination of a prism, the initial crop of PL-mount digital-cinematography cameras, despite their large-format image sensors, were relatively small, light, and easily carried. Their size and weight differences from the Lockheed Martin prototype were dramatic.

There was a broad selection of lenses available for them, too — but no long-range zooms like the B4-mount lenses needed for sports and other live-event production. It’s possible to adapt a B4 lens to a PL-mount camera, but even an optically perfect adaptor would lose more than 2.5 stops (equivalent to needing about six times more light). Because nothing is perfect, the adaptor would also introduce its own degradations to the images from lenses designed for HD, not 4K (or Ultra HD, UHD). And a large-format long-range zoom lens would be a difficult project. So multi-camera production remained largely B4-mount, three-sensor, prism-based HD, while single-camera production moved to PL-mount single sensors with more photo-sensitive sites (commonly called “pixels”).
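The stop-to-light conversion is easy to check: each photographic stop halves the light, so a loss of s stops means needing 2 to the power s times more light.

```python
# Each photographic stop halves the light reaching the sensor, so a
# loss of s stops means needing 2**s times more light to compensate.
def light_factor(stops):
    return 2 ** stops

print(light_factor(2.5))  # ~5.66, i.e. "about six times more light"
```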

Then, at last year’s NAB Show, Grass Valley showed a B4-mount three-sensor prism-based camera labeled 4K. Last fall, Hitachi introduced a four-chip B4-mount UHD camera. And, at last week’s NAB Show, Ikegami, Panasonic, and Sony added their own B4-mount UHD cameras. And both Canon and Fujinon announced UHD B4-mount long-range zoom lenses.

The camera imaging philosophies differ. The Grass Valley LDX 86 is optically a three-sensor HD camera, so it uses processing to transform the HD to UHD, but so do color-filtered single-sensor cameras; it’s just different processing. The Grass Valley philosophy offers appropriate optical filtering; the single-sensor cameras offer resolution assistance from the green channel.

Hitachi’s SK-UHD4000 effectively takes a three-sensor HD camera and, with the addition of another beam-splitting prism element, adds a second HD green chip, offset from the others by one-half pixel diagonally. The result is essentially the same as the color-separated signals from a Bayer-patterned single higher-resolution sensor, and the processing to create UHD is similar.

Panasonic’s AK-UC3000 uses a single, color-patterned one-inch-format sensor. To use a 2/3-inch-format B4 lens, therefore, it needs an optical adaptor, but the adaptor is built into the camera, allowing the electrical connections that enable processing to reduce lens aberrations. Also, the optical conversion from 2/3-inch to one-inch is much less than that required to go to a Super 35-mm movie-frame size.

Both Ikegami’s UHD camera (left) and Sony’s HDC-4300 (right) use three 2/3-inch-format image sensors on a prism block, but each image sensor is truly 4K, making them the first three-sensor 4K cameras since the Lockheed Martin prototype. By increasing the resolution without increasing the sensor size, however, they have to contend with photo-sensitive sites a quarter of the area of those on HD-resolution chips, reducing sensitivity.
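The quarter-area figure follows directly from the geometry: doubling both horizontal and vertical pixel counts within a fixed image area quarters each photosite. A quick sketch (the 9.6 × 5.4 mm active area used here is an illustrative approximation of the 2/3-inch format, not a published spec):

```python
# Holding the image area fixed while doubling horizontal and vertical
# pixel counts quarters the area of each photosite.  The 9.6 x 5.4 mm
# active area is an illustrative approximation of the 2/3-inch format.
def site_area(width_mm, height_mm, h_pixels, v_pixels):
    return (width_mm / h_pixels) * (height_mm / v_pixels)

hd = site_area(9.6, 5.4, 1920, 1080)   # HD photosite area, mm^2
uhd = site_area(9.6, 5.4, 3840, 2160)  # 4K photosite area, mm^2
print(hd / uhd)  # 4.0 -- each 4K site collects a quarter of the light
```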

It might seem strange that camera manufacturers are moving to B4-mount 2/3-inch-format 4K cameras at a time when there are no B4-mount 4K lenses, but the same thing happened with the introduction of HD. Almost any lens will pass almost any spatial resolution, but the “modulation transfer function” or MTF (the amount of contrast that gets through at different spatial resolutions) is usually better in lenses intended for higher-resolution applications, and the higher the MTF the sharper the pictures look.
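MTF can be pictured as the ratio of output contrast to input contrast for a test pattern at one spatial frequency. A toy sketch, simulating lens blur with a simple moving average (my own illustration, not any manufacturer's measurement method):

```python
import numpy as np

def modulation(signal):
    """Michelson contrast of a sinusoidal test pattern."""
    return (signal.max() - signal.min()) / (signal.max() + signal.min())

def mtf(input_pattern, output_pattern):
    """MTF at one spatial frequency: output contrast / input contrast."""
    return modulation(output_pattern) / modulation(input_pattern)

# A "lens" blurring a fine sine pattern, simulated with a moving average
x = np.arange(512)
fine = 0.5 + 0.5 * np.sin(2 * np.pi * x / 8)        # high spatial frequency
blurred = np.convolve(fine, np.ones(5) / 5, mode='same')

# Compare interior samples (edges of the convolution are padding artifacts)
print(mtf(fine[10:-10], blurred[10:-10]))  # ~0.48: detail passed, contrast reduced
```

The detail still gets through (the frequency is not cut off), but at roughly half the contrast, which is exactly why a higher-MTF lens makes the same-resolution picture look sharper.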

With all five of the major manufacturers of studio/field cameras moving to 2/3-inch 4K cameras, lens manufacturers took note. Canon showed a prototype B4-mount long-range 4K zoom lens, and Fujinon actually introduced two models, the UA80x9 (left) and the UA22x8 (right). The lenses use new coatings that increase contrast and new optical designs that increase MTF dramatically even at HD resolutions.

There is no consensus yet on a shift to 4K production, but 4K B4-mount lenses on HD cameras should significantly improve even HD pictures.  That’s nice!


The Blind Leading

December 10th, 2011 | No Comments | Posted in Schubin Cafe

Once upon a time, people were prevented from getting married, in some jurisdictions, based on the shade of their skin colors. Once upon a time, a higher-definition image required more pixels on the image sensor and higher-quality optics.

Actually, we still seem to be living in the era indicated by the second sentence above. At the 2012 Hollywood Post Alliance (HPA) Tech Retreat, to be held February 14-17 (with a pre-retreat seminar on “The Physics of Image Displays” on the 13th) at the Hyatt Grand Champions in Indian Wells, California, one of the earliest panels in the main program will be about 4K cameras, and representatives from ARRI, Canon, JVC, Red, Sony, and Vision Research will all talk about cameras with far more pixel sites on their image sensors than there are in typical HDTV cameras; Sony’s, shown at the left, has roughly ten times as many.

That’s by no means the limit. The prototypical ultra-high-definition television (UHDTV) camera shown at the right has three image sensors (from Forza Silicon), each one of which has about 65% more pixel sites than on Sony’s sensor. There is so much information being gathered that each sensor chip requires a 720-pin connection (and Sony’s image sensor is intended for use in a single-sensor camera, so the three-sensor camera actually has about five times more pixel sites in total). But even that isn’t the limit! As I pointed out last year, Canon has already demonstrated a huge hyper-definition image sensor, with four times the number of pixels of even those Forza image sensors used in the camera at the right!

Having entered the video business at a time when picture editing was done with razor blades, iron-filing solutions to make tape tracks visible, and microscopes, and when video projectors utilized oil reservoirs and vacuum pumps, I’ve always had a fondness for the physical characteristics of equipment. Sensors will continue to increase in resolution, and I love that work. At the same time, I recognize some of the problems of an inexorable path towards higher definition.

The standard-definition camera that your computer or smart phone uses for video conferencing might have an image sensor with a resolution characterized as 640×480 or 0.3 Mpel (megapixels), even if that same smart phone has a much-higher-resolution image sensor pointing the other way for still pictures. That’s because video must make use of continually changing information. At 60 frames per second, that 0.3 Mpel camera delivers more pixels in one second than an 18 Mpel sensor shooting a still image.

Common 1080-line HDTV has about 2 Mpels. So called “4K” has about 8 Mpels. It’s already tough to get a great HDTV lens; how will we deal with UHDTV’s 33-Mpel “8K”?

A frame rate of 60-fps delivers twice as much information as 30-fps; 120-fps is twice as much as 60-fps. How will we ever manage to process high-frame-rate UHDTV?
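The pixel-rate comparisons in the last few paragraphs are easy to verify with a little arithmetic:

```python
# Pixels delivered per second for various formats (the article's figures).
def pixel_rate(h, v, fps):
    return h * v * fps

sd_conference = pixel_rate(640, 480, 60)   # 18,432,000 pel/s: more in one
still_18mpel = 18_000_000                  # second than an 18 Mpel still
hd_1080p60 = pixel_rate(1920, 1080, 60)    # ~124 Mpel/s
uhd_8k_120 = pixel_rate(7680, 4320, 120)   # ~4 Gpel/s

print(sd_conference > still_18mpel)        # True
print(uhd_8k_120 / hd_1080p60)             # 32.0: the processing challenge
```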

Perhaps it’s worth consulting the academies. In U.S. entertainment media, the highest awards are granted by the Academy of Motion Picture Arts & Sciences (the Academy Award or Oscar), the Academies (there are two) of Television Arts & Sciences (the Emmy Award), and the Recording Academy (the Grammy Award). Win all three, and you are entitled to go on an EGO (Emmy-Grammy-Oscar) trip!

In the history of those awards, only 33 people have ever achieved an EGO trip. And only two of those also won awards from the Audio Engineering Society (AES), the Institute of Electrical and Electronics Engineers (IEEE), and the Society of Motion Picture and Television Engineers (SMPTE). You’re probably familiar with the last name of at least one of those two, Ray Dolby, shown at left during his induction into the National Inventors Hall of Fame in 2004.

The other was Thomas Stockham. Some in the audio community might recognize his name.  He was at one time president of the AES, is credited with creating the first digital-audio recording company (Soundstream), and was one of the investigators of the 18½-minute gap in then-President Richard Nixon’s White House tapes regarding the Watergate break-in.

Those achievements appeal to my sense of appreciation of physical characteristics. The Soundstream recorder (right) was large and had many moving parts. And the famous “stretch” of Nixon’s secretary Rose Mary Woods (left), which would have been required to accidentally cause the gap in the recording, is a posture worthy of an advanced yogi (Stockham’s investigative group, unfortunately for that theory, found that there were multiple separate instances of erasure, which could not have been caused by any stretch). But what impressed (and still impresses) me most about Stockham’s work has no physical characteristics at all.  It’s pure mathematics.

On the last day of the HPA Tech Retreat, as on the first day, there will be a presentation on high-resolution imaging. But it will have a very different point of view. Siegfried Foessel of Germany’s Fraunhofer research institute will describe “Increasing Resolution by Covering the Image Sensor.” The idea is that, instead of using a higher-resolution sensor, which increases data-readout rates, it’s actually possible to use a much-lower-resolution image sensor, with the pixel sites covered in a strange pattern (a portion of which is shown at the right). Mathematical processing then yields a much-higher-resolution image — without increasing the information rate leaving the sensor.

In the HPA Tech Retreat demo room, there should be multiple demonstrations of the power of mathematical processing. Cube Vision and Image Essence, for example, are expected to be demonstrating ways of increasing apparent sharpness without even needing to place a mask over the sensor. Lightcraft Technology will show photorealistic scenes that never even existed except in a computer. And those are said to have gigapixel (thousand-megapixel) resolutions!

All of that mathematical processing, to the best of my knowledge, had no direct link to Stockham, but he did a lot of mathematical processing, too. In the realm of audio, his most famous effort was probably the removal of the recording artifacts of the acoustical horn into which the famous opera tenor Enrico Caruso sang in the era before microphone-based recording (shown at left in a drawing by the singer, himself).

As Caruso sang, the sound of his voice was convolved with the characteristics of the acoustic horn that funneled the sound to the recording mechanism. Recovering the original sound for the 1976 commercial release Caruso: A Legendary Performer required deconvolving the horn’s acoustic characteristics from the singer’s voice.  That’s tough enough even if you know everything there is to know about the horn. But Stockham didn’t, so he had to use “blind” deconvolution. It wasn’t the first time.

He was co-author of an invited paper that appeared in the Proceedings of the IEEE in August 1968. It was called “Nonlinear Filtering of Multiplied and Convolved Signals,” and, while some of it applied to audio signals, other parts applied to images. He followed up with a solo paper, “Image Processing in the Context of a Visual Model,” in the same journal in July 1972. Both papers have been cited many hundreds of times in more-recent image-processing work.

One image in both papers showed the outside of a building, shot on a bright day; the door was open, but the inside was little more than a black hole (a portion of the image is shown above left, including artifacts of scanning the print article with its half-tone images). After processing, all of the details of the equipment inside could readily be seen (a portion of the image is shown at right, again including scanning artifacts). Other images showed effective deblurring, and the blur could be caused by either lens defocus or camera instability.

Stockham later (in 1975) actually designed a real-time video contrast compressor that could achieve similar effects. I got to try it. I aimed a bright light up at some shelves so that each shelf cast a shadow on what it was supporting. Without the contrast compressor, virtually nothing on the shelves could be seen; with it, fine detail was visible. But the pictures were not really of entertainment quality.

That was, however, in 1975, and technology has marched — or sprinted — ahead since then. The Fraunhofer Institut presentation at the 2012 HPA Tech Retreat will show how math can increase image-sensor resolution. But what about the lens?

A lens convolves an image in the same way that an old recording horn convolved the sound of an acoustic gramophone recording. And, if the defects of one can be removed by blind deconvolution, so might those of the other. An added benefit is that the deconvolution need not be blind; the characteristics of the lens can be identified. Today’s simple chromatic-aberration corrections could extend to all of a lens’s aberrations, and even its focus and mount stability.
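When the convolving system is known, deconvolution reduces to regularized division in the frequency domain. A minimal non-blind Wiener-filter sketch in one dimension (my own illustration; Stockham's blind deconvolution had the much harder job of estimating the "kernel" — the horn's response — as well):

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_power=1e-4):
    """Non-blind Wiener deconvolution of a 1-D signal.

    When the blur kernel is known -- a measured lens or recording-horn
    response -- division in the frequency domain, regularized against
    noise, recovers an estimate of the original signal.  Blind
    deconvolution must additionally estimate the kernel itself.
    """
    n = len(blurred)
    H = np.fft.fft(kernel, n)          # frequency response of the blur
    B = np.fft.fft(blurred)
    # Wiener filter: conj(H)/(|H|^2 + noise) instead of a naive 1/H
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft(B * W))

# Blur a test signal with a known kernel (circularly), then recover it
rng = np.random.default_rng(0)
original = rng.standard_normal(256)
kernel = np.array([0.6, 0.3, 0.1])     # a simple smoothing "horn"
blurred = np.real(np.fft.ifft(np.fft.fft(original) * np.fft.fft(kernel, 256)))
restored = wiener_deconvolve(blurred, kernel)
print(np.abs(restored - original).max())  # tiny residual error
```

The regularizing `noise_power` term is what keeps the division from exploding at frequencies the blur nearly wiped out, which is also why real deconvolution can never perfectly undo a lens.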

Is it merely a dream?  Perhaps.  But, at one time, so was the repeal of so-called anti-miscegenation laws.


How Good Is Good Enough?

April 30th, 2011 | No Comments | Posted in 3D Courses, Schubin Cafe

As usual, there were many new, useful products announced at this month’s annual convention of the National Association of Broadcasters (NAB) in Las Vegas. As usual, there were also many new trends, one sparked by the U.S. Congress and another by last month’s earthquake & tsunami in Japan.

At the event’s Digital Cinema Summit, not only 3D but also higher frame rates, greater spatial resolutions, and increased bit depths and color gamuts were discussed. Yet the announcement that startled me most was near the beginning of Panasonic’s press conference.

Normally, I don’t pay much attention to manufacturer sales announcements. They might indicate real interest in a product, but the sales could also be the result of many other factors, including sweetheart deals and existing infrastructure.

Panasonic’s announcement was about the 2012 Olympic Games in London. Like the Super Bowl and other grand events, the quadrennial Olympics are opportunities to showcase new video technologies. At the 1984 Games, for example, Panasonic introduced its fluorescent-discharge-tube-based Astrovision giant color screens with pictures visible in broad daylight.

What new technology might the company provide for the world’s top sporting event, taking place more than a year after NAB 2011? Might it be something to do with 3D? Panasonic introduced a new integrated 3D camcorder at the show, the AG-3DP1, with larger image sensors (1/3-inch format), greater-range lenses (17x), and AVC-Intra recording onto dual P2 solid-state memory cards.

Might it be something to do with AVC-Ultra, the company’s highest-grade video bit-rate-reduction system, capable of dealing with 1080-line HD at 60 progressively-scanned pictures per second or other signal types including 3D and Hollywood’s 2K 4:4:4? Might it be something beyond even that?

Alas, no. The startling (to me) announcement was that “the official recording format for capturing the London 2012 Olympic Games,” as specified by Olympic Broadcasting Services London (OBSL), the host broadcaster, will be–ready?–DVCPRO HD. Next year’s NAB show will be the 13th annual equipment show since that format was introduced (and the 14th since it was announced).

As the image above right indicates, DVCPRO HD was introduced as a tape-cassette-based recording format (although Panasonic noted that OBSL “will also use the P2 HD series with solid-state memory cards”). Like HDCAM before it, DVCPRO HD is also a sub-sampling recording format; it doesn’t capture full horizontal resolution even in luma (brightness detail). But it would appear that OBSL considers it good enough.

“Good enough” was a phrase that came to my mind at many places on the NAB Show exhibit floor this year. Consider Sony’s new OLED reference monitors. The BVM series was introduced at February’s Hollywood Post Alliance (HPA) Tech Retreat. They have a slight color shift with viewing angle but otherwise seem ideal for the production-truck market, where a 42-inch plasma screen in video control is generally out of the question.  And their price is in the range of similarly sized reference monitors using other technologies.

At NAB 2011, Sony expanded its offerings with a PVM OLED series at less than a quarter of the price (a discount of about 77%). Not only that, but the PVM monitors are even thinner than the BVM and include built-in controls.

Obviously, there have to be some drawbacks, given the extreme price difference. The signal processing in the PVM is not as high in quality as in the BVM, the flexibility is limited, and the OLED panels for the PVM are chosen from the manufactured stock after the top-of-the-line BVM panels have been selected and removed.

That might mean a bit less color-shift-free viewing angle. But another flaw was mentioned for the PVM panels: possible dead pixels.

In a sense, that’s no different from what Sony has done since its first chip-based cameras. Perfect image sensors went into the broadcast series, slightly flawed ones into the professional series, and more flawed ones into the consumer series. In cameras, however, bad pixels can be effectively “removed” by taking an average of the good pixels around them. In a display panel, there is nothing between the dead pixel and the eye to do any averaging (though Sony promised bad pixels would be off, never on).
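That camera-side concealment can be sketched as replacing each flagged photosite with the average of its live neighbors. A simplified, single-plane illustration (real cameras keep per-sensor defect lists and may interpolate along edges rather than averaging blindly):

```python
import numpy as np

def conceal_dead_pixels(image, dead_mask):
    """Replace dead photosites with the mean of their live 3x3 neighbors.

    A simplified single-plane sketch of in-camera defect concealment;
    the dead_mask marks known-bad sites from the sensor's defect list.
    """
    h, w = image.shape
    out = image.astype(float).copy()
    for y, x in zip(*np.nonzero(dead_mask)):
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        patch = image[y0:y1, x0:x1]
        good = ~dead_mask[y0:y1, x0:x1]   # exclude other dead sites
        if good.any():
            out[y, x] = patch[good].mean()
    return out

# A flat gray frame with one dead (stuck-at-zero) pixel
frame = np.full((5, 5), 100.0)
frame[2, 2] = 0.0
mask = np.zeros((5, 5), bool)
mask[2, 2] = True
print(conceal_dead_pixels(frame, mask)[2, 2])  # 100.0
```

A display panel has no equivalent processing stage between the defect and the eye, which is the asymmetry the paragraph above describes.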

In the choice between Sony’s BVM and PVM OLED monitors, the trade-off is clearly between cost and quality. At some other exhibits at NAB 2011, the parameters were less obvious. Consider, for example, the 52-inch “Professional 3D display” from Dimenco shown at the Triaxes Vision booth. It was said to have a “stunning and crystal-clear 3D image.”

From a 3D perspective, the autostereoscopy (glasses-free 3D) was superb. The image could easily be fused into 3D, and there was a broad viewing angle. The reason that part of the viewing experience was so good is that the display used 28 different views created from “2D-plus-Depth” information. Unfortunately, the display starts with ordinary HD resolution of 1920 pixels across. Divide that by 28 views, and you get some idea of how not-exactly-crystal-clear I perceived the resulting image.

That might be an extreme example, but there were many others at the show. Almost every 3D display there traded off either spatial resolution (in passive-glasses systems) or temporal resolution (in active glasses) or both.

Almost every display did that. One that did not could be found at the Calibre exhibit in the North Hall. Among other products, Calibre makes scalers, and their PremierViewProHD-IW includes what the company calls “3D Left/Right Extraction & Alignment for Passive 3D Projection Systems.”

In brief, the scalers take the “frame-packed” 3D signal from a Blu-ray disc and convert it to two separate HD signals, one for the left eye and one for the right. Each signal is fed to its own projector, simple polarizing filters are clamped in front of the projection lenses, and simple passive glasses are used for viewing, with no loss of spatial or temporal resolution.

The system might be used for viewing 3D dailies. That would require a relatively inexpensive way to create 3D Blu-ray discs. That’s what Pico House’s Easy 3D does. It requires only a laptop with a BD-RE drive. The trade-off on this one is that its input format is AVCHD–ideal for a small, relatively inexpensive camcorder like Panasonic’s AG-3DA1, not so good for systems recording on other formats.

Is AVCHD good enough for dailies? Is any bit-rate-reduced format good enough for mastering? I’ll get to those questions in part II.


Sony OLED Reference Monitors

February 8th, 2011 | No Comments | Posted in Schubin Snacks

At next week’s HPA Tech Retreat in Rancho Mirage, California, Sony will introduce their TriMaster EL series of OLED reference monitors, the 16.5-inch BVM E170 and the 24.5-inch BVM E250. Here are some of their characteristics:

  • 30,000-hour panel life
  • better energy efficiency than even LCD
  • faster pixels than even CRT
  • more contrast than even CRT
  • P3 color gamut
  • thickness (BVM E250) of just 148 mm (as shown at right), not increased by rear cable connections
  • SD scaling improved over the BVM L series
  • HDMI, DisplayPort, and 3 Gbps HD-SDI
  • negligible processing latency

The new monitors will be unveiled Tuesday before what might be the world’s most critical audience, in the HPA’s hands-on demo area. And, given the other demonstrations and presentations at that event, you really should be there. Here’s the latest schedule:

If by some freak of fate you cannot attend the HPA Tech Retreat, however, you’ll still be able to view the new Sony technology at the New York Public Television Quality Group Workshop on March 2. The workshops, funded by the Corporation for Public Broadcasting, have already enlightened audiences in San Francisco, St. Paul, Boston, and Nashville.

The upcoming New York workshop is not restricted to participants involved in public television. Here’s the agenda for that event, which will feature such production luminaries as Tom Holmes and Billy Steinberg:

I hope to see you at one or both of these great events.


The Elephant in the Room: 3D at NAB 2010

April 30th, 2010 | No Comments | Posted in 3D Courses, Schubin Cafe
implicit range of 3D eyewear at NAB 2010

As I roamed the exhibits at the NAB show this month, I kept wondering what other year it seemed most like.  And I was not alone.

There were plenty of important issues covered at the show, from citizen journalism to internet-connected TV.  And then there was the elephant in the room.

It would be a lie to say that 3D technologies could be found at every booth on the show floor.  But it was probably the case that there was 3D in at least every aisle.  There was so much 3D that it tended to diminish all other news.

In acquisition technology, for example, LED lighting was near ubiquitous, with focusable instruments, such as the Litepanels Sola, sometimes painfully bright.  Panasonic and Sony both showed models of future inexpensive video cameras with large-format imagers, and Aaton joined the range of those offering “digital magazines” for film cameras.  In small formats, GoPro’s Hero is a complete HD camcorder weighing just three ounces.

In storage technology, Cache-A, For-A, IBM, and Sony all showed new offerings demonstrating that tape is not dead.  Meanwhile, iVDR removable-hard-drive storage could be seen in several new products, and Canon introduced new camcorders based on Compact Flash cards.

Cinedeck looks like a viewfinder but includes built-in storage and editing capability. NextoDI’s NVS 2525 can copy either P2 or SxS cards.

In processing, Dan Carew’s Indie 2.0 blog said of Blackmagic Design’s DaVinci Resolve 7.0, “this best-in-class color correction software was formerly US$250,000 (for software and hardware) and is now available in a Mac software-only version for US$995.” Immersive Media’s 11-camera spherical views can now be stitched and streamed live.  NewTek’s TriCaster TCXD850 can deal with 22 inputs and virtual sets.  And, though you might not yet be able to figure out why you’d want this capability, Snell’s Kahuna 360 production switcher can deal with up to 16 shows at once.

In wireless distribution, there was VµbIQ’s 60 GHz uncompressed transmitter on a chip and Streambox’s Avenir for bonding up to four cellular modems to create a 20 Mbps channel.  In wired, there was Pleora’s EtherCast palm-sized bidirectional ASI-IP gateways.  And, in technologies that could be applied to either, there were Fraunhofer’s codec with a latency of just one macroblock line and a Harris-LG/Zenith proposal for expanding ATSC mobile transmission to full-channel use.

In presentation, there was a reference picture monitor from Dolby (seen in almost its final form at the HPA Tech Retreat).  Several booths had OLED monitors, from 7-inch at Sony to 15-inch at TVLogic.  Wohler’s Presto router has an LCD video display on each button.  And Ostendo’s CDM43 is a curved monitor with a 30:9 aspect ratio.

That barely scratches the surface of the non-3D news from NAB.  And then there was 3D.

Even All-Mobile Video’s Epic 3D production truck, parked in Sony’s exhibit, wore 3D glasses.  But it was the glasses on visitors to the truck that proved more instructive.

Sony provided RealD circularly polarized glasses to visitors for looking at everything from relatively small monitors to a giant outdoor-type LED display.  As soon as those visitors entered the control room of AMV’s Epic 3D truck and donned their glasses, however, they saw ghosting — crosstalk between the two eye views.  AMV staff were prepared for the shocked looks.  “Sit down,” they said.  “There’s a narrow vertical angle, and you have to be head-on to the monitors.”  Sure enough, that solved the problem — at least for those who could sit.

Another potential 3D problem was mentioned in the two-day 3D Digital Cinema Summit before the show opened.  If 3D is shot for a small screen and blown up to cinema size, it can cause eye divergence.  3ality’s camera rigs indicate when this might happen, but it happened anyway on at least one cinema-sized screen at NAB, leading to some audience queasiness.
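A back-of-envelope sketch shows why the blow-up causes divergence (this is my own illustration with assumed numbers, such as the 65 mm interocular figure and the screen widths, not anything 3ality published): on-screen parallax scales linearly with screen width, and positive parallax wider than the distance between a viewer’s eyes forces the eyes to diverge.

```python
# Illustrative arithmetic only: parallax comfortable on a living-room TV
# can exceed the interocular distance when scaled to a cinema screen.

INTEROCULAR_MM = 65.0  # typical adult eye separation (assumed figure)

def scaled_parallax_mm(parallax_mm, source_width_m, target_width_m):
    """On-screen parallax after blowing content up from source to target screen."""
    return parallax_mm * (target_width_m / source_width_m)

def diverges(parallax_mm):
    """True if positive parallax exceeds the interocular distance."""
    return parallax_mm > INTEROCULAR_MM

# 20 mm of positive parallax on a 1.1 m-wide TV, blown up to a 12 m cinema screen:
tv = 20.0
cinema = scaled_parallax_mm(tv, source_width_m=1.1, target_width_m=12.0)
print(round(cinema, 1), diverges(tv), diverges(cinema))  # → 218.2 False True
```

The same shot goes from comfortable to eye-diverging purely because of the screen size, which is why rig telemetry alone cannot guarantee comfort on every screen.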

Buzz Hays of the Sony 3D Technology Center says making 3D is easy, but making good 3D is hard.  There was a lot of 3D at NAB, including both easy and hard, good and bad.

It was hard to count the number of side-by-side and beam-splitter dual-camera rigs at the show, but, in addition to those, there were integrated (one-piece) 3D cameras and camcorders, in various stages of readiness, from 17 different brands, both on and off the show floor.  It seems that all of them were said to be “the first.”


Much could be learned about 3D at the two-day Digital Cinema Summit before the show opened.  It began with Sony’s Pete Lude showing that an ordinary 2D picture can seem 3D when viewed with just one eye, leading a later speaker (me) to quip that watching with an eye patch, therefore, is an inexpensive way to get 3DTV.

3ality’s Steve Schklair followed Lude with an on-screen, live demonstration-tutorial on the effects of different 3D rig settings: height, rotation, lens interaxial, convergence, etc.  He was followed by directors, stereographers, and trainers of 3D-convergence operators, among others.

Although 3D would seem to require more equipment (two cameras and lenses plus a stereo rig at each location) and more personnel (a convergence operator per camera in addition to a stereographer), there is seemingly one saving grace.  According to Schklair and others, 3D can get away with fewer cameras and less cutting than 2D.

The same thing was said of HD, however, in its early days.  Sure enough, when I worked on one show in 1989, we used just four HD cameras feeding the HD truck and twice as many non-HD cameras feeding the non-HD truck.  In the early days, it was common practice to do separate HD and SD productions.  Today, of course, one HD production feeds all, and it typically uses as many cameras and as rapid cutting as an SD show.

Atop a tower of Fujinon’s NAB booth, Pace showed something that recognizes the current economics of 3D.  With virtually no 3DTV audience, it’s hard to justify separate 3D productions, but, with such major players as ESPN, DirecTV, Discovery, and Sky involved in 3D, the elephant cannot be ignored, either.  So the Pace Shadow system places a 3D rig atop the long lens of a typical 2D sports camera.  Furthermore, it interconnects the controls (in a variety of selectable ways) so that the operator of the 2D camera need not be concerned about shooting 3D: one camera position, one operator, different 2D and 3D outputs.

Screen Subtitling came up with similarly clever solutions to the problem of 3D graphics.  Unless text is closer to the viewer (in 3D depth) than the portion of the image that it is obscuring, it can be uncomfortable to read.

Traditionally, subtitles are at the bottom of a screen, where 3D objects are closest to the viewer.  Raise the graphics to the top, and they might work in the screen plane.

Then there’s the issue of putting the graphics on the screen.  With left- and right-eye views, it might seem that two keying systems are required.  But with much 3D being distributed in a side-by-side format, a single keyer can place 3D graphics directly into the side-by-side feed.
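As a toy sketch of that trick (my own illustration, not Screen Subtitling’s actual implementation): because both eye views share a single side-by-side frame, one keyer pass can composite a graphic into both halves, and shifting the two copies horizontally in opposite directions sets the apparent depth of the text.

```python
# Toy side-by-side keyer: left view occupies columns 0-7, right view 8-15.
# Keying the same glyphs into both halves, offset in opposite directions,
# places the subtitle in front of (or behind) the screen plane.

W, H = 16, 4          # tiny side-by-side frame, rows of characters
GLYPH = "AB"          # stand-in for rendered subtitle pixels

def key_subtitle(frame, x, y, disparity_px):
    half = len(frame[0]) // 2
    row = list(frame[y])
    for i, ch in enumerate(GLYPH):
        row[x + disparity_px + i] = ch          # left-eye copy, shifted right
        row[half + x - disparity_px + i] = ch   # right-eye copy, shifted left
    frame[y] = "".join(row)

frame = ["." * W for _ in range(H)]
key_subtitle(frame, x=3, y=2, disparity_px=1)   # crossed disparity: text in front
print(frame[2])  # → ....AB....AB....
```

One pass through one keyer, and both eyes get their graphic at the chosen depth.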


There was much more 3D at the show, in every field of video technology (and perhaps even audio).  In acquisition, for example, aside from integrated cameras, 3D mounts, and even individual cameras designed specifically for 3D (like Sony’s HDC-P1), there were also 3D lens adaptors, precision-matched lenses, precision lens controls, and even relay optics intended to allow wider cameras to be placed closer together, as in this picture shot by Eric Cheng.

At the other end of the 3D chain, there were both plasma and LCD autostereoscopic (no-glasses) displays using both lenticular and parallax-barrier technology, small OLED displays with active-shutter glasses, and giant LED screens with passive circularly polarized glasses.  There were LCD and plasma screens (up to 152-inch at Panasonic) and DLP rear-projectors using active-shutter glasses, and both LCD and laser projection using passive polarized glasses.

There were dual-panel displays with beam splitters, and displays intended to be viewed through long strips of fixed polarizing material (to accommodate all viewers’ heights).  There were many anaglyph displays in the three different primary-and-complement color combinations.  There were 3D viewfinders using glasses and others with separate displays for each eye.
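For readers unfamiliar with how anaglyph encoding works, here is a minimal sketch (my own illustration, not tied to any product at the show): each eye’s view contributes only complementary color channels to a single combined image, so tinted glasses can separate the views again.

```python
# Minimal red/cyan anaglyph mix: the combined pixel takes its red channel
# from the left-eye view and its green and blue channels from the right-eye
# view. The other two primary-and-complement pairings (green/magenta and
# blue/yellow) simply assign different channels to each eye.

def anaglyph_red_cyan(left_rgb, right_rgb):
    """Combine one left-eye and one right-eye RGB pixel into an anaglyph pixel."""
    r, _, _ = left_rgb      # red primary carries the left-eye image
    _, g, b = right_rgb     # cyan complement (green + blue) carries the right
    return (r, g, b)

print(anaglyph_red_cyan((200, 10, 10), (10, 180, 170)))  # → (200, 180, 170)
```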

Japan’s Burton showed a laser-plasma display that creates 3D images in mid-air.  Normally, they’re viewed through laser-protection goggles, as in the image at the right at the top of this post.  But as a safety measure at NAB, the images were shown inside an amber tube instead.

In storage, it seems that everyone who had anything that could record images had a version that could do so in 3D.  Even Convergent Design’s tiny Nano was available in a 3D version.  The Abekas Mira is an eight-channel digital production server — or it’s a four-channel 3D digital production server.  Want an uncompressed 3D field recorder?  Keisoku Giken’s UDR-D100 was just one such product at the show.

In processing, just about every form of editing and processing had a 3D version.  Monogram showed a touch-screen 3D “truck-in-a-box” production system.  Belgium’s Imec research lab even showed licensable technology for stereoscopic virtual cameras.

There was a range of equipment and services for converting 2D to 3D either in real time or not, automatically and with human assistance.  And there was a large range of processing equipment designed to fix 3D problems, such as camera rotation and height variation.

Sony’s MPE200 is one such device, with a U.S. list price of $38,000.  The MPES3D01/01 software to run it, however, is another $22,500.  With the least-expensive 3D camera at the show (Minoru 3D) retailing for under $60, it might be said that 3D is cheap, but good 3D costs.

There was 3D test equipment from many manufacturers.  There was high-speed 3D (Antelope/Vision Research).  There was 3D coax (Belden 1694D, complete with anaglyph color coding).  Ryerson University is doing eye-tracking research on what viewers look at in 3D and whether it’s different from HD and 4K.

So why was I wondering what year it was?  At NAB shows there have been many technologies shown that never went anywhere.  We still await voice-recognition production switchers, for example, and also voice-recognition captioning.  But those have generally been shown by only one company or a small number of exhibitors.

Digital video effects were among the fastest technologies to penetrate the industry.  First shown at NAB in 1973, they were commonly seen in homes by the end of the decade.

Then there was HDTV.  Its penetration after NAB introduction took much longer, even if dated only from 1989, when an entire exhibition hall was devoted to the subject (there were many earlier NAB displays).  Estimates vary, but U.S. household penetration of HDTV 21 years later seems to be in the vicinity of half.

At least HDTV did eventually penetrate U.S. households.  Visitors to NAB conventions in the early 1980s could see aisle after aisle of exhibits claiming compatibility with one or both competing standards for teletext.  One standard was being broadcast on CBS and NBC; the other on TBS.  There were professional and consumer equipment manufacturers and services offering support.  Based on the quantity and diversity of promotion at NAB, it was hard to imagine that teletext would not take off in the U.S.

So, will 3DTV emulate digital effects, HDTV, U.S. teletext, or none of the above?  Time will tell.

Tags: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,

Someone Will Be There Who Knows the Answer

January 15th, 2010 | No Comments | Posted in 3D Courses, Schubin Cafe

The Oversight Executive for Motion Intelligence of the Office of the Under Secretary of Defense for Intelligence is scheduled to be in the southern California desert next month.  So are the chief technology officers (CTOs) of both Panasonic and Sony.  So is the head of the Visual Space Perception Laboratory at the University of California – Berkeley.  So is one of the developers of Cablecam.  So is the CTO of Cable Television Laboratories.  So is a co-inventor of MP3.  So is the mysterious Mo Henry, whose credit has appeared in movies ranging from Apocalypse Now to Zombieland.

The list could go on and on.  Hundreds of top technical executives will be there.  CTOs and VPs of Hollywood studios and television networks will be there.  So will the head of emerging technologies of the European Broadcasting Union.  So will the VP of standards of the Advanced Television Systems Committee (ATSC) and the director of engineering and standards of the Society of Motion-Picture and Television Engineers (SMPTE).  Where will they be?

It’s the 16th annual Hollywood Post Alliance Tech Retreat, February 16-19 at Rancho Las Palmas conference center in Rancho Mirage, California.  But every part of that title can convey a false impression.

HPA, for example, is not yet 16 years old, but the retreat is older.  When the organization that created it, the Association for Imaging Technology and Sound, went belly up, HPA’s founders thought the retreat was too important to die, so they took it over.  After 9/11, when other events went down in attendance, the retreat went up.  It has actually had to turn people away on occasion because it has sold out.

Similarly, “Hollywood” and “Post” are misleading.  The event is not (and has never been) in Hollywood.  Its participants come from all over the world, from New Zealand to Norway, and from Bombay to Buenos Aires.  If someone at the retreat is from NATO, that could be the North Atlantic Treaty Organization or the National Association of Theater Owners (both have sent representatives, sometimes at the same retreat); similarly, there have been representatives from MPEG (the Moving Picture Experts Group) and MPEG (the Motion Picture Editors Guild).

Tags: , , , , , , , , , , , , ,

Walkin’ in a Camera Wonderland

September 20th, 2009 | 3 Comments | Posted in 3D Courses, Schubin Cafe
If you want to see products that don’t appear in U.S. trade-press magazines, you need to go beyond NAB, SMPTE, and InfoComm. You need to go to the International Broadcasting Convention.


IBC is my favorite trade show. I can leave work, catch an evening flight to Amsterdam, and take a train directly from the airport to the convention center. If I’m hungry, some exhibitor will be providing food. Thirsty? Water, various forms of coffee, juices, beer, and wine flow freely. IBC even throws a party to which everyone is invited. But none of that is why I like it so much.

Americans tend to forget that we are not alone. Back in the days of RCA cameras, you needed to come to IBC to see those of the UK-based manufacturer Pye.

Today, we tend to think of NAB as an international show. Cameras are shown there by such Japanese manufacturers as Hitachi, JVC, Panasonic, Sony, and Toshiba. And Grass Valley’s cameras at NAB come from Europe. So why bother with IBC?

Tags: , , , , , , , , , , , , , , , , , , , , , , , , , ,