
The Schubin Talks: Next-Generation-Imaging, Higher Spatial Resolution by Mark Schubin

September 1st, 2015 | No Comments | Posted in Download, Schubin Cafe, Today's Special


A look at 4K and the different ways to acquire in 4K. A must-see for those who know that 4K will be in their future but who are not sure what it means for them today. Or whether it should mean anything.

Other videos in the series include:

The Schubin Talks: Next-Generation-Imaging, Higher Spatial Resolution is presented by SVG, the Sports Video Group, advancing the creation, production and distribution of sports content, at

Direct Link (185 MB / TRT 17:53):
The Schubin Talks: Next-Generation-Imaging, Higher Spatial Resolution



The Schubin Talks: Next-Generation-Imaging, Higher Dynamic Range by Mark Schubin

August 25th, 2015 | No Comments | Posted in Download, Schubin Cafe, Today's Special


Considered the biggest improvement available, using high dynamic range can make productions easier as shaders will have less to do and subjects moving from sunlight to shadows will be easily visible. Should this be what broadcasters hold out for? Or are there things about HDR that can make it tricky if not done correctly?

Other videos in the series include:

The Schubin Talks: Next-Generation-Imaging is presented by SVG, the Sports Video Group, advancing the creation, production and distribution of sports content, at

Direct Link (179 MB / TRT 16:38):
The Schubin Talks: Next-Generation-Imaging, Higher Dynamic Range



The Schubin Talks: Next-Generation-Imaging, Higher Frame Rate by Mark Schubin

August 18th, 2015 | No Comments | Posted in Download, Schubin Cafe, Today's Special


Do more frames really mean better quality? Does increasing frames change the nature of the video we perceive? These are the questions answered by Mark Schubin in this presentation on higher frame rate.

Other videos in the series include:

The Schubin Talks: Next-Generation-Imaging is presented by SVG, the Sports Video Group, advancing the creation, production and distribution of sports content, at

Direct Link (204 MB / TRT 17:22):
The Schubin Talks: Next-Generation-Imaging, Higher Frame Rate by Mark Schubin



The Schubin Talks: Introduction to Next-Generation-Imaging by Mark Schubin

August 11th, 2015 | No Comments | Posted in Download, Schubin Cafe, Today's Special


This series of video presentations by Mark Schubin is designed to help broadcast and media professionals better understand three key concepts that are changing the way content is created and delivered.

This introduction looks at the technical enhancements that can make video look better. It includes a brief overview of the three topics to be covered in the series:

The Schubin Talks: Introduction to Next-Generation-Imaging is presented by SVG, the Sports Video Group, advancing the creation, production and distribution of sports content, at

Direct Link (264 MB / TRT 22:56):
The Schubin Talks: Introduction to Next-Generation-Imaging



NAB 2015 Wrap-up by Mark Schubin

June 13th, 2015 | No Comments | Posted in Download, Schubin Cafe

Recorded May 20, 2015
SMPTE DC Bits-by-the-Bay, Chesapeake Beach Resort

Direct Link (44 MB / TRT 34:01):
NAB 2015 Wrap-up by Mark Schubin



B4 Long

April 22nd, 2015 | No Comments | Posted in Schubin Cafe


Something extraordinary happened at this month’s annual convention of the National Association of Broadcasters in Las Vegas. Actually, it was more a number of product introductions — from seven different manufacturers — adding up to something extraordinary: the continuation of the B4 lens mount into the next era of video production.

Perhaps it’s best to start at the beginning. The first person to publish an account of a working solid-state television camera knew a lot about lens mounts. His name was Denis Daniel Redmond, his account of “An Electric Telescope” was published in English Mechanic and World of Science on February 7, 1879, and the reason he knew about lens mounts was that, when he wasn’t devising new technologies, he was an ophthalmic surgeon.

It would be almost half a century longer before the first recognizable video image of a human face could be captured and displayed, an event that kicked off the so-called mechanical-television era, one in which some form of moving component scanned the image in both the camera and the display system. At left above, inventor John Logie Baird posed next to the apparatus he used. The dummy head (A) was scanned by a spiral of lenses in a rotating disk.

A mechanical-television camera designed by SMPTE founder Charles Francis Jenkins, shown at right, used a more-conventional single lens, but it, too, had a spinning scanning disk. There was so much mechanical technology that the lens mount didn’t need to be made pretty.

The mechanical-television era lasted only about one decade, from the mid-1920s to the mid-1930s. It was followed by the era of cathode-ray-tube (CRT) based television: camera tubes and picture tubes. Those cameras also needed lenses.

The 1936 Olympic Games in Berlin might have been the first time that really long television lenses were used — long both in focal length and in physical length. They were so big (left) that the camera-lens combos were called Fernsehkanone, literally “television cannon.” The mount was whatever was able to support something that large and keep it connected to the camera.

In that particular case, the lens mount was bigger than the camera. With the advent of color television and its need to separate light into its component colors, cameras grew.

At right is an RCA TK-41 camera, sometimes described as being comparable in size and weight to a pregnant horse; its viewfinder, alone, weighed 45 lbs. At its front, a turret (controlled from the rear) carried a selection of lenses of different focal lengths, from wide angle to telephoto. Behind the lens, a beam splitter fed separate red, green, and blue images to three image-orthicon camera tubes.

The idea of hand-holding a TK-41 was preposterous, even for a weightlifter. But camera tubes got smaller and, with them, cameras.

RCA’s TK-44, with smaller camera tubes, was adapted into a “carryable” camera by Toronto station CFTO, but it was so heavy that the backpack section was sometimes worn by a second person, as shown at the left. The next generation actually had an intentionally carryable version, the TKP-45, but, even with that smaller model, it was useful for a camera person to be a weightlifter, too.

At about the same time as the two-person adapted RCA TK-44, Ikegami introduced the HL-33, a relatively small and lightweight color camera. The HL stood for “Handy-Looky.” It was soon followed by the truly shoulder-mountable HL-35, shown at right.

The HL-35 achieved its small form factor through the use of 2/3-inch camera tubes. The outside diameter of the tubes was, indeed, 2/3 of an inch, about 17 mm, but, due to the thickness of the tube’s glass and other factors, the size of the image was necessarily smaller, just 11 mm in diagonal.
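The naming convention is worth a quick sanity check: “2/3-inch” describes the tube’s glass, not its picture. A minimal sketch of the arithmetic (the 11 mm diagonal is the figure from the text):

```python
# "2/3-inch" names the camera tube's outside diameter, not the image it
# captures. Converting the fraction to millimeters confirms the text.

tube_od_mm = (2 / 3) * 25.4   # outside diameter of a 2/3-inch tube
image_diagonal_mm = 11        # usable image diagonal, per the text

print(round(tube_od_mm, 1))   # ~16.9 mm of tube for an 11 mm picture
```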

Many 2/3-inch-tubed cameras followed the HL-35. As with cameras that used larger tubes, the lens mount wasn’t critical. Each tube could be moved slightly into the best position, and its scanning size and geometry could also be adjusted. Color-registration errors were common, but they could be dealt with by shooting a registration chart and making adjustments.

The CRT era was followed by the era of solid-state image sensors. They were glued onto color-separation prisms, so the ability to adjust individual tubes and scanning was lost. NHK, the Japan Broadcasting Corporation, organized discussions of a standardized lens-camera interface dealing with the physical mount, optical parameters, and electrical connections. Participants included Canon, Fuji, and Nikon on the lens side and Hitachi, Ikegami, JVC, Matsushita (Panasonic), Sony, and Toshiba on the camera side.

To allow the use of 2/3-inch-format lenses from the tube era, even though they weren’t designed for fixed-geometry sensors, the B4 mount (above left) was adopted. But there was more to the new mount than just the old mechanical connection. There were also specifications of different planes for the three color sensors, types of glass to be used in the color-separation prism and optical filters, and electrical signal connections for iris, focus, zoom, and more.

When HDTV began to replace standard definition, there was a trend toward larger image sensors, again — initially camera tubes. After all, more pixels should take up more space. Sony’s solid-state HDC-500 HD camera used one-inch-format image sensors instead of 2/3-inch. But existing 2/3-inch lenses couldn’t be used on the new camera. So, even though those existing lenses were standard-definition, the B4 mount continued, newly standardized in 1992 as Japan’s Broadcast Technology Association S-1005.

The first 4K camera also sized up — way up. Lockheed Martin built a 4K camera prototype using three solid-state sensors (called Blue Herring CCDs, shown at left), and the image area on each sensor was larger than that of a frame of IMAX film.

As described in a paper in the March 2001 SMPTE Journal, “High-Performance Electro-Optic Camera Prototype” by Stephen A. Stough and William A. Hill, that meant a large prism. And a large prism meant a return to a camera size not easily shouldered (shown above at right).

That was a prototype. The first cameras actually sold as 4K took a different approach: a single large-format (35 mm movie-film-sized) sensor covered with a patterned color filter.

An 8×8 Bayer pattern is shown at right, as drawn by Colin M. L. Burnett. The single sensor and its size suggested a movie-camera lens mount, the ARRI-developed positive-lock or PL mount.

One issue associated with the color-patterned sensors is the differences in spatial resolution between the colors. As seen at left, the red and blue have half the linear spatial resolution of the sensor (and of the green). Using an optical low-pass filter to prevent red and blue aliases would eliminate the extra green resolution; conversely, a filter that works for green would allow red and blue aliases. And, whether it’s called de-Bayering, demosaicking, uprezzing, or upconversion, changing the resolution of the red and blue sites to that of the overall sensor requires some processing.
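The resolution disparity falls straight out of the filter layout. A minimal sketch (RGGB arrangement assumed) counting photosites per color in a Bayer mosaic:

```python
# Count photosites per color in an n x n RGGB Bayer mosaic. Red and blue
# each appear on every other row AND every other column, so they have
# half the linear (and a quarter of the areal) resolution of the sensor.

def bayer_site_counts(n):
    """Return {color: count} for an n x n RGGB Bayer pattern (n even)."""
    counts = {"R": 0, "G": 0, "B": 0}
    for row in range(n):
        for col in range(n):
            if row % 2 == 0:
                counts["R" if col % 2 == 0 else "G"] += 1
            else:
                counts["G" if col % 2 == 0 else "B"] += 1
    return counts

print(bayer_site_counts(8))  # {'R': 16, 'G': 32, 'B': 16}
```

Green gets half of all sites; red and blue split the rest, which is why demosaicking leans on the green channel for detail.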

Another issue is related to the range of image-sensor sizes that use PL mounts. At right is a portion of a guide created by AbelCine showing shot sizes for the same focal-length lens used on different cameras. In each case, the yellowish image is what would be captured on a 35-mm film frame, and the bluish image is what the particular camera captures from the same lens. The windmill at the left, prominent in the Canon 5D shot, is not in the Blackmagic Design Cinema Camera shot.

Whatever their issues, thanks to their elimination of a prism, the initial crop of PL-mount digital-cinematography cameras, despite their large-format image sensors, were relatively small, light, and easily carried. Their size and weight differences from the Lockheed Martin prototype were dramatic.

There was a broad selection of lenses available for them, too — but not the long-range zoom lenses needed for sports and other live-event production, which existed only in B4 mount. It’s possible to adapt a B4 lens to a PL-mount camera, but an optically perfect adaptor would lose more than 2.5 stops (equivalent to needing about six times more light). Because nothing is perfect, the adaptor would introduce its own degradations to the images from lenses designed for HD, not 4K (or Ultra HD, UHD). And a large-format long-range zoom lens would be a difficult project. So multi-camera production remained largely B4-mount three-sensor prism-based HD, while single-camera production moved to PL-mount single sensors with more photo-sensitive sites (commonly called “pixels”).
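The stop arithmetic works like this: each stop is a halving of light, so a loss of s stops must be made up with 2^s times more light. A quick sketch:

```python
# Each photographic stop halves the light, so losing s stops means
# needing 2**s times more light to compensate. For the ~2.5-stop loss
# of an optically perfect B4-to-PL adaptor:

def light_factor(stops):
    """Light multiple needed to make up for a loss of `stops` stops."""
    return 2 ** stops

print(light_factor(2.5))  # ~5.66 -- roughly the "six times more light"
```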

Then, at last year’s NAB Show, Grass Valley showed a B4-mount three-sensor prism-based camera labeled 4K. Last fall, Hitachi introduced a four-chip B4-mount UHD camera. And, at last week’s NAB Show, Ikegami, Panasonic, and Sony added their own B4-mount UHD cameras. And both Canon and Fujinon announced UHD B4-mount long-range zoom lenses.

The camera imaging philosophies differ. The Grass Valley LDX 86 is optically a three-sensor HD camera, so it uses processing to transform the HD to UHD, but so do color-filtered single-sensor cameras; it’s just different processing. The Grass Valley philosophy offers appropriate optical filtering; the single-sensor cameras offer resolution assistance from the green channel.

Hitachi’s SK-UHD4000 effectively takes a three-sensor HD camera and, with the addition of another beam-splitting prism element, adds a second HD green chip, offset from the others by one-half pixel diagonally. The result is essentially the same as the color-separated signals from a Bayer-patterned single higher-resolution sensor, and the processing to create UHD is similar.
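The half-pixel diagonal offset doubles the green sampling density, producing a quincunx pattern. A toy sketch (pixel coordinates only, not real sensor geometry) showing that two offset HD chips supply exactly as many green samples as a UHD Bayer sensor:

```python
# Two HD green chips, the second offset half a pixel diagonally, sample
# the scene in a quincunx pattern with twice the density of one chip.

def quincunx_samples(width, height):
    """Sample positions (pixel units) from two diagonally offset chips."""
    chip_a = [(x, y) for y in range(height) for x in range(width)]
    chip_b = [(x + 0.5, y + 0.5) for y in range(height) for x in range(width)]
    return chip_a + chip_b

samples = quincunx_samples(1920, 1080)
print(len(samples))            # 4147200 green samples
print(3840 * 2160 // 2)        # 4147200: green sites on a UHD Bayer sensor
```

That equality is why the color-separated output is “essentially the same” as a Bayer-patterned higher-resolution sensor, and why similar processing applies.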

Panasonic’s AK-UC3000 uses a single, color-patterned one-inch-format sensor. To use a 2/3-inch-format B4 lens, therefore, it needs an optical adaptor, but the adaptor is built into the camera, allowing the electrical connections that enable processing to reduce lens aberrations. Also, the optical conversion from 2/3-inch to one-inch is much less than that required to go to a Super 35-mm movie-frame size.

Both Ikegami’s UHD camera (left) and Sony’s HDC-4300 (right) use three 2/3-inch-format image sensors on a prism block, but each image sensor is truly 4K, making them the first three-sensor 4K cameras since the Lockheed Martin prototype. By increasing the resolution without increasing the sensor size, however, they have to contend with photo-sensitive sites a quarter of the area of those on HD-resolution chips, reducing sensitivity.
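The sensitivity penalty is simple geometry: doubling pixel counts in both directions inside the same format halves the pitch and quarters each site’s area. A sketch, assuming a roughly 9.6 mm × 5.4 mm active area for the 2/3-inch 16:9 format (approximate figures, not from the article):

```python
# Same sensor size, four times the pixels: each photosite collects
# roughly a quarter of the light. Dimensions below are approximations.

def site_area_mm2(sensor_w_mm, sensor_h_mm, h_pixels, v_pixels):
    """Approximate area of one photosite, ignoring gaps between sites."""
    return (sensor_w_mm / h_pixels) * (sensor_h_mm / v_pixels)

hd = site_area_mm2(9.6, 5.4, 1920, 1080)    # HD chip in 2/3-inch format
uhd = site_area_mm2(9.6, 5.4, 3840, 2160)   # 4K chip in the same format

print(uhd / hd)  # ~0.25: a quarter of the area, hence reduced sensitivity
```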

It might seem strange that camera manufacturers are moving to B4-mount 2/3-inch-format 4K cameras at a time when there are no B4-mount 4K lenses, but the same thing happened with the introduction of HD. Almost any lens will pass almost any spatial resolution, but the “modulation transfer function” or MTF (the amount of contrast that gets through at different spatial resolutions) is usually better in lenses intended for higher-resolution applications, and the higher the MTF the sharper the pictures look.

With all five of the major manufacturers of studio/field cameras moving to 2/3-inch 4K cameras, lens manufacturers took note. Canon showed a prototype B4-mount long-range 4K zoom lens, and Fujinon actually introduced two models, the UA80x9 (left) and the UA22x8 (right). The lenses use new coatings that increase contrast and new optical designs that increase MTF dramatically even at HD resolutions.

There is no consensus yet on a shift to 4K production, but 4K B4-mount lenses on HD cameras should significantly improve even HD pictures.  That’s nice!


Technology Year in Review

February 18th, 2015 | No Comments | Posted in Download, Schubin Cafe, Today's Special
Annual Technology Year in Review recorded at the 2015 HPA Tech Retreat, Hyatt Regency Indian Wells, CA
February 11, 2015

Direct Link (13 MB / 10:36 TRT): Technology Year in Review



Understanding Frame Rate

January 23rd, 2015 | No Comments | Posted in Download, Schubin Cafe

Recorded on January 20, 2015 at the SMPTE Toronto meeting.

In viewing tests, increased frame rate delivers a greater sensation of improvement than increased resolution (at a fraction of the increase in data rate), but some viewers of the higher-frame-rate Hobbit found the sensation unpleasant. How does frames-per-second translate into pixels-per-screen-width? One common frame rate is based on profit; another is based on an interpretation of Asian spirituality. Will future frame rates have to take image contrast into consideration?

Direct Link (61MB / 34:34 TRT): Understanding Frame Rate – SMPTE Toronto



UHD: Beyond the Hype

January 5th, 2015 | No Comments | Posted in Download, Schubin Cafe, Today's Special

Recorded November 12, 2014, NAB’s CCW+SATCON, Javits Convention Center, New York.

With CES 2015 beginning tomorrow, Mark Schubin asks: What do viewers appreciate most about UHD? Higher resolution, frame rate, dynamic range? Wider color gamut? More immersive sound? What do those mean for production, post, and distribution? Can more become less? Follow the beyond-HD journey from scene to seen.

Direct Link (66 MB / 1:02:20 TRT): UHD: Beyond the Hype



Everything Else

December 3rd, 2014 | No Comments | Posted in Schubin Cafe

Videotape is dying. Whether it will be higher in spatial resolution, frame rate, dynamic range, color gamut, and/or sound immersion; whether it will be delivered to cinema screens, TV sets, smartphones, or virtual-image eye wear; whether it arrives via terrestrial broadcast, satellite, cable, fiber, WiFi, 4G, or something else; the moving-image media of the future will be file based. But Hitachi announced at the International Broadcasting Convention in Amsterdam (IBC) in September that Gearhouse Broadcast was buying 50 of its new SK-UHD4000 cameras.

Does the one statement have anything to do with the other? Perhaps it does. The moving-image media of the future will be file based except for everything else.

It might be best to start at the beginning. In 1879, the public became aware of two inventions. One, called the zoopraxiscope, created by Eadweard Muybridge, showed projected photographic motion pictures. The other, called an electric telescope, created by Denis Redmond, showed live motion pictures.

Neither was particularly good. Muybridge’s zoopraxiscope could show only a 12- or 13-frame sequence over and over. Redmond’s electric telescope could show only “built-up images of very simple luminous objects.” But, for more than three-quarters of a century, they established the basic criteria of their respective media categories: movies were recorded photographically; video was live.


It’s not that there weren’t crossover attempts. John Logie Baird came up with a mechanism for recording television signals in the 1920s. One of the camera systems for the live television coverage of the 1936 Olympic Games, built into a truck, used a movie camera, immediately developed its film, and shoved it into a video scanner, all in one continuous stream. But, in general, movies were photographic and video was live.

When Albert Abramson published “A Short History of Television Recording” in the Journal of the SMPTE in February 1955, the bulk of what he described was, in essence, movie cameras shooting video screens. He did describe systems that could magnetically record video signals directly, but none had yet become a product.

That changed the following year, when Ampex brought the first commercial videotape recorder to market. New York Times TV critic Jack Gould immediately thought of home video. “Why not pick up the new full-length motion picture at the corner drugstore and then run it through one’s home TV receiver?” But he also saw applications on the production side. “A director could shoot a scene, see what he’s got and then reshoot then and there.” “New scenes could be pieced in at the last moment.”

Even in his 1955 SMPTE paper, Abramson had a section devoted to “The Electronic Motion Picture,” describing the technology developed by High Definition Films Ltd. In 1965, in a race to beat a traditional, film-shot movie about actress Jean Harlow to theaters, a version was shot in eight days using a process called Electronovision. It won but didn’t necessarily set any precedents. Reviewing the movie in The New York Times on May 15, Howard Thompson wrote, “The Electronovision rush job on Miss Harlow’s life and career is also a dimly-lit business technically. Maybe it’s just as well. This much is for sure: Whatever the second ‘Harlow’ picture looks and sounds like, it can’t be much worse than the first.”

Today, of course, it’s commonplace to shoot both movies and TV shows electronically, recording the results in files. A few movies are still shot on film, however, and a lot of television isn’t recorded in files, either; it’s live.

As this is being written, the most-watched TV show in the U.S. was the 2014 Super Bowl; next year, it will probably be the 2015 Super Bowl. In other countries, the most-watched shows are often their versions of live football.

It’s not just sports — almost all sports — that are seen live. So are concerts and awards shows. And, of late, there is even quite a bit of live programming being seen in movie theaters — on all seven continents (including Antarctica) — ranging from ballet, opera, and theater to museum-exhibition openings. In the UK, alone, box-office revenues for so-called event cinema doubled from 2012 to 2013 and are already much higher in 2014.

Files need to be closed before they can be moved, and live shows need to be transmitted live, so live shows are not file-based. They can be streamed, but, for the 2014 Super Bowl, the audience that viewed any portion via live stream was about one-half of one percent of the live broadcast-television audience (and the streaming audience watched for only a fraction of the time the broadcast viewers watched, too). NBC’s live broadcast of The Sound of Music last year didn’t achieve Super Bowl-like ratings, but it did so well that the network is following up with a live Peter Pan this year. New conferences this fall, such as LiveTV:LA, were devoted to nothing but live TV.

What about Hitachi’s camera? Broadcast HD cameras typically use 2/3-inch-format image sensors, three of them attached to a color-separation prism. The optics of the lens mount for those cameras, called B4, are very well defined in standard BTA S-1005-A. It even specifies the different depths at which the three color images are to land, with the blue five microns behind the green and the red ten microns behind.

Most cameras said to be of “4K” resolution (twice the detail both horizontally and vertically of 1080-line HD) use a single image sensor, often of the Super 35 mm image format, with a patterned color filter atop the sensor. The typical lens mount is the PL format. That’s fine for single-camera shooting; there are many fine PL-mount lenses. But for sports, concerts, awards shows, and even ballet, opera, and theater, something else is required.

The intermediate-film-based live camera system at the 1936 Berlin Olympic Games was the size of a truck. Other, electronic video cameras were each called, in German, Fernsehkanone, literally television cannon. It’s not that they fired projectiles; it’s that they were the size and shape of cannons. The reason was the lenses required to get close-ups of the action from a distance far enough so as not to interfere with it. And what was true in the Olympic stadium in 1936 remains true in stadiums, arenas, and auditoriums today. Live, multi-camera shows, whether football or opera, are typically shot with long-range zoom lenses, perhaps 100:1.

Unfortunately, the longest-range zoom lens for a PL mount is a 20:1, and it was just introduced by Canon this fall; previously, 12:1 was the limit. And that’s why Gearhouse Broadcast placed the large order for Hitachi SK-UHD4000 cameras.

[Photo: Hitachi SK-UHD4000 camera for Gearhouse Broadcast]

Those cameras use 2/3-inch-format image sensors and take B4-mount lenses, but they have a fourth image sensor, a second green one, offset by one-half pixel diagonally from the others, allowing 4K spatial detail to be extracted. Notice in the picture above, however, that although the camera is labeled “4K” the lens is merely “HD.” Below is a modulation-transfer-function (MTF) graph of a hypothetical HD lens. “Modulation,” in this case, means contrast, and the transfer function shows how much gets through the lens at different levels of detail.

[Graph: MTF of a hypothetical HD lens]

Up to HD detail fineness, the lens MTF is quite good, transferring roughly 90% of the incoming contrast to the camera. But this hypothetical curve shows that at 4K detail fineness the lens transfers only about 40% of the contrast.
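MTF values in an imaging chain multiply, which is why lens contrast matters so much to perceived sharpness. A sketch using the figure’s illustrative values (the 0.6 sensor MTF in the last line is an assumption for the example, not a measured figure):

```python
# MTFs of the elements in an imaging chain multiply. Using the
# hypothetical lens curve from the figure: 90% contrast transfer at HD
# detail fineness, 40% at 4K detail fineness.

def delivered_contrast(scene_contrast, *mtfs):
    """Contrast remaining after a chain of elements, each with its own MTF."""
    for mtf in mtfs:
        scene_contrast *= mtf
    return scene_contrast

print(delivered_contrast(1.0, 0.9))       # 0.9 of scene contrast at HD detail
print(delivered_contrast(1.0, 0.4))       # 0.4 at 4K detail
# Add a sensor whose own MTF at fine 4K detail is, say, 0.6 (assumed):
print(delivered_contrast(1.0, 0.4, 0.6))  # ~0.24 -- fine 4K detail goes soft
```

The cascade is the practical argument for high-MTF 4K lenses even on HD cameras: raising the weakest factor lifts the whole product.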

The first HD lenses had limited zoom ranges, too, so it’s certainly possible that affordable long-zoom-range lenses with high MTFs will arrive someday. In the meantime, PL-mount cameras recording files serve all of the motion-image industry — except for everything else.

