
Wow or Woe?

July 24th, 2016 | No Comments | Posted in Schubin Cafe


Today, July 24, 2016, as this image from page 13 of the July 25, 1916 issue of The Evening Star in Washington, D.C., indicates, is the 100th anniversary of the first meeting of the Society of Motion Picture Engineers, the group that became SMPTE in 1950, when a T for television was added. The article noted that, besides Hubbard (then secretary of the Bureau of Standards), other speakers at that meeting in the nation's capital included a professor from George Washington University and someone from the U.S. Patent Office. Perhaps most significant, however, was the last line: "The next meeting is to be in New York October 2."

It’s not that there was something special about the date or the city; it’s that the Society was very peripatetic. After New York came Atlantic City, then Chicago, New York (again), Rochester, Cleveland, Philadelphia, Pittsburgh, Montreal, and Dayton, all within the Society’s first five years. In the next five, it added Buffalo, Boston, Ottawa (Canada), Roscoe (New York), and Schenectady, with few repeats.

I'm a "Life Fellow" of SMPTE, "Fellow" by selection and "Life" because I'm old and have long been a SMPTE member. When I began attending SMPTE's annual conventions, they alternated between Hollywood and New York, but there was also an annual winter television conference. It eventually settled in San Francisco in odd-numbered years and in other cities in even ones: Atlanta, Dallas, Denver, Detroit, Key Biscayne, Montreal, Nashville, Seattle, Toronto — even New York. As SMPTE has gotten older, it doesn't seem to want to move around as much (or maybe potential attendees don't). The annual convention is now just in Hollywood. Another annual conference, Entertainment Technology in the Connected Age, is also in California, as is the now-SMPTE-affiliated HPA Tech Retreat. SMPTE's annual Future of Cinema Conference is slightly past the state line in Las Vegas. There are also SMPTE conferences in Australia and, as of this year, an HPA Tech Retreat in the UK, but, as far as the U.S. is concerned, California and Las Vegas seem to be it for national events.

Of course, there are also SMPTE section meetings. The winter television conference was originally organized by the Detroit section, in conjunction with other nearby sections. Of late, some sections have been holding their own longer conferences — one- or two-day "boot camps" in New England and Toronto and what the mayor of Chesapeake Beach, Maryland, named "Bits by the Bay" (2015 version shown at right), organized by SMPTE's Washington section.

Two big topics have been dominating those regional conferences of late: a transition from the serial digital interface (SDI) to internet protocol (IP) and another transition from conventional HDTV to whatever comes beyond it (higher spatial resolution, higher frame rate, higher dynamic range, wider color gamut, etc.). For a presentation at a single national conference, one could easily imagine any SMPTE member paying travel expenses and conference fees; presenting at all of the regional conferences would seemingly require the support of a manufacturer or service provider with a point of view. I attended all of the regional conferences in the previous paragraph, and, indeed, most of the presentations were by people employed by manufacturers. I'm pleased to report, however, that corporate viewpoints were kept to a minimum, and even the presentations with the strongest viewpoints were chock-full of information.

Boot Camp VII

Consider the SMPTE Toronto Section's Boot Camp VII. As shown above, it was nominally a two-day conference, with a cookout the evening before. But the cookout was followed the same evening by a presentation at the Rogers Centre stadium (formerly called the SkyDome) on the latest work by Dome Productions on a transition to Ultra-High Definition (UHD), including a mobile-production truck. Many have suggested that a UHD mobile unit would have to use IP interconnection technology, but the Dome presentation explained why they'd actually chosen to stick with SDI. UHD SDI connections can entail four three-gigabit (3G) SDI cables or a single 12G. That's enough for pictures 3840 pixels wide by 2160 scanning lines high, maybe with higher dynamic range (HDR) and wider color gamut (WCG) to boot, but doubling the frame rate, too, would seem to require 24G SDI. Is such a thing even possible?
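Whatever the answer, the arithmetic behind the question is straightforward. Here's a back-of-envelope sketch, assuming 10-bit 4:2:2 sampling (the usual live-production payload); real SDI link rates also carry blanking and ancillary data, so they sit a bit above these raw figures:

```python
# Rough SDI payload arithmetic, assuming 10-bit 4:2:2 sampling (4:2:2
# averages two samples per pixel: luma plus alternating color-difference).
# Real link rates (2.97, 11.88, 23.76 Gb/s) also carry blanking and
# ancillary data, so they run somewhat above these raw payload figures.

def payload_gbps(width, height, fps, bits=10, samples_per_pixel=2):
    return width * height * fps * bits * samples_per_pixel / 1e9

print(f"HD at 50 fps:   {payload_gbps(1920, 1080, 50):5.1f} Gb/s")   # fits one 3G link
print(f"UHD at 50 fps:  {payload_gbps(3840, 2160, 50):5.1f} Gb/s")   # four 3G or one 12G
print(f"UHD at 100 fps: {payload_gbps(3840, 2160, 100):5.1f} Gb/s")  # hence talk of 24G SDI
```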

One presentation on the conference's second day, from John Hudson, director of strategic technology and international standardization at Semtech, suggested that 24G SDI has long been in the planning stages. Of course, other presentations showed how a video facility might move to IP. There's a lot of work involved either way. There's also a lot of work involved in the transition to what comes after HDTV. Color-imaging guru Charles Poynton explained, for example, a practical problem with WCG. He pointed out that the exact color of the popular animated fish, Nemo (portion shown at left), is outside standard color gamuts. And, if some sort of simple conversion from WCG to ordinary color is done, the scales on Nemo's skin could disappear. In other words, Nemo would cease to be a fish.
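Here's a minimal sketch of the kind of problem Poynton described, assuming linear-light values and the commonly published BT.2020-to-BT.709 primary-conversion matrix; the two oranges are hypothetical stand-ins for neighboring shades on Nemo's scales, not measured values:

```python
# Naive wide-color-gamut (BT.2020) to BT.709 conversion with hard clipping.
# Two distinct saturated oranges land outside BT.709; clipping collapses
# the red-channel difference between them, and the detail vanishes.

M = [  # BT.2020 -> BT.709 linear-light conversion (per ITU-R BT.2087)
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def to_709_clipped(rgb):
    out = [sum(M[r][c] * rgb[c] for c in range(3)) for r in range(3)]
    return [round(min(1.0, max(0.0, v)), 3) for v in out]  # hard clip

orange_a = [0.95, 0.30, 0.05]  # hypothetical saturated orange (BT.2020 linear)
orange_b = [0.85, 0.30, 0.05]  # a visibly different neighboring shade

print(to_709_clipped(orange_a))  # red channel clips to 1.0 ...
print(to_709_clipped(orange_b))  # ... for both shades: the difference is crushed
```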

Then there was one of the last presentations of the conference (the 20th), from Brian Learoyd, engineering manager of Rogers Sportsnet. He described the tremendous effort involved in launching a UHD channel. After figuring out how to get it up and running on time, he was called into a meeting and informed that, instead of one UHD channel, those in charge wanted four. For the one, Rogers Sportsnet got content from Dome's new facilities. For the other three, HD content was upconverted. A panel that followed, in which Learoyd participated, explained how Rogers could get away with such upconversion: the difference isn't all that noticeable.

Another panelist, Matthew Bush, president of Triangle Post in Toronto and a big fan of the beyond-HDTV technologies, described working on a UHD project with HDR and WCG. They showed the result to their client, who didn't seem to think it was a big deal until offered a side-by-side comparison with the ordinary HD version. Then it was considered a wow.

As SMPTE heads into its second hundred years, it has developed and is continuing to develop the standards to smooth the woes of the transitions to IP and beyond-HD imaging. The society has members around the world working on everything from entertainment to medical imaging to motion pictures from deep outer space. A “centennial gala” is planned in conjunction with the annual conference in Hollywood in October. But the local sections do a heck of a good job of education, too. In my opinion, they’re a big wow.



The Bottom Line

January 26th, 2016 | No Comments | Posted in Schubin Cafe


Like many other innovations, high-dynamic-range (HDR) imaging can bring benefits but will require work to implement. And then there’s the bottom line.

HDR's biggest benefit is that it offers the greatest perceptual image improvement per bit. Different researchers have independently verified the improvement, and it theoretically requires no increase in bit rate whatsoever. In practice, to allow both standard-dynamic-range (SDR) TVs and HDR TVs to be accommodated with the same signal (and because not every signal retains the appropriate amount of noise), the bit rate might increase a small amount — perhaps 20%.

Viewing Tests

Above are comparisons of viewer evaluations of higher spatial resolution (e.g., going from HD to 4K) at left, higher frame rate (HFR) in the middle, and HDR at right, with the vertical scales normalized. The distance from the top shows the improvement. To achieve the improvement that HDR delivers with a zero-to-20% increase in bit rate, HFR would need a 100% increase or more. Going to 4K from HD can’t even approach the HDR improvement, but, if it could, it would seem to require more than a 1600% increase in bit rate. HDR is the clear winner.
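As a rough sanity check on the cost side of that comparison, here's the raw, pre-compression arithmetic; the improvement rankings above come from perceptual testing, not from this:

```python
# Raw, uncompressed bit-rate cost of each enhancement relative to 8-bit HD.
# This is only the cost side; the perceptual-improvement comparisons in
# the chart come from viewer testing.

def raw_rate(w, h, fps, bits):
    return w * h * fps * bits

hd = raw_rate(1920, 1080, 50, 8)

print(f"4K spatial:         +{raw_rate(3840, 2160, 50, 8) / hd - 1:.0%}")   # +300%
print(f"Doubled frame rate: +{raw_rate(1920, 1080, 100, 8) / hd - 1:.0%}")  # +100%
print(f"HDR via 10-bit:     +{raw_rate(1920, 1080, 50, 10) / hd - 1:.0%}")  # +25% raw;
# with compression and an SDR/HDR-compatible signal, more like 0-20% in practice
```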

That's one piece of HDR good news. Another is that it can deliver more colors independently of any increase in color gamut. It also allows more flexibility in shooting and post production. And it doesn't appear to require any new technologies at any point from scene to seen.

Below is an image presented at the 2008 SMPTE/NAB Digital Cinema Summit. It was shot in a Grass Valley lab using the Xensium image sensor. The only light on the scene came from the lamp aimed at the camera at lower right, but every chip on the chart is distinguishable. From lamp filament to darkest black, there was a 10,000,000:1 contrast ratio, more than 23 stops of dynamic range. And, on the viewing end, TV sets have already been sold with HDR-level light outputs. New equipment might be needed, of course, but not new technologies.
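The stop count follows directly from the contrast ratio, since each stop is a doubling of light:

```python
import math

contrast_ratio = 10_000_000  # lamp filament to darkest black
print(f"{math.log2(contrast_ratio):.2f} stops")  # 23.25 -- "more than 23 stops"
```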


That’s the good news. Getting everyone to agree on how HDR images should be converted to video signals, how those signals should be encoded for transmission, and how SDR and HDR TV sets should deal with a single transmission path are among the issues being worked out. They’ll be discussed at next month’s HPA Tech Retreat. And then there are interactions.

Sean McCarthy of Arris offered an excellent presentation on the subject at the main SMPTE conference last fall. Appropriately, it was called "How Independent Are HDR, WCG [wide color gamut], and HFR in Human Visual Perception and the Creative Process?" Those viewing HDR-vs.-SDR demos have sometimes commented that image-motion artifacts seem worse in HDR, suggesting that HDR might require HFR or restrictions on scene motion; McCarthy's paper explains the science involved. It also explains how color hues can shift in unusual ways as light level increases, becoming yellower above certain wavelengths and bluer below, as shown in an excerpt from an illustration in McCarthy's paper at right (higher light level is at top).

Then there’s time.  McCarthy’s paper explains how perceived brightness can change over time as human vision adapts to higher light levels. And there’s also an inability to see dark portions of an image after adaptation to bright scenes. “In bright home and mobile viewing environments,” McCarthy notes, “both light and dark adaptation to [changes] in illumination may be expected to proceed on a time scale measured in seconds. In dark home and theater environments, rapid changes going back and forth from [darker to lighter light levels] might result in slower dark adaptation.” In other words, after a commercial showing a bright seashore or ski slope, viewers will need some recovery time before they can perceive dim shadow detail.

HDR also brings concerns about electric power. It's often said that the high end of the HDR range will be used only for "speculars," short for specular reflections, like glints of light on shiny objects, as shown on these billiard balls from Dave Pape's computer-graphics lighting course. If so, an HDR TV set would be unlikely to need significantly more electric power than an SDR TV set.

Those snow and seashore scenes, however, could need a lot more power if shown at peak light output. At right is a scene from promotional material for a Samsung HDR-capable TV, with bright snow, ice, and clouds. The published technical specifications of the Samsung SUHD JS8500-series 65-inch TV list a "typical power consumption" of 82 watts but a "maximum power consumption" of 255 watts, more than three times higher. The monitor used in Dolby's HDR demos is liquid cooled.

All of the above are issues that need to be worked out, from standards and recommended practices to aesthetic decisions. And working such issues out is not really new. Consider those motion artifacts. Even old editions of the American Cinematographer Manual included tables of “35mm Camera Recommended Panning Speeds.” As for power, old TV sets from the era of tube-based circuitry used more power even with smaller and dimmer pictures. But then there’s the bottom line, the lowest light level of the dynamic range.

Consider the HDR portion of the requirements for the "Ultra HD Premium" logo. According to a UHD Alliance press release on January 4, to get the designation, aside from double HD resolution in both the horizontal and vertical directions and some other characteristics, a TV must conform to the SMPTE ST 2084 electro-optic transfer function and must offer "a combination of peak brightness and black level either more than 1000 nits peak brightness and less than 0.05 nits black level or more than 540 nits peak brightness and less than 0.0005 nits black level." The latter is a ratio of more than a million to one.
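A quick check of the two qualifying combinations (the tier labels here are informal, mine rather than the Alliance's):

```python
# "Ultra HD Premium" HDR tiers from the UHD Alliance press release.
tiers = {
    "bright-peak tier": (1000, 0.05),   # >1000 nits peak, <0.05 nits black
    "deep-black tier":  (540, 0.0005),  # >540 nits peak, <0.0005 nits black
}
for name, (peak_nits, black_nits) in tiers.items():
    print(f"{name}: {peak_nits / black_nits:,.0f}:1")
# bright-peak tier: 20,000:1
# deep-black tier: 1,080,000:1 -- "more than a million to one"
```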

The high end of those ranges is beyond most current video displays but achieved by some. Again, new equipment might be required but not new technology. And the bottom end seems achievable, too. Turn off a TV, and it emits no light. Manufacturers just need to be able to have black pixels pretty close to “off.”

What the viewer sees, however, is a different matter. At right is an image of a TV set posted by Ma8thew and used in the Wikipedia page "Technology of television." The TV set appears to be off, but a lot of light can be seen on its screen: ambient light in the room, reflected off the screen. Cedric Demers posted "Reflections of 2015 TVs," a roundup of measured screen reflectivity; the lowest reflection listed was 0.4%, the highest 1.9%. Of course, that's between 0.4% and 1.9% of the light hitting the TV set. How much light is that?

At left is a portion of an image of the TV room of a luxury vacation rental in France, listed on IHA holiday ads. The television set is off. It shows a bright reflected view of the outdoors. It looks very nice outside — possibly too nice to stay in and watch TV. But, if one were watching TV, presumably one would draw the drapes closed. If the windows were thus completely blocked off and not a single lamp were on in the room, would that be dark enough to appreciate the 0.0005-nit black level of an Ultra HD Premium HDR TV?

It would probably not be. What’s the problem? For one thing, the viewer(s).

Consider a movie-theater auditorium. When the movie comes on, all the lights (except exit lights) go off. The walls, floor, and seats are typically made of dark, non-reflective materials. Scientists from the stereoscopic-3D exhibition company RealD measured the reflectivity of auditorium finishes (walls and carpet), seating, and audiences and concluded that the last were the biggest contributors to light reflected back to the screen (especially when they wear white T-shirts). Discussing the research at an HDR session in a cinema auditorium at last fall’s International Broadcasting Convention (IBC), RealD senior vice president Peter Ludé joked that for maximum contrast movies should be projected without audiences.

Ludé went a step further. Reflections off the audiences are problematic only when there is sufficient light on the screen. So, he joked again, for ideal HDR results, the screen should be black. At right is an image shot during a Sony-arranged live 4K screening of the 2014 World Cup at the Westfield Vue cinema in London. The ceiling, the walls, the floor, and the audience are all visible because of light coming off the screen and being reflected.

Now consider a home with an Ultra HD Premium TV emitting 540 nits. The light hits a viewer. If the viewer’s skin reflects just 1% of the light back to the screen and the screen reflects just 0.4% of that back to the viewer, there could be 0.0216 nits of undesired light on a black pixel (it’s more complicated because the intensity falls with the square of the distances involved but increases with the areas emitting or reflecting). That’s not a lot, but it’s still 43.2 times greater than 0.0005 nits.
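Here is that estimate worked through, with the same simplifications the parenthetical flags (no distance falloff, no area terms): an illustration, not photometry.

```python
peak_nits          = 540    # Ultra HD Premium deep-black-tier peak output
skin_reflectance   = 0.01   # assume 1% of the screen's light bounces off the viewer
screen_reflectance = 0.004  # 0.4%, the best screen reflectivity reported

stray = peak_nits * skin_reflectance * screen_reflectance
print(f"{stray:.4f} nits on a 'black' pixel")        # 0.0216
print(f"{stray / 0.0005:.1f}x the 0.0005-nit spec")  # 43.2x
```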

A million-to-one contrast ratio? Maybe. But maybe not if there’s a viewer in the room.


The Schubin Talks: Next-Generation-Imaging, Higher Spatial Resolution by Mark Schubin

September 1st, 2015 | No Comments | Posted in Download, Schubin Cafe, Today's Special


A look at 4K and the different ways to acquire in 4K. A must-see for those who know that 4K will be in their future but aren't sure what it means for them today, or whether it should mean anything yet.

Other videos in the series include:
The Schubin Talks: Introduction to Next-Generation-Imaging
The Schubin Talks: Next-Generation-Imaging, Higher Dynamic Range
The Schubin Talks: Next-Generation-Imaging, Higher Frame Rate

The Schubin Talks: Next-Generation-Imaging, Higher Spatial Resolution is presented by SVG, the Sports Video Group, advancing the creation, production, and distribution of sports content.

Direct Link (185 MB / TRT 17:53):
The Schubin Talks: Next-Generation-Imaging, Higher Spatial Resolution



The Schubin Talks: Next-Generation-Imaging, Higher Dynamic Range by Mark Schubin

August 25th, 2015 | No Comments | Posted in Download, Schubin Cafe, Today's Special


Considered the biggest improvement available, high dynamic range can make productions easier: shaders will have less to do, and subjects moving from sunlight to shadow will remain easily visible. Should this be what broadcasters hold out for? Or are there things about HDR that can make it tricky if not done correctly?

Other videos in the series include:
The Schubin Talks: Introduction to Next-Generation-Imaging
The Schubin Talks: Next-Generation-Imaging, Higher Spatial Resolution
The Schubin Talks: Next-Generation-Imaging, Higher Frame Rate

The Schubin Talks: Next-Generation-Imaging is presented by SVG, the Sports Video Group, advancing the creation, production, and distribution of sports content.

Direct Link (179 MB / TRT 16:38):
The Schubin Talks: Next-Generation-Imaging, Higher Dynamic Range



The Schubin Talks: Next-Generation-Imaging, Higher Frame Rate by Mark Schubin

August 18th, 2015 | No Comments | Posted in Download, Schubin Cafe, Today's Special


Do more frames really mean better quality? Does increasing the frame rate change the nature of the video we perceive? These are the questions Mark Schubin answers in this presentation on higher frame rate.

Other videos in the series include:
The Schubin Talks: Introduction to Next-Generation-Imaging
The Schubin Talks: Next-Generation-Imaging, Higher Spatial Resolution
The Schubin Talks: Next-Generation-Imaging, Higher Dynamic Range

The Schubin Talks: Next-Generation-Imaging is presented by SVG, the Sports Video Group, advancing the creation, production, and distribution of sports content.

Direct Link (204 MB / TRT 17:22):
The Schubin Talks: Next-Generation-Imaging, Higher Frame Rate by Mark Schubin



The Schubin Talks: Introduction to Next-Generation-Imaging by Mark Schubin

August 11th, 2015 | No Comments | Posted in Download, Schubin Cafe, Today's Special


This series of video presentations by Mark Schubin is designed to help broadcast and media professionals better understand three key concepts that are changing the way content is created and delivered.

This introduction looks at the technical enhancements that can make video look better. It includes a brief overview of the three topics to be covered in the series: higher spatial resolution, higher dynamic range, and higher frame rate.

The Schubin Talks: Introduction to Next-Generation-Imaging is presented by SVG, the Sports Video Group, advancing the creation, production, and distribution of sports content.

Direct Link (264 MB / TRT 22:56):
The Schubin Talks: Introduction to Next-Generation-Imaging



NAB 2015 Wrap-up by Mark Schubin

June 13th, 2015 | No Comments | Posted in Download, Schubin Cafe

Recorded May 20, 2015
SMPTE DC Bits-by-the-Bay, Chesapeake Beach Resort

Direct Link (44 MB / TRT 34:01):
NAB 2015 Wrap-up by Mark Schubin



B4 Long

April 22nd, 2015 | No Comments | Posted in Schubin Cafe


Something extraordinary happened at this month's annual convention of the National Association of Broadcasters in Las Vegas. Actually, it was a number of product introductions — from seven different manufacturers — that added up to something extraordinary: the continuation of the B4 lens mount into the next era of video production.

Perhaps it’s best to start at the beginning. The first person to publish an account of a working solid-state television camera knew a lot about lens mounts. His name was Denis Daniel Redmond, his account of “An Electric Telescope” was published in English Mechanic and World of Science on February 7, 1879, and the reason he knew about lens mounts was that, when he wasn’t devising new technologies, he was an ophthalmic surgeon.

It would be almost half a century longer before the first recognizable video image of a human face could be captured and displayed, an event that kicked off the so-called mechanical-television era, one in which some form of moving component scanned the image in both the camera and the display system. At left, inventor John Logie Baird posed next to the apparatus he used. The dummy head (A) was scanned by a spiral of lenses in a rotating disk.

A mechanical-television camera designed by SMPTE founder Charles Francis Jenkins, shown at right, used a more conventional single lens, but it, too, had a spinning scanning disk. There was so much mechanical technology that the lens mount didn't need to be made pretty.

The mechanical-television era lasted only about one decade, from the mid-1920s to the mid-1930s. It was followed by the era of cathode-ray-tube (CRT) based television: camera tubes and picture tubes. Those cameras also needed lenses.

The 1936 Olympic Games in Berlin might have been the first time that really long television lenses were used — long both in focal length and in physical length. They were so big (left) that the camera-lens combos were called Fernsehkanonen, literally "television cannons." The mount was whatever was able to support something that large and keep it connected to the camera.

In that particular case, the lens mount was bigger than the camera. With the advent of color television and its need to separate light into its component colors, cameras grew.

At right is an RCA TK-41 camera, sometimes described as comparable in size and weight to a pregnant horse; its viewfinder alone weighed 45 lbs. At its front, a turret (controlled from the rear) carried a selection of lenses of different focal lengths, from wide angle to telephoto. Behind the lens, a beam splitter fed separate red, green, and blue images to three image-orthicon camera tubes.

The idea of hand-holding a TK-41 was preposterous, even for a weight lifter. But camera tubes got smaller and, with them, cameras.

RCA's TK-44, with smaller camera tubes, was adapted into a "carryable" camera by Toronto station CFTO, but it was so heavy that the backpack section was sometimes worn by a second person, as shown at left. The next generation actually had an intentionally carryable version, the TKP-45, but, even with that smaller model, it was still useful for a camera person to be a weightlifter.

At about the same time as the two-person adapted RCA TK-44, Ikegami introduced the HL-33, a relatively small and lightweight color camera. The HL stood for "Handy-Looky." It was soon followed by the truly shoulder-mountable HL-35, shown at right.

The HL-35 achieved its small form factor through the use of 2/3-inch camera tubes. The outside diameter of the tubes was, indeed, 2/3 of an inch, about 17 mm, but, due to the thickness of the tube’s glass and other factors, the size of the image was necessarily smaller, just 11 mm in diagonal.

Many 2/3-inch-tubed cameras followed the HL-35. As with cameras that used larger tubes, the lens mount wasn’t critical. Each tube could be moved slightly into the best position, and its scanning size and geometry could also be adjusted. Color-registration errors were common, but they could be dealt with by shooting a registration chart and making adjustments.

The CRT era was followed by the era of solid-state image sensors. They were glued onto color-separation prisms, so the ability to adjust individual tubes and scanning was lost. NHK, the Japan Broadcasting Corporation, organized discussions of a standardized lens-camera interface dealing with the physical mount, optical parameters, and electrical connections. Participants included Canon, Fuji, and Nikon on the lens side and Hitachi, Ikegami, JVC, Matsushita (Panasonic), Sony, and Toshiba on the camera side.

To allow the use of 2/3-inch-format lenses from the tube era, even though they weren’t designed for fixed-geometry sensors, the B4 mount (above left) was adopted. But there was more to the new mount than just the old mechanical connection. There were also specifications of different planes for the three color sensors, types of glass to be used in the color-separation prism and optical filters, and electrical signal connections for iris, focus, zoom, and more.

When HDTV began to replace standard definition, there was a trend toward larger image sensors, again — initially camera tubes. After all, more pixels should take up more space. Sony’s solid-state HDC-500 HD camera used one-inch-format image sensors instead of 2/3-inch. But existing 2/3-inch lenses couldn’t be used on the new camera. So, even though those existing lenses were standard-definition, the B4 mount continued, newly standardized in 1992 as Japan’s Broadcast Technology Association S-1005.

The first 4K camera also sized up — way up. Lockheed Martin built a 4K camera prototype using three solid-state sensors (called Blue Herring CCDs, shown at left), and the image area on each sensor was larger than that of a frame of IMAX film.

As described in a paper in the March 2001 SMPTE Journal, “High-Performance Electro-Optic Camera Prototype” by Stephen A. Stough and William A. Hill, that meant a large prism. And a large prism meant a return to a camera size not easily shouldered (shown above at right).

That was a prototype. The first cameras actually sold as 4K took a different approach: a single large-format (35-mm movie-film-sized) sensor covered with a patterned color filter.

An 8×8 Bayer pattern is shown at right, as drawn by Colin M. L. Burnett. The single sensor and its size suggested a movie-camera lens mount, the ARRI-developed positive-lock or PL mount.

One issue associated with color-patterned sensors is the difference in spatial resolution between the colors. As seen at left, the red and blue have half the linear spatial resolution of the sensor (and of the green). Using an optical low-pass filter to prevent red and blue aliases would eliminate the extra green resolution; conversely, a filter that works for green would allow red and blue aliases. And, whether it's called de-Bayering, demosaicking, uprezzing, or upconversion, changing the resolution of the red and blue sites to that of the overall sensor requires some processing.
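Here's a minimal sketch of what that processing has to do for the red channel, assuming an RGGB pattern and the crudest possible interpolation (nearest neighbor); real cameras use far more sophisticated filters, but the missing samples must be invented either way:

```python
import numpy as np

def demosaic_red_nearest(raw):
    """Fill a full-resolution red channel from a Bayer mosaic.
    raw: 2-D sensor array, RGGB pattern (red sites at even rows and columns).
    """
    red = np.zeros_like(raw)
    red[0::2, 0::2] = raw[0::2, 0::2]  # the only real red samples (1 in 4)
    red[0::2, 1::2] = red[0::2, 0::2]  # invent the sample to the right
    red[1::2, :]    = red[0::2, :]     # invent the row below
    return red

raw = np.arange(16.0).reshape(4, 4)  # toy 4x4 mosaic
print(demosaic_red_nearest(raw))     # three of every four values are guesses
```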

Another issue is related to the range of image-sensor sizes that use PL mounts. At right is a portion of a guide created by AbelCine showing shot sizes for the same focal-length lens used on different cameras. In each case, the yellowish image is what would be captured on a 35-mm film frame, and the blueish image is what the particular camera captures from the same lens. The windmill at the left, prominent in the Canon 5D shot, is not in the Blackmagic Design Cinema Camera shot.

Whatever their issues, thanks to their elimination of a prism, the initial crop of PL-mount digital-cinematography cameras, despite their large-format image sensors, were relatively small, light, and easily carried. Their size and weight differences from the Lockheed Martin prototype were dramatic.

There was a broad selection of lenses available for them, too — but not the long-range B4-mount zooms needed for sports and other live-event production. It's possible to adapt a B4 lens to a PL-mount camera, but even an optically perfect adaptor would lose more than 2.5 stops (equivalent to needing about six times more light). Because nothing is perfect, the adaptor would also introduce its own degradations to the images from lenses designed for HD, not 4K (or Ultra HD, UHD). And a large-format long-range zoom lens would be a difficult project. So multi-camera production remained largely B4-mount, three-sensor, prism-based HD, while single-camera production moved to PL-mount single sensors with more photo-sensitive sites (commonly called "pixels").
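The light-loss figure converts to a multiplier directly, since one stop is a factor of two:

```python
stops_lost = 2.5                  # loss through an optically perfect B4-to-PL adaptor
print(f"{2 ** stops_lost:.1f}x")  # 5.7 -- "about six times more light"
```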

Then, at last year's NAB Show, Grass Valley showed a B4-mount three-sensor prism-based camera labeled 4K. Last fall, Hitachi introduced a four-chip B4-mount UHD camera. At last week's NAB Show, Ikegami, Panasonic, and Sony added their own B4-mount UHD cameras, and both Canon and Fujinon announced UHD B4-mount long-range zoom lenses.

The camera imaging philosophies differ. The Grass Valley LDX 86 is optically a three-sensor HD camera, so it uses processing to transform the HD to UHD, but so do color-filtered single-sensor cameras; it's just different processing. The Grass Valley philosophy offers appropriate optical filtering; the single-sensor cameras offer resolution assistance from the green channel.

Hitachi's SK-UHD4000 effectively takes a three-sensor HD camera and, with the addition of another beam-splitting prism element, adds a second HD green chip, offset from the others by one-half pixel diagonally. The result is essentially the same as the color-separated signals from a Bayer-patterned single higher-resolution sensor, and the processing to create UHD is similar.

Panasonic's AK-UC3000 uses a single, color-patterned one-inch-format sensor. To use a 2/3-inch-format B4 lens, therefore, it needs an optical adaptor, but the adaptor is built into the camera, allowing the electrical connections that enable processing to reduce lens aberrations. Also, the optical conversion from 2/3-inch to one-inch is much less than that required to go to a Super 35-mm movie-frame size.
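A rough sense of why the smaller conversion is gentler, using commonly quoted image diagonals (approximate; exact sensor dimensions vary by camera):

```python
import math

diag_mm = {"2/3-inch": 11.0, "one-inch": 16.0, "Super 35": 28.5}  # approximate

def adaptor_stop_loss(src, dst):
    # Magnifying the image circle by m spreads the same light over m^2
    # the area, costing 2*log2(m) stops even in a perfect adaptor.
    m = diag_mm[dst] / diag_mm[src]
    return 2 * math.log2(m)

print(f"2/3-inch to one-inch: {adaptor_stop_loss('2/3-inch', 'one-inch'):.1f} stops")  # ~1.1
print(f"2/3-inch to Super 35: {adaptor_stop_loss('2/3-inch', 'Super 35'):.1f} stops")  # ~2.7
```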

Both Ikegami's UHD camera (left) and Sony's HDC-4300 (right) use three 2/3-inch-format image sensors on a prism block, but each image sensor is truly 4K, making them the first three-sensor 4K cameras since the Lockheed Martin prototype. By increasing the resolution without increasing the sensor size, however, they have to contend with photo-sensitive sites a quarter of the area of those on HD-resolution chips, reducing sensitivity.

It might seem strange that camera manufacturers are moving to B4-mount 2/3-inch-format 4K cameras at a time when there are no B4-mount 4K lenses, but the same thing happened with the introduction of HD. Almost any lens will pass almost any spatial resolution, but the “modulation transfer function” or MTF (the amount of contrast that gets through at different spatial resolutions) is usually better in lenses intended for higher-resolution applications, and the higher the MTF the sharper the pictures look.
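One way to see the lens payoff: the MTFs of the elements in an imaging chain multiply, so a lens that holds more contrast at a given spatial frequency lifts the whole system. The numbers below are illustrative, not measured:

```python
# System MTF is approximately the product of the chain's element MTFs.
# Hypothetical contrast values at some fixed HD spatial frequency:
mtf_hd_lens = 0.55  # older HD-era lens
mtf_4k_lens = 0.75  # 4K-rated lens at the same (HD) frequency
mtf_sensor  = 0.60  # sensor plus optical filter

print(f"HD lens system: {mtf_hd_lens * mtf_sensor:.2f}")  # 0.33
print(f"4K lens system: {mtf_4k_lens * mtf_sensor:.2f}")  # 0.45 -- sharper HD pictures
```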

With all five of the major manufacturers of studio/field cameras moving to 2/3-inch 4K cameras, lens manufacturers took note. Canon showed a prototype B4-mount long-range 4K zoom lens, and Fujinon actually introduced two models, the UA80x9 (left) and the UA22x8 (right). The lenses use new coatings that increase contrast and new optical designs that increase MTF dramatically, even at HD resolutions.

There is no consensus yet on a shift to 4K production, but 4K B4-mount lenses on HD cameras should significantly improve even HD pictures. That's nice!


Technology Year in Review

February 18th, 2015 | No Comments | Posted in Download, Schubin Cafe, Today's Special
Annual Technology Year in Review recorded at the 2015 HPA Tech Retreat, Hyatt Regency Indian Wells, CA
February 11, 2015

Direct Link (13 MB / 10:36 TRT): Technology Year in Review



Understanding Frame Rate

January 23rd, 2015 | No Comments | Posted in Download, Schubin Cafe

Recorded on January 20, 2015 at the SMPTE Toronto meeting.

In viewing tests, increased frame rate delivers a greater sensation of improvement than increased resolution (at a fraction of the increase in data rate), but some viewers of the higher-frame-rate Hobbit found the sensation unpleasant. How does frames-per-second translate into pixels-per-screen-width? One common frame rate is based on profit; another is based on an interpretation of Asian spirituality. Will future frame rates have to take image contrast into consideration?

Direct Link (61MB / 34:34 TRT): Understanding Frame Rate – SMPTE Toronto

