
NAB 2015 Wrap-up by Mark Schubin

June 13th, 2015 | No Comments | Posted in Download, Schubin Cafe

Recorded May 20, 2015
SMPTE DC Bits-by-the-Bay, Chesapeake Beach Resort

Direct Link (44 MB / TRT 34:01):
NAB 2015 Wrap-up by Mark Schubin



B4 Long

April 22nd, 2015 | No Comments | Posted in Schubin Cafe

 

Something extraordinary happened at this month’s annual convention of the National Association of Broadcasters in Las Vegas. Actually, it was more a number of product introductions — from seven different manufacturers — adding up to something extraordinary: the continuation of the B4 lens mount into the next era of video production.

Perhaps it’s best to start at the beginning. The first person to publish an account of a working solid-state television camera knew a lot about lens mounts. His name was Denis Daniel Redmond, his account of “An Electric Telescope” was published in English Mechanic and World of Science on February 7, 1879, and the reason he knew about lens mounts was that, when he wasn’t devising new technologies, he was an ophthalmic surgeon.

It would be almost half a century longer before the first recognizable video image of a human face could be captured and displayed, an event that kicked off the so-called mechanical-television era, one in which some form of moving component scanned the image in both the camera and the display system. At left above, inventor John Logie Baird posed next to the apparatus he used. The dummy head (A) was scanned by a spiral of lenses in a rotating disk.

A mechanical-television camera designed by SMPTE-founder Charles Francis Jenkins, shown at right, used a more-conventional single lens, but it, too, had a spinning scanning disk. There was so much mechanical technology that the lens mount didn’t need to be made pretty.

The mechanical-television era lasted only about one decade, from the mid-1920s to the mid-1930s. It was followed by the era of cathode-ray-tube (CRT) based television: camera tubes and picture tubes. Those cameras also needed lenses.

The 1936 Olympic Games in Berlin might have been the first time that really long television lenses were used — long both in focal length and in physical length. They were so big (left) that the camera-lens combos were called Fernsehkanone, literally “television cannon.” The mount was whatever was able to support something that large and keep it connected to the camera.

In that particular case, the lens mount was bigger than the camera. With the advent of color television and its need to separate light into its component colors, cameras grew.

At right is an RCA TK-41 camera, sometimes described as being comparable in size and weight to a pregnant horse; its viewfinder, alone, weighed 45 lbs. At its front, a turret (controlled from the rear) carried a selection of lenses of different focal lengths, from wide angle to telephoto. Behind the lens, a beam splitter fed separate red, green, and blue images to three image-orthicon camera tubes.

The idea of hand-holding a TK-41 was preposterous, even for a weight lifter. But camera tubes got smaller and, with them, cameras.

RCA’s TK-44, with smaller camera tubes, was adapted into a “carryable” camera by Toronto station CFTO, but it was so heavy that the backpack section was sometimes worn by a second person, as shown at the left. The next generation actually had an intentionally carryable version, the TKP-45, but, even with that smaller model, it was useful for a camera person to be a weightlifter, too.

At about the same time as the two-person adapted RCA TK-44, Ikegami introduced the HL-33, a relatively small and lightweight color camera. The HL stood for “Handy-Looky.” It was soon followed by the truly shoulder-mountable HL-35, shown at right.

The HL-35 achieved its small form factor through the use of 2/3-inch camera tubes. The outside diameter of the tubes was, indeed, 2/3 of an inch, about 17 mm, but, due to the thickness of the tube’s glass and other factors, the size of the image was necessarily smaller, just 11 mm in diagonal.

Many 2/3-inch-tubed cameras followed the HL-35. As with cameras that used larger tubes, the lens mount wasn’t critical. Each tube could be moved slightly into the best position, and its scanning size and geometry could also be adjusted. Color-registration errors were common, but they could be dealt with by shooting a registration chart and making adjustments.

The CRT era was followed by the era of solid-state image sensors. They were glued onto color-separation prisms, so the ability to adjust individual tubes and scanning was lost. NHK, the Japan Broadcasting Corporation, organized discussions of a standardized lens-camera interface dealing with the physical mount, optical parameters, and electrical connections. Participants included Canon, Fuji, and Nikon on the lens side and Hitachi, Ikegami, JVC, Matsushita (Panasonic), Sony, and Toshiba on the camera side.

To allow the use of 2/3-inch-format lenses from the tube era, even though they weren’t designed for fixed-geometry sensors, the B4 mount (above left) was adopted. But there was more to the new mount than just the old mechanical connection. There were also specifications of different planes for the three color sensors, types of glass to be used in the color-separation prism and optical filters, and electrical signal connections for iris, focus, zoom, and more.

When HDTV began to replace standard definition, there was a trend toward larger image sensors, again — initially camera tubes. After all, more pixels should take up more space. Sony’s solid-state HDC-500 HD camera used one-inch-format image sensors instead of 2/3-inch. But existing 2/3-inch lenses couldn’t be used on the new camera. So, even though those existing lenses were standard-definition, the B4 mount continued, newly standardized in 1992 as Japan’s Broadcast Technology Association S-1005.

The first 4K camera also sized up — way up. Lockheed Martin built a 4K camera prototype using three solid-state sensors (called Blue Herring CCDs, shown at left), and the image area on each sensor was larger than that of a frame of IMAX film.

As described in a paper in the March 2001 SMPTE Journal, “High-Performance Electro-Optic Camera Prototype” by Stephen A. Stough and William A. Hill, that meant a large prism. And a large prism meant a return to a camera size not easily shouldered (shown above at right).

That was a prototype. The first cameras actually to be sold that were called 4K took a different approach, a single large-format (35 mm movie-film-sized) sensor covered with a patterned color filter.

An 8×8 Bayer pattern is shown at right, as drawn by Colin M. L. Burnett. The single sensor and its size suggested a movie-camera lens mount, the ARRI-developed positive-lock or PL mount.

One issue associated with the color-patterned sensors is the difference in spatial resolution between the colors. As seen at left, the red and blue have half the linear spatial resolution of the sensor (and of the green). Using an optical low-pass filter to prevent red and blue aliases would eliminate the extra green resolution; conversely, a filter that works for green would allow red and blue aliases. And, whether it’s called de-Bayering, demosaicking, uprezzing, or upconversion, changing the resolution of the red and blue sites to that of the overall sensor requires some processing.
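To see why, here is a minimal sketch (Python, purely illustrative, assuming a generic RGGB tile rather than any particular camera’s layout) of how the photo-sites are divided among the colors:

```python
# Illustrative sketch of a generic RGGB Bayer mosaic (not any specific sensor):
# count how many photo-sites sample each color on an 8x8 tile.
import numpy as np

rows, cols = np.indices((8, 8))
red   = (rows % 2 == 0) & (cols % 2 == 0)
green = (rows + cols) % 2 == 1
blue  = (rows % 2 == 1) & (cols % 2 == 1)

print(green.sum(), red.sum(), blue.sum())  # 32 green, 16 red, 16 blue of 64 sites
# Red and blue are sampled at half the linear resolution of the sensor, so
# "de-Bayering" has to interpolate three out of every four red and blue values.
```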

Another issue is related to the range of image-sensor sizes that use PL mounts. At right is a portion of a guide created by AbelCine showing shot sizes for the same focal-length lens used on different cameras <http://blog.abelcine.com/wp-content/uploads/2010/08/35mm_DigitalSensors_13.jpg>. In each case, the yellowish image is what would be captured on a 35-mm film frame, and the blueish image is what the particular camera captures from the same lens. The windmill at the left, prominent in the Canon 5D shot, is not in the Blackmagic Design Cinema Camera shot.

Whatever their issues, thanks to their elimination of a prism, the initial crop of PL-mount digital-cinematography cameras, despite their large-format image sensors, were relatively small, light, and easily carried. Their size and weight differences from the Lockheed Martin prototype were dramatic.

There was a broad selection of lenses available for them, too — but not the long-range B4-mount zooms needed for sports and other live-event production. It’s possible to adapt a B4 lens to a PL-mount camera, but an optically perfect adaptor would lose more than 2.5 stops (equivalent to needing about six times more light). Because nothing is perfect, the adaptor would also introduce its own degradations to the images from lenses designed for HD, not 4K (or Ultra HD, UHD). And a large-format long-range zoom lens would be a difficult project. So multi-camera production remained largely B4-mount three-sensor prism-based HD, while single-camera production moved to PL-mount single sensors with more photo-sensitive sites (commonly called “pixels”).
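The “about six times more light” figure is just stop arithmetic; a quick check (illustrative only):

```python
# Each f-stop halves the light, so losing 2.5 stops means needing 2**2.5 times
# more light -- roughly the factor of six cited above.
print(2 ** 2.5)   # ~5.66
```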

Then, at last year’s NAB Show, Grass Valley showed a B4-mount three-sensor prism-based camera labeled 4K. Last fall, Hitachi introduced a four-chip B4-mount UHD camera. And, at last week’s NAB Show, Ikegami, Panasonic, and Sony added their own B4-mount UHD cameras. And both Canon and Fujinon announced UHD B4-mount long-range zoom lenses.

The camera imaging philosophies differ. The Grass Valley LDX 86 is optically a three-sensor HD camera, so it uses processing to transform the HD to UHD, but so do color-filtered single-sensor cameras; it’s just different processing. The Grass Valley philosophy offers appropriate optical filtering; the single-sensor cameras offer resolution assistance from the green channel.

sk_uhd4000_xl_1Hitachi’s SK-UHD4000 effectively takes a three-sensor HD camera and, with the addition of another beam-splitting prism element, adds a second HD green chip, offset from the others by one-half pixel diagonally. The result is essentially the same as the color-separated signals from a Bayer-patterned single higher-resolution sensor, and the processing to create UHD is similar.

Panasonic AK-UC3000Panasonic’s AK-UC3000 uses a single, color-patterned one-inch-format sensor. To use a 2/3-inch-format B4 lens, therefore, it needs an optical adaptor, but the adaptor is built into the camera, allowing the electrical connections that enable processing to reduce lens aberrations. Also, the optical conversion from 2/3-inch to one-inch is much less than that required to go to a Super 35-mm movie-frame size.

Both Ikegami’s UHD camera (left) and Sony’s HDC-4300 (right) use three 2/3-inch-format image sensors on a prism block, but each image sensor is truly 4K, making them the first three-sensor 4K cameras since the Lockheed Martin prototype. By increasing the resolution without increasing the sensor size, however, they have to contend with photo-sensitive sites a quarter of the area of those on HD-resolution chips, reducing sensitivity.

It might seem strange that camera manufacturers are moving to B4-mount 2/3-inch-format 4K cameras at a time when there are no B4-mount 4K lenses, but the same thing happened with the introduction of HD. Almost any lens will pass almost any spatial resolution, but the “modulation transfer function” or MTF (the amount of contrast that gets through at different spatial resolutions) is usually better in lenses intended for higher-resolution applications, and the higher the MTF the sharper the pictures look.
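For those who prefer numbers, here is what “modulation” means in practice, with made-up values (a sketch only, not a measurement of any real lens):

```python
# Modulation (contrast) is (max - min) / (max + min); a lens's MTF at a given
# fineness of detail scales that modulation down.  Values are purely illustrative.
def modulation(lo, hi):
    return (hi - lo) / (hi + lo)

scene = modulation(10, 90)       # a test pattern with 0.8 modulation
lens_mtf = 0.7                   # hypothetical MTF at some spatial frequency
print(round(scene, 2), "->", round(scene * lens_mtf, 2))   # 0.8 -> 0.56
```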

With all five of the major manufacturers of studio/field cameras moving to 2/3-inch 4K cameras, lens manufacturers took note. Canon showed a prototype B4-mount long-range 4K zoom lens, and Fujinon actually introduced two models, the UA80x9 (left) and the UA22x8 (right). The lenses use new coatings that increase contrast and new optical designs that increase MTF dramatically even at HD resolutions.

There is no consensus yet on a shift to 4K production, but 4K B4-mount lenses on HD cameras should significantly improve even HD pictures.  That’s nice!


Everything Else

December 3rd, 2014 | No Comments | Posted in Schubin Cafe

Videotape is dying. Whether it will be higher in spatial resolution, frame rate, dynamic range, color gamut, and/or sound immersion; whether it will be delivered to cinema screens, TV sets, smartphones, or virtual-image eyewear; whether it arrives via terrestrial broadcast, satellite, cable, fiber, WiFi, 4G, or something else; the moving-image media of the future will be file based. But Hitachi announced at the International Broadcasting Convention in Amsterdam (IBC) in September that Gearhouse Broadcast was buying 50 of its new SK-UHD4000 cameras.

Does the one statement have anything to do with the other? Perhaps it does. The moving-image media of the future will be file based except for everything else.

It might be best to start at the beginning. In 1879, the public became aware of two inventions. One, called the zoopraxiscope, created by Eadweard Muybridge, showed projected photographic motion pictures. The other, called an electric telescope, created by Denis Redmond, showed live motion pictures.

Neither was particularly good. Muybridge’s zoopraxiscope could show only a 12- or 13-frame sequence over and over. Redmond’s electric telescope could show only “built-up images of very simple luminous objects.” But, for more than three-quarters of a century, they established the basic criteria of their respective media categories: movies were recorded photographically; video was live.


baird playbackIt’s not that there weren’t crossover 1936 intermediate filmattempts. John Logie Baird came up with a mechanism for recording television signals in the 1920s. One of the camera systems for the live television coverage of the 1936 Olympic Games, built into a truck, used a movie camera, immediately developed its film, and shoved it into a video scanner, all in one continuous stream. But, in general, movies were photographic and video was live.

When Albert Abramson published “A Short History of Television Recording” in the Journal of the SMPTE in February 1955, the bulk of what he described was, in essence, movie cameras shooting video screens. He did describe systems that could magnetically record video signals directly, but none had yet become a product.

That changed the following year, when Ampex brought the first commercial videotape recorder to market. New York Times TV critic Jack Gould immediately thought of home video. “Why not pick up the new full-length motion picture at the corner drugstore and then run it through one’s home TV receiver?” But he also saw applications on the production side. “A director could shoot a scene, see what he’s got and then reshoot then and there.” “New scenes could be pieced in at the last moment.”

Even in his 1955 SMPTE paper, Abramson had a section devoted to “The Electronic Motion Picture,” describing the technology developed by High Definition Films Ltd. In 1965, in a race to beat a traditional, film-shot movie about actress Jean Harlow to theaters, a version was shot in eight days using a process called Electronovision. It won but didn’t necessarily set any precedents. Reviewing the movie in The New York Times on May 15, Howard Thompson wrote, “The Electronovision rush job on Miss Harlow’s life and career is also a dimly-lit business technically. Maybe it’s just as well. This much is for sure: Whatever the second ‘Harlow’ picture looks and sounds like, it can’t be much worse than the first.”

Today, of course, it’s commonplace to shoot both movies and TV shows electronically, recording the results in files. A few movies are still shot on film, however, and a lot of television isn’t recorded in files, either; it’s live.

As this is being written, the most-watched TV show in the U.S. was the 2014 Super Bowl; next year, it will probably be the 2015 Super Bowl. In other countries, the most-watched shows are often their versions of live football.

It’s not just sports — almost all sports — that are seen live. So are concerts and awards shows. And, of late, there is even quite a bit of live programming being seen in movie theaters — on all seven continents (including Antarctica) — ranging from ballet, opera, and theater to museum-exhibition openings. In the UK, alone, box-office revenues for so-called event cinema doubled from 2012 to 2013 and are already much higher in 2014.

Files need to be closed before they can be moved, and live shows need to be transmitted live, so live shows are not file-based. They can be streamed, but, for the 2014 Super Bowl, the audience that viewed any portion via live stream was about one-half of one percent of the live broadcast-television audience (and the streaming audience watched for only a fraction of the time the broadcast viewers watched, too). NBC’s live broadcast of The Sound of Music last year didn’t achieve Super Bowl-like ratings, but it did so well that the network is following up with a live Peter Pan this year. New conferences this fall, such as LiveTV:LA, were devoted to nothing but live TV.

What about Hitachi’s camera? Broadcast HD cameras typically use 2/3-inch-format image sensors, three of them attached to a color-separation prism. The optics of the lens mount for those cameras, called B4, are very well defined in standard BTA S-1005-A. It even specifies the different depths at which the three color images are to land, with the blue five microns behind the green and the red ten microns behind.

Most cameras said to be of “4K” resolution (twice the detail both horizontally and vertically of 1080-line HD) use a single image sensor, often of the Super 35 mm image format, with a patterned color filter atop the sensor. The typical lens mount is the PL format. That’s fine for single-camera shooting; there are many fine PL-mount lenses. But for sports, concerts, awards shows, and even ballet, opera, and theater, something else is required.

The intermediate-film-based live camera system at the 1936 Berlin Olympic Games was the size of a truck. Other, electronic video cameras were each called, in German, Fernsehkanone, literally “television cannon.” It’s not that they fired projectiles; it’s that they were the size and shape of cannons. The reason was the lenses required to get close-ups of the action from a distance far enough so as not to interfere with it. And what was true in the Olympic stadium in 1936 remains true in stadiums, arenas, and auditoriums today. Live, multi-camera shows, whether football or opera, are typically shot with long-range zoom lenses, perhaps 100:1.

Unfortunately, the longest-range zoom lens for a PL mount is a 20:1, and it was just introduced by Canon this fall; previously, 12:1 was the limit. And that’s why Gearhouse Broadcast placed the large order for Hitachi SK-UHD4000 cameras.

Hitachi Gearhouse

Those cameras use 2/3-inch-format image sensors and take B4-mount lenses, but they have a fourth image sensor, a second green one, offset by one-half pixel diagonally from the others, allowing 4K spatial detail to be extracted. Notice in the picture above, however, that although the camera is labeled “4K” the lens is merely “HD.” Below is a modulation-transfer-function (MTF) graph of a hypothetical HD lens. “Modulation,” in this case, means contrast, and the transfer function shows how much gets through the lens at different levels of detail.

lens MTF

Up to HD detail fineness, the lens MTF is quite good, transferring roughly 90% of the incoming contrast to the camera. But this hypothetical curve shows that at 4K detail fineness the lens transfers only about 40% of the contrast.
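Put concretely, using the hypothetical figures above (illustrative arithmetic, not a measurement):

```python
# A scene detail with, say, 80% contrast would come through this hypothetical
# HD lens with about 72% contrast at HD fineness but only about 32% at 4K fineness.
scene_contrast = 0.80
print(scene_contrast * 0.90)   # at HD detail: 0.72
print(scene_contrast * 0.40)   # at 4K detail: 0.32
```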

The first HD lenses had limited zoom ranges, too, so it’s certainly possible that affordable long-zoom-range lenses with high MTFs will arrive someday. In the meantime, PL-mount cameras recording files serve all of the motion-image industry — except for everything else.

 


The Blind Leading

December 10th, 2011 | No Comments | Posted in Schubin Cafe

Once upon a time, people were prevented from getting married, in some jurisdictions, based on the shade of their skin colors. Once upon a time, a higher-definition image required more pixels on the image sensor and higher-quality optics.

Actually, we still seem to be living in the era indicated by the second sentence above. At the 2012 Hollywood Post Alliance (HPA) Tech Retreat, to be held February 14-17 (with a pre-retreat seminar on “The Physics of Image Displays” on the 13th) at the Hyatt Grand Champions in Indian Wells, California <http://bit.ly/slPf9v>, one of the earliest panels in the main program will be about 4K cameras, and representatives from ARRI, Canon, JVC, Red, Sony, and Vision Research will all talk about cameras with far more pixel sites on their image sensors than there are in typical HDTV cameras; Sony’s, shown at the left, has roughly ten times as many.

That’s by no means the limit. The prototypical ultra-high-definition television (UHDTV) camera shown at the right has three image sensors (from Forza Silicon), each one of which has about 65% more pixel sites than on Sony’s sensor. There is so much information being gathered that each sensor chip requires a 720-pin connection (and Sony’s image sensor is intended for use in just a single-sensor camera, so there are actually about five times more pixel sites).  But even that isn’t the limit! As I pointed out last year, Canon has already demonstrated a huge hyper-definition image sensor, with four times the number of pixels of even those Forza image sensors used in the camera at the right <http://www.schubincafe.com/2010/09/07/whats-next/>!

Having entered the video business at a time when picture editing was done with razor blades, iron-filing solutions to make tape tracks visible, and microscopes, and when video projectors utilized oil reservoirs and vacuum pumps, I’ve always had a fondness for the physical characteristics of equipment. Sensors will continue to increase in resolution, and I love that work. At the same time, I recognize some of the problems of an inexorable path towards higher definition.

The standard-definition camera that your computer or smart phone uses for video conferencing might have an image sensor with a resolution characterized as 640×480 or 0.3 Mpel (megapixels), even if that same smart phone has a much-higher-resolution image sensor pointing the other way for still pictures. That’s because video must make use of continually changing information. At 60 frames per second, that 0.3 Mpel camera delivers more pixels in one second than an 18 Mpel sensor shooting a still image.

Common 1080-line HDTV has about 2 Mpels. So-called “4K” has about 8 Mpels. It’s already tough to get a great HDTV lens; how will we deal with UHDTV’s 33-Mpel “8K”?

A frame rate of 60-fps delivers twice as much information as 30-fps; 120-fps is twice as much as 60-fps. How will we ever manage to process high-frame-rate UHDTV?
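The arithmetic behind those two paragraphs, using nominal figures (a rough sketch, not exact payload calculations):

```python
# Rough pixel-rate comparison (nominal resolutions and frame rates).
rates = {
    "0.3 Mpel webcam @ 60 fps": 640 * 480 * 60,      # ~18.4 Mpel per second
    "one 18 Mpel still":        18_000_000,
    "1080-line HD @ 60 fps":    1920 * 1080 * 60,    # ~124 Mpel per second
    '"4K" @ 60 fps':            3840 * 2160 * 60,    # ~498 Mpel per second
    '"8K" @ 120 fps':           7680 * 4320 * 120,   # ~3,981 Mpel per second
}
for name, n in rates.items():
    print(f"{name:>26}: {n / 1e6:8.1f} Mpel")
```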

Perhaps it’s worth consulting the academies. In U.S. entertainment media, the highest awards are granted by the Academy of Motion Picture Arts & Sciences (the Academy Award or Oscar), the Academies (there are two) of Television Arts & Sciences (the Emmy Award), and the Recording Academy (the Grammy Award). Win all three, and you are entitled to go on an EGO (Emmy-Grammy-Oscar) trip!

In the history of those awards, only 33 people have ever achieved an EGO trip. And only two of those also won awards from the Audio Engineering Society (AES), the Institute of Electrical and Electronics Engineers (IEEE), and the Society of Motion-Picture and Television Engineers (SMPTE). You’re probably familiar with the last name of at least one of those two, Ray Dolby, shown at left during his induction into the National Inventors Hall of Fame in 2004.

The other was Thomas Stockham. Some in the audio community might recognize his name.  He was at one time president of the AES, is credited with creating the first digital-audio recording company (Soundstream), and was one of the investigators of the 18½-minute gap in then-President Richard Nixon’s White House tapes regarding the Watergate break-in.

Those achievements appeal to my sense of appreciation of physical characteristics. The Soundstream recorder (right) was large and had many moving parts. And the famous “stretch” of Nixon’s secretary Rose Mary Woods (left), which would have been required to accidentally cause the gap in the recording, is a posture worthy of an advanced yogi (Stockham’s investigative group, unfortunately for that theory, found that there were multiple separate instances of erasure, which could not have been caused by any stretch). But what impressed (and still impresses) me most about Stockham’s work has no physical characteristics at all.  It’s pure mathematics.

On the last day of the HPA Tech Retreat, as on the first day, there will be a presentation on high-resolution imaging. But it will have a very different point of view. Siegfried Foessel of Germany’s Fraunhofer research institute will describe “Increasing Resolution by Covering the Image Sensor.” The idea is that, instead of using a higher-resolution sensor, which increases data-readout rates, it’s actually possible to use a much-lower-resolution image sensor, with the pixel sites covered in a strange pattern (a portion of which is shown at the right). Mathematical processing then yields a much-higher-resolution image — without increasing the information rate leaving the sensor.
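Here is a toy illustration of why covering photo-sites can add information; it shows only the principle, with a made-up mask, and is not Fraunhofer’s actual pattern or reconstruction method:

```python
# A plain low-resolution photo-site just sums the light over its area, so two
# different sub-pixel scenes can produce identical readings.  A known sub-pixel
# mask weights the light unevenly, so the same two scenes now read differently;
# the extra detail is encoded and can, in principle, be recovered by processing.
import numpy as np

scene_a = np.array([[1.0, 0.0],
                    [0.0, 1.0]])      # detail on one diagonal
scene_b = np.array([[0.0, 1.0],
                    [1.0, 0.0]])      # detail on the other diagonal

mask = np.array([[1.0, 0.0],
                 [1.0, 1.0]])         # hypothetical binary mask over one photo-site

print(scene_a.sum(), scene_b.sum())                    # 2.0 vs 2.0 -- indistinguishable
print((scene_a * mask).sum(), (scene_b * mask).sum())  # 2.0 vs 1.0 -- distinguishable
```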

In the HPA Tech Retreat demo room, there should be multiple demonstrations of the power of mathematical processing. Cube Vision and Image Essence, for example, are expected to be demonstrating ways of increasing apparent sharpness without even needing to place a mask over the sensor. Lightcraft Technology will show photorealistic scenes that never even existed except in a computer. And those are said to have gigapixel (thousand-megapixel) resolutions!

All of that mathematical processing, to the best of my knowledge, had no direct link to Stockham, but he did a lot of mathematical processing, too. In the realm of audio, his most famous effort was probably the removal of the recording artifacts of the acoustical horn into which the famous opera tenor Enrico Caruso sang in the era before microphone-based recording (shown at left in a drawing by the singer, himself).

As Caruso sang, the sound of his voice was convolved with the characteristics of the acoustic horn that funneled the sound to the recording mechanism. Recovering the original sound for the 1976 commercial release Caruso: A Legendary Performer required deconvolving the horn’s acoustic characteristics from the singer’s voice.  That’s tough enough even if you know everything there is to know about the horn. But Stockham didn’t, so he had to use “blind” deconvolution. It wasn’t the first time.
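The non-blind version of the idea is easy to sketch: convolution becomes multiplication in the frequency domain, so a known response can be divided back out. The sketch below assumes the smearing response is known; Stockham’s blind problem also required estimating that response from the recording itself, which was the hard part.

```python
# Minimal non-blind deconvolution sketch (illustrative signal and response only).
import numpy as np

rng = np.random.default_rng(0)
signal = rng.random(256)                  # stand-in for the original sound
kernel = np.zeros(256)
kernel[:8] = 0.6 ** np.arange(8)          # a known, horn-like smearing response
kernel /= kernel.sum()

# Circular convolution via the FFT: the "recording" is signal convolved with response.
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

# Divide the response back out, with a small term guarding near-zero frequencies.
K = np.fft.fft(kernel)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(K) / (np.abs(K) ** 2 + 1e-6)))

print(np.max(np.abs(restored - signal)))  # close to zero
```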

He was co-author of an invited paper that appeared in the Proceedings of the IEEE in August 1968. It was called “Nonlinear Filtering of Multiplied and Convolved Signals,” and, while some of it applied to audio signals, other parts applied to images. He followed up with a solo paper, “Image Processing in the Context of a Visual Model,” in the same journal in July 1972. Both papers have been cited many hundreds of times in more-recent image-processing work.

One image in both papers showed the outside of a building, shot on a bright day; the door was open, but the inside was little more than a black hole (a portion of the image is shown above left, including artifacts of scanning the print article with its half-tone images). After processing, all of the details of the equipment inside could readily be seen (a portion of the image is shown at right, again including scanning artifacts). Other images showed effective deblurring, and the blur could be caused by either lens defocus or camera instability.

Stockham later (in 1975) actually designed a real-time video contrast compressor that could achieve similar effects. I got to try it. I aimed a bright light up at some shelves so that each shelf cast a shadow on what it was supporting. Without the contrast compressor, virtually nothing on the shelves could be seen; with it, fine detail was visible. But the pictures were not really of entertainment quality.

That was, however, in 1975, and technology has marched — or sprinted — ahead since then. The Fraunhofer Institut presentation at the 2012 HPA Tech Retreat will show how math can increase image-sensor resolution. But what about the lens?

A lens convolves an image in the same way that an old recording horn convolved the sound of an acoustic gramophone recording. And, if the defects of one can be removed by blind deconvolution, so might those of the other. An added benefit is that the deconvolution need not be blind; the characteristics of the lens can be identified. Today’s simple chromatic-aberration corrections could extend to all of a lens’s aberrations, and even its focus and mount stability.

Is it merely a dream?  Perhaps.  But, at one time, so was the repeal of so-called anti-miscegenation laws.


What’s Next?

September 7th, 2010 | No Comments | Posted in 3D Courses, Schubin Cafe

These were some of the things that could be seen at Canon Expo at New York’s Javits Convention Center last week: ice skaters pirouetting without ice, people viewing someone dressed as the Statue of Liberty from the moving deck of a fake boat, a machine that can squirt out a printed-and-bound book on demand, and a hand-holdable x-ray system.  Those weren’t directly related to the future of our business.  But what about image sensors with 120 million pixels, others (sensor chips) larger than paperback books, and yet others with more colors than merely red, green, and blue?

[The photo above, by the way, like the others in this post from Canon Expo, was shot by Mark Forman <http://screeningroom.com/> and is used here with his permission (all other rights reserved).]

We can extrapolate from the past to make certain predictions.  It’s extremely likely, for example, that the sun will rise tomorrow (or, for those of a less-poetic bent, that the rotation of the Earth will cause…).  Otherwise, we can’t predict the future, but we’re often put in a position of having to do so:  Will this stock go up?  Will it rain during an outdoor wedding ceremony?  Will there be a better, less-expensive camera/computer/etc. after a purchase?

That last is usually as assured as a daily sunrise, but how quickly and how greatly things will improve is hard to know.  For help, there are blogs like this, publications, conferences, and trade shows.

The Internationale Funkausstellung (IFA) in Berlin is an example of the latter.  It’s an international consumer electronics show.

At the latest IFA, among other stereoscopic 3D offerings (including 58-inch, CinemaScope-shaped, 21:9 glasses-based 3D), Philips spinoff Dimenco showed an auto-stereoscopic (no-glasses) 3D display.  Here’s a portion of a photo of it that appeared on TechRadar’s site here: http://www.techradar.com/news/television/hdtv/philips-to-launch-glasses-free-3d-tv-in-2013-713951

This is by no means the first time Philips has ventured into no-glasses 3D, but this one is different.  Autostereoscopic displays usually involve a number of views, and the display resolution gets divided by them.  The more views, the larger the viewing sweet spot and the better the 3D but the lower the resolution.  The new display has five views horizontally and three vertically, but it starts with twice as much resolution as “full 1080-line HD” both horizontally and vertically, so the 3D images end up with a respectable 768 x 720 for each of 15 views.
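The per-view figure follows from the panel and view counts as reported (a quick arithmetic check, nothing more):

```python
# 15-view arithmetic: a quad-HD panel divided into 5 horizontal x 3 vertical
# views leaves 768 x 720 pixels per view.
panel_w, panel_h = 2 * 1920, 2 * 1080
views_h, views_v = 5, 3
print(panel_w // views_h, "x", panel_h // views_v)   # 768 x 720
```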

Perhaps such glasses-free 3D leads to a greater sensation of immersion, but there are other ways to create (or increase) an immersive sensation.  Consider, for example, the CAVE (Cave Automatic Virtual Environment), a room with stereoscopic projections on at least three walls and the floor (sometimes all surfaces).  The photo here is of a CAVE at the University of Illinois in 2001 (it was developed there roughly 10 years earlier).  SGI brought a CAVE to the National Association of Broadcasters convention shortly after it was developed.

Visitors who wore ordinary 3D glasses saw ordinary 3D — boring.  Visitors who got to wear a special pair of 3D glasses that could track their head movements, however, even though they saw exactly the same 3D as the others, were transported into a virtual world responsive to their every movement.  Unfortunately, only one viewer at a time could get the immersive experience.

At Canon Expo, however, there was “mixed reality.”  It’s based on head-mounted displays using two prisms per eye.  One, a special “free-form prism,” delivers images from a small display to the eye.  The other passes “real-world” images from in front of the viewer to both the eye and a video camera that can tell what the viewer is looking at.

The result is definitely mixed reality, a combination of stereoscopic imagery with unprocessed vision, with the 3D virtual images conforming to objects and views in the “real world.”  Virtual images can even be mapped onto real-world surfaces, with the cameras in the headgear telling the processors how to warp the virtual images appropriately.  This photo shows a complex version of the headgear; other mixed-reality viewers at Canon Expo looked little different from some 3D glasses.  Canon’s “interactive mixed reality” brochure showed people wearing the headgear walking around and collaboratively discussing an object that doesn’t exist.

Another form of immersion involves capturing 360-degree images.  At left is the Immersive Media Dodeca® 2360 camera system, combining the images from 11 different cameras and lenses into a seamless panorama.  At Canon Expo, a 360-degree view was achieved with a single lens, a single imaging chip (8984 x 5792, with 3.2 μm pixel pitch) and a mirror shaped like a cross between a donut and a cone that is, in the words of one high-ranking Canon employee, “the single most-precise optical component the company makes.”  The whole package forms a roughly fist-sized bump.

Of course, immersiveness is only one visual sensation.  There are also sharpness and color.

If you work out the math on that Canon 360-degree image sensor, it comes to about 50 million pixels, which is considerably more than even NHK’s Super Hi-Vision (also known as ultra high-definition television, with four times the detail of 1920 x 1080 HDTV in both the horizontal and vertical directions).  Across the room from Canon’s 360-degree system, however, was their version of ultra-high resolution, with roughly eight times the detail of 1080-line HDTV in both directions.

Four Super Hi-Vision pictures could fit into one from this hyper-resolution sensor.  Canon says its resolution is comparable to the number of human optic nerves.
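Roughly, in nominal pixel counts (the “eight times” is approximate, so these figures are, too):

```python
# Nominal pixel counts: 1080-line HD, Super Hi-Vision (4x HD in each direction),
# and a sensor with roughly 8x HD detail in each direction.
hd    = 1920 * 1080              # ~2.1 Mpel
shv   = (4 * 1920) * (4 * 1080)  # ~33 Mpel
hyper = (8 * 1920) * (8 * 1080)  # ~133 Mpel -- four Super Hi-Vision frames' worth
print(hd, shv, hyper)
```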

The full detail of the chip can currently be captured at only about 1.4 frames per second, but while it is shooting hyper-detailed stills, it can (if I interpreted the information provided correctly) simultaneously capture two full-motion full-detail HDTV streams within the image.  The system uses a one-of-a-kind lens, and it’s a work in progress.

The hyper-resolution image sensor had a roughly full-frame 35mm format (comparable to that in the Canon EOS 5D Mark II DSLR), already roughly four-and-a-half times taller than a 2/3-inch format image sensor.  A few feet away was another new sensor that was larger — much larger.  It was made from a semiconductor wafer the size of a dinner plate, and the sensor itself was the size of an old 8-inch-square floppy disk — huge!

What do you get from such a huge sensor?  Extraordinary sensitivity and dynamic range.  One scene (said to have been shot at 60 frames per second with an aperture of f/6.8) showed stars in the sky as seen through a forest canopy — and it was easy to see that the leaves and needles of the trees were green.  In another scene, a woman walks in front of a table lamp, so she is backlit, but every detail and shade of gray of her front was clearly visible.

Canon Expo demonstrated advances in both immersiveness (aside from the 360-degree and mixed-reality systems, there was also the 9-meter dome projection shown at right) and in spatial sharpness (the hyper-resolution and giant image sensors, the latter because it can deliver more contrast ratio, which affects sharpness).  There are also temporal sharpness (high frame rate) and spatio-temporal sharpness, both of which affect our perceptions of sharpness.  I found no demonstrations of increased temporal or dynamic resolution at Canon Expo, but that doesn’t mean they’re not being developed.

The images at left are portions taken from BBC R&D White Paper number 169 on “High Frame-Rate Television,” published in September 2008.  It’s available at <http://www.bbc.co.uk/rd/pubs/whp/whp169.shtml>.  The upper picture shows a toy train shot at the equivalent of 50 frames per second; the lower picture shows the same train at 300-fps.  Note that the stationary tracks and ties are equally sharp in both pictures, but the higher frame rate makes the moving train sharper in the lower picture.

As this post shows, there is immersiveness, and there is sharpness (both spatial and temporal).  Is there anything else that future imaging might bring?  How about advances in color?

Ever since its earliest days, color video has been based on three color primaries.  As this chromaticity diagram shows, however, human vision encompasses a curved space of colors, whereas any three primaries within that space define a triangle, excluding many colors.

At Canon Expo, one portion of the new-technologies section was devoted to hand-held displays that could be tilted back and forth to show the iridescence of butterfly wings and other natural phenomena.  The demonstration wasn’t to highlight the displays but a multi-band camera that captures six color ranges instead of three.

Then there was the Tsuzuri Project exhibit at Canon Expo (http://www.canon.com/tsuzuri/index.html).  It was a gorgeous reproduction of an ancient Japanese screen.  Advanced digital technology was used to capture and reproduce the detail of the original, but then a master gold-leaf artist used his talents to complete the copy.

I look forward to future tools based on what I saw at Canon Expo as well as the BBC’s high frame-rate viewing, Immersive Media’s camera system, and even the Philips autostereoscopic display.  And I’m glad that human artists are still needed to use them.


One Whole Heck of a Lot of Pixels!

August 25th, 2010 | No Comments | Posted in Today's Special

EETimes reports that Canon has developed an APS-H-sized (roughly 35-mm movie frame) image sensor with 120 million pixels.  How many is that?  Any one-sixtieth of the area of the sensor can provide full 1920×1080 HD resolution. http://www.eetimes.com/electronics-news/4206455/Canon-120-megapixel-CMOS-sensor
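A quick check of that claim, using nominal figures:

```python
# One-sixtieth of 120 million pixels is 2 million -- essentially one
# 1920 x 1080 frame (2,073,600 pixels), so the claim holds, roughly.
print(120_000_000 // 60)   # 2,000,000
print(1920 * 1080)         # 2,073,600
```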


The Elephant in the Room: 3D at NAB 2010

April 30th, 2010 | No Comments | Posted in 3D Courses, Schubin Cafe
implicit range of 3D eyewear at NAB 2010


As I roamed the exhibits at the NAB show this month, I kept wondering what other year it seemed most like.  And I was not alone.

There were plenty of important issues covered at the show, from citizen journalism to internet-connected TV.  And then there was the elephant in the room.

It would be a lie to say that 3D technologies could be found at every booth on the show floor.  But it was probably the case that there was 3D in at least every aisle.  There was so much 3D that it tended to diminish all other news.

In acquisition technology, for example, LED lighting was nearly ubiquitous, with focusable instruments, such as the Litepanels Sola, sometimes painfully bright.  Panasonic and Sony both showed models of future inexpensive video cameras with large-format imagers, and Aaton joined the range of those offering “digital magazines” for film cameras.  In small formats, GoPro’s Hero is a complete HD camcorder weighing just three ounces.

In storage technology, Cache-A, For-A, IBM, and Sony all showed new offerings proving that tape is not dead.  Meanwhile, iVDR removable-hard-drive storage could be seen in several new products, and Canon introduced new camcorders based on Compact Flash cards.

Cinedeck looks like a viewfinder but includes built-in storage and editing capability. NextoDI’s NVS 2525 can copy either P2 or SxS cards.

In processing, Dan Carew’s Indie 2.0 blog said of Blackmagic Design’s DaVinci Resolve 7.0, “this best-in-class color correction software was formerly US$250,000 (for software and hardware) and is now available in a Mac software only verions for US$995.” http://indie2zero.com/2010/04/16/what-i-liked-and-saw-at-nab-2010/ Immersive Media’s 11-camera spherical views can now be stitched and streamed live.  NewTek’s TriCaster TCXD850 can deal with 22 inputs and virtual sets.  And, though you might not yet be able to figure out why you’d want this capability, Snell’s Kahuna 360 production switcher can deal with up to 16 shows at once.

In wireless distribution, there was VµbIQ’s 60 GHz uncompressed transmitter on a chip and Streambox’s Avenir for bonding up to four cellular modems to create a 20 Mbps channel.  In wired, there were Pleora’s EtherCast palm-sized bidirectional ASI-IP gateways.  And, in technologies that could be applied to either, there were Fraunhofer’s codec with a latency of just one macroblock line and a Harris-LG/Zenith proposal for expanding ATSC mobile transmission to full-channel use.

In presentation, there was a reference picture monitor from Dolby (seen in almost its final form at the HPA Tech Retreat).  Several booths had OLED monitors, from 7-inch at Sony to 15-inch at TVLogic.  Wohler’s Presto router has an LCD video display on each button.  And Ostendo’s CDM43 is a curved monitor with a 30:9 aspect ratio.

That barely scratches the surface of the non-3D news from NAB.  And then there was 3D.

Even All-Mobile Video’s Epic 3D production truck, parked in Sony’s exhibit, wore 3D glasses.  But it was the glasses on visitors to the truck that proved more instructive.

Sony provided RealD circularly polarized glasses to visitors for looking at everything from relatively small monitors to a giant outdoor-type LED display.  As soon as those visitors entered the control room of AMV’s Epic 3D truck and donned their glasses, however, they saw ghosting — crosstalk between the two eye views.  AMV staff were prepared for the shocked looks.  “Sit down,” they said.  “There’s a narrow vertical angle, and you have to be head-on to the monitors.”  Sure enough, that solved the problem — at least for those who could sit.

Another potential 3D problem was mentioned in the two-day 3D Digital Cinema Summit before the show opened.  If 3D is shot for a small screen and blown up to cinema size, it can cause eye divergence.  3ality’s camera rigs indicate when this might happen, but it happened anyway on at least one cinema-sized screen at NAB, leading to some audience queasiness.

Buzz Hays of the Sony 3D Technology Center says making 3D is easy, but making good 3D is hard.  There was a lot of 3D at NAB, including both easy and hard, good and bad.

It was hard to count the number of side-by-side and beam-splitter dual-camera rigs at the show, but, in addition to those, there were integrated (one-piece) 3D cameras and camcorders, in various stages of readiness, from 17 different brands, both on and off the show floor.  It seems that all of them were said to be “the first.”


Much could be learned about 3D at the two-day Digital Cinema Summit before the show opened.  It began with Sony’s Pete Lude showing that an ordinary 2D picture can seem 3D when viewed with just one eye, leading a later speaker (me) to quip that watching with an eye patch, therefore, is an inexpensive way to get 3DTV.

3ality’s Steve Schklair followed Lude with an on-screen, live demonstration-tutorial on the effects of different 3D rig settings: height, rotation, lens interaxial, convergence, etc.  He was followed by directors, stereographers, and trainers of 3D-convergence operators, among others.

Although 3D would seem to require more equipment (two cameras and lenses plus a stereo rig at each location) and more personnel (a convergence operator per camera in addition to a stereographer), there is seemingly one saving grace.  According to Schklair and others, 3D can get away with fewer cameras and less cutting than 2D.

The same thing was said of HD, however, in its early days.  Sure enough, when I worked on one show in 1989, we used just four HD cameras feeding the HD truck and twice as many non-HD cameras feeding the non-HD truck.  In the early days, it was common practice to do separate HD and SD productions.  Today, of course, one HD production feeds all, and it typically uses as many cameras and as rapid cutting as an SD show.

Atop a tower of Fujinon’s NAB booth, Pace showed something that recognizes the current economics of 3D.  With virtually no 3DTV audience, it’s hard to justify separate 3D productions, but, with such major players as ESPN, DirecTV, Discovery, and Sky involved in 3D, the elephant cannot be ignored, either.  So the Pace Shadow system places a 3D rig atop the long lens of a typical 2D sports camera.  Furthermore, it interconnects the controls (in a variety of selectable ways) so that the operator of the 2D camera need not be concerned about shooting 3D: one camera position, one operator, different 2D and 3D outputs.

Screen Subtitling came up with similarly clever solutions to the problem of 3D graphics.  Unless text is closer to the viewer (in 3D depth) than the portion of the image that it is obscuring, it can be uncomfortable to read.

Traditionally, subtitles are at the bottom of a screen, where 3D objects are closest to the viewer.  Raise the graphics to the top, and they might work in the screen plane.

Then there’s the issue of putting the graphics on the screen.  With left- and right-eye views, it might seem that two keying systems are required.  But with much 3D being distributed in a side-by-side format, a single keyer can place 3D graphics directly into the side-by-side feed.



There was much more 3D at the show, in every field of video technology (and perhaps even audio).  In acquisition, for example, aside from integrated cameras, 3D mounts, and even individual cameras designed specifically for 3D (like Sony’s HDC-P1), there were also 3D lens adaptors, precision-matched lenses, precision lens controls, and even relay optics intended to allow wider cameras to be placed closer together, as in this picture shot by Eric Cheng of WetPixel.com: http://wetpixel.com/i.php/full/2010-nab-show-report-las-vegas/

At the other end of the 3D chain, there were both plasma and LCD autostereoscopic (no-glasses) displays using both lenticular and parallax-barrier technology, small OLED displays with active-shutter glasses and giant LED screens with passive circularly polarized glasses.  There were LCD and plasma screens (up to 152-inch at Panasonic) and DLP rear-projectors using active-shutter glasses, and both LCD and laser projection using passive polarized glasses.

There were dual-panel displays with beam splitters, and displays intended to be viewed through long strips of fixed polarized materials (to accommodate all viewers’ heights).  There were many anaglyph displays in the three different primary-and-complement color combinations.  There were 3D viewfinders using glasses and others with displays for each eye.

Japan’s Burton showed a laser-plasma display that creates 3D images in mid-air.  Normally, they’re viewed through laser-protection goggles, as in the image at the right at the top of this post.  But, as a safety measure, they were shown inside an amber tube at NAB.

In storage, it seems that everyone who had anything that could record images had a version that could do so in 3D.  Even Convergent Design’s tiny Nano was available in a 3D version.  The Abekas Mira is an eight-channel digital production server — or it’s a four-channel 3D digital production server.  Want an uncompressed 3D field recorder?  Keisoku Giken’s UDR-D100 was just one such product at the show.

In processing, just about every form of editing and processing had a 3D version.  Monogram showed a touch-screen 3D “truck-in-a-box” production system.  Belgium’s Imec research lab even showed licensable technology for stereoscopic virtual cameras.

There was a range of equipment and services for converting 2D to 3D either in real time or not, automatically and with human assistance.  And there was a large range of processing equipment designed to fix 3D problems, such as camera rotation and height variation.

Sony’s MPE200 is one such device, with a U.S. list price of $38,000.  The MPES3D01/01 software to run it, however, is another $22,500.  With the least-expensive 3D camera at the show (Minoru 3D) retailing for under $60 at amazon.com, it might be said that 3D is cheap, but good 3D costs.

There was 3D test equipment from many manufacturers.  There was high-speed 3D (Antelope/Vision Research).  There was 3D coax (Belden 1694D, complete with anaglyph color coding).  Ryerson University is doing eye-tracking research on what viewers look at in 3D and whether it’s different from HD and 4K.

So why was I wondering what year it was?  At NAB shows there have been many technologies shown that never went anywhere.  We still await voice-recognition production switchers, for example, and also voice-recognition captioning.  But those have generally been shown by only one company or a small number of exhibitors.

Digital video effects were among the fastest technologies to penetrate the industry.  First shown at NAB in 1973, they were commonly seen in homes by the end of the decade.

Then there was HDTV.  Its penetration after NAB introduction took much longer, even if dated only from 1989, when an entire exhibition hall was devoted to the subject (there were many earlier NAB displays).  Estimates vary, but U.S. household penetration of HDTV 21 years later seems to be in the vicinity of half.

At least HDTV did eventually penetrate U.S. households.  Visitors to NAB conventions in the early 1980s could see aisle after aisle of exhibits claiming compatibility with one or both competing standards for teletext.  One standard was being broadcast on CBS and NBC; the other on TBS.  There were professional and consumer equipment manufacturers and services offering support.  Based on the quantity and diversity of promotion at NAB, it was hard to imagine that teletext would not take off in the U.S.

So, will 3DTV emulate digital effects, HDTV, U.S. teletext, or none of the above?  Time will tell.
