
B4 Long

April 22nd, 2015 | Posted in Schubin Cafe

 

Something extraordinary happened at this month’s annual convention of the National Association of Broadcasters in Las Vegas. Actually, it was more a number of product introductions — from seven different manufacturers — adding up to something extraordinary: the continuation of the B4 lens mount into the next era of video production.

Perhaps it’s best to start at the beginning. The first person to publish an account of a working solid-state television camera knew a lot about lens mounts. His name was Denis Daniel Redmond, his account of “An Electric Telescope” was published in English Mechanic and World of Science on February 7, 1879, and the reason he knew about lens mounts was that, when he wasn’t devising new technologies, he was an ophthalmic surgeon.

It would be almost half a century longer before the first recognizable video image of a human face could be captured and displayed, an event that kicked off the so-called mechanical-television era, one in which some form of moving component scanned the image in both the camera and the display system. At left above, inventor John Logie Baird posed next to the apparatus he used. The dummy head (A) was scanned by a spiral of lenses in a rotating disk.

A mechanical-television camera designed by SMPTE-founder Charles Francis Jenkins, shown at right, used a more-conventional single lens, but it, too, had a spinning scanning disk. There was so much mechanical technology that the lens mount didn’t need to be made pretty.

The mechanical-television era lasted only about one decade, from the mid-1920s to the mid-1930s. It was followed by the era of cathode-ray-tube (CRT) based television: camera tubes and picture tubes. Those cameras also needed lenses.

The 1936 Olympic Games in Berlin might have been the first time that really long television lenses were used — long both in focal length and in physical length. They were so big (left) that the camera-lens combos were called Fernsehkanone, literally “television cannon.” The mount was whatever was able to support something that large and keep it connected to the camera.

In that particular case, the lens mount was bigger than the camera. With the advent of color television and its need to separate light into its component colors, cameras grew.

At right is an RCA TK-41 camera, sometimes described as being comparable in size and weight to a pregnant horse; its viewfinder, alone, weighed 45 lbs. At its front, a turret (controlled from the rear) carried a selection of lenses of different focal lengths, from wide angle to telephoto. Behind the lens, a beam splitter fed separate red, green, and blue images to three image-orthicon camera tubes.

The idea of hand-holding a TK-41 was preposterous, even for a weight lifter. But camera tubes got smaller and, with them, cameras.

RCA’s TK-44, with smaller camera tubes, was adapted into a “carryable” camera by Toronto station CFTO, but it was so heavy that the backpack section was sometimes worn by a second person, as shown at the left. The next generation actually had an intentionally carryable version, the TKP-45, but, even with that smaller model, it was useful for a camera person to be a weightlifter, too.

At about the same time as the two-person adapted RCA TK-44, Ikegami introduced the HL-33, a relatively small and lightweight color camera. The HL stood for “Handy-Looky.” It was soon followed by the truly shoulder-mountable HL-35, shown at right.

The HL-35 achieved its small form factor through the use of 2/3-inch camera tubes. The outside diameter of the tubes was, indeed, 2/3 of an inch, about 17 mm, but, due to the thickness of the tube’s glass and other factors, the size of the image was necessarily smaller, just 11 mm in diagonal.

Many 2/3-inch-tubed cameras followed the HL-35. As with cameras that used larger tubes, the lens mount wasn’t critical. Each tube could be moved slightly into the best position, and its scanning size and geometry could also be adjusted. Color-registration errors were common, but they could be dealt with by shooting a registration chart and making adjustments.

The CRT era was followed by the era of solid-state image sensors. They were glued onto color-separation prisms, so the ability to adjust individual tubes and scanning was lost. NHK, the Japan Broadcasting Corporation, organized discussions of a standardized lens-camera interface dealing with the physical mount, optical parameters, and electrical connections. Participants included Canon, Fuji, and Nikon on the lens side and Hitachi, Ikegami, JVC, Matsushita (Panasonic), Sony, and Toshiba on the camera side.

To allow the use of 2/3-inch-format lenses from the tube era, even though they weren’t designed for fixed-geometry sensors, the B4 mount (above left) was adopted. But there was more to the new mount than just the old mechanical connection. There were also specifications of different planes for the three color sensors, types of glass to be used in the color-separation prism and optical filters, and electrical signal connections for iris, focus, zoom, and more.

When HDTV began to replace standard definition, there was a trend toward larger image sensors, again — initially camera tubes. After all, more pixels should take up more space. Sony’s solid-state HDC-500 HD camera used one-inch-format image sensors instead of 2/3-inch. But existing 2/3-inch lenses couldn’t be used on the new camera. So, even though those existing lenses were standard-definition, the B4 mount continued, newly standardized in 1992 as Japan’s Broadcast Technology Association S-1005.

The first 4K camera also sized up — way up. Lockheed Martin built a 4K camera prototype using three solid-state sensors (called Blue Herring CCDs, shown at left), and the image area on each sensor was larger than that of a frame of IMAX film.

As described in a paper in the March 2001 SMPTE Journal, “High-Performance Electro-Optic Camera Prototype” by Stephen A. Stough and William A. Hill, that meant a large prism. And a large prism meant a return to a camera size not easily shouldered (shown above at right).

That was a prototype. The first cameras actually to be sold that were called 4K took a different approach, a single large-format (35 mm movie-film-sized) sensor covered with a patterned color filter.

An 8×8 Bayer pattern is shown at right, as drawn by Colin M. L. Burnett. The single sensor and its size suggested a movie-camera lens mount, the ARRI-developed positive-lock or PL mount.

One issue associated with the color-patterned sensors is the differences in spatial resolution between the colors. As seen at left, the red and blue have half the linear spatial resolution of the sensor (and of the green). Using an optical low-pass filter to prevent red and blue aliases would eliminate the extra green resolution; conversely, a filter that works for green would allow red and blue aliases. And, whether it’s called de-Bayering, demosaicking, uprezzing, or upconversion, changing the resolution of the red and blue sites to that of the overall sensor requires some processing.
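
Here is a minimal sketch of that sampling imbalance, assuming an RGGB Bayer layout and using a crude nearest-neighbor fill; the toy data and the nearest-neighbor step are purely illustrative, and real cameras use far more sophisticated demosaicking:

import numpy as np

def split_bayer_rggb(mosaic):
    # One red and one blue sample per 2x2 cell; two green samples per cell.
    red = mosaic[0::2, 0::2]
    blue = mosaic[1::2, 1::2]
    green_mask = np.zeros(mosaic.shape, dtype=bool)
    green_mask[0::2, 1::2] = True
    green_mask[1::2, 0::2] = True
    return red, green_mask, blue

mosaic = np.arange(16.0).reshape(4, 4)     # stand-in for raw single-sensor data
red, green_mask, blue = split_bayer_rggb(mosaic)
print(red.shape, blue.shape)               # (2, 2) and (2, 2): half the linear resolution
print(int(green_mask.sum()))               # 8 of 16 sites are green

# Crude "de-Bayering" of red back to full resolution (nearest-neighbor fill);
# this is the processing step referred to above, in its simplest possible form.
red_full = np.kron(red, np.ones((2, 2)))
print(red_full.shape)                      # (4, 4)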

Another issue is related to the range of image-sensor sizes that use PL mounts. At right is a portion of a guide created by AbelCine showing shot sizes for the same focal-length lens used on different cameras <http://blog.abelcine.com/wp-content/uploads/2010/08/35mm_DigitalSensors_13.jpg>. In each case, the yellowish image is what would be captured on a 35-mm film frame, and the blueish image is what the particular camera captures from the same lens. The windmill at the left, prominent in the Canon 5D shot, is not in the Blackmagic Design Cinema Camera shot.
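
The shot-size differences come straight from geometry: for the same focal length, a narrower sensor sees a narrower angle of view. A quick sketch, with approximate sensor widths that should be treated as illustrative assumptions rather than exact specifications:

import math

def horizontal_angle_of_view(focal_length_mm, sensor_width_mm):
    # Angle of view in degrees for a rectilinear lens of the given focal length.
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Approximate active widths in millimetres (illustrative assumptions):
sensor_widths = {"Super 35 frame": 24.9, "2/3-inch video sensor": 9.6}
for name, width in sensor_widths.items():
    print(name, round(horizontal_angle_of_view(50, width), 1), "degrees")
# The same 50 mm lens sees roughly 28 degrees on Super 35 but only about 11
# degrees on a 2/3-inch sensor, which is why the framing differs so much.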

Whatever their issues, thanks to their elimination of a prism, the initial crop of PL-mount digital-cinematography cameras, despite their large-format image sensors, were relatively small, light, and easily carried. Their size and weight differences from the Lockheed Martin prototype were dramatic.

There was a broad selection of lenses available for them, too — but not the long-range B4-mount zooms needed for sports and other live-event production. It’s possible to adapt a B4 lens to a PL-mount camera, but an optically perfect adaptor would lose more than 2.5 stops (equivalent to needing about six times more light). Because nothing is perfect, the adaptor would introduce its own degradations to the images from lenses designed for HD, not 4K (or Ultra HD, UHD). And a large-format long-range zoom lens would be a difficult project. So multi-camera production remained largely B4-mount three-sensor prism-based HD, while single-camera production moved to PL-mount single-sensors with more photo-sensitive sites (commonly called “pixels”).
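
The light penalty follows directly from the definition of a stop (each stop is a factor of two); a two-line check just to make the arithmetic explicit:

# Each stop is a factor of two in light, so losing s stops means needing 2**s
# times more light; 2.5 stops works out to roughly six times.
stops_lost = 2.5
print(round(2 ** stops_lost, 2))   # ~5.66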

Then, at last year’s NAB Show, Grass Valley showed a B4-mount three-sensor prism-based camera labeled 4K. Last fall, Hitachi introduced a four-chip B4-mount UHD camera. And, at last week’s NAB Show, Ikegami, Panasonic, and Sony added their own B4-mount UHD cameras, and both Canon and Fujinon announced UHD B4-mount long-range zoom lenses.

The camera imaging philosophies differ. The Grass Valley LDX 86 is optically a three-sensor HD camera, so it uses processing to transform the HD to UHD, but so do color-filtered single-sensor cameras; it’s just different processing. The Grass Valley philosophy offers appropriate optical filtering; the single-sensor cameras offer resolution assistance from the green channel.

Hitachi’s SK-UHD4000 effectively takes a three-sensor HD camera and, with the addition of another beam-splitting prism element, adds a second HD green chip, offset from the others by one-half pixel diagonally. The result is essentially the same as the color-separated signals from a Bayer-patterned single higher-resolution sensor, and the processing to create UHD is similar.
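
As a rough illustration of why the offset helps (a sketch only, with made-up data and none of the real camera's interpolation), the two HD green rasters can be thought of as landing on alternate sites of a UHD grid:

import numpy as np

h, w = 1080, 1920                       # HD resolution of each green chip
green_a = np.random.rand(h, w)          # stand-ins for the two green rasters
green_b = np.random.rand(h, w)

uhd_green = np.full((2 * h, 2 * w), np.nan)
uhd_green[0::2, 0::2] = green_a         # chip A samples on the UHD grid
uhd_green[1::2, 1::2] = green_b         # chip B, offset half an HD pixel diagonally

# Half of the UHD grid now carries real green samples, in a checkerboard much
# like the green sites of a Bayer pattern; the remaining sites are interpolated.
print(np.isfinite(uhd_green).mean())    # 0.5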

Panasonic’s AK-UC3000 uses a single, color-patterned one-inch-format sensor. To use a 2/3-inch-format B4 lens, therefore, it needs an optical adaptor, but the adaptor is built into the camera, allowing the electrical connections that enable processing to reduce lens aberrations. Also, the optical conversion from 2/3-inch to one-inch is much less than that required to go to a Super 35-mm movie-frame size.

Both Ikegami’s UHD camera (left) and Sony’s HDC-4300 (right) use three 2/3-inch-format image sensors on a prism block, but each image sensor is truly 4K, making them the first three-sensor 4K cameras since the Lockheed Martin prototype. By increasing the resolution without increasing the sensor size, however, they have to contend with photo-sensitive sites a quarter of the area of those on HD-resolution chips, reducing sensitivity.

It might seem strange that camera manufacturers are moving to B4-mount 2/3-inch-format 4K cameras at a time when there are no B4-mount 4K lenses, but the same thing happened with the introduction of HD. Almost any lens will pass almost any spatial resolution, but the “modulation transfer function” or MTF (the amount of contrast that gets through at different spatial resolutions) is usually better in lenses intended for higher-resolution applications, and the higher the MTF the sharper the pictures look.
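
MTF is just the ratio of the contrast (modulation) that comes out of the lens to the contrast that went in, measured at a given spatial frequency; a minimal illustration, with made-up intensity values:

def modulation(i_max, i_min):
    # Michelson modulation (contrast): (Imax - Imin) / (Imax + Imin)
    return (i_max - i_min) / (i_max + i_min)

m_in = modulation(1.0, 0.0)        # a perfect black-and-white test pattern
m_out = modulation(0.7, 0.3)       # the grey-on-grey result the lens delivers
print(m_out / m_in)                # MTF of 0.4 at that spatial frequency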

With all five of the major manufacturers of studio/field cameras moving to 2/3-inch 4K cameras, lens manufacturers took note. Canon showed a prototype B4-mount long-range 4K zoom lens, and Fujinon actually introduced two models, the UA80x9 (left) and the UA22x8 (right). The lenses use new coatings that increase contrast and new optical designs that increase MTF dramatically even at HD resolutions.

There is no consensus yet on a shift to 4K production, but 4K B4-mount lenses on HD cameras should significantly improve even HD pictures.  That’s nice!


Technology Year in Review

February 18th, 2015 | Posted in Download, Schubin Cafe, Today's Special
Annual Technology Year in Review recorded at the 2015 HPA Tech Retreat, Hyatt Regency Indian Wells, CA
February 11, 2015

Direct Link (13 MB / 10:36 TRT): Technology Year in Review



When Will We Convert to HDTV?

February 28th, 2014 | Posted in Schubin Cafe

 

A few weeks ago I worked on an event television production in New Jersey. Last week I was at the HPA Tech Retreat in California. Yesterday I attended Panasonic’s pre-NAB press conference in New York. What do the three have in common? They made me wonder when we will make the transition to HDTV. That’s right: HDTV, not “4K” or any other form of beyond-HD television.

The event was called Ode to Joy, a concert at Princeton University’s Richardson Auditorium celebrating the 100th birthday of philanthropist and musical scholar William H. Scheide. It was shot in HDTV, which has a picture shape or aspect ratio, worldwide, of 16 units wide to 9 units high, 16:9, wider than the old TV aspect ratio of 12:9 or 4:3.

The producers released an eight-minute, behind-the-scenes, promotional video, which I recommend highly to anyone who wants to see some of what’s involved in such productions, from running cables through the snow in sub-zero temperatures to going over the music and shots with the camera people before the concert. Here’s a link to it: http://www.youtube.com/watch?v=3awcZ_dPHQU

In addition to its YouTube release, the promo was shown on a number of public television stations. I watched one, via cable television, at a friend’s house. No setting of the friend’s TV or cable box allowed me to see the promo as intended, filling the 16:9 HDTV screen; the sides were chopped off, at either the station or the cable system, back to old TV’s 4:3.

Such chopping is why some broadcasters still want their content configured in “shoot-and-protect” mode, shot to fill the 16:9 frame but with important content and graphics protected to be visible in what remains after the sides of the HDTV image are chopped off. Maybe shoot-and-protect made sense in the early days of HDTV, when most viewers watched narrower screens; today it can mean most viewers watching stretched out, unnatural pictures as they try to fill their widescreen TVs with chopped-off images.
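
The arithmetic behind the protected area is simple; a quick check (illustrative numbers only) of how much of a 16:9 frame survives a 4:3 center cut:

# Width of a 4:3 center cut as a fraction of the full 16:9 frame width:
surviving = (4 / 3) / (16 / 9)
print(surviving, (1 - surviving) / 2)   # 0.75, with 0.125 lost from each side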

I know that it’s most viewers because I follow and report on surveys of television households in the U.S. One of the places where I do such reports is at the annual HPA Tech Retreat.


Above is the opening slide of the “Technology Year in Review” that I present there (you can get the whole presentation on the “Get the Download” section of this site here: http://www.schubincafe.com/2014/02/27/hpa-2014-technology-year-in-review/). For many years, I’ve been running essentially the same slide, just tweaking the numbers a bit. This year I noted that the Consumer Electronics Association, Leichtman, and Nielsen all agreed that, as of the beginning of 2013, about ¾ of U.S. television households had HDTVs. That’s most.

So, while shoot-and-protect is preventing a minority of viewers from losing important information at the sides of the picture, it is fostering an environment in which the majority of viewers are watching content in the wrong shape and/or, as in the case of my viewing of the broadcast of the Ode to Joy promo, losing important content at the sides. And that’s not the only problem.

Ode to Joy was an event. That’s the type of television show on which I work most often. And events are often newsworthy. Often the event producer will invite members of the press to cover it. When that happens, part of my job is providing the radio and television press with the feeds they need.

For a live transmission, that can be as simple as delivering satellite coordinates or authorizing a carrier to feed a station. Most newsworthy events also require some form of “press bridge,” audio and video distribution amplifiers and connectors. A 32-output press bridge is shown in the image in the slide above. The most I’ve ever fed was about 175 when Solidarity-leader Lech Wałesa spoke at the AFL-CIO convention in 1989 in Washington, D.C. I’d prepared for only 150, so some press daisy-chained off of others.

For most of the analog television era, such daisy-chaining was relatively easy. Video was 4:3 standard-definition NTSC color on a coaxial cable with a BNC. Audio was monaural and used an XL-type connection. Press bridges often had switches to deal with the biggest issues, such as whether the audio was to be line level or mic level; those press needing mic level usually brought their own attenuators, just in case.

Today, in the supposed surround-sound HDTV era, press bridges provide… 4:3 standard-definition NTSC color on a BNC and monaural audio on an XL-type connection, as in the Opamp Labs VA-32 shown at left and still being sold. If someone shows up with a recorder that can accept an HD-SDI input with embedded, AES-3, or analog audio, I can usually accommodate it. If there’s an HDMI input, and I know of it in advance, I can usually accommodate that, too. Unfortunately, those are rare. And that brings me to yesterday’s Panasonic press conference.

Among other products, the company is introducing a new HDTV camcorder, the AJ-PX270. It’s relatively low cost, and, based on everything reported about it at the press conference, extremely flexible and high in quality. The company suggested many possible uses for it, including news coverage. Its small size and light weight seem to make it a good choice for shooting a car accident or fire or tornado or for rushing in with the rest of the crowd to get shots of an acquitted or convicted defendant after a trial.

Unfortunately for me and others of my ilk who try to provide press feeds at planned events, it will also likely show up at those, and, like other camcorders of its ilk, it lacks any form of video input, and a news videographer bringing along a separate recorder would cancel the small-size, light-weight, and low-cost advantages. So, as I have done in the past, I will provide a monitor for the camcorder to shoot. And, as I have done in the past, I will tweak the numbers on my first Technology Year in Review slide, the one with the picture of the pre-HDTV-era press bridge still being sold.

I really, really, really look forward to junking that slide some day. That’ll be when we’re truly in the HDTV era.


The Blind Leading

December 10th, 2011 | Posted in Schubin Cafe

Once upon a time, people were prevented from getting married, in some jurisdictions, based on the shade of their skin colors. Once upon a time, a higher-definition image required more pixels on the image sensor and higher-quality optics.

Actually, we still seem to be living in the era indicated by the second sentence above. At the 2012 Hollywood Post Alliance (HPA) Tech Retreat, to be held February 14-17 (with a pre-retreat seminar on “The Physics of Image Displays” on the 13th) at the Hyatt Grand Champions in Indian Wells, California <http://bit.ly/slPf9v>, one of the earliest panels in the main program will be about 4K cameras, and representatives from ARRI, Canon, JVC, Red, Sony, and Vision Research will all talk about cameras with far more pixel sites on their image sensors than there are in typical HDTV cameras; Sony’s, shown at the left, has roughly ten times as many.

That’s by no means the limit. The prototypical ultra-high-definition television (UHDTV) camera shown at the right has three image sensors (from Forza Silicon), each one of which has about 65% more pixel sites than on Sony’s sensor. There is so much information being gathered that each sensor chip requires a 720-pin connection (and Sony’s image sensor is intended for use in just a single-sensor camera, so there are actually about five times more pixel sites).  But even that isn’t the limit! As I pointed out last year, Canon has already demonstrated a huge hyper-definition image sensor, with four times the number of pixels of even those Forza image sensors used in the camera at the right <http://www.schubincafe.com/2010/09/07/whats-next/>!

Having entered the video business at a time when picture editing was done with razor blades, iron-filing solutions to make tape tracks visible, and microscopes, and when video projectors utilized oil reservoirs and vacuum pumps, I’ve always had a fondness for the physical characteristics of equipment. Sensors will continue to increase in resolution, and I love that work. At the same time, I recognize some of the problems of an inexorable path towards higher definition.

The standard-definition camera that your computer or smart phone uses for video conferencing might have an image sensor with a resolution characterized as 640×480 or 0.3 Mpel (megapixels), even if that same smart phone has a much-higher-resolution image sensor pointing the other way for still pictures. That’s because video must make use of continually changing information. At 60 frames per second, that 0.3 Mpel camera delivers more pixels in one second than an 18 Mpel sensor shooting a still image.

Common 1080-line HDTV has about 2 Mpels. So-called “4K” has about 8 Mpels. It’s already tough to get a great HDTV lens; how will we deal with UHDTV’s 33-Mpel “8K”?

A frame rate of 60-fps delivers twice as much information as 30-fps; 120-fps is twice as much as 60-fps. How will we ever manage to process high-frame-rate UHDTV?
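
Multiplying pixels per frame by frames per second shows how quickly the numbers grow; a back-of-the-envelope calculation (the frame rates chosen here are just examples):

def pixels_per_second(megapixels_per_frame, frames_per_second):
    return megapixels_per_frame * 1e6 * frames_per_second

print(pixels_per_second(0.3, 60))    # ~18 million/s: a 640x480 webcam at 60 fps
print(pixels_per_second(2, 60))      # 1080-line HDTV
print(pixels_per_second(8, 60))      # "4K"
print(pixels_per_second(33, 120))    # high-frame-rate "8K" UHDTV: ~4 billion/s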

Perhaps it’s worth consulting the academies. In U.S. entertainment media, the highest awards are granted by the Academy of Motion Picture Arts & Sciences (the Academy Award or Oscar), the Academies (there are two) of Television Arts & Sciences (the Emmy Award), and the Recording Academy (the Grammy Award). Win all three, and you are entitled to go on an EGO (Emmy-Grammy-Oscar) trip!

In the history of those awards, only 33 people have ever achieved an EGO trip. And only two of those also won awards from the Audio Engineering Society (AES), the Institute of Electrical and Electronics Engineers (IEEE), and the Society of Motion Picture and Television Engineers (SMPTE). You’re probably familiar with the last name of at least one of those two, Ray Dolby, shown at left during his induction into the National Inventors Hall of Fame in 2004.

The other was Thomas Stockham. Some in the audio community might recognize his name.  He was at one time president of the AES, is credited with creating the first digital-audio recording company (Soundstream), and was one of the investigators of the 18½-minute gap in then-President Richard Nixon’s White House tapes regarding the Watergate break-in.

Those achievements appeal to my sense of appreciation of physical characteristics. The Soundstream recorder (right) was large and had many moving parts. And the famous “stretch” of Nixon’s secretary Rose Mary Woods (left), which would have been required to accidentally cause the gap in the recording, is a posture worthy of an advanced yogi (Stockham’s investigative group, unfortunately for that theory, found that there were multiple separate instances of erasure, which could not have been caused by any stretch). But what impressed (and still impresses) me most about Stockham’s work has no physical characteristics at all.  It’s pure mathematics.

On the last day of the HPA Tech Retreat, as on the first day, there will be a presentation on high-resolution imaging. But it will have a very different point of view. Siegfried Foessel of Germany’s Fraunhofer research institute will describe “Increasing Resolution by Covering the Image Sensor.” The idea is that, instead of using a higher-resolution sensor, which increases data-readout rates, it’s actually possible to use a much-lower-resolution image sensor, with the pixel sites covered in a strange pattern (a portion of which is shown at the right). Mathematical processing then yields a much-higher-resolution image — without increasing the information rate leaving the sensor.

In the HPA Tech Retreat demo room, there should be multiple demonstrations of the power of mathematical processing. Cube Vision and Image Essence, for example, are expected to be demonstrating ways of increasing apparent sharpness without even needing to place a mask over the sensor. Lightcraft Technology will show photorealistic scenes that never even existed except in a computer. And those are said to have gigapixel (thousand-megapixel) resolutions!

All of that mathematical processing, to the best of my knowledge, had no direct link to Stockham, but he did a lot of mathematical processing, too. In the realm of audio, his most famous effort was probably the removal of the recording artifacts of the acoustical horn into which the famous opera tenor Enrico Caruso sang in the era before microphone-based recording (shown at left in a drawing by the singer, himself).

As Caruso sang, the sound of his voice was convolved with the characteristics of the acoustic horn that funneled the sound to the recording mechanism. Recovering the original sound for the 1976 commercial release Caruso: A Legendary Performer required deconvolving the horn’s acoustic characteristics from the singer’s voice.  That’s tough enough even if you know everything there is to know about the horn. But Stockham didn’t, so he had to use “blind” deconvolution. It wasn’t the first time.
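
For readers who want to see the idea in miniature, here is a sketch of ordinary (non-blind) frequency-domain deconvolution, with an invented four-tap "horn" response; Stockham's actual work was harder precisely because the horn's response had to be estimated blindly from the recording itself:

import numpy as np

def wiener_deconvolve(recorded, impulse_response, noise_to_signal=1e-2):
    # Frequency-domain (Wiener) deconvolution: divide out the response, with a
    # small regularizing term so that quiet frequencies don't blow up.
    n = len(recorded)
    H = np.fft.rfft(impulse_response, n)
    Y = np.fft.rfft(recorded, n)
    G = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.fft.irfft(Y * G, n)

rng = np.random.default_rng(0)
voice = rng.standard_normal(1024)               # stand-in for the original voice
horn = np.array([1.0, 0.6, 0.3, 0.1])           # invented horn impulse response
recording = np.convolve(voice, horn)[:1024]     # what the acoustic recording kept
estimate = wiener_deconvolve(recording, horn)
print(round(np.corrcoef(voice, estimate)[0, 1], 3))   # close to 1.0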

He was co-author of an invited paper that appeared in the Proceedings of the IEEE in August 1968. It was called “Nonlinear Filtering of Multiplied and Convolved Signals,” and, while some of it applied to audio signals, other parts applied to images. He followed up with a solo paper, “Image Processing in the Context of a Visual Model,” in the same journal in July 1972. Both papers have been cited many hundreds of times in more-recent image-processing work.

One image in both papers showed the outside of a building, shot on a bright day; the door was open, but the inside was little more than a black hole (a portion of the image is shown above left, including artifacts of scanning the print article with its half-tone images). After processing, all of the details of the equipment inside could readily be seen (a portion of the image is shown at right, again including scanning artifacts). Other images showed effective deblurring, and the blur could be caused by either lens defocus or camera instability.

Stockham later (in 1975) actually designed a real-time video contrast compressor that could achieve similar effects. I got to try it. I aimed a bright light up at some shelves so that each shelf cast a shadow on what it was supporting. Without the contrast compressor, virtually nothing on the shelves could be seen; with it, fine detail was visible. But the pictures were not really of entertainment quality.

That was, however, in 1975, and technology has marched — or sprinted — ahead since then. The Fraunhofer Institut presentation at the 2012 HPA Tech Retreat will show how math can increase image-sensor resolution. But what about the lens?

A lens convolves an image in the same way that an old recording horn convolved the sound of an acoustic gramophone recording. And, if the defects of one can be removed by blind deconvolution, so might those of the other. An added benefit is that the deconvolution need not be blind; the characteristics of the lens can be identified. Today’s simple chromatic-aberration corrections could extend to all of a lens’s aberrations, and even its focus and mount stability.

Is it merely a dream?  Perhaps.  But, at one time, so was the repeal of so-called anti-miscegenation laws.


The E and Eye

February 26th, 2010 | Posted in Schubin Cafe

“HDTV is ideally viewed at a distance of roughly three times the picture height.”  That’s the sort of statement heard frequently — as recently as at last week’s HPA Tech Retreat.  And there seems to be a basis for it.


According to the eye chart commonly used to determine visual acuity, 20/20 vision can just identify two black lines separated by a white line that covers one minute of arc on the retina.  There are 360 degrees of arc in a circle and 60 minutes per degree (and 60 seconds per minute).

If you divide the 1080 active (picture-carrying) lines of the most common form of HDTV by those 60 minutes, the result is 18 degrees of retinal angle.  Divide that by two, and you can form two right triangles, one above the other.  The sides opposite the 9-degree angles are each half the height of the HDTV screen.  The sides adjacent to the angles are the distance from the screen to the eye.

The tangent of an angle is the ratio of the opposite side to the adjacent.  The tangent of 9 degrees is roughly 0.158.  Double that to include both right triangles, and the result is roughly 0.317.  Divide 1 by that to get the ratio of viewing distance to height, and the result is roughly 3.16.

According to the theory of that eye chart, if you sit about 3.16 times the height of your HDTV screen away from it, you’ll get optimum resolution.  Sit farther, and you’ll lose some detail.  Sit closer, and you might not be able to see the picture due to the visibility of the scanning structure.

For 720-line HDTV, the viewing distance is roughly 4.76 times the height (4.76H).  For old-time NTSC, it’s roughly 7.15H.
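
The whole calculation fits in a few lines; a sketch assuming one arc minute per scan line and taking NTSC as roughly 480 active lines:

import math

def viewing_distance_in_picture_heights(active_lines, arc_minutes_per_line=1.0):
    # Distance, in picture heights, at which each line subtends the given angle.
    picture_degrees = active_lines * arc_minutes_per_line / 60.0
    return 1.0 / (2.0 * math.tan(math.radians(picture_degrees / 2.0)))

for lines in (1080, 720, 480):
    print(lines, round(viewing_distance_in_picture_heights(lines), 2))
# 1080 -> ~3.16H, 720 -> ~4.76H, 480 -> ~7.15H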

There are a few problems with this theory.  One came with a slight change in this sentence: “Optimum NTSC resolution is achieved by sitting roughly seven times the picture height from the screen.”  Over time, it became “People watch NTSC at roughly seven times the picture height.”

I can think of at least one person who might take out a tape measure, run some calculations, and move a chair to the optimum viewing spot.  But I can’t think of many.

Bernie Lechner, then a researcher at RCA Laboratories, decided to measure how far people sat from their TVs.  At the time, the result was about nine feet, regardless of screen size, a figure that became known as the Lechner Distance.  Richard Jackson at Philips Laboratories in the UK came up with a similar three meters.

The Lechner/Jackson Distance was based largely on the size of living rooms and their furniture.  In Japan, viewers sat closer to their TVs, thus needing HDTV.  Or so the theory goes.  But Japanese screen sizes also tended to be smaller.

Other problems with the optimum-viewing-distance theory relate to such issues as overscan and the reductions of vertical resolution caused by interlace, overlapping scanning lines, sampling filtering, color-phosphor dots or stripes, and CRT faceplate optical characteristics.  But a much more serious issue is that one arc minute derived from the eye chart.

Officially, that eye chart (shown near the top of this post) is called a Snellen chart, named for the Dutch ophthalmologist who introduced its symbols in 1862.  And the symbols on it are said not to be letters in a typeface but “optotypes” intended to identify visual acuity.

Consider the famous E. When it is located on the 20/20 line of “normal” vision (meaning that the viewer sees at 20 feet what should be just visible at 20 feet — or, outside the United States, the 6/6 line, meaning the viewer sees at six meters what should be just visible at six meters), the entire symbol fits within an arc that subtends a retinal angle of 5 minutes (5/60 of a degree), and each black or white feature of the symbol is 1 minute.

That’s it.  That’s the basis for viewing HDTV at three times the picture height.  But maybe it’s worth examining that basis somewhat further.

First, 20/20 is not the lowest line on a typical Snellen eye chart.  Here’s what the Snellen obituary on page 296 of the February 1, 1908 issue of the British Medical Journal had to say about it:

“He started with the idea that a person might be considered to have normal vision if he could see and distinguish a letter which subtended an angle of one minute on the retina.  This was by no means the best which most eyes could do, but he set this as the minimum standard required to justify one in regarding an eye as normal.”

So viewers could conceivably view HDTVs from farther away and still see full resolution.  And then there are two issues I’ve raised in previous posts.

One is contrast (see Angry About Contrast here: http://schubincafe.com/blog/2009/09/angry-about-contrast/).  The portion of a Pelli-Robson chart pictured here shows how important contrast is in being able to distinguish symbols.  TV pictures, whether NTSC or HDTV, tend to comprise a broad range of contrast ratios, and so do the screens on which they’re viewed (and the environments in which that viewing is done; the light of a lamp reflected off a screen can wreak havoc with contrast).

The other issue I’ve gone into before is edges (see Sines of the Times here: http://schubincafe.com/blog/2009/12/sines-of-the-times/).  The E on a Snellen chart has nice sharp edges.  Making sharp edges requires harmonics far beyond the fundamental sine-wave frequency.  Compare the sharp-edged line at top left with the more sinusoidal line below.

One arc minute of visual acuity is the same as 30 cycles per degree (a cycle comprising both a light part and a dark part).  And that figure has become etched in stone for some who discuss visual resolution.  But then there was the paper “Research on Human Factors in UHDTV,” published in the April 2008 SMPTE Journal by authors at NHK, the Japan Broadcasting Corporation, source of modern HDTV.

It noted that observers could tell the difference between 78 cycles per degree (cpd) and 156.  The latter figure is more than five times greater than the 30 cpd of 20/20 vision.  Further, the research found that the sensation of “realness” rises rapidly to 50 cpd but continues rising to 156 (with no indication that it stops there).

So, how fine is visual acuity for detail perception?  I don’t know.  But it doesn’t seem to be a simple 30 cpd.


The Hole Thing

October 17th, 2009 | Posted in Schubin Cafe

Take away a camera’s mount, viewfinder, electronics, optical system (including lens), and case, and what’s left? It’s not “nothing;” it’s a hole. Holes treat light very differently from the way nothing treats light, and the image business is very much involved with light. One of the key effects of holes on light is diffraction.

Imagine a small road, a two-lane highway. Imagine that there’s a lot of traffic on it, but it’s moving nicely, as fast as anyone would like to go. Now imagine that the highway suddenly expands from two lanes to four (or six or eight). What happens? In my experience, the cars from the two-lane highway, even though they are moving as fast as their drivers would like, will spread out into the newly available space.

Light does something similar. It is bent by edges. The phenomenon is called diffraction.  Sean T. McHugh’s CambridgeInColour photography site offers an excellent interactive tutorial on the subject here: http://www.cambridgeincolour.com/tutorials/diffraction-photography.htm Many of the illustrations in this post are taken from that site (with permission).

As can be seen from these images, a big hole bends light less than does a small hole. Think of a 12-lane highway expanding into a 14-lane; cars won’t spread out as much as when a two-lane highway expands into a 14-lane.

Now consider the diagram on the right above. Each arrow, each ray of light, may be considered to consist of waves. If the center arrow hits the wall at the right at the peak of a wave, it’ll make a bright dot. One of the bent rays might hit the wall at the same instant at the trough of a wave, resulting in a dark ring (a ring because the hole is two dimensional). The resulting diffraction pattern is called an Airy disk. As two Airy disks overlap, the dark part of one might be co-located with the bright part of another.

As a result, the dark gets brighter and the bright gets darker, a loss of contrast, and contrast is essential for sharpness.  If they overlap enough, they no longer look like two disks but one.  At that point, when individual pixels can’t be distinguished, resolution is lost.
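
The size of the Airy disk is easy to estimate; a sketch using the usual 2.44 x wavelength x f-number approximation for the diameter of the first dark ring (the wavelength and f-numbers here are just example values):

def airy_disk_diameter_um(wavelength_nm=550, f_number=8.0):
    # Diameter of the first dark ring, in micrometres: 2.44 x wavelength x N.
    return 2.44 * (wavelength_nm / 1000.0) * f_number

for f in (2.8, 5.6, 11):
    print("f/%s" % f, round(airy_disk_diameter_um(550, f), 1), "micrometres")
# At f/11 the disk is roughly 15 micrometres across, larger than the photosites
# on many small-format HD sensors, so neighboring details start to smear together.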


Angry About Contrast

September 11th, 2009 | Posted in Schubin Cafe
"Angry Man/Neutral Woman" copyright 1990 Aude Oliva, MIT, and Philippe Schyns, University of Glasgow

"Angry Man/Neutral Woman," copyright 1997, Aude Oliva, MIT, and Philippe G. Schyns, University of Glasgow

If you are looking at the above picture on a nominally sized screen at a nominal viewing distance, you probably see an angry man on the left.  What’s an “angry man”?  Me, when I think about technical descriptions of HDTV.

Think about it.  Maybe you hear HDTV described as being 1080i or 720p.  Maybe it’s 1920 x 1080 or 1280 x 720.   Maybe it’s 2 megapixels or 1.  An engineer who remembers such things as analog bandwidths might refer to 30 MHz or 37 MHz.  Someone concerned with lenses might talk about 100 line-pairs per millimeter.  Someone describing visual acuity, screen sizes, and viewing distances might offer 30 cycles per degree.

Someday, I’ll probably get around to explaining how all of those are related and how many of them are pretty much the same thing.  But, when it comes to the sharpness perceived by viewers, they’re all pretty bogus because they’re all missing something of vital importance.

Of course, that isn’t the only silly spec.  Look at “sensitivity,” or, one of my all-time favorites, “minimum sensitivity.”  I just went to a web site of someone called an “expert” and found a sensitivity figure of 1 lux.


A Brief History of Height

August 10th, 2009 | Posted in Schubin Cafe
NHK's 1969 HDTV Demo

Based on the basic questions who, when, where, how, and why, HDTV was invented by NHK (Nippon Hoso Kyokai, the Japan Broadcasting Corporation), first shown to the public in 1969 at NHK’s Science & Technical Research Laboratory, initially achieved by using three image tubes to create the picture, and developed because, with real estate at a premium in Japan, viewers sat closer to TVs and, therefore, were more aware of the flaws of ordinary television. Unfortunately, that view doesn’t match the following news report:

“The exposition’s opening on April 30 also marked the advent of this country’s first regular schedule of high-definition broadcasts.” That was published in the U.S. magazine Broadcasting more than 30 years before NHK’s unveiling of HDTV, on April 30, 1939, reporting on the first day of that year’s New York World’s Fair, where RCA demonstrated what it called high-definition television.
