
The Schubin Talks: Next-Generation-Imaging, Higher Spatial Resolution by Mark Schubin

September 1st, 2015 | No Comments | Posted in Download, Schubin Cafe, Today's Special

 

A look at 4K and the different ways to acquire in 4K. A must-see for those who know that 4K will be in their future but who are not sure what it means for them today, or whether it should mean anything.

Other videos in the series include:

The Schubin Talks: Next-Generation-Imaging, Higher Spatial Resolution is presented by SVG, the Sports Video Group, advancing the creation, production and distribution of sports content, at sportsvideo.org.

Direct Link (185 MB / TRT 17:53):
The Schubin Talks: Next-Generation-Imaging, Higher Spatial Resolution



Everything Else

December 3rd, 2014 | No Comments | Posted in Schubin Cafe

Videotape is dying. Whether it will be higher in spatial resolution, frame rate, dynamic range, color gamut, and/or sound immersion; whether it will be delivered to cinema screens, TV sets, smartphones, or virtual-image eyewear; whether it arrives via terrestrial broadcast, satellite, cable, fiber, WiFi, 4G, or something else; the moving-image media of the future will be file based. But Hitachi announced at the International Broadcasting Convention (IBC) in Amsterdam in September that Gearhouse Broadcast was buying 50 of its new SK-UHD4000 cameras.

Does the one statement have anything to do with the other? Perhaps it does. The moving-image media of the future will be file based except for everything else.

It might be best to start at the beginning. In 1879, the public became aware of two inventions. One, called the zoopraxiscope, created by Eadweard Muybridge, showed projected photographic motion pictures. The other, called an electric telescope, created by Denis Redmond, showed live motion pictures.

Neither was particularly good. Muybridge’s zoopraxiscope could show only a 12- or 13-frame sequence over and over. Redmond’s electric telescope could show only “built-up images of very simple luminous objects.” But, for more than three-quarters of a century, they established the basic criteria of their respective media categories: movies were recorded photographically; video was live.


It’s not that there weren’t crossover attempts. John Logie Baird came up with a mechanism for recording television signals in the 1920s. One of the camera systems for the live television coverage of the 1936 Olympic Games, built into a truck, used a movie camera, immediately developed its film, and shoved it into a video scanner, all in one continuous stream. But, in general, movies were photographic and video was live.

When Albert Abramson published “A Short History of Television Recording” in the Journal of the SMPTE in February 1955, the bulk of what he described was, in essence, movie cameras shooting video screens. He did describe systems that could magnetically record video signals directly, but none had yet become a product.

That changed the following year, when Ampex brought the first commercial videotape recorder to market. New York Times TV critic Jack Gould immediately thought of home video. “Why not pick up the new full-length motion picture at the corner drugstore and then run it through one’s home TV receiver?” But he also saw applications on the production side. “A director could shoot a scene, see what he’s got and then reshoot then and there.” “New scenes could be pieced in at the last moment.”

Even in his 1955 SMPTE paper, Abramson had a section devoted to “The Electronic Motion Picture,” describing the technology developed by High Definition Films Ltd. In 1965, in a race to beat a traditional, film-shot movie about actress Jean Harlow to theaters, a version was shot in eight days using a process called Electronovision. It won but didn’t necessarily set any precedents. Reviewing the movie in The New York Times on May 15, Howard Thompson wrote, “The Electronovision rush job on Miss Harlow’s life and career is also a dimly-lit business technically. Maybe it’s just as well. This much is for sure: Whatever the second ‘Harlow’ picture looks and sounds like, it can’t be much worse than the first.”

Today, of course, it’s commonplace to shoot both movies and TV shows electronically, recording the results in files. A few movies are still shot on film, however, and a lot of television isn’t recorded in files at all; it’s live.

As this is being written, the most-watched TV show in U.S. history is the 2014 Super Bowl; next year, it will probably be the 2015 Super Bowl. In other countries, the most-watched shows are often their versions of live football.

It’s not just sports — almost all sports — that are seen live. So are concerts and awards shows. And, of late, there is even quite a bit of live programming being seen in movie theaters — on all seven continents (including Antarctica) — ranging from ballet, opera, and theater to museum-exhibition openings. In the UK alone, box-office revenues for so-called event cinema doubled from 2012 to 2013 and are already much higher in 2014.

Files need to be closed before they can be moved, and live shows need to be transmitted live, so live shows are not file-based. They can be streamed, but, for the 2014 Super Bowl, the audience that viewed any portion via live stream was about one-half of one percent of the live broadcast-television audience (and the streaming audience watched for only a fraction of the time the broadcast viewers watched, too). NBC’s live broadcast of The Sound of Music last year didn’t achieve Super Bowl-like ratings, but it did so well that the network is following up with a live Peter Pan this year. New conferences this fall, such as LiveTV:LA, were devoted to nothing but live TV.

What about Hitachi’s camera? Broadcast HD cameras typically use 2/3-inch-format image sensors, three of them attached to a color-separation prism. The optics of the lens mount for those cameras, called B4, are very well defined in standard BTA S-1005-A. It even specifies the different depths at which the three color images are to land, with the blue five microns behind the green and the red ten microns behind.

Most cameras said to be of “4K” resolution (twice the detail of 1080-line HD both horizontally and vertically) use a single image sensor, often of the Super 35 mm image format, with a patterned color filter atop the sensor. The typical lens mount is the PL format. That’s fine for single-camera shooting; there are many fine PL-mount lenses. But for sports, concerts, awards shows, and even ballet, opera, and theater, something else is required.

The intermediate-film-based live camera system at the 1936 Berlin Olympic Games was the size of a truck. Other, electronic video cameras were each called, in German, Fernsehkanone, literally television cannon. It’s not that they fired projectiles; it’s that they were the size and shape of cannons. The reason was the lenses required to get close-ups of the action from a distance far enough so as not to interfere with it. And what was true in the Olympic stadium in 1936 remains true in stadiums, arenas, and auditoriums today. Live, multi-camera shows, whether football or opera, are typically shot with long-range zoom lenses, perhaps 100:1.

Unfortunately, the longest-range zoom lens for a PL mount is a 20:1, and it was just introduced by Canon this fall; previously, 12:1 was the limit. And that’s why Gearhouse Broadcast placed the large order for Hitachi SK-UHD4000 cameras.

Hitachi Gearhouse

Those cameras use 2/3-inch-format image sensors and take B4-mount lenses, but they have a fourth image sensor, a second green one, offset by one-half pixel diagonally from the others, allowing 4K spatial detail to be extracted. Notice in the picture above, however, that although the camera is labeled “4K” the lens is merely “HD.” Below is a modulation-transfer-function (MTF) graph of a hypothetical HD lens. “Modulation,” in this case, means contrast, and the transfer function shows how much gets through the lens at different levels of detail.

lens MTF

Up to HD detail fineness, the lens MTF is quite good, transferring roughly 90% of the incoming contrast to the camera. But this hypothetical curve shows that at 4K detail fineness the lens transfers only about 40% of the contrast.
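For those who like to see the arithmetic, here is a minimal sketch (using the hypothetical 90% and 40% figures from the graph above, not measured lens data) of what those MTF percentages mean for the contrast actually delivered to the sensor:

```python
# A sketch of what an MTF figure means for delivered contrast, using the
# hypothetical values from the graph above (roughly 90% at HD detail,
# 40% at 4K detail). The MTF scales the modulation depth m of a test
# pattern, where m = (Imax - Imin) / (Imax + Imin).

def delivered_contrast_ratio(input_modulation, mtf):
    """Modulation after the lens, expressed as a contrast ratio."""
    m = input_modulation * mtf          # lens scales modulation depth
    return (1 + m) / (1 - m)            # contrast ratio, Imax / Imin

for label, mtf in [("HD detail", 0.9), ("4K detail", 0.4)]:
    print(label, round(delivered_contrast_ratio(1.0, mtf), 1))
# HD detail 19.0 -> a fully modulated pattern emerges at about 19:1
# 4K detail 2.3  -> the same pattern at 4K fineness is only about 2.3:1
```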

The first HD lenses had limited zoom ranges, too, so it’s certainly possible that affordable long-zoom-range lenses with high MTFs will arrive someday. In the meantime, PL-mount cameras recording files serve all of the motion-image industry — except for everything else.

 


All You Can See

July 7th, 2012 | 2 Comments | Posted in Schubin Cafe

The equipment exhibitions at the annual convention of the National Association of Broadcasters (NAB) often seem to have themes. Two years ago, it was stereoscopic 3D. Before that, it was DSLRs. Long before HDTV became common, it was a theme at NAB conventions. And there was at least one convention at which the theme seemed to be teletext. At the 2012 NAB show, a theme seemed to be 4K.

What is 4K? That’s a good question without a simple answer. Nominally, 4K denotes a moving-image system with 4096 active (image-carrying) picture elements (pixels) per row. At one time, it was considered to have 2048 active rows; now 2160 — twice HDTV’s 1080 — is more common. But, if twice HDTV is appropriate vertically, why not horizontally, too? Sure enough, some call 3840 pixels across the screen 4K (others call it Quad HD, because twice the number horizontally and vertically results in four times the number of pixels of 1080-line HDTV).
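The pixel arithmetic behind those names is easy to verify directly; here is a quick sketch using just the nominal counts from the paragraph above:

```python
# Quick arithmetic behind the "Quad HD" name: 3840 x 2160 is exactly
# twice 1920 x 1080 in each direction, hence four times the pixels;
# "true" 4096-across 4K adds slightly wider rows.

hd      = 1920 * 1080          # 2,073,600 pixels
quad_hd = 3840 * 2160          # 8,294,400 pixels
dci_4k  = 4096 * 2160          # 8,847,360 pixels

print(quad_hd / hd)            # 4.0   -> four times 1080-line HD
print(dci_4k / hd)             # ~4.27 -> a bit more than four times
```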

Then there is color. There have been 4K cameras using a beam-splitting prism (right, diagram by Colin M. L. Burnett, http://en.wikipedia.org/wiki/File:Dichroic-prism.png) and three image-sensor chips, just like a typical studio or truck camera. Other 4K cameras have single chips overlaid with color filters (one version, the Bayer pattern, is shown below). There have also been four-chip cameras, with HD-resolution chips and an additional green one offset diagonally by half a pixel. Conceivably, as was done in HD cameras, a 4K camera could also use three HD-resolution chips with the green offset from the red and blue.

Some say a color-filtered chip with at least 4096 (or 3840) photosites per row is 4K; others say it is not. Consider optical low-pass filtering. In a three-chip camera, the optical low-pass can be designed to match any of the chips. In a filtered single-chip (left, also from Burnett, http://en.wikipedia.org/wiki/File:Bayer_pattern_on_sensor.svg) or four-chip camera, should it be optimized for the individual photosites (the “luma” or uncolored resolution), the green ones (which occur more frequently), or the other colors (which have filters spaced twice as far apart as the photosites)?

Then there are those who think it’s not necessary to go all the way to 4K (e.g., the “3.5K” of the popular ARRI Alexa at right) and those who think 4K is insufficient (e.g., proponents of “8K”). Just counting photosites, there have been “4K” cameras with anything from roughly 8.3 to roughly 38.2 million, and there have been other beyond-HDTV-resolution cameras shown and discussed with as few as 3.3 million and as many as 100 million. There’s even a group working on camera systems with a thousand times more pixels than even that high end (100 gigapixels http://www.disp.duke.edu/projects/AWARE/index.ptml).

There are also ways of increasing resolution without changing the number of photosites on an image sensor. One is compressive sampling (described by Siegfried Foessel of Germany’s Fraunhofer Institut at the HPA Tech Retreat in February in a system that increases resolution by covering portions of sensor photosites). There are also various forms of “super-resolution” (one version, which can take advantage of aliases that slip through filters, is shown below, original at left, enhanced at right, in a portion of an image from the Almalence PhotoAcute Studio web site: http://photoacute.com/studio/examples/mac_hdd/index.html).

As I noted in a previous post (“Y4K?” http://www.schubincafe.com/2011/08/31/y4k/), there are benefits to using a beyond-HD-resolution camera even if the distribution will be only HD. These include the possibilities of reframing in post, image stabilization without loss of resolution, one form of stereoscopic 3D shooting, and the delivery of images with perceptually increased sharpness. They’re not just theoretical benefits. Zaxel, for example, announced on July 1 the delivery of their 720CUT, a system that allows a 720p high-definition window to be smoothly moved around a 4K moving image in real time.

Although such issues as cost and storage might still keep users away from higher-resolution cameras, they clearly seem like a good idea. But what about delivering more resolution (not just more sharpness) to the viewer? How many pixels are enough?

Unfortunately, there’s no simple answer. Look again at the pictures above. They could clearly benefit from more detail — even the one on the right. But what if the whole picture were of something the size of a building? In that case, when zooming in so close (the pictures show the label of a hard drive), even a 100-gigapixel image might be insufficient. One benefit of delivering 4K to a home viewer, therefore, is the ability to zoom in to any desired HD frame from the larger 4K frame, as shown in the inner rectangle in the example at left, with a trimmed original image from HighDefWallpapers.Info (http://www.highdefwallpapers.info/amazing-sea-resort-high-definition-wallpapers/ Added 2015 June 26: That link no longer seems to work. Here’s a link to an HD version of the image: http://www.coolwallpapers.org/photo/42925/amazing_sea_resort_high_definition_wallpapers.jpg). Systems for doing such extraction at home have been shown at NAB conventions for years.

How about complete images? Again, there’s no simple answer. At right is a diagram from ARRI’s “4K+ Systems Theory Basics for Motion Picture Imaging” (http://www.efilm.com/publish/2008/05/19/4K%20plus.pdf). Based on 20/20 (or 6/6) vision, it shows visual-acuity limitations for movie viewers in different seats. Even at the rear of this auditorium, a viewer with 20/20 vision could perceive more than 50% more detail than 1080-line HD can deliver in any direction. In the front of the main section of seating, such a viewer could perceive 8K resolution, and, in the very front row, far more than even that extraordinary resolution.

There are, however, some problems with the above. For one thing, almost no one has 20/20 vision. The extra lines at the bottom of an eye chart (left) below the red line indicate that many people have visual acuity far better than 20/20. But the seven lines above the 20/20 line indicate that other people have poorer visual acuity.

Then there is the number 20; 20/20 means that the viewer can see at 20 feet what a “standard” viewer can see at 20 feet (in 6/6, the numbers are in meters). But why specify 20 feet? It’s because at that distance eye-lens focus plays almost no role, and aging viewers can have trouble with eye-lens focus.

In a cinema auditorium, that’s not much of an issue; the screen is likely to be at least 20 feet away. At home-TV viewing distances, it is an issue. So is lighting. Movies are viewed in dark rooms; TV is often viewed with the light on. A simple formula for contrast divides the sum of desired and undesired light by the undesired light. Movie screens are typically much dimmer than TV screens, but cinema auditoriums are typically very much darker than TV-viewing rooms, so movies typically offer more contrast.
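As a rough illustration, here is that formula in a few lines of Python; the luminance figures are illustrative assumptions, not measurements of any particular screen or room:

```python
# A sketch of the simple contrast formula above:
# (desired light + undesired light) / undesired light.
# The luminance values below are illustrative assumptions.

def contrast_ratio(desired_nits, undesired_nits):
    return (desired_nits + undesired_nits) / undesired_nits

# Dim cinema screen (~48 nits) in a very dark room:
print(contrast_ratio(48, 0.05))    # ~961:1
# Much brighter TV (~300 nits) in a lit living room:
print(contrast_ratio(300, 5))      # 61:1 -- brighter screen, less contrast
```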

The image above is called a contrast-resolution grating. Contrast increases from bottom to top; detail resolution increases from left to right. You probably see undifferentiated gray at the bottom left and right corners, but between those corners, and above them, you can make out vertical lines. The reason you can make out the lines between the corners is that the human visual system has a contrast-sensitivity function with a peak. So perception of resolution depends on contrast. And that’s not all.

If there is an ideal resolution for viewing, it is based on a compromise: Too much, and the system becomes overly expensive; too little, and, aside from any possibility that the viewer might find the pictures insufficiently detailed, the structure of the display becomes visible, theoretically preventing the viewer from seeing the image due to its visible pixels — in effect, not being able to see the forest for the trees. At left and right above are two different pixel structures of two different display panels.  Do they offer equivalent structure visibility for the same resolution?

Suppose everyone’s visual acuity is 20/20, and eye-lens focus (accommodation), contrast, color, and pixel structure don’t matter. Then, with 20/20 defined as 30 cycles per degree, and assuming a white pixel and a black pixel constitute a cycle, as shown at right, it’s possible to use high-school trigonometry to calculate optimum viewing distances. For U.S. standard-definition television, which has about 480 active rows of pixels, that distance would be 7.15 times the height of the picture (7.15H); for 1080-line HDTV, it would be 3.16H; for 2160-line 4K, 1.54H; and for 4320-line 8K, 0.69H. With a lot of rounding (of the same sort that allows 7680-across to be called 8K), these have been called 7, 3, 1.5, and 0.75 times the picture height.
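Here is a sketch of that trigonometry, assuming only the stated 30-cycles-per-degree criterion; it reproduces the multiples above:

```python
# The "high-school trigonometry": assume 20/20 vision resolves
# 30 cycles/degree, i.e., 60 pixels/degree, so a picture N rows high
# should subtend N/60 degrees at the viewer's eye.

import math

def distance_in_picture_heights(active_rows, cycles_per_degree=30):
    angle_deg = active_rows / (2 * cycles_per_degree)   # angular height
    return 0.5 / math.tan(math.radians(angle_deg / 2))  # distance / height

for rows in (480, 1080, 2160, 4320):
    print(rows, round(distance_in_picture_heights(rows), 2))
# 480 7.15, 1080 3.16, 2160 1.54, 4320 0.69 -- matching the figures above
```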

The “9 feet” in the image above happens to be the result of the calculation for an old 25-inch 4×3-shaped TV set, but it has another significance. It is the Lechner Distance. Named for then-RCA Laboratories researcher Bernard Lechner, it is the result of a survey conducted to see how far people sit from their TV screens.  Richard Jackson, a researcher at Philips Laboratories in Redhill, England, conducted his own survey and came up with a similar 3 meters. The distance is determined by room sizes and furniture.  It is not affected by screen sizes or resolutions, although flat-panel TV sets, lacking the depth required by a long-necked picture tube, would, in theory at least, increase the distance somewhat.

At right is a portion of Figure 3 of the paper “‘Super Hi-Vision’ Video Parameters for Next-Generation Television,” by Takayuki Yamashita, Kenichiro Masaoka, Kohei Ohmura, Masaki Emoto, Yukihiro Nishida, and Masayuki Sugawara of the NHK Science and Technology Research Laboratories. It shows that a viewer’s “sense of being there” increases as the viewing distance decreases, as might be expected; as the screen occupies more of the visual field, the viewer gets enveloped in the image. It also shows that “sense of realness” increases with greater viewing distance. That’s also as might be expected; from the top of a skyscraper, a viewer can’t tell the difference between a mannequin (fake) and a person (real) at street level.

Super Hi-Vision is being shown to the public at the 2012 Olympic Games in special, giant-screen viewing rooms, as has been the case when it was exhibited at such broadcast exhibitions as NAB and the International Broadcasting Convention. Viewers can see HD detail from just the segment of screen in front of them and glance elsewhere to see more HD-equivalent images forming the whole. I wrote previously of a system Canon has demonstrated with even more resolution (http://www.schubincafe.com/2010/09/07/whats-next/). In those special viewing venues, it’s easy to achieve a viewing distance of 0.75H; at home, at the Lechner distance, it would require a TV image 12 feet high.

At the same London Games, however, the official host broadcaster is using the DVCPROHD codec, which reduces 1920-pixel-across 1080-line HDTV resolution by a substantial amount. HDCAM does something similar. Both have been acceptable because, even though they greatly reduce resolution, they retain most of the image sharpness: they preserve most of the area under the modulation-transfer-function curve shown at right.

Perhaps it would be better to say that DVCPROHD and HDCAM have been acceptable. Today, some viewers seem willing to comment on the difference between the reduced resolution of those systems and “full HD.” That might be because some forms of perception are learned.

After Thomas Edison switched from phonograph cylinders to disks, he came up with a plan to demonstrate their quality.  He presented a series of “tone tests.” In small venues, as shown at left, listeners would be blindfolded. At larger ones, the lights would go out. In either case, the audience had to decide whether they’d heard the live singer or a pre-electronic phonograph disk.

These comments from a Pittsburgh Post reporter in 1919 were typical: “It did not seem difficult to determine in the dark when the singer sang and when she did not. The writer himself was pretty sure about it until the lights were turned on again and it was discovered that [the singer] was not on the stage at all and that the new Edison alone had been heard.” Today, we scoff at the idea that audiences couldn’t hear differences between those forms of sounds, but we’ve had years of high fidelity to let us know what sounds bad.

As with hearing, so, too, with vision. At right is the apparatus used in an old experiment conducted to see whether animals would cross a visual gap. When the gap was covered with a visually transparent material, they would not. When the transparent material was covered with visible stripes, they would. But animals raised from birth in an environment devoid of lines oriented in a particular direction treated stripes oriented that way on the transparent material as though they weren’t there and wouldn’t cross.

So, can viewers actually avail themselves of beyond-HD resolution at home? If they’d simply sit closer to their screens, the answer would be a definite yes.  If they continue to sit at the Lechner Distance, the answer is less obvious. On April 28, reporting on an 8K 145-inch television screen, PC World used the headline “Panasonic’s Newest TV Prototype Is Too Big for Your Living Room” <http://www.pcworld.com/article/254649/panasonics_newest_tv_prototype_is_too_big_for_your_living_room.html>.

Possibilities? Maybe we’ll sit closer. Maybe we’ll learn to see with greater acuity (NHK’s Super Hi-Vision research showed subjects already able to perceive differences in “realness” in detail more than five times finer than the 20/20 criterion). Maybe we’ll use virtual viewing systems unrestricted by rooms and furniture. Or maybe not.

Meanwhile, a little skepticism probably couldn’t hurt. Things aren’t always as they seem.

In a 1972 interview, Anna Case (left), one of the opera singers used in the Edison tone tests, admitted that she’d trained herself to sound like a phonograph recording. Oh, well.


Y4K?

August 31st, 2011 | No Comments | Posted in 3D Courses, Schubin Cafe

 

What should come after HDTV? There’s certainly a lot of buzz about 3D TV. Such directors as James Cameron and Douglas Trumbull are pushing for higher frame rates. Several manufacturers have introduced TVs with a 21:9 (“CinemaScope”) aspect ratio instead of HDTV’s 16:9. Some think we should increase dynamic range (the range from dark to light). Some think it should be a greater range of colors. Japan’s Super Hi-Vision offers 22.2-channel surround sound. And then there’s 4K.

In simple terms, 4K has approximately twice as much detail as HDTV in both the horizontal and vertical directions. If the orange rectangle above is HDTV, the blue one is roughly 4K. It’s called 4K because there are 4096 picture elements (pixels) per line.

This post will not get much more involved with what 4K is. The definition of 4096 pixels per line says nothing about capture or display.  Even at lower resolutions, some cameras use a complete image sensor for each primary color; others use some sort of color filtering on a single image sensor. At left is Colin Burnett’s depiction of the popular Bayer filter design. Clearly, if such a filtered image sensor were shooting another Bayer filter offset by one color element, the result would be nothing like the original.
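To make the sampling pattern concrete, here is a minimal sketch of Bayer mosaicking (an illustration of the pattern itself, not any camera's actual pipeline):

```python
# A minimal sketch of Bayer sampling: each photosite keeps only one
# color channel, in the R-G / G-B arrangement of Burnett's diagram, so
# two-thirds of the color data must later be interpolated back by a
# "demosaicking" algorithm.

import numpy as np

def bayer_mosaic(rgb):
    """rgb: (H, W, 3) array -> (H, W) single-channel Bayer samples."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red photosites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green photosites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green photosites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue photosites
    return mosaic

frame = np.random.rand(4, 4, 3)     # stand-in for a real image
print(bayer_mosaic(frame))          # half of the photosites are green
```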

Optical filtering and “demosaicking” algorithms can reduce color problems, but the filtering also reduces resolution. Some say a single color-filtered image sensor with 4096 pixels per line is 4K; others say it isn’t. That’s an argument for a different post.  This one is about why 4K might be considered useful.

An obvious answer is for more detail resolution. But maybe that’s not quite as obvious as it seems at first glance. The history of video technology certainly shows ever-increasing resolutions, from eight scanning lines per frame in the 1920s to HDTV’s….

As can be seen above, in 1935, a British Parliamentary Report declared that HDTV should have no fewer than 240 lines per frame. Today’s HDTV has 720 or 1080 “active” (picture-carrying) lines per frame, and 4K has a nominal 2160, but even ordinary 525-line (~480 active) TV was considered HDTV when it was first introduced.

Human visual acuity is often measured with a common Snellen eye chart, as shown at left above. On the line for “normal” vision (20/20 in the U.S., 6/6 in other parts of the world), each portion of the “optotype” character occupies one arcminute (1′, a sixtieth of a degree) of retinal angle, so there are 30 “cycles” of black and white lines per degree.

Bernard Lechner, a researcher at RCA Laboratories at the time, studied television viewing distances in the U.S. and determined they were about nine feet (Richard Jackson, a researcher at Philips Laboratories in the UK at the same time, came up with a similar three meters). As shown above, a 25-inch 4:3 TV screen provides just about a perfect match to “normal” vision’s 30 cycles per degree when “525-line” television is viewed at the Lechner Distance — roughly seven times the picture height.

HDTV should, under the same theory, be viewed from a smaller multiple of the screen height (h). For 1080 active lines, it should be 7.15 x 480/1080, or about 3.2h. Looked at another way, at a nine-foot viewing distance, the height should be about 34 inches, which works out to a width of about 60 inches and a diagonal of about 70, and, indeed, 60-inch (and larger) HDTV screens are not uncommon (and so are closer viewing distances).

For 4K (again, using the same theory), it should be a screen height of about 68 inches. Add a few inches for a screen bezel and stand, and mount it on a table, and suddenly the viewer needs a minimum ceiling height of nine feet!
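Here is a sketch of that screen-height arithmetic at the nine-foot (108-inch) Lechner Distance, scaling the 7.15h standard-definition multiple by 480/N active lines as the text does:

```python
# Screen heights at the nine-foot Lechner Distance, scaling the SD
# viewing-distance multiple of 7.15h by 480/N active lines.

distance_in = 9 * 12                      # 108 inches

for label, rows in [("SD", 480), ("HD", 1080), ("4K", 2160)]:
    multiple = 7.15 * 480 / rows          # viewing distance in heights
    print(label, round(distance_in / multiple), "inches high")
# SD 15 (the old 25-inch 4:3 set), HD 34, 4K 68 -- as in the text
```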

Of course, cinema auditoriums don’t have domestic ceiling heights. Above is an elevation of a typical old-style auditorium, courtesy of Warner Bros. Technical Operations. The scale is in picture heights. Back near the projection booth, standard-definition resolution seems adequate. Even in the fifth row, HD resolution seems adequate. Below, however, is a modern, stadium-seating cinema auditorium (courtesy of the same source).

This time, even a viewer with “normal” vision in the last row could see greater-than-HD detail, and 4K could well serve most of the auditorium. That’s one reason why there’s interest in 4K for cinema distribution.

Another is questions about that theory of “normal” vision. First of all, there are lines on the Snellen eye chart (which dates back to 1862) below the “normal” line, meaning some viewers can see more resolution.

Then there are the sharp lines of the optotypes. A wave cycle would have gently shaded transitions between white and black, which might make the optotype more difficult to identify on an eye chart. Adding in higher frequencies, as shown below, makes the edges sharper, and 4K offers higher frequencies than does HD.
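The edge-sharpening effect is easy to demonstrate with the Fourier series of a square wave; here is a small sketch (illustrative, with arbitrary sample points):

```python
# Summing odd harmonics of a sine (the Fourier series of a square wave)
# flattens the top and steepens the transitions. The harmonics beyond
# HD's limit are exactly the ones 4K can carry and HD cannot.

import numpy as np

x = np.linspace(0, 2 * np.pi, 9)   # a few samples across one cycle

def partial_square_wave(x, n_harmonics):
    """Sum of the first n odd harmonics: sin(kx)/k for k = 1, 3, 5..."""
    k = np.arange(1, 2 * n_harmonics, 2)
    return np.sum(np.sin(np.outer(k, x)) / k[:, None], axis=0)

print(np.round(partial_square_wave(x, 1), 2))   # gentle sine transition
print(np.round(partial_square_wave(x, 4), 2))   # flatter top, sharper edges
```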

Then there’s sharpness, which is different from resolution. Words that end in -ness (brightness, loudness, sharpness, etc.) tend to be human psychophysical sensations (psychological responses to physical stimuli) rather than simple machine-measurable characteristics (luminance, sound level, resolution, contrast, etc.). Another RCA Labs researcher, Otto Schade, showed that sharpness is proportional to the square of the area under a modulation-transfer function (MTF) curve, a curve plotting contrast ratio against resolution.

One of the factors affecting an MTF curve is the filtering inherent in sampling, as is done in imaging. An ideal filter might use a sine of x divided by x function, also called a SINC function. Above is a SINC function for an arbitrary image sensor and its filters. It might be called a 2K sensor, but the contrast ratio at 2K is zero, as shown by the red arrow at the left.

Above is the same SINC function. All that has changed is a doubling of the number of pixels (in each direction). Now the contrast ratio at 2K is 64%, a dramatic increase (again, as shown by the red arrow at the left). Of course, if the original sensor offered 64% at 2K, the improvement offered by 4K would be much less dramatic, a reason why the question of what 4K is is not trivial.
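Those two red-arrow values can be checked in a few lines, modeling the filter as a normalized SINC with its first null at the sensor's pixel count (a simplification of any real filter chain):

```python
# Checking the two red arrows: model the sensor filtering as
# MTF(f) = |sinc(f / f0)|, where f0 is the frequency of the first null.
# With the null at 2K, the response at 2K is zero; doubling the pixel
# count moves the null to 4K, and the response at 2K becomes
# sin(pi/2) / (pi/2) = 2/pi, about 64%.

import numpy as np

def sinc_mtf(f, f_null):
    return abs(np.sinc(f / f_null))         # np.sinc(x) = sin(pi x)/(pi x)

print(round(sinc_mtf(2.0, f_null=2.0), 2))  # 0.0  -- "2K" sensor at 2K detail
print(round(sinc_mtf(2.0, f_null=4.0), 2))  # 0.64 -- doubled sensor at 2K
```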

Then there’s 3D.  Some of the issues associated with 3D shooting relate to the use of two cameras with different image sensors and processing. One camera might deliver different gray scale, color, or even geometry from the other.

Above is an alternative, two HD images (one for each eye’s view) on a single 4K image sensor. A Zepar stereoscopic lens system on a Vision Research Phantom 65 camera serves that purpose. It’s even available for rent.

There are other reasons one might want to shoot HD-sized images on a 4K sensor. One is image stabilization. The solid orange rectangle above represents an HD image that has been jiggled out of its appropriate position (the lighter orange rectangle with the dotted border behind it). There are many image-stabilization systems available that can straighten out a subject in the center, but they do so by trimming away what doesn’t fit, resulting in the smaller, green rectangle. If a 4K sensor is used, however, the complete image can be stabilized.

It’s not just stabilization. An HD-sized image shot on a 4K sensor can be reframed in post production. The image can be moved left or right, up or down, rotated, or even zoomed out.
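Here is a minimal sketch of the idea (hypothetical array sizes, not any product's algorithm): an HD window cut from a larger 4K frame can be slid around at full resolution, with no upscaling:

```python
# Post reframing sketch: shooting on a 4K-sized sensor leaves margin
# around an HD window, so the window can be slid to restabilize or
# reframe without scaling the image.

import numpy as np

sensor = np.zeros((2160, 4096, 3), dtype=np.uint8)   # stand-in 4K frame

def extract_hd_window(frame, top, left):
    """Cut a full-resolution 1920x1080 window out of the larger frame."""
    assert 0 <= top <= frame.shape[0] - 1080
    assert 0 <= left <= frame.shape[1] - 1920
    return frame[top:top + 1080, left:left + 1920]

centered = extract_hd_window(sensor, 540, 1088)      # centered framing
shifted  = extract_hd_window(sensor, 500, 1200)      # reframed/stabilized
print(shifted.shape)                                 # (1080, 1920, 3)
```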

So 4K offers much even to people not intending to display 4K. But it comes at a cost. Cameras and displays for 4K are more expensive, and an uncompressed 4K signal has more than four times as much data as HD. If the 1080p60 (1080 active lines, progressively scanned, at roughly 60 frames per second) version of HD uses 3G (three-gigabit-per-second) connections, 4K might require four of those.
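The back-of-envelope arithmetic, assuming 10-bit 4:2:2 sampling (20 bits per active pixel) and ignoring blanking, audio, and other overhead:

```python
# Rough uncompressed data rates behind the "four 3G links" arithmetic.
# Assumes 10-bit 4:2:2 sampling (20 bits per pixel) and counts only
# active pixels, ignoring blanking and ancillary data.

def active_rate_gbps(width, height, fps, bits_per_pixel=20):
    return width * height * fps * bits_per_pixel / 1e9

hd_rate  = active_rate_gbps(1920, 1080, 60)   # ~2.5 Gb/s -> one 3G link
uhd_rate = active_rate_gbps(3840, 2160, 60)   # ~10 Gb/s  -> four 3G links
print(round(hd_rate, 1), round(uhd_rate, 1))
```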

When getting 4K to cinemas or homes, however, compression is likely to be used, and, as can be seen by the MTF curves, the highest-resolution portion of the image has the least contrast ratio. It has been suggested that, in real-world images, it might take as little as an extra 5% of data rate to encode the extra detail of 4K over HD.

So, is 4K the future? The aforementioned Super Hi-Vision is already effectively 8K, and it’s scheduled to be used in next year’s Olympic Games.


Angry About Contrast

September 11th, 2009 | No Comments | Posted in Schubin Cafe
"Angry Man/Neutral Woman" copyright 1990 Aude Oliva, MIT, and Philippe Schyns, University of Glasgow

"Angry Man/Neutral Woman," copyright 1997, Aude Oliva, MIT, and Philippe G. Schyns, University of Glasgow

If you are looking at the above picture on a nominally sized screen at a nominal viewing distance, you probably see an angry man on the left.  What’s an “angry man”?  Me, when I think about technical descriptions of HDTV.

Think about it.  Maybe you hear HDTV described as being 1080i or 720p.  Maybe it’s 1920 x 1080 or 1280 x 720.   Maybe it’s 2 megapixels or 1.  An engineer who remembers such things as analog bandwidths might refer to 30 MHz or 37 MHz.  Someone concerned with lenses might talk about 100 line-pairs per millimeter.  Someone describing visual acuity, screen sizes, and viewing distances might offer 30 cycles per degree.

Someday, I’ll probably get around to explaining how all of those are related and how many of them are pretty much the same thing.  But, when it comes to the sharpness perceived by viewers, they’re all pretty bogus because they’re all missing something of vital importance.

Of course, that isn’t the only silly spec.  Look at “sensitivity,” or, one of my all-time favorites, “minimum sensitivity.”  I just went to a web site of someone called an “expert” and found a sensitivity figure of 1 lux.
