
The ‘Look’ of HFR (HPA, Feb. 18, 2013)

February 19th, 2013 | No Comments | Posted in Download, Today's Special

The ‘Look’ of HFR
(Supplement to the 2013 HPA Charles Poynton Seminar)

HPA, February 18, 2013

Video (12:20 TRT)


IBC-ing the Future

September 23rd, 2012 | 1 Comment | Posted in 3D Courses, Schubin Cafe

This is what happened on Monday night, September 10, at the International Broadcasting Convention (IBC) in Amsterdam: A crowd crammed into the large (1750-seat) auditorium to see the future–well, a future. They saw Hugo in stereoscopic 3D.

The movie, itself, is hardly futuristic. It was released in 2011, and it takes place almost a century ago.

So was it, perhaps, astoundingly, glasses-free? No. And it wasn’t the first 3D movie screened at IBC. It wasn’t even the first of IBC 2012; Prometheus was shown two nights earlier. But it was a special event. According to one participant who had previously seen Hugo stereoscopically, “It was awesome–like a different movie!”

The big deal? The perceived screen brightness was that of a well-projected 2D movie, perhaps four to five times greater than that of typical stereoscopic 3D movie projection.

It was said to be the world’s first laser-projected screening of a full-length movie (and in stereoscopic 3D), and it used an astoundingly bright, 63,000-lumen Christie Digital projector. Above left is a picture of Christie’s Dr. Don Shaw discussing it before the screening. You can read more about it in Christie’s press release here: http://www.christiedigital.com/en-us/news-room/press-releases/Pages/Worlds-First-Laser-Projected-Screening-of-Full-Length-Movie-Debuts-With-Christie-Laser-Projector.aspx

Can you buy that projector? Not today and maybe never. That’s why the audience saw only a possible future, but IBC has a pretty terrific track record of predicting the future. Today, for example, television is digital–whether via broadcast, cable, satellite, internet, or physical media–and virtual sets and virtual graphics are common; both digital TV and virtual video could be found at IBC in 1990, part of which is shown below.

As can be seen from Sony’s giant white “beach cabana” above, IBC had outgrown the convention facilities in Brighton, England, where it was located that year. The following convention (in 1992, because it was held every two years at the time) moved to Amsterdam’s RAI convention center, which has been adding new exhibit halls seemingly each year to try to keep up (there are now 14).

After the move, IBC became an annual event, show-goers could relax on a sand beach (left) and eat raw herring served by people in klederdracht (right), and finding the futures became easier. They were stuck into a Future Zone.

It’s not that everything new was put into the Future Zone. At IBC 2011, in a regular exhibit hall, Sony introduced its HDC-2500, one of the most-advanced CCD cameras ever made; at IBC 2012, Grass Valley introduced the LDX series, based on their CMOS Xensium sensor, perhaps one of the most-advanced lines of CMOS cameras ever made. And they’re supposed to be upgradable–someday–to the high-dynamic-range mode I showed in my coverage of IBC 2009 here: http://www.schubincafe.com/2009/09/20/walkin-in-a-camera-wonderland/

ARRI makes cameras that, in theory at least, compete with those Grass Valley and Sony ones. ARRI also makes lighting instruments. The future of lighting (and much of the present) seems to be LED-based. But ARRI demonstrated in its booth how some LED lights can produce wildly different-looking colors on different cameras, including (labeled by brand) Grass Valley and Sony. Their point was that ARRI’s new L7-T (tungsten color-temperature) LED lighting (left) looks pretty much the same on those different cameras.

Grass Valley’s LDX line drew crowds, but whether they came for the engineering of the cameras or to look at the leggy model in hot pants shouldering one was not entirely clear. IBC 2012 set a new official attendance record, but, with 14 exhibit halls (well, 13 plus one devoted to meeting rooms), not every exhibitor had crowds all the time, even when crowds were deserved. Consider Silentair’s mobile unit (right). It’s rated at a noise level of NR15 (comparable to the U.S. NC15). According to EngineeringToolBox.com, concert halls and recording studios often use the much looser NR25 rating. The unit comes in multiple colors and can be installed in about half an hour by unskilled labor.

Where might it be used? Perhaps in or near Dreamtek’s Broadcastpod (left). About the size of an old amusement-park photo booth (near left), it comes complete with HD camera, prompter glass (far left), media management, and behind-the-talent graphics. It’s also well lit, acoustically treated and ventilated.

There were many other delights in the exhibit halls, from a Boeing 737 simulator with real-time, near-photorealistic graphics 17,920 pixels wide from Professional Show (right) to theatrically presented 810-frame-per-second (fps) 4K images shot by the For-A FT-One. There were Geniatech’s USB-stick pay-TV tuner with replaceable security card (left) and the Fraunhofer Institut’s work on computational photography (like using the masked-pixels resolution-expansion system described at this year’s HPA Tech Retreat to increase dynamic range, too) and light-field capture for motion video.

The largest concentration of light-field equipment at the show could be found at the European Union’s 3D VIVANT project booth in the Future Zone. Hungary’s Holografika showed its Holovizio light-field display. In perhaps their most-amazing demo, they depicted five playing cards, edge-on to the screen. From the front, they looked like five vertical lines, but the photos above, shot at their IBC location (complete with extraneous reflections), show (not quite as well as being there) what they looked like from left or right.

One could walk from one edge of the display to the other, and the view was always appropriate to the angle and offered 1280 x 768 resolution to each eye at each location. Unfortunately, that meant that the whole display was close to 80 megapixels, and no camera system at IBC could provide matching pictures.

The top award-winning IBC conference paper was “Fully Automatic Conversion of Stereo to Multiview for Autostereoscopic Displays” by Christian Riechert and four other authors from Fraunhofer’s image processing department.  The process is shown at right.

Holografika showed some upconversions from stereoscopic views, but those didn’t fully utilize the capability of their display. In fact, none of the autostereoscopic displays at IBC could (in my opinion) match the glasses-required versions. One of the best-looking of the latter was a Sony 4K LCD using alternate-line polarization; with passive glasses, it offered 3840 x 1080 simultaneously to each eye.

Right behind Holografika in the 3D VIVANT booth, however, was Brunel University, and they had some camera systems that might, someday, properly stimulate something like the Holovizio display. At left is one of their holoscopic lens adaptors on an ARRI Alexa camera. The long tube is just for relaying the image, and, by the end of the show, they added a small, easily hand-holdable prototype without the relay tubes. The Brunel University area also featured a crude-resolution glasses-free display made from an LCD computer monitor and not much thicker than the unmodified original.

Across the aisle from the 3D VIVANT booth, DeCS Media was showing another way to capture 3D for autostereoscopic display with a single lens–that is, a single image-capturing lens (any lens on any camera) and a DeCS Media module to capture depth information (as shown at right). Even Fraunhofer’s Christian Riechert, in the Q&A session following the presentation of the award-winning paper, pointed out that, if separate depth information is available, the process of multi-view generation is simplified. DeCS Media says their process works live (though disocclusion would require additional processing).

There was something else of interest  in the 3D VIVANT booth: the Institut für Rundfunktechnik’s MARVIN (Microphone Array for Realtime and Versatile INterpolation), a ball (left), about the size of a soccer ball, containing microphones that capture sound in 3D and can be configured in many different ways. The IRT demoed MARVIN with position-sensing headphones; as the listener moved, the sound vectors changed appropriately.

Looking even more like a soccer ball was Panospective’s ball camera (shown at right in an image by Jonas Pfeil, http://jonaspfeil.de/ballcamera). It can be thrown (and, perhaps, even kicked), and, when it reaches its maximum height, its multiple cameras (36 of them!) capture images covering 360 degrees spherically. Viewers holding a tablet can see any part of the image, seamlessly, by moving the tablet around.

The Panospective ball’s images might be spatial, but they are neither stereoscopic nor light field. The same might be said of Mediapro Research’s Project FINE demonstrations. Using a few cameras–not necessarily shooting from every direction–they can reconstruct the space in which an event is captured and place virtual cameras anywhere within it (even “shooting” straight down from a non-existent aircraft). In just the few months since their demo at the NAB convention, they seem to have advanced considerably.

Another stereoscopic-3D revelation in the Future Zone related to lip-sync. It was printed on a couple of posters from St. Petersburg State University of Film and Television in Russia. The researchers, A. Fedina, E. Grinenko, K. Glasman, and E. Zakharova, shot a typical news-anchor setup in 3D and then tested sensitivity to lip sync in both 2D and shutter-glasses stereoscopic 3D. One poster is shown at left. Their experimental results show that 3D viewers are almost twice as sensitive as 2D viewers to audio-leading-video lip-sync error (27 ms vs. 50).

The IBC 2012 Future Zone was by no means limited to 3D. Other posters covered such topics as integrating social media into media asset management and using crowdsourcing to add metadata to archives.

Social media and crowdsourcing suggest personal computers and hand-held devices–the legendary second and third screens. But viewers appear to over-report new-media use and under-report plain, non-DVR, television viewing. How can we know what viewers actually do?

One exhibitor at the IBC 2012 Future Zone was Actual Customer Behaviour. With permission, they spy on actual viewers as they actually use various screens. Then experts in advertising, anthropology, behavior, ethnography, marketing, and psychology figure out what’s going on, including engagement. Their 1-3-9 Media Lab, for example, is named for the nominal viewing distances (in feet) of handheld devices, computer screens, and TV screens. But lab head Sarah Pearson notes that TV viewing distance can vary significantly just from leaning back when relaxing or leaning in with excitement.

There were other technology demonstrations. Japan’s National Institute of Information and Communications Technology, which, in the past, has shown such amazing technologies as holographic video and tactile feedback (with aroma!), had a possibly more practical but no less amazing compact free-space optical link with autotracking and a current capacity of 1.28 terabits per second (enough to carry more than 860 uncompressed HD-SDI channels).
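As a rough check of that channel count, here is a back-of-the-envelope sketch assuming the nominal 1.485 Gb/s rate of a single HD-SDI link:

```python
# Rough check of the free-space optical link's HD-SDI capacity.
# Assumes the nominal 1.485 Gb/s HD-SDI interface rate.
link_capacity_bps = 1.28e12      # 1.28 terabits per second
hd_sdi_bps = 1.485e9             # one uncompressed HD-SDI channel

channels = link_capacity_bps / hd_sdi_bps
print(f"{channels:.0f} HD-SDI channels")   # about 862, i.e. "more than 860"
```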

There were still more Future Zone exhibitors, such as the BBC, Korea’s Electronics and Telecommunications Research Institute, and Nippon Telegraph and Telephone. And, outside the Future Zone, one could find such exhibitors as the European SAVAS project for live, automated subtitling. Then there was NHK, the Japan Broadcasting Corporation, whose Science and Technology Research Laboratories (STRL) won IBC’s highest award this year, the International Honour for Excellence. NHK’s STRL is where modern HDTV originated and where its possible replacement, ultra HDTV, with 16 times as many pixels as normal 1920 x 1080 HDTV, is still being perfected.

Part of NHK’s exhibit at the IBC 2012 Future Zone was two 85-inch UHDTV LCD screens showing material shot at the Olympic Games in London this summer. NHK has previously shown UHDTV via projection on a large screen. The 1-3-9 powers-of-three viewing-distance progression might continue to 27 feet for a multiplex cinema screen and 81 feet for IMAX, but NHK’s Super Hi-Vision (their term for UHDTV) was always viewed from closer distances. The 85-inch direct-view screens were attractive in a literal sense. They attracted viewers to get closer and closer to the screens to see fine detail.

Another NHK Super Hi-Vision (SHV) demo involved shooting and displaying at 120 frames per second (fps) instead of 60. At far right above is the camera used. Just above the lens is a display showing 120-fps images and to its left one showing 60-fps. The difference in sharpness was dramatic. But to the right of the 120-fps images and to the left of the 60-fps were static portions of the image, and they looked sharper than either moving version. At the left in the picture above is the moving belt the SHV camera was shooting, and it looked sharper than even the 120-fps images.

So maybe 120-fps isn’t the limit. Maybe it should be more like 300-fps. Might that appear at some future IBC? Actually, it was described (and demonstrated) at IBC 2008: http://www.bbc.co.uk/rd/pubs/whp/whp169.shtml

 


The Blind Leading

December 10th, 2011 | No Comments | Posted in Schubin Cafe

Once upon a time, people were prevented from getting married, in some jurisdictions, based on the shade of their skin colors. Once upon a time, a higher-definition image required more pixels on the image sensor and higher-quality optics.

Actually, we still seem to be living in the era indicated by the second sentence above. At the 2012 Hollywood Post Alliance (HPA) Tech Retreat, to be held February 14-17 (with a pre-retreat seminar on “The Physics of Image Displays” on the 13th) at the Hyatt Grand Champions in Indian Wells, California <http://bit.ly/slPf9v>, one of the earliest panels in the main program will be about 4K cameras, and representatives from ARRI, Canon, JVC, Red, Sony, and Vision Research will all talk about cameras with far more pixel sites on their image sensors than there are in typical HDTV cameras; Sony’s, shown at the left, has roughly ten times as many.

That’s by no means the limit. The prototype ultra-high-definition television (UHDTV) camera shown at the right has three image sensors (from Forza Silicon), each one of which has about 65% more pixel sites than on Sony’s sensor. There is so much information being gathered that each sensor chip requires a 720-pin connection (and Sony’s image sensor is intended for use in just a single-sensor camera, so there are actually about five times more pixel sites). But even that isn’t the limit! As I pointed out last year, Canon has already demonstrated a huge hyper-definition image sensor, with four times the number of pixels of even those Forza image sensors used in the camera at the right <http://www.schubincafe.com/2010/09/07/whats-next/>!

Having entered the video business at a time when picture editing was done with razor blades, iron-filing solutions to make tape tracks visible, and microscopes, and when video projectors utilized oil reservoirs and vacuum pumps, I’ve always had a fondness for the physical characteristics of equipment. Sensors will continue to increase in resolution, and I love that work. At the same time, I recognize some of the problems of an inexorable path towards higher definition.

The standard-definition camera that your computer or smart phone uses for video conferencing might have an image sensor with a resolution characterized as 640×480 or 0.3 Mpel (megapixels), even if that same smart phone has a much-higher-resolution image sensor pointing the other way for still pictures. That’s because video must make use of continually changing information. At 60 frames per second, that 0.3 Mpel camera delivers more pixels in one second than an 18 Mpel sensor shooting a still image.

Common 1080-line HDTV has about 2 Mpels. So called “4K” has about 8 Mpels. It’s already tough to get a great HDTV lens; how will we deal with UHDTV’s 33-Mpel “8K”?

A frame rate of 60-fps delivers twice as much information as 30-fps; 120-fps is twice as much as 60-fps. How will we ever manage to process high-frame-rate UHDTV?
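For a sense of the scale behind those last three paragraphs, here is a small back-of-the-envelope calculation (pixel counts are nominal, and “4K” is taken as 4096 x 2160):

```python
# Back-of-the-envelope pixel rates for various formats and frame rates.
formats = {
    "0.3 Mpel videoconference (640x480)": 640 * 480,
    "HDTV (1920x1080, ~2 Mpel)":          1920 * 1080,
    "4K (4096x2160, ~8 Mpel)":            4096 * 2160,
    "8K UHDTV (7680x4320, ~33 Mpel)":     7680 * 4320,
}

# The videoconference camera already outruns an 18 Mpel still in one second:
print(f"640x480 at 60 fps = {640 * 480 * 60 / 1e6:.1f} Mpel/s (vs. 18 Mpel for one still)")

# And every doubling of frame rate doubles the information again:
for name, pixels in formats.items():
    rates = ", ".join(f"{pixels * fps / 1e6:,.0f} Mpel/s @ {fps} fps"
                      for fps in (30, 60, 120))
    print(f"{name}: {rates}")
```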

Perhaps it’s worth consulting the academies. In U.S. entertainment media, the highest awards are granted by the Academy of Motion Picture Arts & Sciences (the Academy Award or Oscar), the Academies (there are two) of Television Arts & Sciences (the Emmy Award), and the Recording Academy (the Grammy Award). Win all three, and you are entitled to go on an EGO (Emmy-Grammy-Oscar) trip!

In the history of those awards, only 33 people have ever achieved an EGO trip. And only two of those also won awards from the Audio Engineering Society (AES), the Institute of Electrical and Electronics Engineers (IEEE), and the Society of Motion Picture and Television Engineers (SMPTE). You’re probably familiar with the last name of at least one of those two, Ray Dolby, shown at left during his induction into the National Inventors Hall of Fame in 2004.

The other was Thomas Stockham. Some in the audio community might recognize his name.  He was at one time president of the AES, is credited with creating the first digital-audio recording company (Soundstream), and was one of the investigators of the 18½-minute gap in then-President Richard Nixon’s White House tapes regarding the Watergate break-in.

Those achievements appeal to my sense of appreciation of physical characteristics. The Soundstream recorder (right) was large and had many moving parts. And the famous “stretch” of Nixon’s secretary Rose Mary Woods (left), which would have been required to accidentally cause the gap in the recording, is a posture worthy of an advanced yogi (Stockham’s investigative group, unfortunately for that theory, found that there were multiple separate instances of erasure, which could not have been caused by any stretch). But what impressed (and still impresses) me most about Stockham’s work has no physical characteristics at all.  It’s pure mathematics.

On the last day of the HPA Tech Retreat, as on the first day, there will be a presentation on high-resolution imaging. But it will have a very different point of view. Siegfried Foessel of Germany’s Fraunhofer research institute will describe “Increasing Resolution by Covering the Image Sensor.” The idea is that, instead of using a higher-resolution sensor, which increases data-readout rates, it’s actually possible to use a much-lower-resolution image sensor, with the pixel sites covered in a strange pattern (a portion of which is shown at the right). Mathematical processing then yields a much-higher-resolution image — without increasing the information rate leaving the sensor.

In the HPA Tech Retreat demo room, there should be multiple demonstrations of the power of mathematical processing. Cube Vision and Image Essence, for example, are expected to be demonstrating ways of increasing apparent sharpness without even needing to place a mask over the sensor. Lightcraft Technology will show photorealistic scenes that never even existed except in a computer. And those are said to have gigapixel (thousand-megapixel) resolutions!

All of that mathematical processing, to the best of my knowledge, had no direct link to Stockham, but he did a lot of mathematical processing, too. In the realm of audio, his most famous effort was probably the removal of the recording artifacts of the acoustical horn into which the famous opera tenor Enrico Caruso sang in the era before microphone-based recording (shown at left in a drawing by the singer, himself).

As Caruso sang, the sound of his voice was convolved with the characteristics of the acoustic horn that funneled the sound to the recording mechanism. Recovering the original sound for the 1976 commercial release Caruso: A Legendary Performer required deconvolving the horn’s acoustic characteristics from the singer’s voice.  That’s tough enough even if you know everything there is to know about the horn. But Stockham didn’t, so he had to use “blind” deconvolution. It wasn’t the first time.

He was co-author of an invited paper that appeared in the Proceedings of the IEEE in August 1968. It was called “Nonlinear Filtering of Multiplied and Convolved Signals,” and, while some of it applied to audio signals, other parts applied to images. He followed up with a solo paper, “Image Processing in the Context of a Visual Model,” in the same journal in July 1972. Both papers have been cited many hundreds of times in more-recent image-processing work.

One image in both papers showed the outside of a building, shot on a bright day; the door was open, but the inside was little more than a black hole (a portion of the image is shown above left, including artifacts of scanning the print article with its half-tone images). After processing, all of the details of the equipment inside could readily be seen (a portion of the image is shown at right, again including scanning artifacts). Other images showed effective deblurring, and the blur could be caused by either lens defocus or camera instability.
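The trick behind that shadow-revealing processing is homomorphic filtering: a scene is, roughly, illumination multiplied by reflectance, and taking the logarithm turns that product into a sum whose slowly varying illumination component can be compressed while detail is preserved or even boosted. Below is a minimal sketch of the idea, not Stockham’s actual algorithm; the filter shape and parameters are illustrative guesses.

```python
import numpy as np

def homomorphic_compress(image, gamma_low=0.5, gamma_high=1.5, cutoff=0.05):
    """Compress illumination (low frequencies) and keep or boost detail
    (high frequencies) by filtering the log of the image -- the basic idea
    behind Stockham-style contrast compression. `image` is a 2-D array of
    positive values; all parameter values here are illustrative guesses."""
    log_img = np.log1p(image.astype(np.float64))

    # Build a smooth high-emphasis filter in the frequency domain.
    rows, cols = log_img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    dist2 = u**2 + v**2
    highpass = 1.0 - np.exp(-dist2 / (2 * cutoff**2))
    emphasis = gamma_low + (gamma_high - gamma_low) * highpass

    filtered = np.fft.ifft2(np.fft.fft2(log_img) * emphasis).real
    return np.expm1(filtered)        # back out of the log domain
```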

Stockham later (in 1975) actually designed a real-time video contrast compressor that could achieve similar effects. I got to try it. I aimed a bright light up at some shelves so that each shelf cast a shadow on what it was supporting. Without the contrast compressor, virtually nothing on the shelves could be seen; with it, fine detail was visible. But the pictures were not really of entertainment quality.

That was, however, in 1975, and technology has marched — or sprinted — ahead since then. The Fraunhofer Institut presentation at the 2012 HPA Tech Retreat will show how math can increase image-sensor resolution. But what about the lens?

A lens convolves an image in the same way that an old recording horn convolved the sound of an acoustic gramophone recording. And, if the defects of one can be removed by blind deconvolution, so might those of the other. An added benefit is that the deconvolution need not be blind; the characteristics of the lens can be identified. Today’s simple chromatic-aberration corrections could extend to all of a lens’s aberrations, and even its focus and mount stability.
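When the blur is known, the deconvolution can be as simple as a Wiener filter. The sketch below assumes the lens’s point-spread function has somehow been measured; the noise-to-signal constant is a placeholder, not a recommendation.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Recover an estimate of the original image from a blurred one,
    given the blur's point-spread function (assumed registered to the
    array origin).  `nsr` is an assumed noise-to-signal ratio; real
    systems estimate it from the data."""
    H = np.fft.fft2(psf, s=blurred.shape)          # blur's transfer function
    G = np.fft.fft2(blurred)
    wiener = np.conj(H) / (np.abs(H)**2 + nsr)     # classic Wiener filter
    return np.real(np.fft.ifft2(G * wiener))
```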

Is it merely a dream? Perhaps. But, at one time, so was the repeal of so-called anti-miscegenation laws.


Y4K?

August 31st, 2011 | No Comments | Posted in 3D Courses, Schubin Cafe

 

What should come after HDTV? There’s certainly a lot of buzz about 3D TV. Such directors as James Cameron and Douglas Trumbull are pushing for higher frame rates. Several manufacturers have introduced TVs with a 21:9 (“CinemaScope”) aspect ratio instead of HDTV’s 16:9. Some think we should increase dynamic range (the range from dark to light). Some think it should be a greater range of colors. Japan’s Super Hi-Vision offers 22.2-channel surround sound. And then there’s 4K.

In simple terms, 4K has approximately twice as much detail as HDTV in both the horizontal and vertical directions. If the orange rectangle above is HDTV, the blue one is roughly 4K. It’s called 4K because there are 4096 picture elements (pixels) per line.

This post will not get much more involved with what 4K is. The definition of 4096 pixels per line says nothing about capture or display.  Even at lower resolutions, some cameras use a complete image sensor for each primary color; others use some sort of color filtering on a single image sensor. At left is Colin Burnett’s depiction of the popular Bayer filter design. Clearly, if such a filtered image sensor were shooting another Bayer filter offset by one color element, the result would be nothing like the original.

Optical filtering and “demosaicking” algorithms can reduce color problems, but the filtering also reduces resolution. Some say a single color-filtered image sensor with 4096 pixels per line is 4K; others say it isn’t. That’s an argument for a different post.  This one is about why 4K might be considered useful.
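To make the demosaicking point concrete, here is a deliberately crude bilinear demosaic of an RGGB Bayer mosaic. Real cameras use far more sophisticated algorithms, and RGGB is only one common layout; this is just a sketch of the principle.

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(raw):
    """Crude bilinear demosaic of an RGGB Bayer mosaic.
    `raw` is a 2-D array of sensor values; returns an (H, W, 3) RGB image."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True            # red sites
    masks[0::2, 1::2, 1] = True            # green sites on red rows
    masks[1::2, 0::2, 1] = True            # green sites on blue rows
    masks[1::2, 1::2, 2] = True            # blue sites

    kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
    for c in range(3):
        plane = np.where(masks[..., c], raw, 0.0)
        weight = masks[..., c].astype(float)
        # Normalized convolution: fill each missing sample from its neighbors.
        num = convolve2d(plane, kernel, mode="same", boundary="symm")
        den = convolve2d(weight, kernel, mode="same", boundary="symm")
        rgb[..., c] = num / np.maximum(den, 1e-9)
    return rgb
```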

An obvious answer is for more detail resolution. But maybe that’s not quite as obvious as it seems at first glance. The history of video technology certainly shows ever-increasing resolutions, from eight scanning lines per frame in the 1920s to HDTV’s….

As can be seen above, in 1935, a British Parliamentary Report declared that HDTV should have no fewer than 240 lines per frame. Today’s HDTV has 720 or 1080 “active” (picture-carrying) lines per frame, and 4K has a nominal 2160, but even ordinary 525-line (~480 active) TV was considered HDTV when it was first introduced.

Human visual acuity is often measured with a common Snellen eye chart, as shown at left above. On the line for “normal” vision (20/20 in the U.S., 6/6 in other parts of the world), each portion of the “optotype” character occupies one arcminute (1′, a sixtieth of a degree) of retinal angle, so there are 30 “cycles” of black and white lines per degree.

Bernard Lechner, a researcher at RCA Laboratories at the time, studied television viewing distances in the U.S. and determined they were about nine feet (Richard Jackson, a researcher at Philips Laboratories in the UK at the same time, came up with a similar three meters). As shown above, a 25-inch 4:3 TV screen provides just about a perfect match to “normal” vision’s 30 cycles per degree when “525-line” television is viewed at the Lechner Distance — roughly seven times the picture height.

HDTV should, under the same theory, be viewed from a smaller multiple of the screen height (h). For 1080 active lines, it should be 7.15 x 480/1080, or about 3.2h. Looked at another way, at a nine-foot viewing distance, the height should be about 34 inches and the width about 60 inches (a diagonal close to 69 inches), and, indeed, 60-inch (and larger) HDTV screens are not uncommon (and so are closer viewing distances).

For 4K (again, using the same theory), it should be a screen height of about 68 inches. Add a few inches for a screen bezel and stand, and mount it on a table, and suddenly the viewer needs a minimum ceiling height of nine feet!
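Here is the same acuity arithmetic worked through numerically (30 cycles per degree means one scanning line per arcminute; all figures are approximate):

```python
import math

ARCMIN = math.radians(1 / 60)            # one arcminute, in radians
VIEWING_DISTANCE_IN = 9 * 12             # the roughly nine-foot Lechner distance

for active_lines in (480, 1080, 2160):
    # Distance (in picture heights) at which one scanning line
    # subtends one arcminute -- the limit of "normal" acuity.
    heights = 1 / (active_lines * math.tan(ARCMIN))
    screen_height_in = VIEWING_DISTANCE_IN / heights
    print(f"{active_lines:4d} lines: view from ~{heights:.1f} picture heights"
          f" -> ~{screen_height_in:.0f}-inch-tall screen at nine feet")
# 480 lines  -> ~7.2 picture heights, ~15-inch-tall screen
# 1080 lines -> ~3.2 picture heights, ~34-inch-tall screen
# 2160 lines -> ~1.6 picture heights, ~68-inch-tall screen
```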

Of course, cinema auditoriums don’t have domestic ceiling heights. Above is an elevation of a typical old-style auditorium, courtesy of Warner Bros. Technical Operations. The scale is in picture heights. Back near the projection booth, standard-definition resolution seems adequate. Even in the fifth row, HD resolution seems adequate. Below, however, is a modern, stadium-seating cinema auditorium (courtesy of the same source).

This time, even a viewer with “normal” vision in the last row could see greater-than-HD detail, and 4K could well serve most of the auditorium. That’s one reason why there’s interest in 4K for cinema distribution.

Another reason involves questions about that theory of “normal” vision. First of all, there are lines on the Snellen eye chart (which dates back to 1862) below the “normal” line, meaning some viewers can see more resolution.

Then there are the sharp lines of the optotypes. A wave cycle would have gently shaded transitions between white and black, which might make the optotype more difficult to identify on an eye chart. Adding in higher frequencies, as shown below, makes the edges sharper, and 4K offers higher frequencies than does HD.
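That edge-sharpening is the familiar Fourier-series behavior of a square wave: each added odd harmonic steepens the black-to-white transition. A small illustration:

```python
import numpy as np

def partial_square_wave(x, terms):
    """Sum of the first `terms` odd harmonics of a unit square wave."""
    wave = np.zeros_like(x)
    for k in range(terms):
        n = 2 * k + 1                       # odd harmonics only: 1, 3, 5, ...
        wave += np.sin(2 * np.pi * n * x) / n
    return 4 / np.pi * wave

x = np.linspace(0, 1, 10_000)
# More harmonics -> steeper edges (the slope at each transition grows).
for terms in (1, 3, 10):
    slope = np.max(np.gradient(partial_square_wave(x, terms), x))
    print(f"{terms:2d} harmonic(s): steepest edge slope is about {slope:.0f}")
```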

Then there’s sharpness, which is different from resolution. Words that end in -ness (brightness, loudness, sharpness, etc.) tend to be human psychophysical sensations (psychological responses to physical stimuli) rather than simple machine-measurable characteristics (luminance, sound level, resolution, contrast, etc.). Another RCA Labs researcher, Otto Schade, showed that sharpness is proportional to the square of the area under a modulation-transfer function (MTF) curve, a curve plotting contrast ratio against resolution.

One of the factors affecting an MTF curve is the filtering inherent in sampling, as is done in imaging. An ideal filter might use a sine of x divided by x function, also called a SINC function. Above is a SINC function for an arbitrary image sensor and its filters. It might be called a 2K sensor, but the contrast ratio at 2K is zero, as shown by the red arrow at the left.

Above is the same SINC function. All that has changed is a doubling of the number of pixels (in each direction). Now the contrast ratio at 2K is 64%, a dramatic increase (again, as shown by the red arrow at the left). Of course, if the original sensor offered 64% at 2K, the improvement offered by 4K would be much less dramatic, a reason why the question of what 4K is is not trivial.
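Those 0% and 64% figures can be checked directly, treating the sensor’s response as an ideal sinc of its sampling frequency (the idealization used above); the last lines also apply Schade’s area-squared sharpness metric to the same two curves, purely as an illustration.

```python
import numpy as np

detail = 2048                        # "2K" detail, in pixels per picture width
f = np.linspace(0, detail, 2000)     # frequencies up to that 2K detail

for sensor_pixels in (2048, 4096):
    mtf_at_2k = np.sinc(detail / sensor_pixels)           # sin(pi x)/(pi x)
    area = np.trapz(np.sinc(f / sensor_pixels), f)        # area under the MTF
    print(f"{sensor_pixels}-pixel sensor: contrast at 2K ~ {mtf_at_2k:.0%}, "
          f"Schade sharpness proportional to {area**2:.2e}")
# The 2048-pixel sensor gives ~0% contrast at 2K; the 4096-pixel sensor gives
# ~64%, and its area-squared ("Schade") sharpness over this band is ~2.2x higher.
```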

Then there’s 3D.  Some of the issues associated with 3D shooting relate to the use of two cameras with different image sensors and processing. One camera might deliver different gray scale, color, or even geometry from the other.

Above is an alternative, two HD images (one for each eye’s view) on a single 4K image sensor. A Zepar stereoscopic lens system on a Vision Research Phantom 65 camera serves that purpose. It’s even available for rent.

There are other reasons one might want to shoot HD-sized images on a 4K sensor. One is image stabilization. The solid orange rectangle above represents an HD image that has been jiggled out of its appropriate position, the lighter orange rectangle behind it with the dotted border. There are many image-stabilization systems available that can straighten out a subject in the center, but they do so by trimming away what doesn’t fit, resulting in the smaller, green rectangle. If a 4K sensor is used, however, the complete image can be stabilized.

It’s not just stabilization. An HD-sized image shot on a 4K sensor can be reframed in post production. The image can be moved left or right, up or down, rotated, or even zoomed out.
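A minimal sketch of why the oversized capture helps: the delivered HD frame is just a window into the 4K frame, so it can be repositioned per frame (for stabilization) or re-framed in post without any resolution penalty. The array shapes and offsets below are purely illustrative.

```python
import numpy as np

uhd_frame = np.zeros((2160, 4096, 3), dtype=np.uint8)   # one captured 4K frame
HD_H, HD_W = 1080, 1920

def extract_hd_window(frame, top, left):
    """Pull a full-resolution 1920x1080 window out of a 4K frame.
    Stabilization changes (top, left) per frame to counteract measured
    camera shake; reframing changes it for creative reasons."""
    top = int(np.clip(top, 0, frame.shape[0] - HD_H))
    left = int(np.clip(left, 0, frame.shape[1] - HD_W))
    return frame[top:top + HD_H, left:left + HD_W]

# e.g. shift the delivered framing 40 pixels down and 120 right,
# with no cropping penalty relative to the intended HD picture.
stabilized = extract_hd_window(uhd_frame, top=540 + 40, left=1088 + 120)
print(stabilized.shape)    # (1080, 1920, 3)
```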

So 4K offers much even to people not intending to display 4K. But it comes at a cost. Cameras and displays for 4K are more expensive, and an uncompressed 4K signal has more than four times as much data as HD. If the 1080p60 (1080 active lines, progressively scanned, at roughly 60 frames per second) version of HD uses 3G (three-gigabit-per-second) connections, 4K might require four of those.

When getting 4K to cinemas or homes, however, compression is likely to be used, and, as can be seen by the MTF curves, the highest-resolution portion of the image has the least contrast ratio. It has been suggested that, in real-world images, it might take as little as an extra 5% of data rate to encode the extra detail of 4K over HD.

So, is 4K the future? The aforementioned Super Hi-Vision is already effectively 8K, and it’s scheduled to be used in next year’s Olympic Games.
