
Bang for the Buck: Data Rate vs. Perception in UHD Production

November 18th, 2013 | No Comments | Posted in Download, Schubin Cafe, Today's Special

 

Going beyond today’s television could involve higher resolution, higher frame rate, higher dynamic range, wider color gamut, stereoscopic 3D, and immersive sound. Do all provide the same sensation of improvement? Could some preclude the use of others? Which delivers the biggest “bang for the buck,” and how do we know?

Presented during Content & Communications World, November 13, 2013, Javits Center, New York.

Mark Schubin adds: “I neglected to describe all of the images on slide 19. The upper right image shows that, in a cinema auditorium, detail resolutions beyond HD might be visible to everyone in the audience, even in the last row. The ARRI Alexa camera, from the same company that provided that image, however, has only 2880 pixels across — less than “3k.” That hasn’t stopped it from being used in major motion pictures, such as Skyfall (shown on set in the left bottom image) or the top-grossing movie to date in 2013, Iron Man 3.”

Video (TRT 28:03)

 


Bang for the Buck

September 25th, 2013 | No Comments | Posted in Schubin Cafe

Something extraordinary happened shortly after noon local time on Saturday, September 14, at the International Broadcasting Convention (IBC) in Amsterdam (aside from the fact that the fresh herring vendors were no longer wearing traditional klederdracht). It could affect at least the short-term future of television. And a possible long-term future was also revealed at the event. IBC, however, featured not only the extraordinary but also the cute.

For many years, there seemed to be an unofficial contest to show the largest and smallest television vehicles. The “largest” title seems to have been won (and retired) at IBC 2007 by Euroscena’s MPC34, a three-truck, expanded, interconnected system including everything from a sizable production studio through edit suites, at least as wide as it was long; nothing exhibited since has come anywhere close. The “smallest” went from trucks to vans to tiny Smart cars to motorcycles to Segways. In an era in which a hand-held camera can stream directly to the world, it’s hard to claim a “smallest” title with a vehicle, so perhaps Mobile Viewpoint’s tiny scooters (click image to enlarge) should be considered among the cutest, rather than the smallest.

Not far from those Mobile Viewpoint scooters, however, was another claimant for the “cutest” title, though it was neither new nor particularly small. In the BTS outdoor exhibit was Portuguese television broadcaster RTP’s first mobile unit, built by Fernseh GmbH (roughly translated: Television, Inc.) in an eight-meter-long Mercedes-Benz truck delivered to Lisbon in 1957. It did its first live broadcast the following February 9, continued in service through 1980, and was restored in 2006 (though it still has a top speed of only 76 kilometers per hour, about 47 MPH).

About a quarter of the length of the 1957 RTP mobile unit — more than is devoted to its control room — is occupied by multi-core camera-cable reels for the vehicle’s four cameras. Cabling is one area in which video technology has advanced tremendously in the last 56 years — except for one characteristic. Consider some products of another small IBC 2013 exhibitor, NuMedia.

In television’s early days, there were separate cables for video and sync, eventually combined in composite video. Color required multiple cables again; composite color brought it back to one. Digital video initially used a parallel interface of multiple wires; the serial digital interface (SDI) made digital video even easier to connect than analog, because a single coaxial connection could carry video, multi-channel audio, and other data. Then modern high definition arrived.

SDI carried 270 megabits per second (Mbps); HD-SDI, about 1.5 gigabits per second (Gbps). HD-SDI still used just one coaxial-cable connection, but, at the higher data rate, usable cable lengths plunged. NuMedia offered a list of usable lengths for different cables, ranging from 330 meters for fat, heavy RG11 coax down to just 90 meters for a lighter, skinnier version. NuMedia’s HDX series uses encoders and decoders to more than double usable cable lengths (and offer a reverse audio path) — at a cost of more than $1800 per encoder/decoder pair. And that provides some background for the extraordinary event.

IBC hosted a conference session called “The Great Quality Debate: Do We Really Need to Go Beyond HD?” Although going “beyond HD” has included such concepts as a wider color gamut (WCG), higher dynamic range (HDR, the range between the brightest and darkest parts of the image), higher frame rates (HFR), stereoscopic 3D (S3D), and more-immersive sound, the debate focused primarily on a literal reading of HD, meaning that going beyond it would be going to the next level of spatial detail, with twice the fineness of HD’s resolution in both the horizontal and vertical directions.

The audience in the Forum, IBC’s largest conference venue, was polled before the debate started and was roughly evenly split between feeling the need for more definition and not. Then moderator Dr. William Cooper of informitv began the debate. On the “need” side were Andy Quested, head of technology for BBC HD & UHDTV; vision scientist Dr. Sean McCarthy, fellow of the technical staff at Arris; and Dr. Giles Wilson, head of the TV compression business at Ericsson. On the “no-need” side were Rory Sutherland, vice chair of the Ogilvy Group (speaking remotely from London); journalist Raymond Snoddy; and I.

The “need” side covered the immersive nature of giant screens with higher definition and their increased sensations of reality and presence (“being there”). Perhaps surprisingly, the “no-need” side also acknowledged the eventual inevitability of ever higher definition — both sides, for example, referred to so-called “16k,” images with eight times the spatial detail of today’s 1080-line HDTV in both the horizontal and vertical directions (64 times more picture elements or pixels). But the “no-need” side added the issue of “bang for the buck.”

At the European Broadcasting Union (EBU) exhibit on the show floor, some of that bang was presented in lectures about the plan for implementing UHDTV (ultra-HDTV, encompassing WCG, HDR, HFR, immersive sound, etc.). UHDTV-1 has a spatial resolution commonly called “4k,” with four times the number of spatial pixels of 1080-line HDTV. As revealed at the HPA Tech Retreat in February, EBU testing with a 56-inch screen viewed at a typical home screen-to-eye distance of 2.7 meters showed roughly a half-grade improvement in perceived image quality for the source material used. At the EBU’s IBC lectures, the results of viewer HFR testing were also revealed: going from 60 frames per second (fps) to 120, doubling the pixels per second, yielded a full-grade quality improvement for the sequences tested. In terms of data rate, that’s four times the bang for the buck of “4k” or “4K” (the EBU emphasized that the latter is actually a designation for a temperature near absolute zero).
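To make that arithmetic concrete, here is a minimal Python sketch of the comparison. The half-grade and full-grade figures are the EBU’s, as reported above; measuring “bang for the buck” as grades per data-rate multiplier is simply one way to frame it:

```python
# "Bang for the buck" as subjective-quality gain per unit of data rate,
# using the EBU test results cited above (grades are the EBU's figures;
# the metric itself is just back-of-the-envelope framing).

upgrades = {
    # name: (quality gain in grades, data-rate multiplier vs. 1080-line/60fps HD)
    "'4k' spatial resolution": (0.5, 4.0),  # 4x the pixels, ~half a grade
    "120 fps HFR":             (1.0, 2.0),  # 2x the pixels/second, a full grade
}

for name, (grade, multiplier) in upgrades.items():
    print(f"{name}: {grade} grade / {multiplier}x data = "
          f"{grade / multiplier:.3f} grades per multiple of the data rate")

# '4k' works out to 0.125, HFR to 0.5 -- four times the bang for the buck.
```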

IBC attendees could see for themselves the perceptual effects of HFR at the EBU exhibit or, even more easily, at a BBC exhibit in IBC’s Future Zone. Even from outside that exhibit hall, the difference between the images on two small monitors, one HFR and one not, was obvious to all observers.

The EBU hasn’t yet released perceptual-quality measurements associated with HDR, but HDR involves an even lower data-rate increase: just 50% to go from eight bits to twelve. If my personal experience with HDR displays at Dolby private demonstrations at both NAB and IBC is any indication, that small data-rate increase might provide the biggest bang-for-the-buck of all (although Pinguin Ingenieurbüro’s relatively low-data-rate immersive sound system in the Future Zone was also very impressive).

At IBC’s Future Zone, the University of Warwick showed HDR capture using two cameras, with parallax correction. Behind a black curtain at its exhibit, ARRI publicly showed HDR images from just one of its Alexa cameras on side-by-side “4k” and higher-dynamic-range HD monitors. Even someone who had previously announced that “4k” monitors offer the best-looking HD pictures had to admit that the HDR HD monitor looked much sharper than the “4k.”

HDR is contrast-ratio-related, and, before cameras, processing, and displays, lenses help determine contrast ratio. Sports and concerts typically use long-zoom-range lenses, which don’t yet exist for “4k.” A Fujinon “4k” 3:1 wide-angle zoom lens costs almost twice as much as the same manufacturer’s 50:1 HD sports lens. Stick an HD lens on a “4k” camera, however, and the contrast ratio of the finest detail gets reduced — LDR instead of HDR.

Then there are those cables. As in the change from SDI to HD-SDI, when data rate increases, useful cable length decreases. Going from 1080i to “4k” at the same number of images per second is an 8:1 increase (so-called 6G-SDI can handle “4k” only up to 30 progressive frames per second). Going from 60 fps to 120 is another 2:1 increase. Going from non-HDR to HDR is another 1.5:1 increase, a total of 24:1, not counting WCG, immersive sound, or stereoscopic 3D (a few exhibits at IBC even showed new technology for the last). Nevertheless, Denmark’s NIMB showed a tiny, three-wheel multicamera “4k” production vehicle, perhaps initiating a new contest for largest and smallest.
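The cumulative multiplication is worth spelling out; a few lines of Python, using only the factors given above:

```python
# Cumulative data-rate multiplier for going "beyond HD," relative to
# 1080i/60 HD, using the factors from the paragraph above.

factors = [
    ("1080i -> '4k' at the same picture rate", 8.0),
    ("60 fps -> 120 fps", 2.0),
    ("8-bit -> 12-bit HDR", 1.5),
]

total = 1.0
for step, factor in factors:
    total *= factor
    print(f"{step}: x{factor:g} (running total: x{total:g})")

# Total: x24 -- before counting WCG, immersive sound, or stereoscopic 3D.
```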

The lens and cable issues were raised by the “no-need” side at “The Great Quality Debate” at IBC. Perhaps some in the audience considered this conundrum: “spending” so much data rate on “4k” might actually preclude such lower-data-rate improvements as HFR and HDR. Whatever the cause, when the audience was polled after the debate, it was no longer evenly split; at an event where seemingly almost every exhibit said something about “4k,” the majority in the audience now opposed the proposition that there is a need to go beyond high definition:

http://informitv.com/news/2013/09/14/ibcdebatevotes/

http://www.hollywoodreporter.com/behind-screen/ibc-do-we-need-go-629422

Perhaps a secondary theme of IBC 2013 (after “4k”) will be more significant in the long term: signal distribution. IBC has long covered all forms of distribution; in 2013, offerings ranged from broadcast transmitters in just two rack units (Onetastic) to a sleeve that turns an iPhone into a satellite phone (Thuraya’s SatSleeve). In the Future Zone, Technische Universität Braunschweig offered one of the most-sensible distribution plans for putting live mass-appeal programming on mobile devices: an overlay of a tower-based broadcast over regular LTE cells.

The most radical signal-distribution plan at IBC 2013, however, was also the one most likely to be the future of the television-production business. It’s related to HD-SDI: eliminating it. HD-SDI technology is mature and works fine (up to the distance limit for the data rate and cable), but it’s unique to our industry. Meanwhile, the rest of the world is using common information technology (IT) and internet protocol (IP).

The EBU “village” was a good place to get up to speed on replacing SDI with IT, with both lectures and demonstrations, the latter, from the BBC, showing both HD and “4k.” Here are links to EBU and BBC sites on the subject:

http://tech.ebu.ch/JT-NM/FNS/NVCIP

http://www.bbc.co.uk/rd/projects/ip-studio

Then there was SVS Broadcast, which took information technology a step further, showing what it called an IT-based switcher. The control surface is a little unusual, but what’s behind it is more unusual: when a facility uses multiple switchers, they can share processing power. Oh, and the control surfaces demonstrated at IBC in Amsterdam were actually controlling switcher electronics in Frankfurt.

There were more wonders at IBC, from Panasonic’s 64×9 images to MidworldPro’s Panocam, which uses 16 lenses to see everything and stitch it all into a single image. And then there was Clear-Com, offering respite from the relentless march of advanced technology with its new RS-700 series, an updated version of traditional, analog, wired, belt-pack intercom.

Ahhhhhhh.

 


Redefining High Definition

May 24th, 2012 | No Comments | Posted in Download, Today's Special

Redefining High Definition
May 21, 2012
The Cable Show (NCTA Convention)
Boston Convention Center



Update: Schubin Cafe: Beyond HD: Resolution, Frame-Rate, and Dynamic Range

February 9th, 2012 | No Comments | Posted in Download, Today's Special

You can download the PowerPoint presentation by clicking on the title:

SchubinCafe_Beyond_HD.ppt (7.76 MB)

 

You can download the mov file of the webinar by clicking on the title:

Schubin-Cafe-Webinar-2-9-12-1.mov

 


The Blind Leading

December 10th, 2011 | No Comments | Posted in Schubin Cafe

Once upon a time, people were prevented from getting married, in some jurisdictions, based on the shade of their skin colors. Once upon a time, a higher-definition image required more pixels on the image sensor and higher-quality optics.

Actually, we still seem to be living in the era indicated by the second sentence above. At the 2012 Hollywood Post Alliance (HPA) Tech Retreat, to be held February 14-17 (with a pre-retreat seminar on “The Physics of Image Displays” on the 13th) at the Hyatt Grand Champions in Indian Wells, California <http://bit.ly/slPf9v>, one of the earliest panels in the main program will be about 4K cameras, and representatives from ARRI, Canon, JVC, Red, Sony, and Vision Research will all talk about cameras with far more pixel sites on their image sensors than there are in typical HDTV cameras; Sony’s, shown at the left, has roughly ten times as many.

That’s by no means the limit. The prototypical ultra-high-definition television (UHDTV) camera shown at the right has three image sensors (from Forza Silicon), each of which has about 65% more pixel sites than Sony’s sensor; because Sony’s is intended for use in a single-sensor camera, the three-chip UHDTV camera has roughly five times as many pixel sites in total. There is so much information being gathered that each sensor chip requires a 720-pin connection. But even that isn’t the limit! As I pointed out last year, Canon has already demonstrated a huge hyper-definition image sensor, with four times the number of pixels of even those Forza image sensors used in the camera at the right <http://www.schubincafe.com/2010/09/07/whats-next/>!

Having entered the video business at a time when picture editing was done with razor blades, iron-filing solutions to make tape tracks visible, and microscopes, and when video projectors utilized oil reservoirs and vacuum pumps, I’ve always had a fondness for the physical characteristics of equipment. Sensors will continue to increase in resolution, and I love that work. At the same time, I recognize some of the problems of an inexorable path towards higher definition.

The standard-definition camera that your computer or smart phone uses for video conferencing might have an image sensor with a resolution characterized as 640×480 or 0.3 Mpel (megapixels), even if that same smart phone has a much-higher-resolution image sensor pointing the other way for still pictures. That’s because video must make use of continually changing information. At 60 frames per second, that 0.3 Mpel camera delivers more pixels in one second than an 18 Mpel sensor shooting a still image.

Common 1080-line HDTV has about 2 Mpels. So-called “4K” has about 8 Mpels. It’s already tough to get a great HDTV lens; how will we deal with UHDTV’s 33-Mpel “8K”?
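The pixel arithmetic is easy to check; a few lines of Python with standard frame sizes (the 0.3 Mpel videoconferencing figure is from above):

```python
# Pixels per second is what distinguishes video from stills: a modest
# sensor running continuously outpaces a large sensor shooting one frame.

FPS = 60

videoconf = 640 * 480  # ~0.3 Mpel
print(f"0.3 Mpel at {FPS} fps: {videoconf * FPS / 1e6:.1f} Mpel/s (vs. 18 Mpel still)")

formats = {
    "1080-line HDTV": 1920 * 1080,  # ~2 Mpel
    "'4K'":           3840 * 2160,  # ~8 Mpel
    "'8K' UHDTV":     7680 * 4320,  # ~33 Mpel
}
for name, pixels in formats.items():
    print(f"{name}: {pixels / 1e6:.1f} Mpel, "
          f"{pixels * FPS / 1e6:,.0f} Mpel/s at {FPS} fps")
```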

A frame rate of 60 fps delivers twice as much information as 30 fps; 120 fps delivers twice as much as 60. How will we ever manage to process high-frame-rate UHDTV?

Perhaps it’s worth consulting the academies. In U.S. entertainment media, the highest awards are granted by the Academy of Motion Picture Arts & Sciences (the Academy Award or Oscar), the Academies (there are two) of Television Arts & Sciences (the Emmy Award), and the Recording Academy (the Grammy Award). Win all three, and you are entitled to go on an EGO (Emmy-Grammy-Oscar) trip!

In the history of those awards, only 33 people have ever achieved an EGO trip. And only two of those also won awards from the Audio Engineering Society (AES), the Institute of Electrical and Electronics Engineers (IEEE), and the Society of Motion Picture and Television Engineers (SMPTE). You’re probably familiar with the last name of at least one of those two: Ray Dolby, shown at left during his induction into the National Inventors Hall of Fame in 2004.

The other was Thomas Stockham. Some in the audio community might recognize his name.  He was at one time president of the AES, is credited with creating the first digital-audio recording company (Soundstream), and was one of the investigators of the 18½-minute gap in then-President Richard Nixon’s White House tapes regarding the Watergate break-in.

Those achievements appeal to my sense of appreciation of physical characteristics. The Soundstream recorder (right) was large and had many moving parts. And the famous “stretch” of Nixon’s secretary Rose Mary Woods (left), which would have been required to accidentally cause the gap in the recording, is a posture worthy of an advanced yogi (Stockham’s investigative group, unfortunately for that theory, found that there were multiple separate instances of erasure, which could not have been caused by any stretch). But what impressed (and still impresses) me most about Stockham’s work has no physical characteristics at all.  It’s pure mathematics.

On the last day of the HPA Tech Retreat, as on the first day, there will be a presentation on high-resolution imaging. But it will have a very different point of view. Siegfried Foessel of Germany’s Fraunhofer research institute will describe “Increasing Resolution by Covering the Image Sensor.” The idea is that, instead of using a higher-resolution sensor, which increases data-readout rates, it’s actually possible to use a much-lower-resolution image sensor, with the pixel sites covered in a strange pattern (a portion of which is shown at the right). Mathematical processing then yields a much-higher-resolution image — without increasing the information rate leaving the sensor.
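I don’t know the details of Fraunhofer’s algorithm, but the underlying principle (several readouts through known covering patterns traded for finer-than-pixel resolution) can be illustrated with a toy one-dimensional example, entirely my own construction:

```python
# Toy illustration (my construction, not Fraunhofer's actual method):
# each low-res sensor pixel integrates 2 high-res scene samples through a
# known attenuation mask. Several readouts with different known masks give
# enough independent equations to solve for the high-res samples.

import numpy as np

rng = np.random.default_rng(0)
N, BIN, SHOTS = 16, 2, 3        # high-res samples, binning factor, readouts
scene = rng.random(N)           # the unknown high-res scene

rows, readings = [], []
for _ in range(SHOTS):
    mask = rng.random(N)        # known gray-level covering pattern
    for p in range(N // BIN):   # each low-res pixel sums its masked samples
        row = np.zeros(N)
        row[p * BIN:(p + 1) * BIN] = mask[p * BIN:(p + 1) * BIN]
        rows.append(row)
        readings.append(row @ scene)

# Solve the (SHOTS * N/BIN) x N linear system for the high-res samples.
recovered, *_ = np.linalg.lstsq(np.array(rows), np.array(readings), rcond=None)
print("max reconstruction error:", np.abs(recovered - scene).max())  # ~machine precision
```

The data leaving this simulated sensor per readout is only N/BIN values, yet the mathematics recovers all N, which is the point of the technique as described.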

In the HPA Tech Retreat demo room, there should be multiple demonstrations of the power of mathematical processing. Cube Vision and Image Essence, for example, are expected to be demonstrating ways of increasing apparent sharpness without even needing to place a mask over the sensor. Lightcraft Technology will show photorealistic scenes that never even existed except in a computer. And those are said to have gigapixel (thousand-megapixel) resolutions!

All of that mathematical processing, to the best of my knowledge, had no direct link to Stockham, but he did a lot of mathematical processing, too. In the realm of audio, his most famous effort was probably the removal of the recording artifacts of the acoustical horn into which the famous opera tenor Enrico Caruso sang in the era before microphone-based recording (shown at left in a drawing by the singer himself).

As Caruso sang, the sound of his voice was convolved with the characteristics of the acoustic horn that funneled the sound to the recording mechanism. Recovering the original sound for the 1976 commercial release Caruso: A Legendary Performer required deconvolving the horn’s acoustic characteristics from the singer’s voice.  That’s tough enough even if you know everything there is to know about the horn. But Stockham didn’t, so he had to use “blind” deconvolution. It wasn’t the first time.

He was co-author of an invited paper that appeared in the Proceedings of the IEEE in August 1968. It was called “Nonlinear Filtering of Multiplied and Convolved Signals,” and, while some of it applied to audio signals, other parts applied to images. He followed up with a solo paper, “Image Processing in the Context of a Visual Model,” in the same journal in July 1972. Both papers have been cited many hundreds of times in more-recent image-processing work.

One image in both papers showed the outside of a building, shot on a bright day; the door was open, but the inside was little more than a black hole (a portion of the image is shown above left, including artifacts of scanning the print article with its half-tone images). After processing, all of the details of the equipment inside could readily be seen (a portion of the image is shown at right, again including scanning artifacts). Other images showed effective deblurring, and the blur could be caused by either lens defocus or camera instability.

Stockham later (in 1975) actually designed a real-time video contrast compressor that could achieve similar effects. I got to try it. I aimed a bright light up at some shelves so that each shelf cast a shadow on what it was supporting. Without the contrast compressor, virtually nothing on the shelves could be seen; with it, fine detail was visible. But the pictures were not really of entertainment quality.
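Stockham’s approach, as I understand it, modeled an image as illumination multiplied by reflectance, took the logarithm to turn that product into a sum, then compressed the slowly varying illumination while keeping the fine reflectance detail. A minimal sketch of the idea, with a Gaussian blur standing in for his filter:

```python
# Homomorphic contrast compression in the spirit of Stockham's work:
# log -> split slow illumination from fast detail -> compress the former
# -> exp back. The Gaussian low-pass is an ad-hoc stand-in for his filter.

import numpy as np
from scipy.ndimage import gaussian_filter

def compress_contrast(image, illum_gain=0.5, sigma=15.0):
    """image: 2-D array of positive luminances."""
    log_im = np.log(np.maximum(image, 1e-6))
    illumination = gaussian_filter(log_im, sigma)  # slowly varying component
    detail = log_im - illumination                 # fine detail ("reflectance")
    return np.exp(illum_gain * illumination + detail)

# A shelf casting a 100:1 shadow over identical fine detail:
rng = np.random.default_rng(1)
texture = 1.0 + 0.1 * rng.random((64, 64))
scene = texture.copy()
scene[:, 32:] *= 0.01                              # the shadowed half
result = compress_contrast(scene)
print("bright/dark ratio in: ", scene[:, :32].mean() / scene[:, 32:].mean())
print("bright/dark ratio out:", result[:, :32].mean() / result[:, 32:].mean())
```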

That was, however, in 1975, and technology has marched — or sprinted — ahead since then. The Fraunhofer Institut presentation at the 2012 HPA Tech Retreat will show how math can increase image-sensor resolution. But what about the lens?

A lens convolves an image in the same way that an old recording horn convolved the sound of an acoustic gramophone recording. And, if the defects of one can be removed by blind deconvolution, so might those of the other. An added benefit is that the deconvolution need not be blind; the characteristics of the lens can be identified. Today’s simple chromatic-aberration corrections could extend to all of a lens’s aberrations, and even its focus and mount stability.
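For the known-lens case, the classical tool is Wiener deconvolution: divide the blurred image’s spectrum by the lens’s frequency response, with a noise term that tames the division where the response is weak. A minimal one-dimensional sketch (the blur kernel and noise constant are invented for illustration):

```python
# Minimal 1-D Wiener deconvolution with a known blur kernel (PSF). The
# kernel, signal, and regularization constant K are illustrative only.

import numpy as np

rng = np.random.default_rng(2)
n = 256
signal = np.zeros(n)
signal[60] = signal[64] = 1.0                    # two fine details, 4 apart

x = np.arange(n) - n // 2
psf = np.exp(-0.5 * (x / 3.0) ** 2)              # Gaussian lens blur
psf /= psf.sum()

H = np.fft.fft(np.fft.ifftshift(psf))            # lens frequency response
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))
blurred += 1e-3 * rng.standard_normal(n)         # sensor noise

K = 1e-4                                         # noise-to-signal estimate
wiener = np.conj(H) / (np.abs(H) ** 2 + K)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))

print("blurred around the pair: ", blurred[58:67].round(2))   # merged blob
print("restored around the pair:", restored[58:67].round(2))  # two peaks again
```

Blind deconvolution, as Stockham needed for the Caruso recordings, is the much harder problem of estimating H and the signal simultaneously.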

Is it merely a dream? Perhaps. But, at one time, so was the repeal of so-called anti-miscegenation laws.


The E and Eye

February 26th, 2010 | No Comments | Posted in Schubin Cafe

“HDTV is ideally viewed at a distance of roughly three times the picture height.”  That’s the sort of statement heard frequently — as recently as at last week’s HPA Tech Retreat.  And there seems to be a basis for it.


According to the eye chart commonly used to determine visual acuity, 20/20 vision can just identify two black lines separated by a white line that covers one minute of arc on the retina.  There are 360 degrees of arc in a circle and 60 minutes per degree (and 60 seconds per minute).

If you divide the 1080 active (picture-carrying) lines of the most common form of HDTV by those 60 minutes, the result is 18 degrees of retinal angle.  Divide that by two, and you can form two right triangles, one above the other.  The sides opposite the 9-degree angles are each half the height of the HDTV screen.  The sides adjacent to the angles are the distance from the screen to the eye.

The tangent of an angle is the ratio of the opposite side to the adjacent.  The tangent of 9 degrees is roughly 0.158.  Double that to include both right triangles, and the result is roughly 0.317.  Divide 1 by that to get the ratio of viewing distance to height, and the result is roughly 3.16.

According to the theory of that eye chart, if you sit about 3.16 times the height of your HDTV screen away from it, you’ll get optimum resolution.  Sit farther, and you’ll lose some detail.  Sit closer, and the scanning structure may become visible, intruding on the picture.

For 720-line HDTV, the viewing distance is roughly 4.76 times the height (4.76H).  For old-time NTSC, it’s roughly 7.15H.
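Those figures are easy to reproduce. Here is a short Python version of the trigonometry above (assuming about 480 visible lines for NTSC, which yields the cited 7.15H):

```python
# Viewing-distance-to-picture-height ratio, assuming one arc minute per
# scanning line: a screen of L lines subtends L/60 degrees, and the ratio
# comes from the tangent of half that angle (two stacked right triangles).

import math

def distance_over_height(active_lines):
    half_angle = math.radians((active_lines / 60.0) / 2.0)
    return 1.0 / (2.0 * math.tan(half_angle))

for name, lines in [("1080-line HDTV", 1080),
                    ("720-line HDTV", 720),
                    ("NTSC (~480 visible lines)", 480)]:
    print(f"{name}: {distance_over_height(lines):.2f}x picture height")
# Prints roughly 3.16, 4.76, and 7.15 -- the figures cited above.
```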

There are a few problems with this theory.  One came with a slight change in this sentence: “Optimum NTSC resolution is achieved by sitting roughly seven times the picture height from the screen.”  Over time, it became “People watch NTSC at roughly seven times the picture height.”

I can think of at least one person who might take out a tape measure, run some calculations, and move a chair to the optimum viewing spot.  But I can’t think of many.

Bernie Lechner, then a researcher at RCA Laboratories, decided to measure how far people sat from their TVs.  At the time, the result was about nine feet, regardless of screen size, a figure that became known as the Lechner Distance.  Richard Jackson at Philips Laboratories in the UK came up with a similar three meters.

The Lechner/Jackson Distance was based largely on the size of living rooms and their furniture.  In Japan, viewers sat closer to their TVs, thus needing HDTV.  Or so the theory goes.  But Japanese screen sizes also tended to be smaller.

Other problems with the optimum-viewing-distance theory relate to such issues as overscan and the reductions of vertical resolution caused by interlace, overlapping scanning lines, sampling filtering, color-phosphor dots or stripes, and CRT faceplate optical characteristics.  But a much more serious issue is that one arc minute derived from the eye chart.

Officially, that eye chart (shown near the top of this post) is called a Snellen chart, named for the Dutch ophthalmologist who introduced its symbols in 1862.  And the symbols on it are said not to be letters in a typeface but “optotypes” intended to identify visual acuity.

Consider the famous E. When it is located on the 20/20 line of “normal” vision (meaning that the viewer sees at 20 feet what should be just visible at 20 feet — or, outside the United States, the 6/6 line, meaning the viewer sees at six meters what should be just visible at six meters), the entire symbol fits within an arc that subtends a retinal angle of 5 minutes (5/60 of a degree), and each black or white feature of the symbol is 1 minute.

That’s it.  That’s the basis for viewing HDTV at three times the picture height.  But maybe it’s worth examining that basis somewhat further.

First, 20/20 is not the lowest line on a typical Snellen eye chart.  Here’s what the Snellen obituary on page 296 of the February 1, 1908 issue of the British Medical Journal had to say about it:

“He started with the idea that a person might be considered to have normal vision if he could see and distinguish a letter which subtended an angle of one minute on the retina.  This was by no means the best which most eyes could do, but he set this as the minimum standard required to justify one in regarding an eye as normal.”

So viewers could conceivably view HDTVs from farther away and still see full resolution.  And then there are two issues I’ve raised in previous posts.

One is contrast (see Angry About Contrast here: http://schubincafe.com/blog/2009/09/angry-about-contrast/).  The portion of a Pelli-Robson chart pictured here shows how important contrast is in being able to distinguish symbols.  TV pictures, whether NTSC or HDTV, tend to comprise a broad range of contrast ratios, and so do the screens on which they’re viewed (and the environments in which that viewing is done; the light of a lamp reflected off a screen can wreak havoc with contrast).

The other issue I’ve gone into before is edges (see Sines of the Times here: http://schubincafe.com/blog/2009/12/sines-of-the-times/).  The E on a Snellen chart has nice sharp edges.  Making sharp edges requires harmonics far beyond the fundamental sine-wave frequency.  Compare the sharp-edged line at top left with the more sinusoidal line below.

One arc minute of visual acuity is the same as 30 cycles per degree: a cycle comprises both a light part and a dark part (two arc minutes), and there are 60 arc minutes per degree, so 60 ÷ 2 = 30.  And that figure has become etched in stone for some who discuss visual resolution.  But then there was the paper “Research on Human Factors in UHDTV,” published in the April 2008 SMPTE Journal by authors at NHK, the Japan Broadcasting Corporation, source of modern HDTV.

It noted that observers could tell the difference between 78 cycles per degree (cpd) and 156.  The latter figure is more than five times greater than the 30 cpd of 20/20 vision.  Further, the research found that the sensation of “realness” rises rapidly to 50 cpd but continues rising to 156 (with no indication that it stops there).

So, how fine is visual acuity for detail perception?  I don’t know.  But it doesn’t seem to be a simple 30 cpd.


A Brief History of Height

August 10th, 2009 | No Comments | Posted in Schubin Cafe
NHK's 1969 HDTV Demo

Based on the basic questions who, when, where, how, and why, HDTV was invented by NHK (Nippon Hoso Kyokai, the Japan Broadcasting Corporation), first shown to the public in 1969 at NHK’s Science & Technical Research Laboratory, initially achieved by using three image tubes to create the picture, and developed because, with real estate at a premium in Japan, viewers sat closer to TVs and, therefore, were more aware of the flaws of ordinary television. Unfortunately, that view doesn’t match the following news report:

“The exposition’s opening on April 30 also marked the advent of this country’s first regular schedule of high-definition broadcasts.” That was published in the U.S. magazine Broadcasting more than 30 years before NHK’s unveiling of HDTV, reporting on the first day of that year’s New York World’s Fair, April 30, 1939, where RCA demonstrated what it called high-definition television. More »
