
HDR: The Bottom Line by Mark Schubin

September 1st, 2016 | No Comments | Posted in Download, Schubin Cafe

This is a modified version of the original presentation given at the 2016 HPA Tech Retreat on February 18, 2016.

High Dynamic Range (HDR) imagery offers the most bang for the bit in viewing tests.  Equipment is available, and issues are being worked out.  What happens in theaters and homes, however, is a different matter.

Download: HDR: The Bottom Line by Mark Schubin (TRT 5:07)




The Bottom Line

January 26th, 2016 | No Comments | Posted in Schubin Cafe


Like many other innovations, high-dynamic-range (HDR) imaging can bring benefits but will require work to implement. And then there’s the bottom line.

HDR’s biggest benefit is that it offers the greatest perceptual image improvement per bit. Different researchers have independently verified the improvement, and it theoretically requires no increase in bit rate whatsoever.  In practice, to allow both standard-dynamic-range (SDR) TVs and HDR TVs to be accommodated with the same signal (and because not everyone keeps the appropriate amount of noise), the bit rate might increase a small amount — perhaps 20%.

Viewing Tests

Above are comparisons of viewer evaluations of higher spatial resolution (e.g., going from HD to 4K) at left, higher frame rate (HFR) in the middle, and HDR at right, with the vertical scales normalized. The distance from the top shows the improvement. To achieve the improvement that HDR delivers with a zero-to-20% increase in bit rate, HFR would need a 100% increase or more. Going to 4K from HD can’t even approach the HDR improvement, but, if it could, it would seem to require more than a 1600% increase in bit rate. HDR is the clear winner.

That’s one piece of HDR good news. Another is that it can deliver more colors separately from any increase in color gamut. It also allows more flexibility in shooting and post production. And it doesn’t appear to require any new technologies at any point from scene to seen.

Below is an image presented at the 2008 SMPTE/NAB Digital Cinema Summit. It was shot in a Grass Valley lab using the Xensium image sensor. The only light on the scene came from the lamp aimed at the camera at lower right, but every chip on the chart is distinguishable. From lamp filament to darkest black, there was a 10,000,000:1 contrast ratio, more than 23 stops of dynamic range. And, on the viewing end, TV sets have already been sold with HDR-level light outputs. New equipment might be needed, of course, but not new technologies.
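The “more than 23 stops” figure follows directly from the contrast ratio: each stop is a doubling of light, so the stop count is the base-2 logarithm of the ratio. A minimal sketch in Python (the function name is mine):

```python
import math

def stops(contrast_ratio):
    """Dynamic range in photographic stops: each stop doubles the light."""
    return math.log2(contrast_ratio)

# From lamp filament to darkest black in the Grass Valley lab scene:
print(round(stops(10_000_000), 2))  # 23.25, i.e., more than 23 stops
```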


That’s the good news. Getting everyone to agree on how HDR images should be converted to video signals, how those signals should be encoded for transmission, and how SDR and HDR TV sets should deal with a single transmission path are among the issues being worked out. They’ll be discussed at next month’s HPA Tech Retreat. And then there are interactions.

Sean McCarthy of Arris offered an excellent presentation on the subject at the main SMPTE conference last fall. Appropriately, it was called “How Independent Are HDR, WCG [wide color gamut], and HFR in Human Visual Perception and the Creative Process?” Those viewing HDR-vs.-SDR demos have sometimes commented that image-motion artifacts seem worse in HDR, suggesting that HDR might require HFR or restrictions on scene motion; McCarthy’s paper explains the science involved. It also explains how color hues can shift in unusual ways as light level increases, becoming yellower above certain wavelengths and bluer below, as shown in an excerpt from an illustration in McCarthy’s paper above at right (higher light level is at top).

Then there’s time.  McCarthy’s paper explains how perceived brightness can change over time as human vision adapts to higher light levels. And there’s also an inability to see dark portions of an image after adaptation to bright scenes. “In bright home and mobile viewing environments,” McCarthy notes, “both light and dark adaptation to [changes] in illumination may be expected to proceed on a time scale measured in seconds. In dark home and theater environments, rapid changes going back and forth from [darker to lighter light levels] might result in slower dark adaptation.” In other words, after a commercial showing a bright seashore or ski slope, viewers will need some recovery time before they can perceive dim shadow detail.

HDR also brings concerns about electric power.  It’s often said that the high end of the HDR range will be used only for “speculars,” short for specular reflections, like glints of light on shiny objects, as shown on these billiard balls from Dave Pape’s computer-graphics lighting course. If so, an HDR TV set would be unlikely to need significantly more electric power than an SDR TV set.

Those snow and seashore scenes, however, could need a lot more power if shown at peak light output. At right is a scene from promotional material for a Samsung HDR-capable TV, with bright snow, ice, and clouds. Below is a section of the technical specifications of the Samsung SUHD JS8500 series 65-inch TV: the “typical power consumption” is 82 watts, but the “maximum power consumption” is 255 watts, more than three times higher. The monitor used in Dolby’s HDR demos is liquid cooled.

Samsung 65-inch SUHD specs cropped

All of the above are issues that need to be worked out, from standards and recommended practices to aesthetic decisions. And working such issues out is not really new. Consider those motion artifacts. Even old editions of the American Cinematographer Manual included tables of “35mm Camera Recommended Panning Speeds.” As for power, old TV sets from the era of tube-based circuitry used more power even with smaller and dimmer pictures. But then there’s the bottom line, the lowest light level of the dynamic range.

Consider the HDR portion of the requirements for the “Ultra HD Premium” logo shown above that Samsung TV. According to a UHD Alliance press release on January 4, to get the designation, aside from double HD resolution in both the horizontal and vertical directions and some other characteristics, a TV must conform to the SMPTE ST2084 electro-optic transfer function and must offer “a combination of peak brightness and black level either more than 1000 nits peak brightness and less than 0.05 nits black level or more than 540 nits peak brightness and less than 0.0005 nits black level.” The latter is a ratio of more than a million to one.
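Those two peak/black pairings are easy to check; a quick sketch (the variable names are my own shorthand, not the Alliance’s):

```python
def contrast_ratio(peak_nits, black_nits):
    """Peak-to-black contrast ratio of a display."""
    return peak_nits / black_nits

# The two Ultra HD Premium options from the UHD Alliance press release:
option_one = contrast_ratio(1000, 0.05)    # 20,000:1
option_two = contrast_ratio(540, 0.0005)   # 1,080,000:1 -- "more than a million to one"
```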

The high end of those ranges is beyond most current video displays but achieved by some. Again, new equipment might be required but not new technology. And the bottom end seems achievable, too. Turn off a TV, and it emits no light. Manufacturers just need black pixels that are pretty close to “off.”

What the viewer sees, however, is a different matter. At right is an image of a TV set posted by Ma8thew and used in the Wikipedia page “Technology of television.” The TV set appears to be off, but a lot of light can be seen on its screen: ambient room light reflecting off the screen. Cedric Demers posted “Reflections of 2015 TVs”; the lowest screen reflection listed was 0.4%, the highest 1.9%. Of course, that’s between 0.4% and 1.9% of the light hitting the TV set. How much light is that?

At left is a portion of an image of the TV room of a luxury vacation rental in France, listed on IHA holiday ads. The television set is off. It shows a bright reflected view of the outdoors. It looks very nice outside — possibly too nice to stay in and watch TV. But, if one were watching TV, presumably one would draw the drapes closed. If the windows were thus completely blocked off and not a single lamp were on in the room, would that be dark enough to appreciate the 0.0005-nit black level of an Ultra HD Premium HDR TV?

It would probably not be. What’s the problem? For one thing, the viewer(s).

Consider a movie-theater auditorium. When the movie comes on, all the lights (except exit lights) go off. The walls, floor, and seats are typically made of dark, non-reflective materials. Scientists from the stereoscopic-3D exhibition company RealD measured the reflectivity of auditorium finishes (walls and carpet), seating, and audiences and concluded that the last were the biggest contributors to light reflected back to the screen (especially when they wear white T-shirts). Discussing the research at an HDR session in a cinema auditorium at last fall’s International Broadcasting Convention (IBC), RealD senior vice president Peter Ludé joked that for maximum contrast movies should be projected without audiences.

Ludé went a step further. Reflections off the audiences are problematic only when there is sufficient light on the screen. So, he joked again, for ideal HDR results, the screen should be black. At right is an image shot during a Sony-arranged live 4K screening of the 2014 World Cup at the Westfield Vue cinema in London. The ceiling, the walls, the floor, and the audience are all visible because of light coming off the screen and being reflected.

Now consider a home with an Ultra HD Premium TV emitting 540 nits. The light hits a viewer. If the viewer’s skin reflects just 1% of the light back to the screen and the screen reflects just 0.4% of that back to the viewer, there could be 0.0216 nits of undesired light on a black pixel (it’s more complicated because the intensity falls with the square of the distances involved but increases with the areas emitting or reflecting). That’s not a lot, but it’s still 43.2 times greater than 0.0005 nits.
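That back-of-the-envelope figure can be sketched as follows. As the parenthetical notes, a real calculation would also involve inverse-square falloff and the emitting and reflecting areas, so treat this as a rough upper bound:

```python
def bounce_light(peak_nits, skin_reflectance, screen_reflectance):
    """Rough upper bound on light bounced screen-to-viewer-to-screen,
    ignoring inverse-square falloff and the areas involved."""
    return peak_nits * skin_reflectance * screen_reflectance

stray = bounce_light(540, 0.01, 0.004)  # 0.0216 nits on a "black" pixel
print(stray / 0.0005)                   # ~43x the 0.0005-nit black-level spec
```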

A million-to-one contrast ratio? Maybe. But maybe not if there’s a viewer in the room.


HPA 2014 – Resolution Frame-Rate and Dynamic Range [video]

March 12th, 2014 | No Comments | Posted in Download, Schubin Cafe, Today's Special


Mark Schubin’s Resolution Frame-Rate and Dynamic Range presentation from the HPA Tech Retreat, presented on February 20, 2014 (audio recorded later).

(Extended Version: Bang for the Buck: Data Rate vs. Perception in UHD Production by Mark Schubin)

Video (TRT 12:57)


Bang for the Buck: Data Rate vs. Perception in UHD Production

November 18th, 2013 | No Comments | Posted in Download, Schubin Cafe, Today's Special


Going beyond today’s television could involve higher resolution, higher frame rate, higher dynamic range, wider color gamut, stereoscopic 3D, and immersive sound. Do all provide the same sensation of improvement? Could some preclude the use of others? Which delivers the biggest “bang for the buck,” and how do we know?

Presented during Content & Communications World, November 13, 2013, Javits Center, New York.

Mark Schubin adds: “I neglected to describe all of the images on slide 19. The upper right image shows that, in a cinema auditorium, detail resolutions beyond HD might be visible to everyone in the audience, even in the last row. The ARRI Alexa camera, from the same company that provided that image, however, has only 2880 pixels across — less than “3k.” That hasn’t stopped it from being used in major motion pictures, such as Skyfall (shown on set in the left bottom image) or the top-grossing movie to date in 2013, Iron Man 3.”

Video (TRT 28:03)



Leg Asea

March 1st, 2013 | No Comments | Posted in Schubin Cafe

2013 HPA Tech Retreat Broadcasters Panel: ABC, CBC, CBS, Ericsson, Fox, NAB, NBC, and PBS are shown (not in order); EBU, NHK, Sinclair, Univision, and locals were also present

Joe Zaller, a manager of the very popular (16,000-member) Television Broadcast Technologies group on LinkedIn, tweeted on February 21 from the 2013 HPA Tech Retreat in Indian Wells, California: “Pretty much blown away from how much I learned Wednesday at [the broadcasters] panel… just wish it had been longer.”

Adam Wilt, in his huge, six-part, 14-web-page (each page perhaps 20 screens long) coverage of the five-day event, put it this way: “When you get many of the best and brightest in the business together in a conference like this, it’s like drinking from a fire hose. That’s why my notes are only a faint shadow of the on-site experience. Sorry, but you really do have to be there for the full experience.”

In his Display Central coverage, Peter Putman called it “one of the leading cutting-edge technology conferences for those working in movie and TV production.” The European Broadcasting Union’s technology newsletter noted of the retreat, held in the Southern California desert, “There were also many European participants at HPA 2013, in particular from universities, research institutes and the supplier industry. It has clearly become an annual milestone conference for technology strategists and experts in the media field.”

It was all those things and more. HPA is the Hollywood Post Alliance, but the event is older than HPA itself. It is by no means restricted to Hollywood (presenters included the New Zealand team that worked on the high-frame-rate production of The Hobbit and the NHK lab in Japan that shot the London Olympics in 8K), and it’s also not restricted to post. This year’s presentations touched on lighting, lenses, displays, archiving, theatrical sound systems, and even viewer behavior while watching one, two, or even three screens at once.

It is cutting-edge high tech: the lighting discussed included wireless plasmas, the display brightnesses ranged as high as 20,000 cd/m² (and as low as 0.0027), and the archiving included artificial, self-replicating DNA. Yet there was also a recognition of a need to deal with legacy technologies. Consider that ultra-high-dynamic-range (HDR) display.

The simulator created by Dolby for HDR preference testing is shown at left, minus the curtains that prevented light leakage. About the only way to achieve sufficient brightness today is to have viewers look into a high-output theatrical projector. In tests, viewers preferred levels far beyond those available in today’s home or theatrical displays. But a demonstration at the retreat seemed to come to a different conclusion.

The SMPTE standard for cinema-screen brightness, 196M, calls for 16 foot-lamberts (fL), or 55 cd/m², with an open gate (no film in the projector). With film, peak white is about 14 fL, or 48 cd/m², a lot lower than 20,000. Whether real-world movie theaters achieve even 48, especially for 3D, is another matter.
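For reference, 1 foot-lambert is about 3.426 cd/m², the conversion behind the paired figures in this post. A quick check:

```python
FL_TO_CDM2 = 3.426  # candelas per square meter in one foot-lambert

# Foot-lambert values that appear in this post, with metric equivalents:
for fl in (16, 14, 12, 4.5):
    print(f"{fl} fL = about {round(fl * FL_TO_CDM2)} cd/m^2")
# 16 fL -> 55, 14 fL -> 48, 12 fL -> 41, 4.5 fL -> 15
```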

During the “More, Bigger, But Better?” super-session at the retreat, a non-depolarizing screen (center at right) was set up, the audience put on 3D glasses, and scenes were projected in 3D at just 4.5 fL (15 cd/m²) and again at 12 fL (41 cd/m²). The audience clearly preferred the latter.

Later, however, RealD chief scientific officer Matt Cowan showed a scene from an older two-dimensional movie at 14, 21, and 28 fL (48, 72, and 96 cd/m²). This time, the audience (but not everyone in the audience) seemed to prefer 21 to 28. Cowan led a breakfast roundtable one morning on the question “Is There a ‘Just Right’ for Cinema Brightness?”

Of course, as Dolby’s brightness-preference numbers showed, a quick demo is not the same as a test, and people might be reacting simply to the difference between what they are used to and what they were shown. The same might be the case with reactions to the high-frame-rate (HFR) 48 frames per second (48 fps) of The Hobbit. When the team from Park Road Post in New Zealand showed examples in their retreat presentation, it certainly looked different from 24-fps material, but whether it was better or worse was a subjective decision that will likely change with time. There were times when the introduction of sound or color was also deemed detrimental to cinematic storytelling.

At least the standardized cinema brightness of 14 fL had a technological basis in arc light sources and film density. A presentation at the retreat revealed the origin of the 24-fps rate and showed that it had nothing to do with visual or aural perception or technological capability; it was just a choice made by Western Electric’s Stanley Watkins (left) after speaking with Warner Bros. chief projectionist Jack Kekaley. And we’ve gotten used to that choice for 88 years.

Today, of course, actual strands of film have nothing to do with the moving-images business–or do they? Technicolor’s Josh Pines noted a newspaper story explaining the recent crop of lengthy movies by saying that digital technology lets directors go longer because they don’t have to worry about the cost of film stock. But Pines analyzed those movies and found that they had actually, for the most part, been shot on film.

Film is also still used for archiving. Major studio blockbusters, even those shot, edited, and projected electronically, are transferred to three strands of black-&-white film (via a color-separation process), even though that degrades the image quality, for “just-in-case” disaster recovery; b&w film is the only moving-image medium to have thus far lasted more than a hundred years.

At one of the 2013 HPA Tech Retreat breakfast roundtables (right), the head of archiving for a major studio shocked others by revealing that the studio was no longer archiving on film. At the same roundtable, however, studios acknowledged that whenever a new restoration technology is developed, they go to the oldest available source, not a more-recent restoration.

If film brightness, frame rate, and archiving are legacies of the movie business, what about television? There was much discussion at the retreat of beyond-HDTV resolutions and frame rates. Charles Poynton’s seminar on the technology of high[er] frame rates explained why some display technologies don’t have a problem with them while others do.

Other legacies of early television also appeared at the retreat. Do we still need the 1000/1001 (0.999000999…) frame-rate-reduction factor of NTSC color in an age of Ultra HD? It’s being argued in those beyond-HD standards groups today.
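That factor is how NTSC color’s 29.97 fps rate arises from the original 30 fps. A minimal sketch:

```python
from fractions import Fraction

NTSC_FACTOR = Fraction(1000, 1001)   # the 0.999000999... reduction factor

color_frame_rate = 30 * NTSC_FACTOR  # exactly 30000/1001
print(float(color_frame_rate))       # about 29.97 fps
```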

Interlace and its removal appeared in multiple presentations and even in a demo by isovideo in the demo room (a tiny portion of which is shown at left). As with film restoration from the original, the demo recommended archiving interlaced video as such and using the best-available de-interlacer when necessary. And there appeared to be a consensus at the retreat that conversion from progressive to interlace for legacy distribution is not a problem.

There was no such consensus about another legacy of early television, the 4:3 aspect ratio. One of the retreat’s nine quizzes asked who intentionally invented the 16:9 aspect ratio (for what was then called advanced television), what it was called, and why it was created. The answers (all nine quizzes had winners) were: Joseph Nadan of Philips Labs, 5-1/3:3, and because it was considered the minimum change from 4:3 that would be seen as a valuable-enough difference to make consumers want to buy new TV sets. But that was in 1983.

Thirty years later, the retreat officially opened with a “Technology Year in Review,” which called 2013 the “27th (or 78th) Annual ‘This Is the Year of HDTV.'” It noted that, although press feeds often still remain analog NTSC, according to both Nielsen and Leichtman research, in 2012 75% of U.S. households had HDTVs. Leichtman added that 3/5 of all U.S. TVs, even in multi-set households, were HDTVs. Some of the remainder, even if not HDTV, might have a 16:9 image shape. So why continue to shoot and protect for a 4:3 sub-frame of the 16:9?

On the broadcasters panel, one U.S. network executive explained the decision by pointing to other Nielsen data showing that, as of July 15, 2012, although roughly 76% of U.S. households had HDTVs (up 14% from the previous year), in May only 29% of English-language broadcast viewing was in HD and only 25% of all cable viewing. Furthermore, much of the legacy equipment feeding the HDTV sets is not HD capable.

A device need not be HD capable, however, to be able to carry a 16:9 image. Every piece of 4:3 equipment ever built can carry a 16:9 image, even if 4:3 image displays will show it squeezed. So the question seems to be whether it’s better for a majority of U.S. TV sets to get the horizontally stretched picture above left or a minority to get the horizontally squeezed picture at right.

What do actual viewers think about legacy technologies? Two sessions scarily provided a glimpse. A panel of students, studying in the moving-image field, offered some comments that included a desire to text during cinema viewing. And Sarah Pearson of Actual Customer Behaviour in the UK showed sequences shot (with permission) in viewer homes on both sides of the Atlantic, analyzed by the 1-3-9 Media Lab (example above left). Viewers’ use of other media while watching TV might shock, but old photos of families gathered around the television often depicted newspapers and books in hand.

It wasn’t only legacy viewing that was challenged at the retreat. Do cameras need lenses? There was a mention of meta-materials-based computational imaging.

Do cameras need to move to change viewpoint, or can that be done in post? Below are two slides from “The Design of a Lightfield Camera,” a presentation by Siegfried Foessel of Germany’s Fraunhofer Institut (as shot off the screen by Adam Wilt for his coverage of the retreat). Look at the left of the top of the chair and what’s behind it.

Are lightfield cameras with computational sensors the future? Will artificial-DNA-based archives replace all other media? Will U.S. broadcasters finally stop protecting legacy 4:3 TV screens? Plan now to attend the 2014 HPA Tech Retreat, the week of February 17-21 at the Hyatt Regency in Indian Wells, California.
Disclosure: I have received compensation from HPA for my role in helping to put the Tech Retreat together.

Smellyvision and Associates

February 25th, 2012 | No Comments | Posted in 3D Courses, Schubin Cafe

What is reality? And is it something we want to get closer to? Take a look at the picture of a cat above, as printed on a package of Bell Rock Growers’ Pet Greens® Treats. Does it look unreal? Distorted? Is it?

At this month’s HPA Tech Retreat in Indian Wells, California (shown above in a photo by Peter Putman as part of his coverage of the event), there was much talk about getting closer to reality by using images with higher resolution, higher frame rate, greater dynamic range, larger color gamut, stereoscopic sensation, and even surround vision. The last was based on a demonstration from C360 Technologies. Another demo featured Barco’s Auro-3D enveloping sound technology. In the main program, vision scientist Jenny Read explained how stereoscopic 3D in a cinema auditorium can’t possibly work right and why we think it does. And then there were the quizzes.

All of them related to the introduction of image and sound technologies at various World’s Fairs. Although the dates ranged from 1851 to the late 20th century, more than one quiz related to technologies introduced at the 1900 Paris Exposition. It stands to reason.

At that one event, people could attend sync-sound movies and watch large-format high-resolution movies on a giant screen. They could also experience reality simulations: an “ocean voyage” on a motion platform with visual effects called the Mareorama (depicted at left), a “train trip” on the Trans-Siberian Railway using spatial motion parallax (with one image belt moving at 1000 feet per minute!), and a “flight above the city” in the surround-projection-based Cinéorama (shown below, with synchronized projectors under the audience). At the same fair, they could also hear sound broadcasting of music (with no radios required) and even try out the newly coined word television.

Well over a century later, we still have sound broadcasting (though receivers are now required), we still watch sync-sound movies, and we still use the word television. There are still large-format, large-screen, surround-vision, and moving-platform experiences, but they tend to be at, well, World’s Fairs, museums, and other special venues.

There was a time when at least 70-mm film was used as a selling point for some Hollywood movies and the theaters where they were shown. And then it wasn’t. The audience’s desire for quality didn’t seem to justify the additional cost. The digital-cinema era started at lower-than-home-HD resolution but is now moving towards “4K,” more than twice the linear resolution of the best HD (the 4K effects and workflows of The Girl with the Dragon Tattoo were discussed at the HPA Tech Retreat).

Back in the publicized 70-mm film era, special-effects wizard, inventor, and director Douglas Trumbull created a system for increasing temporal resolution in the same way that 70-mm offered greater spatial resolution than 35-mm film. It was called Showscan, with 60 frames per second (fps) instead of 24.

The results were stunning, with a much greater sensation of reality. But not everyone was convinced it should be used universally. In the August 1994 issue of American Cinematographer, Bob Fisher and Marji Rhea interviewed a director about his feelings about the process after viewing Trumbull’s 1989 short, Leonardo’s Dream.

“After that film was completed, I drew a very distinct conclusion that the Showscan process is too vivid and life-like for a traditional fiction film. It becomes invasive. I decided that, for conventional movies, it’s best to stay with 24 frames per second. It keeps the image under the proscenium arch. That’s important, because most of the audience wants to be non-participating voyeurs.”

Who was that mystery director who decided 24-fps is better for traditional movies than 60-fps? It was the director of the major features Brainstorm and Silent Running. It was Douglas Trumbull.

As perhaps the greatest proponent of high-frame-rate shooting today, Trumbull was more recently asked about his 1994 comments. He responded that a director might still seek a more-traditional look for storytelling, but by shooting at a higher frame rate that option will remain open, and the increased spatial detail offered by a higher frame rate will also be an option.

That increased spatial detail is shown at left in a BBC/EBU simulation of 50-fps (top) and 100-fps (bottom) images based on 300-fps shooting. Note that the tracks and ties are equally sharp in both images; only the moving train changes. The images may be found in the September 2008 BBC White Paper on “High Frame-Rate Television.”

Trumbull is a fan of using higher frame rates, especially for stereoscopic 3D (his Leonardo’s Dream was stereoscopic). Such other directors as James Cameron and Peter Jackson have joined that approach. And at the SMPTE International Conference on Stereoscopic 3D in June, Martin Banks of UC-Berkeley’s Visual Space Perception Laboratory explained strobing effects that can occur in S3D viewing.

A hit of the 2012 HPA Tech Retreat, however, in both the main program and the demo area, was the Tessive Time Filter, a mechanism for eliminating (or at least greatly reducing) strobing effects without changing frame rate. It applies appropriate temporal filtering in front of the lens — essentially any lens. Because the filtering is temporal, it does not affect the sharpness of items that are stationary relative to the image sensor. Above right is an image illustrating a “compensator” plug-in for Apple’s Final Cut Pro “to achieve the best possible representation of time in your footage” (when the green word “Compensated” appears at the bottom right, the compensator is on).

That’s frame rate and resolution. Visual dynamic range (from brightest to darkest) and color gamut were also topics at the 2012 HPA Tech Retreat, primarily in Charles Poynton’s seminar on the physics of imaging displays and presentation on high-dynamic-range imaging, in a panel discussion on laser projection, and in Dolby’s high-dynamic-range monitoring demonstrations.

Poynton noted a conflict between displays that can “create” their own extended ranges and gamuts and the intentions of directors. He also noted that in medical imaging, where gray scale and color can be critical, there are standards that don’t exist in consumer television. But that doesn’t mean medical imaging is closer to reality. In fact, it might be nice for a tumor otherwise invisible to show up very obviously, like a clown’s red nose.

Above left is another scientific image, the National Oceanic and Atmospheric Administration’s satellite image of cloud cover over the U.S. this morning, at a very clear time. Rest assured that the air did not look green, yellow, and brown at the time. Sometimes reality is not desirable.

Consider the cat at the top of this post. Its unusual look is intentional, something to grab a shopper’s attention. But it’s actually not unrealistic.

Try holding your hand about a foot in front of your face and note its apparent size. Now move it two feet away. It looks smaller, but not half the size. Yet the “real” image of the hand on your retina is half the size.
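The halving can be checked with a little trigonometry: the retinal image scales with the visual angle the object subtends at the eye. A sketch, assuming a hypothetical 7-inch hand (any size gives nearly the same ratio):

```python
import math

def visual_angle_deg(size, distance):
    """Angle subtended at the eye by an object, in degrees."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

near = visual_angle_deg(7, 12)  # hand one foot (12 inches) away
far  = visual_angle_deg(7, 24)  # hand two feet away
print(near / far)               # ~1.96: the retinal image is nearly halved
```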

Reality is even more complex. We track different moving objects at different times, changing what looks sharp or blurry. We focus on objects at different depths in a scene, unlike a camera (regarding stereoscopic 3D perception, at the HPA Tech Retreat Read noted that although a generation that grows up with S3D imagery might not experience today’s S3D viewing difficulties neither might they find S3D exciting). We can see 360 degrees in any direction (by moving our heads and bodies, if necessary). We can also hear sounds coming from any direction. And then there are our other senses.

At the 2010 International Broadcasting Convention in Amsterdam, the Korean Electronics and Telecommunications Research Institute demonstrated what they called “4D TV” (diagram above). When there was a fire on screen, viewers felt heat. When there was the appearance of speed on screen, viewers felt the rush of air across their faces. During an episode reminiscent of a news event in which an athlete was struck, viewers felt a blow on their legs. And there were also scents.

“There may come a time when we shall have ‘smellyvision’ and ‘tastyvision’. When we are able to broadcast so that all the senses are catered for, we shall live in a world which no one has yet dreamt about.”

That quotation by Archibald Montgomery Low appeared in the “Radio Mirror” of the (London) Daily News on December 30, 1926. Much more recently (June 14 of last year), the Samsung Advanced Institute of Technology and the University of California – San Diego’s Jacobs School of Engineering jointly announced the development of something that might sit on the back of a TV set and generate “thousands of odors” on command. But that raises the reality issue, again. Do we really want to smell what the sign above left depicts?

Archibald Low was an interesting character. He was inducted posthumously into the International Space Hall of Fame as the “father of radio guidance systems” and was one of the founders and presidents of the British Interplanetary Society, but he was also (among many other posts and appointments) fellow and president of the British Institute of Radio Engineers, fellow of the Chemical Society, fellow of the Geographical Society, and chair of the Royal Automobile Club’s Motor Cycle Committee (he built and arranged the demonstration of a rocket-powered motorcycle, above right).

Besides that motorcycle, he also developed drawing tools, a well-selling whistling egg boiler, and what was probably the first drone aircraft not carrying a pilot. But two other aspects of Low’s long and varied career might be worth considering.

In 1914, he demonstrated, first to the Institute of Automobile Engineers and later at Selfridge’s Department Store, something he called “televista” but probably better described in the title of his presentation, “Seeing by Wireless.” And, in a 1937 book, he wrote, “The telephone may develop to a stage where it is unnecessary to enter a special call-box. We shall think no more of telephoning to our office from our cars or railway-carriages than we do today of telephoning from our homes.” So he wasn’t too bad at predictions.

“Smellyvision”? Who knows? But, if we’re lucky, it won’t bring us any closer to reality.


Update: Schubin Cafe: Beyond HD: Resolution, Frame-Rate, and Dynamic Range

February 9th, 2012 | No Comments | Posted in Download, Today's Special

You can download the PowerPoint presentation by clicking on the title:

SchubinCafe_Beyond_HD.ppt (7.76 MB)


You can download the mov file of the webinar by clicking on the title:



NAB 2011 Wrapup, Washington, DC SMPTE Section, May 19, 2011

June 1st, 2011 | No Comments | Posted in Download, Today's Special

NAB 2011 Wrapup
Washington, DC SMPTE Section
May 19, 2011

(38 slides / 43 minutes)

