
NAB 2016 Wrap-up

June 8th, 2016 | No Comments | Posted in Download, Schubin Cafe

Recorded on May 25, 2016 at the SMPTE DC “Bits by the Bay,” Managing Technologies in Transition at the Chesapeake Beach Resort & Spa.

TRT: 33:00 (52 MB)

Download link: NAB 2016 Wrap-up



Crowded Room, Empty Booth

April 20th, 2016 | No Comments | Posted in Schubin Snacks, Today's Special


Yesterday, at the 2016 NAB Show in Las Vegas, the National Association of Broadcasters’ big annual event, I visited “The ATSC 3.0 Consumer Experience” at the southernmost end of the corridor outside the upper level of the South Hall. It’s shown above in an artist’s rendering. It was actually bigger and more crowded, so crowded, in fact, that I had to queue to get into the audio room.

Slightly north of that ATSC 3.0 Consumer Experience area are the two main entrances to the South Upper exhibits. Each morning—especially Monday morning—there’s an even bigger queue waiting for the hall to open. North of those entrances is the exhibit of the Society of Motion Picture and Television Engineers (SMPTE), and north of that are two corridors of rooms where sessions and events take place.

I was on a panel in a session that ended at 3:30, and I was surprised to see a long queue doubling back in the sessions corridor and extending into the main corridor for a 4:00 event in the same room. When I emerged from the ATSC audio room, the queue had reached that far. I joined it, and we moved past a crowd of students at the SMPTE exhibit. By the time we got to the session room—the largest at the NAB Show—it was already filled beyond capacity. Its attendance has been described as “standing room only,” but that’s not true; there wasn’t really even room to stand. Anyone fainting could have been assured of remaining upright.


We had all come to watch the unveiling of Lytro Cinema. If you click on the image above and click again to blow it up, you’ll see the camera in the center: not the TV camera with an operator behind it, but the gigantic horizontal object at the front of the room, about the size of a large coffin and weighing more. Its statistics are mind-boggling: a sensor described as 755 RAW megapixels, 300 frames per second, and 16 stops of dynamic range.

More important, it’s a light-field camera. That means that, after the shooting is over and the talent has gone home, it is still possible, in post, to change the focus, adjust the depth of field, reframe the shot, do motion tracking, and much more. I oohed and ahhed with the rest of the crowd. But I also thought back to an uncrowded exhibit in the Central Hall a year earlier.

Then and there, I watched as, after production had been completed (on a different continent), a scene was re-lit. A spotlight moved around a performer. A background was made dimmer. There was a motion-tracking shot without any camera motion (a different version is shown in this earlier posting). Oh, and this rig was small and light enough to hold with one hand.


It was the Fraunhofer Institute’s exhibit, and they were again showing their light-field camera system (one version shown at right), which, like the Lytro Cinema demonstration a year later, had been used to shoot movie scenes (left). That was in 2015.

In previous years, the Fraunhofer light-field array provided images that, in post, were refocused and reframed, used depth-based keying, had their depth of field changed, and much more. They were shown at NAB Shows, the International Broadcasting Convention, and the HPA Tech Retreat, among other venues.

Great things can be found at the NAB Show in big, crowded rooms. But they can also be found in small, uncrowded exhibits.

 


What Will Be at NAB? Consider IBC

April 17th, 2016 | No Comments | Posted in Schubin Cafe

 

Like every other NAB Show, the upcoming 2016 edition will likely have lots of innovations. One I’m looking forward to is the new “autocolor” button on SoftPanels LED lights. Each light can measure the ambient lighting and adjust its output to match, whether the source is an incandescent table lamp or daylight streaming in a window.

I’m eager to see other innovations in other booths. But, for an idea of the major themes at the convention, there’s no preview like the previous fall’s International Broadcasting Convention (IBC) in Amsterdam. Cinegy brought the DeLorean time machine of Back to the Future to the show because 2015 was the year to which it brought the travelers in the second movie of the series. In some ways, though, IBC 2015 seemed more “Forward to the Past.”


Consider, for example, the format wars. Remember competing videotape formats and HDTV structures? Format wars are back. Cinegy has been pushing IP (internet protocol) connections instead of SDI (serial digital interface) for some time, but there wasn’t just one version of IP for television at IBC. NewTek, for example, introduced its open NDI (network device interface) at the show.

That’s for connecting devices. Squeezing their data through an IP pipe is a different issue, one with its own format wars. Higher spatial resolution (4K) seems to demand some form of bit-rate reduction (“compression”), preferably of a mild type (“mezzanine level”) so that it won’t affect image quality in production and post-production. TICO could be found at many booths (“stands” in IBC lingo), but so could many other options.


Higher spatial resolutions were the rage at IBC 2014. At the 2015 version, more attention seemed to be paid to higher dynamic range (HDR), which had its own format wars. The Philips version, requiring just 35 bytes per scene (not per frame or per second), was shown in one dark room (with the lights turned on for the photo below).


There were also many sessions about HDR, some covering HDR in movie theaters. Peter Ludé of RealD noted that their scientists separately measured the reflectivity of auditorium finishes (walls and carpet), seating, and people in typical theaters. The results showed that the people were the biggest contributors of light backscatter to the screen. He quipped, therefore, that, for best results, HDR movies should be shown in empty auditoriums.

There were cutting-edge technologies even for ordinary HDTV, from the BBC’s “Responsive Subtitles” (which increase comprehension by appearing with the speeds and rhythms of speech) to GATR’s inflatable portable satellite antennas. The Fraunhofer Institute, which at NAB showed the ability to re-light scenes in post thanks to their camera array, at IBC showed how tracking shots could be done without moving a camera.

Below is a composite shot, the scene outside the window added in post.

 

It looks like there was a camera move in the background, but, as the freeze in the video below indicates, the motion was all done in post-production.

 

What might Fraunhofer show at NAB 2016? What other goodies will be on the NAB show floor?

We’ll soon know, but right now, it’s like the World War II German encryption machine at the Rambus IBC stand: an enigma.


Where Are We Going, & How Did We Get Here (The Future) by Mark Schubin

May 19th, 2015 | No Comments | Posted in Download, Schubin Cafe

Recorded during “An Evening with Mark Schubin” at the SMPTE New England Section, Dedham Holiday Inn on May 14, 2015.

We’ve sort of made it into the era of digital cinema and HDTV. What’s next? 4K? 8K? higher frame rate? higher dynamic range? wider color gamut? more immersive sound? direct brain stimulation? Will we still need lenses? How about cameras? Will this list of questions ever end? As just one example, Mark promises to show pictures from flying cameras that don’t fly (or even exist). He also promises to explain why more contrast demands more frames per second.

Direct Link (109 MB / 1:11:28 TRT):
Where Are We Going, & How Did We Get Here (The Future) by Mark Schubin



An Eclectic View of IBC 2014

November 2nd, 2014 | No Comments | Posted in Download, Schubin Cafe

On exhibit floors that had products ranging from 8K cameras to automatic captioning, why were many visitors excited about Skype? At a conference where the title of one presentation began “Minimising nonlinear Raman crosstalk,” why did one press report comment on cinema-auditorium lighting and the gross receipts of one episode of one TV show?

Between bites of fresh raw herring, Mark Schubin wandered through IBC (moderating one conference session) and discovered those and more: for example, a 4K camera that can directly use long-range zoom lenses, a 3D display that doesn’t require either special glasses or a sweet viewing spot, the Holo-Deck, an immersive egg, the ability to zoom and dolly in post, and a fully accredited Wile E. Coyote.

Catching liars and thieves? Yes, there was that, too.

Direct Link (50 MB / 38:49 TRT): An Eclectic View of IBC 2014



NAB 2013 Wrap Up at the SMPTE DC chapter, May 23, 2013

June 2nd, 2013 | No Comments | Posted in Download, Today's Special

Mark Schubin’s NAB 2013 wrap up.

Presented to the SMPTE DC chapter, May 23, 2013.

Video (TRT 40:02)


Enabling the Fix

April 29th, 2013 | No Comments | Posted in Schubin Cafe


Sometimes clichés are true. Sometimes the check is in the mail. And sometimes you can fix it in post. Amazingly, the category of what you can fix might be getting a lot bigger.

At this month’s NAB Show, there was the usual parade of new technology, from Sonic Notify’s near-ultrasonic smartphone signaling for extremely local advertising, on the order of two meters or so (palmable transducer shown at left), to Japan’s National Institute of Information and Communications Technology’s TV “white space” transmissions per IEEE 802.22. In shooting, for those who like the large-sensor image characteristics of the ARRI Alexa but need the “systemization” of a typical studio/field camera, there was the Ikegami HDK-97ARRI (right), with the front end of the former and the back end of the latter.

Even where items weren’t entirely new, there was great progress to be seen. Dolby’s autostereoscopic (no glasses) 3D demo (left) has come a long way in one year. So has the European Project FINE, which can create a virtual-camera viewpoint almost anywhere, based on just a few normally positioned cameras. Last year, there was a lot of processing time per frame; this year, the viewpoint repositioning was demonstrated in real time.

If you’re more interested in displays, consider what’s been going on in direct-view LED video. It started out in outdoor stadium displays, where long viewing distances hid the individual LEDs. At NAB 2013, two companies, Leyard (right) and SiliconCore, showed systems with 1.9-mm pixel pitch, leaving the LED structure virtually invisible even at home viewing distances. Is “virtually” not good enough? SiliconCore also showed its new Magnolia panel, with a pitch of just 1.5 mm!

The Leyard display shown here (and at NAB) was so-called “4K,” with more than twice the number of pixels of so-called “Full HD” across the width of the picture. 4K also typically has 2160 active (picture-carrying) lines per frame, twice 1080, so it typically has four times the number of pixels of the highest-resolution form of HD.

4K was unquestionably the major unofficial theme on the NAB show floor, replacing the near-ubiquitous 3D of two years ago. There were 4K lenses, 4K cameras, 4K storage, 4K processing, 4K distribution, and 4K displays. Using a form of the new high-efficiency video codec (HEVC), the Fraunhofer Institute was showing visually perfect 4K pictures with their bit rates reduced to just 5 Mbps; with the approval of the FCC, that means it could be possible to transmit multiple 4K programs simultaneously in a single U.S. broadcast TV channel. But some other things in the same booth seemed to be attracting more attention, including ordinary HD images, shot by INCA, a tiny, 2.5-ounce “intelligent” camera, worn by an eagle in flight. The eagle is shown above left, the camera, with lens, at right. The seemingly giant attached blue rod is a thin USB cable.
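To put that 5 Mbps figure in perspective, here is a rough check; the roughly 19.39 Mbps payload of an ATSC 1.0 broadcast channel is my assumption, not a number from the demonstration:

```python
# Rough check, assuming the ~19.39 Mbps payload of an ATSC 1.0 channel
# (my figure, not one from the demonstration) and ignoring multiplex overhead.
atsc_payload_mbps = 19.39
hevc_4k_mbps = 5.0  # the bit rate Fraunhofer showed
print(int(atsc_payload_mbps // hevc_4k_mbps))  # -> 3 simultaneous 4K programs
```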

Throughout the show floor, wherever manufacturers were highlighting 4K, visitors seemed more interested in other items. The official theme of NAB 2013 was METAMORPHOSIS, with the “ME” intended to stand for media and entertainment, not pure self-interest. But most metamorphoses seemed to have happened before the show opened. Digital cinematography cameras aren’t new; neither are second-screen applications. Mobile DTV was introduced years ago. So was LED lighting.

There were some amazing new technologies discussed at NAB 2013, perhaps worthy of the metamorphosis label. But they weren’t necessarily on the show floor (at least not publicly exhibited). Attendees at the SMPTE Technology Summit on Cinema (TSC), for example, could watch bright, large-screen images from a laser projector.

The NAB show was vast, and the associated conferences went on for more than a week. So I’m going to concentrate on just one hour, a panel session called “Advancing Cameras for Cinema,” in one room, the SMPTE TSC, and how it showed the metamorphosis of what might be fixed in post.

Consider the origin of post: the first edit. And it was a doozy! It occurred in 1895 (and technically wasn’t exactly an edit). At a time when movies depicted real scenes, The Execution of Mary, Queen of Scots, in its 27-foot length (perhaps 17 seconds), depicts a living person being led to the chopping block. Then the camera was stopped, a dummy replaced the person, the camera started again, and the head was chopped off. It’s hard to imagine what it must have been like to see it for the first time back then. And, since 1895, much more has been added to the editing tool kit.

It’s now possible to combine different images, generate new ones, “paint” out wires and other undesirable objects, change colors and contrast, and so on. It’s even possible to stabilize jerky images and to change framing at the sacrifice of some resolution. But what if there were no sacrifice involved?

The first panelist of the SMPTE TSC Advancing Cameras session was Takayuki Yamashita of the NHK Science & Technology Research Labs. He described their 8K 120-frame-per-second camera. 8K is to 4K approximately as 4K is to HD, and 120 fps is also four times the 1080i frame rate. This wasn’t a theoretical discussion; cameras were on the show floor. Hitachi showed an 8K camera in a familiar ENG/EFP form (left); Astrodesign showed one dramatically smaller (right).

If pictures are acquired at higher resolutions, they may be reframed in post with no loss of HD resolution. With 8K, four adjacent full-HD-resolution images can be extracted across the width of the 8K frame and four from top to bottom. A shakily captured image that bounces as much as 400% of the desired framing can be stabilized in post with no loss of HD resolution. And the higher spatial sampling rate also increases the contrast ratio of fine detail.
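A quick sanity check on that arithmetic (standard 7680×4320 and 1920×1080 frame sizes assumed):

```python
# Sanity check: how many full-HD crops tile an 8K frame?
# Standard frame sizes assumed: 8K = 7680x4320, full HD = 1920x1080.
w_8k, h_8k = 7680, 4320
w_hd, h_hd = 1920, 1080
print(w_8k // w_hd, h_8k // h_hd)  # -> 4 4: a 4x4 grid of HD windows
```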


Contrast ratio was just one of the topics in the presentation, “Computational Imaging,” of the second panelist, Peter Centen of Grass Valley. Above is an image he presented at the SMPTE summit. The only light source in the room is the lamp facing the camera lens, but every chip on the reflectance chart is clearly visible and so are the individual coils of the hot tungsten filament. It’s an extraordinarily high dynamic range (HDR); a contrast ratio of about ten million to one — more than 23 stops — was captured.
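The stops figure follows directly from the ratio, since each photographic stop is a doubling of light:

```python
import math

# Each stop doubles the light, so stops = log2(contrast ratio).
contrast = 10_000_000  # "about ten million to one"
print(math.log2(contrast))  # -> ~23.25, i.e., "more than 23 stops"
```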

Yes, that was an image he presented at the SMPTE summit — five years ago in 2008. This year he showed a different version of an HDR image. There’s nothing wrong with the technology, but bringing it to the market is a different matter.

At the 2013 TSC, Centen showed an even older development, one first presented by an MIT-based group at SIGGRAPH in 2007 <http://groups.csail.mit.edu/graphics/CodedAperture>: a so-called “coded aperture.” Consider a point just in front of a camera’s lens. The lens might zoom in or out and might focus on something in the foreground or background. Its aperture might be wide open for shallow depth of field or partially closed for greater depth of field. If it’s a special form of lens (or lenses), it might even deliver stereoscopic 3D. All of those things might happen after the light enters the lens, but all of those possibilities exist in the “lightfield” in front of the lens.

There have been many attempts to capture the whole lightfield. Holography is one. Another, used in the Lytro still camera, uses a fly’s-eye type of lens, which can cut into resolution (an NAB demonstration a few years ago had to use an 8K camera for a low-resolution image). A third was described by the third panelist (and shown in his booth on the show floor). The one Centen showed requires only the introduction of a disk with a pattern of holes into the aperture of any lens on any camera.

Here is just one possible effect on fixing things in post, with images from the MIT paper. It is conceivable to change focus distance and depth of field, and to derive stereoscopic 3D, from any single camera-and-lens combination after the shot has been made (click on images to enlarge).

The moderator’s introduction to the panel showed a problem with higher resolutions: getting lenses that are good enough. He showed an example of a 4K lens (with just a 3:1 zoom ratio) costing five times as much as the professional 4K camera it can be mounted on. Centen offered possibilities of correcting both lens and sensor problems in post and of deriving 4K (or even 6K) from today’s HD sensors.

The third panelist, Siegfried Foessel of the Fraunhofer Institute, seemed to cover some of the same ground as Centen: using computational imaging to derive higher resolution from lower-resolution image sensors, increasing dynamic range, and capturing a lightfield. But his versions used completely different technology. The higher resolution and HDR can come from masking the pixels of existing sensors. And the Fraunhofer lightfield capture uses an array of tiny cameras not much bigger than one ordinary one, as shown in their booth (right). Two advantages of the multicamera approach are that each camera’s image looks perfect (with no fly’s-eye resolution losses or coded-aperture light losses) and that the wider range of lens positions also allows some “camera repositioning” in post (without relying on Project FINE processing).

Foessel also discussed higher frame rates (as did many others at the 2013 TSC, including a professor of neuroscience and an anesthesiologist). He noted that capturing at a high frame rate allows “easy generation of different presentation frame rates.” He also speculated that future motion-image programming might use a frame rate varying as appropriate.

The last panelist was certainly not the least. He was Eric Fossum from Dartmouth’s Thayer School of Engineering, but he was introduced more simply, as the inventor of the modern CMOS sensor. His presentation was about a “quanta image sensor” (QIS) containing, instead of pixels, “jots.” The simplest description of a jot is as something like a photosensitive grain of film. A QIS counts individual photons of light and knows their location and arrival time.

An 8K image sensor has more than 33 million pixels; a QIS might have 100 billion jots and might keep track of them a thousand times a second. The exposure curve seems very film-like. Fossum mentioned some other advantages, like motion compensation and “excellent low light performance,” although this is a “longer-term effort” and we “won’t see a camera for some time.”
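The scale is worth pausing over; a quick tally (the 8K pixel count assumes the standard 7680×4320 frame, and the jot figures are the ones quoted):

```python
# Scale of the numbers quoted: an 8K sensor's pixel count vs. QIS jot readouts.
pixels_8k = 7680 * 4320            # -> 33,177,600 ("more than 33 million")
jots = 100_000_000_000             # 100 billion jots
readouts_per_second = 1_000        # "a thousand times a second"
print(pixels_8k)                   # -> 33177600
print(jots * readouts_per_second)  # -> 100000000000000 single-bit reads/second
```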

The “convolution window size” (something like film grain size) can be changed after image acquisition. In other words, even the “film speed” will be adjustable in post.
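To make that concrete, here is a toy sketch, my own simplified model rather than anything from Fossum’s presentation, of how summing binary jot detections over a larger window trades resolution for effective sensitivity, much like choosing a faster, grainier film stock:

```python
import numpy as np

# Toy jot plane (not a real QIS model): True where a photon was detected,
# simulating a dim scene with a ~2% hit rate per jot per readout.
rng = np.random.default_rng(0)
jots = rng.random((4096, 4096)) < 0.02

def bin_jots(plane, window):
    """Sum jot hits over window x window blocks; a bigger window acts like
    coarser grain and a faster effective film speed."""
    h, w = plane.shape
    return plane.reshape(h // window, window, w // window, window).sum(axis=(1, 3))

slow_film = bin_jots(jots, 4)    # finer grain, fewer counts per site
fast_film = bin_jots(jots, 16)   # coarser grain, higher effective sensitivity
print(slow_film.shape, fast_film.shape)  # -> (1024, 1024) (256, 256)
```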


Advancing Cameras for Cinema (Panel Intro Only)

April 6th, 2013 | 2 Comments | Posted in Download, Today's Special

Advancing Cameras for Cinema Panel Discussion (Panel Intro Only)
NAB 2013
April 6, 2013

Video (3:43 TRT)


Leg Asea

March 1st, 2013 | No Comments | Posted in Schubin Cafe

2013 HPA Tech Retreat Broadcasters Panel: ABC, CBC, CBS, Ericsson, Fox, NAB, NBC, and PBS are shown (not in order); EBU, NHK, Sinclair, Univision, and locals were also present

Joe Zaller, a manager of the very popular (16,000-member) Television Broadcast Technologies group on LinkedIn, tweeted on February 21 from the 2013 HPA Tech Retreat in Indian Wells, California: “Pretty much blown away from how much I learned Wednesday at [the broadcasters] panel… just wish it had been longer.”

Adam Wilt, in his huge, six-part, 14-web-page (each page perhaps 20 screens long) coverage of the five-day event for ProVideoCoalition.com, put it this way: “When you get many of the best and brightest in the business together in a conference like this, it’s like drinking from a fire hose. That’s why my notes are only a faint shadow of the on-site experience. Sorry, but you really do have to be there for the full experience”: http://provideocoalition.com/awilt/story/hpa-tech-retreat-wrap-up

In his Display Central coverage, Peter Putman called it “one of the leading cutting-edge technology conferences for those working in movie and TV production”: http://www.display-central.com/free-news/display-daily/4k-in-the-desert-2013-hpa-tech-retreat/. The European Broadcasting Union’s technology newsletter noted of the retreat, held in the Southern California desert, “There were also many European participants at HPA 2013, in particular from universities, research institutes and the supplier industry. It has clearly become an annual milestone conference for technology strategists and experts in the media field”: http://tech.ebu.ch/news/new-media-technology-on-the-agenda-at-te-22feb13

It was all those things and more. HPA is the Hollywood Post Alliance, but the event is older than HPA itself. It is by no means restricted to Hollywood (presenters included the New Zealand team that worked on the high-frame-rate production of The Hobbit and the NHK lab in Japan that shot the London Olympics in 8K), and it’s also not restricted to post. This year’s presentations touched on lighting, lenses, displays, archiving, theatrical sound systems, and even viewer behavior while watching one, two, or even three screens at once.

It is cutting-edge high tech: the lighting discussed included wireless plasmas, the display brightnesses reached as high as 20,000 cd/m² (and as low as 0.0027), and the archiving included artificial, self-replicating DNA. And yet there was a recognition of a need to deal with legacy technologies as well. Consider that ultra-high-dynamic-range (HDR) display.

The simulator created by Dolby for HDR preference testing is shown at left, minus the curtains that prevented light leakage. About the only way to achieve sufficient brightness today is to have viewers look into a high-output theatrical projector. In tests, viewers preferred levels far beyond those available in today’s home or theatrical displays. But a demonstration at the retreat seemed to come to a different conclusion.

The SMPTE standard for cinema-screen brightness, 196M, calls for 16 foot-lamberts or 55 cd/m² with an open gate (no film in the projector). With film, peak white is about 14 fL or 48 cd/m², a lot lower than 20,000. Whether real-world movie theaters achieve even 48–especially for 3D–is another matter.
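Foot-lamberts and candelas per square meter convert at a fixed rate (1 fL ≈ 3.426 cd/m²), so the figures quoted here and in the next two paragraphs are easy to verify:

```python
# Foot-lamberts to candelas per square meter: 1 fL = 3.4263 cd/m^2.
FL_TO_NITS = 3.4263
for fl in (16, 14, 4.5, 12, 21, 28):
    print(fl, "fL =", round(fl * FL_TO_NITS, 1), "cd/m^2")
# -> 16 fL = 54.8 (~55), 14 fL = 48.0 (~48), 4.5 fL = 15.4 (~15),
#    12 fL = 41.1 (~41), 21 fL = 72.0 (~72), 28 fL = 95.9 (~96)
```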

During the “More, Bigger, But Better?” super-session at the retreat, a non-depolarizing screen (center at right) was set up, the audience put on 3D glasses, and scenes were projected in 3D at just 4.5 fL (15 cd/m²) and again at 12 fL (41 cd/m²). The audience clearly preferred the latter.

Later, however, RealD chief scientific officer Matt Cowan showed a scene from an older two-dimensional movie at 14, 21, and 28 fL (48, 72, and 96 cd/m²). This time, the audience (though not everyone in it) seemed to prefer 21 to 28. Cowan led a breakfast roundtable one morning on the question “Is There a ‘Just Right’ for Cinema Brightness?”

Of course, as Dolby’s brightness-preference numbers showed, a quick demo is not the same as a test, and people might be reacting simply to the difference between what they are used to and what they were shown. The same might be the case with reactions to the high-frame-rate (HFR) 48 frames per second (48 fps) of The Hobbit. When the team from Park Road Post in New Zealand showed examples in their retreat presentation, the material certainly looked different from 24-fps material, but whether it was better or worse was a subjective decision that will likely change with time. There were times when the introduction of sound or of color was also deemed detrimental to cinematic storytelling.

At least the standardized cinema brightness of 14 fL had a technological basis in arc light sources and film density. A presentation at the retreat revealed the origin of the 24-fps rate and showed that it had nothing to do with visual or aural perception or technological capability; it was just a choice made by Western Electric’s Stanley Watkins (left) after speaking with Warner Bros. chief projectionist Jack Kekaley. And we’ve gotten used to that choice for 88 years.

Today, of course, actual strands of film have nothing to do with the moving-images business. Or do they? Technicolor’s Josh Pines noted a newspaper story explaining the recent crop of lengthy movies by saying that digital technology lets directors go longer because they don’t have to worry about the cost of film stock. But Pines analyzed those movies and found that they had actually, for the most part, been shot on film.

Film is also still used for archiving. Major studio blockbusters, even those shot, edited, and projected electronically, are transferred, via a color-separation process, to three strands of black-&-white film for “just-in-case” disaster recovery, even though that degrades the image quality; b&w film is the only moving-image medium to have thus far lasted more than a hundred years.

At one of the 2013 HPA Tech Retreat breakfast roundtables (right), the head of archiving for a major studio shocked others by revealing that the studio was no longer archiving on film. At the same roundtable, however, studios acknowledged that, whenever a new restoration technology is developed, they go to the oldest available source, not a more recent restoration.

If film brightness, frame rate, and archiving are legacies of the movie business, what about television? There was much discussion at the retreat of beyond-HDTV resolutions and frame rates. Charles Poynton’s seminar on the technology of high[er] frame rates explained why some display technologies don’t have a problem with them while others do.

Other legacies of early television also appeared at the retreat. Do we still need the 1000/1001 (0.999000…) frame-rate-reduction factor of NTSC color in an age of Ultra-HD? It’s being argued in those beyond-HD standards groups today.
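That factor is the ratio 1000/1001, adopted in the 1950s to minimize visible beats between NTSC’s color and sound carriers, and it is why the familiar rates run slightly slow:

```python
# The NTSC rate-reduction factor is the ratio 1000/1001.
factor = 1000 / 1001         # = 0.999000999...
for nominal in (24, 30, 60):
    print(nominal * factor)  # -> 23.976..., 29.970..., 59.940...
```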

Interlace and its removal appeared in multiple presentations and even in a demo by isovideo in the demo room (a tiny portion of which is shown at left). As with film restoration from the original, the demo recommended archiving interlaced video as such and using the best-available de-interlacer when necessary. And there appeared to be a consensus at the retreat that conversion from progressive to interlace for legacy distribution is not a problem.

There was no such consensus about another legacy of early television, the 4:3 aspect ratio. One of the retreat’s nine quizzes asked who intentionally invented the 16:9 aspect ratio (for what was then called advanced television), what it was called, and why it was created. The answers (all nine quizzes had winners) were: Joseph Nadan of Philips Labs, 5-1/3:3, and because it was considered the minimum change from 4:3 that would be seen as a valuable-enough difference to make consumers want to buy new TV sets. But that was in 1983.

Thirty years later, the retreat officially opened with a “Technology Year in Review,” which called 2013 the “27th (or 78th) Annual ‘This Is the Year of HDTV.’” It noted that, although press feeds often still remain analog NTSC, both Nielsen and Leichtman research found in 2012 that 75% of U.S. households had HDTVs. Leichtman added that 3/5 of all U.S. TVs, even in multi-set households, were HDTVs. Some of the remainder, even if not HDTV, might have a 16:9 image shape. So why continue to shoot and protect for a 4:3 sub-frame of the 16:9?

On the broadcasters panel, one U.S. network executive explained the decision by pointing to other Nielsen data showing that, as of July 15 of 2012, although roughly 76% of U.S. households had HDTVs (up 14% from the previous year), in May only 29% of English-language broadcast viewing was in HD and only 25% of all cable viewing. Furthermore, much of the legacy equipment feeding the HDTV sets is not HD capable.

A device need not be HD capable, however, to be able to carry a 16:9 image. Every piece of 4:3 equipment ever built can carry a 16:9 image, even if 4:3 image displays will show it squeezed. So the question seems to be whether it’s better for a majority of U.S. TV sets to get the horizontally stretched picture above left or a minority to get the horizontally squeezed picture at right.
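The geometry behind that squeeze and stretch is simple aspect-ratio arithmetic:

```python
# A 16:9 picture carried in a 4:3 raster is squeezed horizontally by
# (4/3) / (16/9) = 3/4; stretching a 4:3 picture to fill 16:9 widens it by 4/3.
squeeze = (4 / 3) / (16 / 9)
print(squeeze)       # -> 0.75: circles become tall ovals on a 4:3 display
print(1 / squeeze)   # -> 1.333...: circles become wide ovals when stretched
```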

What do actual viewers think about legacy technologies? Two sessions scarily provided a glimpse. A panel of students, studying in the moving-image field, offered some comments that included a desire to text during cinema viewing. And Sarah Pearson of Actual Customer Behaviour in the UK showed sequences shot (with permission) in viewer homes on both sides of the Atlantic, analyzed by the 1-3-9 Media Lab (example above left). Viewers’ use of other media while watching TV might shock, but old photos of families gathered around the television often depicted newspapers and books in hand.

It wasn’t only legacy viewing that was challenged at the retreat. Do cameras need lenses? There was a mention of meta-materials-based computational imaging.

Do cameras need to move to change viewpoint, or can that be done in post? Below are two slides from “The Design of a Lightfield Camera,” a presentation by Siegfried Foessel of Germany’s Fraunhofer Institute (as shot off the screen by Adam Wilt for his ProVideoCoalition.com coverage of the retreat: http://provideocoalition.com/awilt/story/hpa-tech-retreat-day-4). Look at the left of the top of the chair and what’s behind it.

Are lightfield cameras with computational sensors the future? Will artificial-DNA-based archives replace all other media? Will U.S. broadcasters finally stop protecting legacy 4:3 TV screens? Plan now to attend the 2014 HPA Tech Retreat, the week of February 17-21 at the Hyatt Regency in Indian Wells, California.
——————————————————————————————————————-
Disclosure: I have received compensation from HPA for my role in helping to put the Tech Retreat together.