
HDR: The Bottom Line by Mark Schubin

September 1st, 2016 | No Comments | Posted in Download, Schubin Cafe

This is a modified version of the original presentation given at the 2016 HPA Tech Retreat on February 18, 2016.

High Dynamic Range (HDR) imagery offers the most bang for the bit in viewing tests.  Equipment is available, and issues are being worked out.  What happens in theaters and homes, however, is a different matter.

Download: HDR: The Bottom Line by Mark Schubin (TRT 5:07)



NAB 2011 Wrapup, Washington, DC SMPTE Section, May 19, 2011

June 1st, 2011 | No Comments | Posted in Download, Today's Special

NAB 2011 Wrapup
Washington, DC SMPTE Section
May 19, 2011

PPT:
http://www.schubincafe.com/wp-content/uploads/2011/05/Schubin_NAB_2011.ppt
(38 slides / 43 minutes)

PLEASE VIEW IN SLIDE SHOW MODE TO ACTIVATE AUDIO


How Good Is Good Enough?

April 30th, 2011 | No Comments | Posted in 3D Courses, Schubin Cafe

As usual, there were many new, useful products announced at this month’s annual convention of the National Association of Broadcasters (NAB) in Las Vegas. As usual, there were also many new trends, one sparked by the U.S. Congress and another by last month’s earthquake & tsunami in Japan.

At the event’s Digital Cinema Summit, not only 3D but also higher frame rates, greater spatial resolutions, and increased bit depths and color gamuts were discussed. Yet the announcement that startled me most was near the beginning of Panasonic’s press conference.

Normally, I don’t pay much attention to manufacturer sales announcements. They might indicate real interest in a product, but the sales could also be the result of many other factors, including sweetheart deals and existing infrastructure.

Panasonic’s announcement was about the 2012 Olympic Games in London. Like the Super Bowl and other grand events, the quadrennial Olympics are opportunities to showcase new video technologies. At the 1984 Games, for example, Panasonic introduced its fluorescent-discharge-tube-based Astrovision giant color screens with pictures visible in broad daylight.

What new technology might the company provide for the world’s top sporting event, taking place more than a year after NAB 2011? Might it be something to do with 3D? Panasonic introduced a new integrated 3D camcorder at the show, the AG-3DP1, with larger image sensors (1/3-inch format), greater-range lenses (17x), and AVC-Intra recording onto dual P2 solid-state memory cards.

Might it be something to do with AVC-Ultra, the company’s highest-grade video bit-rate-reduction system, capable of dealing with 1080-line HD at 60 progressively-scanned pictures per second or other signal types including 3D and Hollywood’s 2K 4:4:4? Might it be something beyond even that?

Alas, no. The startling (to me) announcement was that “the official recording format for capturing the London 2012 Olympic Games,” as specified by Olympic Broadcasting Services London (OBSL), the host broadcaster, will be–ready?–DVCPRO HD. Next year’s NAB show will be the 13th annual equipment show since that format was introduced (and the 14th since it was announced).

As the image above right indicates, DVCPRO HD was introduced as a tape-cassette-based recording format (although Panasonic noted that OBSL “will also use the P2 HD series with solid-state memory cards”). Like HDCAM before it, DVCPRO HD is also a sub-sampling recording format; it doesn’t capture full horizontal resolution even in luma (brightness detail). But it would appear that OBSL considers it good enough.

“Good enough” was a phrase that came to my mind at many places on the NAB Show exhibit floor this year. Consider Sony’s new OLED reference monitors. The BVM series was introduced at February’s Hollywood Post Alliance (HPA) Tech Retreat. The monitors have a slight color shift with viewing angle but otherwise seem ideal for the production-truck market, where a 42-inch plasma screen in video control is generally out of the question. And their price is in the range of similarly sized reference monitors using other technologies.

At NAB 2011, Sony expanded its offerings with a PVM OLED series at less than a quarter of the price (a discount of about 77%). Not only that, but the PVM monitors are even thinner than the BVM and include built-in controls.

Obviously, there have to be some drawbacks, given the extreme price difference. The signal processing in the PVM is not as high in quality as in the BVM, the flexibility is limited, and the OLED panels for the PVM are chosen from the manufactured stock after the top-of-the-line BVM panels have been selected and removed.

That might mean a bit less color-shift-free viewing angle. But another flaw was mentioned for the PVM panels: possible dead pixels.

In a sense, that’s no different from what Sony has done since its first chip-based cameras. Perfect image sensors went into the broadcast series, slightly flawed into the professional series, and more flawed into the consumer series. In cameras, however, bad pixels can be effectively “removed” by taking an average of the good pixels around them. In a display panel, there is nothing between the dead pixel and the eye to do any averaging (though Sony promised bad pixels would be off, never on).
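The neighbor-averaging the paragraph mentions can be sketched in a few lines. This is a generic illustration of the technique, not Sony’s actual algorithm; the function name and the 3×3 neighborhood are my own assumptions.

```python
import numpy as np

def correct_dead_pixel(frame, row, col):
    """Replace one known dead pixel with the mean of its live 3x3 neighbors."""
    r0, r1 = max(row - 1, 0), min(row + 2, frame.shape[0])
    c0, c1 = max(col - 1, 0), min(col + 2, frame.shape[1])
    patch = frame[r0:r1, c0:c1].astype(float)
    total = patch.sum() - float(frame[row, col])  # exclude the dead pixel itself
    count = patch.size - 1
    fixed = frame.copy()
    fixed[row, col] = total / count
    return fixed

sensor = np.full((4, 6), 128.0)
sensor[1, 3] = 0.0  # a dead pixel, stuck off (as Sony promised: off, never on)
print(correct_dead_pixel(sensor, 1, 3)[1, 3])  # 128.0
```

Real cameras apply a correction like this from a factory-calibrated defect map; a display panel, as noted above, has no such processing between the dead pixel and the eye.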

In the choice between Sony’s BVM and PVM OLED monitors, the trade-off is clearly between cost and quality. At some other exhibits at NAB 2011, the parameters were less obvious. Consider, for example, the 52-inch “Professional 3D display” from Dimenco shown at the Triaxes Vision booth. It was said to have a “stunning and crystal-clear 3D image.”

From a 3D perspective, the autostereoscopy (glasses-free 3D) was superb. The image could easily be fused into 3D, and there was a broad viewing angle. The reason that part of the viewing experience was so good is that the display used 28 different views created from “2D-plus-Depth” information. Unfortunately, the display starts with ordinary HD resolution of 1920 pixels across. Divide that by 28 views, and you get some idea of how not-exactly-crystal-clear I found the resulting image.
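The division the paragraph invites is worth doing explicitly. Real multiview panels also spread views across rows and subpixels, so this one-liner is only the rough indication the text intends:

```python
panel_width = 1920   # horizontal pixels of the HD panel
views = 28           # autostereoscopic views created from 2D-plus-Depth
per_view = panel_width / views
print(f"{per_view:.1f} horizontal pixels per view")  # 68.6
```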

That might be an extreme example, but there were many others at the show. Almost every 3D display there traded off either spatial resolution (in passive-glasses systems) or temporal resolution (in active glasses) or both.

Almost every display did that. One that did not could be found at the Calibre exhibit in the North Hall. Among other products, Calibre makes scalers, and their PremierViewProHD-IW includes what the company calls “3D Left/Right Extraction & Alignment for Passive 3D Projection Systems.”

In brief, the scalers take the “frame-packed” 3D signal from a Blu-ray disc and convert it to two, separate HD signals, one for the left eye and one for the right. Each signal is fed to its own projector, simple polarizing filters are clamped in front of the projection lenses, and simple passive glasses are used for viewing, with no loss of spatial or temporal resolution.
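The unpacking step can be sketched conceptually. This assumes the HDMI 1.4-style frame packing for 1080p, in which the left-eye frame sits above the right-eye frame with 45 lines of active space between them (1080 + 45 + 1080 = 2205 lines total); the function name and details are my own illustration, not Calibre’s implementation.

```python
import numpy as np

ACTIVE, GAP = 1080, 45  # assumed HDMI 1.4 frame-packing geometry for 1080p

def unpack_frame_packed(packed):
    """Split one (2205, 1920, 3) frame-packed image into two full-HD frames."""
    assert packed.shape[0] == 2 * ACTIVE + GAP
    left = packed[:ACTIVE]          # top: left-eye frame
    right = packed[ACTIVE + GAP:]   # bottom: right-eye frame, after the gap
    return left, right

packed = np.zeros((2 * ACTIVE + GAP, 1920, 3), dtype=np.uint8)
left, right = unpack_frame_packed(packed)
print(left.shape, right.shape)  # (1080, 1920, 3) (1080, 1920, 3)
```

Each output then drives its own projector, so neither spatial nor temporal resolution is sacrificed.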

The system might be used for viewing 3D dailies. That would require a relatively inexpensive way to create 3D Blu-ray discs. That’s what Pico House’s Easy 3D does. It requires only a laptop with a BD-RE drive. The trade-off on this one is that its input format is AVCHD–ideal for a small, relatively inexpensive camcorder like Panasonic’s AG-3DA1, not so good for systems recording on other formats.

Is AVCHD good enough for dailies? Is any bit-rate-reduced format good enough for mastering? I’ll get to those questions in part II.


Ex uno plures

March 27th, 2011 | No Comments | Posted in 3D Courses, Schubin Cafe

[Photo: HPA breakfast roundtable, copyright Morningstar Productions 2011]

There were many wonders at the 17th-annual HPA Tech Retreat in February in the California desert. And many of the more-than-500 attendees at the Hollywood Post Alliance were left wondering. One thing they wondered about was how to accommodate all viewers from a single master or feed.

As usual, many manufacturers introduced new products at the event (it’s where, in the past, Panasonic first showed its Varicam and Sony first showed HDCAM SR). But this year even the best products gave one pause.

Consider, for example, the Kernercam 3D rig, shown at left. It is transportable from set to set in three relatively small packing cases (far left). It takes just a few minutes to go from those cases to shooting. Each individual camera subassembly (bottom right of the image at left, shown with a Sony P1 camera) is pre-adjusted to the desired stereoscopic alignment parameters. After that, the two camera modules (with almost any desired cameras) just snap into the overall rig, with no readjustment necessary. The mounts are so rugged that repeatedly snapping cameras in and out or even hitting them does not change the 3D alignment.

That’s great, right? For many purposes, it probably is. But some stereoscopic camera-rig manufacturers, such as 3ality, are justifiably proud that their rigs do not use fixed alignment and can, therefore, be adjusted even during shots.

The choice of a super-rugged, fixed mount or a less-rugged, remotely adjustable mount is just that, a choice, and directors, cinematographers, & videographers have been making choices all their professional lives. The result of those choices adds up to a desired effect. Or does it?

Sony also introduced new products at this year’s HPA Tech Retreat. One, SR Memory, with the ability to store up to a terabyte of data on a solid-state memory “card” and a transfer rate allowing four live uncompressed HD streams simultaneously, falls into that category of choice. It’s also a wonder of new technology (though retreat attendees were given a preview in 2010, as shown in the picture at right, from Adam Wilt’s excellent coverage of last year’s HPA Tech Retreat, http://provideocoalition.com/index.php/awilt/story/hpatr2010_4/P1/).

Another new Sony introduction, OLED reference monitors, might have introduced a different kind of wonder. Some in attendance were delighted by what seemed like perfect image reproduction in something that (in one size, at least) will fit in a standard equipment rack. Others thought that existing larger devices already offer sufficiently good reference monitoring.

[Photo copyright Morningstar Productions 2011]

The way Sony conducted its demonstration, the new monitor was placed between Sony’s own reference-grade LCD and CRT monitors. With 24-frame-per-second source material, the CRT image flickered perceptibly. In black image areas, the LCD was noticeably lighter. The OLED suffered from neither problem. But is that necessarily good?

Many home viewers still watch TV on picture tubes. Many others watch on LCD displays. Others watch plasma or DLP. Some view images roughly 60 times a second, others 120, 240, or even 480 times a second. Some watch TV in dimly lit living rooms. Others watch on mobile devices outdoors in the sun. Still others watch content shot with the same cameras on giant projection screens in cinema auditoriums or even bigger LED screens in sports stadiums. The problem is that we are no longer shafted.

We were originally shafted in 1925 — literally! In that year, John Logie Baird was probably the first person to achieve a recognizable video image of a human face. A picture of the apparatus he used is shown at right. At far right is the original subject, a ventriloquist’s dummy’s head called Stooky Bill. The spinning disks on the shaft were used for image scanning. But the shaft extended from the camera section to a display section in the next room. It was impossible to be out of sync.

Another television pioneer was Philo Taylor Farnsworth, probably the first person to achieve all-electronic television (television in which neither the camera nor the display use mechanical scanning). His first image, in 1927, was a stationary dollar sign.

Although Farnsworth deserves credit for achieving all-electronic television, he was not the first to conceive it. Boris Rosing came up with the picture tube in 1907 in Russia, and the following year Alan Archibald Campbell Swinton came up with the concept of all-electronic television in Britain. His diagram (left) was published a few years later. Although the idea of tube-based cameras might seem strange today, the first video camera to be shown at an NAB exhibit that did not use a tube didn’t appear until 1980 (and then only in prototype form), and tubeless HD cameras didn’t begin to appear until 1992.

Tube-based cameras and TVs with picture tubes didn’t have the physical shaft of Baird’s first apparatus, but they were still effectively shafted. When the electron beam in the camera’s tube(s) was at the upper left, the electron beam in the viewer’s picture tube was in the same position. Tape could delay the whole program, but it didn’t change the relationship.

The introduction of solid-state imaging devices changed things. An image might be captured all at once but displayed a line at a time, resulting in “rubbery” table legs as a camera panned past them. Camera tubes and solid-state imaging devices also had other differences. We’ve learned to work with those differences as well as the ones between different display technologies.

Now there’s 3D. I’ve written before about 3D’s other three dimensions, and their effect on depth perception: pupillary distance (between the eyes, especially different between adults and children), screen size, and viewing distance. See, for example, http://www.schubincafe.com/2010/03/14/the-other-three-dimensions-of-3dtv/. There are other issues associated with individual viewers, who might be blind in one eye, stereo blind, have limited fusion ranges (depths at which the two stereoscopic images can fuse into one), long acquisition times (until fusion occurs), etc.
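The effect of those three dimensions can be seen in the standard similar-triangles model of stereoscopic depth. This is an idealized sketch; the function name is mine, and the 65 mm and 50 mm figures are commonly cited adult and child interpupillary distances used here only for illustration.

```python
def perceived_distance(parallax_mm, viewing_distance_mm, ipd_mm):
    """Idealized similar-triangles model of stereoscopic depth.

    parallax_mm: on-screen separation of a point's left- and right-eye
    images (positive = point appears behind the screen plane).
    Returns the distance from the viewer at which the sight lines
    converge, or None when parallax >= IPD (eyes would have to diverge).
    """
    if parallax_mm >= ipd_mm:
        return None  # beyond infinity: unfusable
    return viewing_distance_mm * ipd_mm / (ipd_mm - parallax_mm)

# The same 20 mm positive parallax viewed from 2 m:
adult = perceived_distance(20, 2000, 65)  # adult IPD ~65 mm
child = perceived_distance(20, 2000, 50)  # child IPD ~50 mm
print(round(adult), round(child))  # 2889 3333 -- the child sees it deeper
```

Screen size enters the same way: enlarging the picture enlarges the on-screen parallax proportionally, so one master lands at different perceived depths on different screens.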

There are also display-technology issues. One is ghosting. A presentation in the HPA Tech Retreat’s main program was called “Measurement of the Ghosting Performance of Stereo 3D Systems for Digital Cinema and 3DTV,” presented by Wolfgang Ruppel of RheinMain University of Applied Sciences in Germany. Ruppel presented test charts used to measure various types of ghosting for commonly used cinema and TV display systems. A trimmed version of one of his slides appears at left. It’s taken (with permission) from Adam Wilt’s once-again excellent coverage of the 2011 HPA Tech Retreat (which includes the full slides and the names of the stereoscopic display systems, http://provideocoalition.com/index.php/awilt/story/hpa_tech_retreat_2011_day_4/).

Ruppel’s paper also looked at the effects of ghosting suppression systems and noted color shifting. Some systems shifted colors towards yellow, others towards blue, and at least two systems shifted the colors differently for the two eyes! Can one master recording deliver accurate color results to cinemas when one auditorium might use one 3D display system and another a different one?

In one of the demo rooms, SRI (Sarnoff Labs) demonstrated a different test pattern for checking stereoscopic 3D parameters. It is shown above with the left- and right-eye views side by side. The crosstalk (ghosting) scale is shown at right in a demonstration of the way it would look with 4% crosstalk. The pattern can also be used to check synchronization between eye views, using the small, moving white rectangles shown just to the right of center below the eye-view identification.
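Crosstalk of the kind the scale demonstrates is often modeled as a simple linear leak of one eye’s image into the other. This is a generic textbook model, not SRI’s metric, and definitions vary (some omit the attenuation of the intended image):

```python
def apply_crosstalk(intended, unintended, leak):
    """Linear crosstalk model: each eye sees its own image plus a
    fraction (`leak`) of the other eye's image."""
    return (1 - leak) * intended + leak * unintended

# A full-white feature (1.0) in the other eye ghosting into a black
# region (0.0) of this eye, at the demo's 4% crosstalk level:
print(apply_crosstalk(0.0, 1.0, 0.04))  # 0.04
```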

There were other Sarnoff demonstrations, however, that indicated that synchronization of eye views is not as simple as making them appear when they are supposed to. Consider, for example, the current debate about the use of active glasses vs. passive glasses in 3DTVs.

Active glasses shutter the right eye during the left eye’s view and then shutter the left eye during the right eye’s view. Passive glasses usually involve a pattern of polarizers on the screen sending portions of the image (typically every other row) to one eye and the rest to the other (although there are also passive-glasses systems that use a full-image optical-retarder plate to alternate between left-eye and right-eye images).

Above are side-by-side right-eye and left-eye random-dot-type images used in another of the SRI demos. If you cross your eyes so they form a single image, you should see a circular disc, slightly to the right of the center, floating above the background.
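A pair like the one described can be generated with the classic random-dot construction: both eyes share the same random field except for a disc whose dots are shifted horizontally in one view. Here is a minimal sketch, my own construction rather than SRI’s:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_dot_pair(size=200, disc_radius=40, shift=4):
    """Left/right random-dot pair hiding a horizontally shifted disc."""
    base = rng.integers(0, 2, (size, size)).astype(np.uint8) * 255
    left, right = base.copy(), base.copy()
    yy, xx = np.ogrid[:size, :size]
    disc = (yy - size // 2) ** 2 + (xx - size // 2) ** 2 <= disc_radius ** 2
    shifted = np.roll(disc, -shift, axis=1)  # disc region moved `shift` px left
    right[shifted] = base[disc]              # copy the disc's dots to the new spot
    # Refill the strip the shift exposed with fresh random dots, so
    # neither eye alone shows any outline of the disc.
    exposed = disc & ~shifted
    right[exposed] = rng.integers(0, 2, int(exposed.sum())).astype(np.uint8) * 255
    return left, right

left, right = random_dot_pair()
print(left.shape, (left != right).any())  # (200, 200) True
```

Because the dots carry no shape information on their own, the disc is visible only when the two views are fused, which is exactly why such patterns isolate stereoscopic depth from every other cue.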

That’s a still image.  SRI’s demo had multiple displays of moving images.  One used active glasses and another simultaneous-image passive glasses.

When the sequence was set for the left- and right-eye views to move the disc simultaneously side to side, that’s exactly what viewers looking at the passive display saw. But, with the exact same signal feeding the active-glasses display, viewers of that one saw the disc moving in an elliptical path into and out of the screen as well as back and forth. With the selection of a different playback file, the Sarnoff demonstrators could make the active-glasses view be side to side and the passive-glasses view be elliptical.

[Photo copyright Morningstar Productions 2011]

The random-dot nature of the image assured that no other real-world depth cues could interfere. But how significant would the elliptical change be in real-world images?

That’s one thing SRI wants to figure out, so they can come up with a mechanism to rate the quality of stereoscopic images in the same way that their JND (just-noticeable differences) technology has been used to evaluate the quality of non-stereoscopic imagery in the era of bit-rate-reduced (“compressed”) recording and distribution.

It’s not easy to figure out. One SRI sequence of slowly changing depth caused one researcher to get queasy. As can be seen at left, however, it didn’t bother another viewer at all.

We’re just beginning to learn about the many factors that can affect both 2D (consider those CRT, OLED, and LCD displays at the Sony demo, as well as others not shown) and 3D viewing. But there’s no turning back.

The motto carried in the beak of the eagle on the Great Seal of the United States is often translated as “Out of Many, One.” The title of this post means “Out of One, Many,” the problem faced by those creating moving-image programming in the post-shafted era.

That’s the front of the Great Seal. The back has two more mottoes: One, Novus Ordo Seclorum, emphasizes the impossibility of returning to the shaft. We’re in “A New Order of the Ages.” The other, Annuit Coeptis, I choose to translate as “Might As Well Smile About These Undertakings.”

 

 


Sony OLED Reference Monitors

February 8th, 2011 | No Comments | Posted in Schubin Snacks

At next week’s HPA Tech Retreat in Rancho Mirage, California, Sony will introduce their TriMaster EL series of OLED reference monitors, the 16.5-inch BVM E170 and the 24.5-inch BVM E250. Here are some of their characteristics:

  • 30,000-hour panel life
  • better energy efficiency than even LCD
  • faster pixels than even CRT
  • more contrast than even CRT
  • P3 color gamut
  • thickness (BVM E250) of just 148 mm (as shown at right), not increased by rear cable connections
  • SD scaling improved over the BVM L series
  • HDMI, DisplayPort, and 3 Gbps HD-SDI
  • negligible processing latency

The new monitors will be unveiled Tuesday before what might be the world’s most critical audience, in the HPA’s hands-on demo area. And, given the other demonstrations and presentations at that event, you really should be there. Here’s the latest schedule: http://www.hpaonline.com/mc/page.do?sitePageId=122447&orgId=hopa

If by some freak of fate you cannot attend the HPA Tech Retreat, however, you’ll still be able to view the new Sony technology at the New York Public Television Quality Group Workshop on March 2. The workshops, funded by the Corporation for Public Broadcasting, have already enlightened audiences in San Francisco, St. Paul, Boston, and Nashville.

The upcoming New York workshop is not restricted to participants involved in public television. Here’s the agenda for that event, which will feature such production luminaries as Tom Holmes and Billy Steinberg: https://secure.connect.pbs.org/PbsDocuments/PBS/QualityGroup/events.html

I hope to see you at one or both of these great events.


3D and Not 3D: The Knowledge Returns

January 30th, 2011 | No Comments | Posted in 3D Courses, Schubin Cafe

Last year was a wonderful one for 3D.  In terms of worldwide and domestic box-office grosses, six of the top-10 movies released in 2010 were in 3D. And by year’s end there were almost two dozen models of integrated 3D cameras and camcorders and literally dozens of models of two-camera 3D rigs.

There’s just one problem: None of those 3D cameras or camera rigs — not a single one of them — was used to create any of those six top-10 3D movies. Four of the movies were animated, and the other two, including the second-highest grosser of the year, Alice in Wonderland, were converted from 2D to 3D in post production.

That’s not a fact that is frequently mentioned. But it will be mentioned next month at the 17th annual HPA Tech Retreat® in the (perhaps appropriately named) community of Rancho Mirage, California.

The first retreat predates even its sponsoring organization, the Hollywood Post Alliance. And, although it might seem natural that post-production processing of 3D is an appropriate topic for HPA, the retreat is limited to neither post nor Hollywood.

It has featured presenters from locations ranging from New Zealand to Norway and Argentina to Australia and from organizations ranging from broadcast networks to manufacturers, the military, and movie exhibitors. If someone there is from NATO, that could stand for the National Association of Theater Owners or the North Atlantic Treaty Organization (both have made presentations in the past). You’ll find more on the retreat in this earlier post: http://schubincafe.com/blog/2010/01/someone-will-be-there-who-knows-the-answer/

Stereoscopic 3D has been a prominent feature of the retreat for many years. Presenters on the topic have included Professor Martin Banks of the Visual Space Perception Laboratory at the University of California-Berkeley. Topics have included the BBC’s research on virtual stereoscopic cameras. And then there are the demonstrations.

For the 2008 retreat, HPA arranged to convert an auditorium at a local multiplex to 3D so participants could judge for themselves everything from the 3D Hannah Montana movie to different forms of 2D-to-3D conversions prepared by In-Three. JVC demonstrated the technology in its 2D-to-3D converter at the 2009 retreat, long before it turned into a product.

At that same retreat, RabbitHoles Media showed multiple versions of full-motion, full-color, high-detail holography (one is shown above right in a shot taken from Jeff Heusser’s coverage of the 2009 retreat for FXGuide.com http://www.fxguide.com/featured/HPA_Technology_Retreat_2009/; you can see it in motion here http://www.rabbitholes.com/entertainment-gallery/). At last year’s retreat, Dolby demonstrated 3D HD encoded at roughly 7 Mbps.

Virtual 3D and 2D-to-3D conversion are just two forms that will be discussed in a presentation called “Alternatives to Two-Lens 3D.” And here are some of the other 3D sessions that will be on this year’s program: 3D Digital Workflow, Avid 3D Stereoscopic Workflow, Live 3D: Current Workarounds and Needed Tools, 3D Image Quality Metrics, Subtitling for Stereographic Media, Will 3D Become Mainstream?, Single-Lens Stereoscopy, Home 3D a Year Later, Storage Systems for 3D Post, Measurement of the Ghosting Performance of Stereo 3D systems for Digital Cinema and 3DTV, and Photorealistic 3D Models via Camera-Array Capture. Participants will range from 3D equipment manufacturers to 3D distributors to the 3D@Home Coalition.

If the 2011 HPA Tech Retreat seems like a great 3D event, that’s probably because it is. But it’s a lot more, too. If you’re interested in advanced broadcast technology, for example, here are some of the sessions on that topic: ATSC Next-Generation Broadcast Television, Information Theory for Terrestrial DTV Broadcasting, Near-Capacity BICM-ID-SSD for Future DTTB, DVB-T2 in Relation to the DVB-x2 Family, the Application of MIMO in DVB, Hybrid MIMO for Next-Generation ATSC, 3D Audio Transmission, Next-Generation Handheld & Mobile, High-Efficiency Video Coding, Convergence in the UHF Band, Global Content Repositories for Distributed Workflows, Content Protection, Pool Feeds & Shared Origination, Multi-Language Video Description, Consumer Delivery Mayhem, Networked Television Sets, Interoperable Media, FCC Taking Back Spectrum, the CALM Act, Making ATSC Loudness Easy, Media Fingerprinting, Embracing Over-the-Top TV, and Image Quality for the Era of Digital Delivery.

Broadcast-tech presenters will come from, among others: ABC, CBS, Fox, NBC, PBS, Sinclair Broadcast Group, and NAB; ATSC, BBC, Canada’s CRC, China’s Tsinghua University, the European Broadcasting Union, Germany’s Technische Universität Braunschweig, Korea’s Kyungpook National University, and Japan’s NHK Science & Technology Research Labs; AmberFin, DTS, Linear Acoustic, Microsoft, Rohde & Schwarz, Roundbox, Rovi, and Verance; Comcast, Starz, and TiVo.

Not interested in 3D or broadcast? How about reference monitoring, with presentations on LCD, OLED, and plasma, new research results from Gamma Guru Charles Poynton, and an expected major new product introduction from a major manufacturer?

What about workflow? Warner Bros. will present their evaluation of 13 different workflows at a “supersession” on the subject. The supersession will feature major studios and post facilities and is expected to cover everything from scene to screen. If that’s not enough, there will be other sessions on interoperable mastering and interoperable media, file-based workflows, and “Hollywood in the Cloud.”

Interested in archiving? Merrill Weiss and Karl Paulsen will be presenting an update on the Archive Exchange Format, a large panel will discuss (and possibly argue about) the many aspects of LTO-5, and there will even be a session on new technology for archiving on, yes, film. At left are some images from Point.360 Digital Film Labs (left is the original and right is their film-archived version).

There will be much more: hybrid routing, consumer electronics update, Washington update, global content repositories and other storage networks, shooting with HD SLRs, movie restoration (including a full screening of a masterpiece), standards update, new audio technologies for automating digital pre-distribution processes — even surprises about cable bend radius. The full program may be found here: http://www.hpaonline.com/mc/page.do?sitePageId=122447&orgId=hopa

In short, whatever you might want to know about motion-image production and distribution and related fields, there will probably be somebody there who knows the answer. Is this information available elsewhere, at, say, a SMPTE conference?  Perhaps it is.  But next month, SMPTE’s executive vice president, engineering vice president, and director of engineering will all be at the HPA Tech Retreat.


Oh Less for OLED?

May 18th, 2010 | No Comments | Posted in Today's Special

OLED seems an ideal display technology except for two problems. One was lifetime, but the OLED monitors shown at last month’s NAB convention were said to be good for 20,000 to 35,000 hours of use.

The other was cost. Sony’s PVM740 seven-inch OLED display carries a suggested U.S. list price of $3,850.00.

MIT’s Technology Review reported yesterday, however, on a new DuPont printing process that might drop production costs considerably.  Here’s the story: http://www.technologyreview.com/computing/25337/?a=f
