NAB 2015 Wrap-up by Mark Schubin
Recorded May 20, 2015
SMPTE DC Bits-by-the-Bay, Chesapeake Beach Resort
Direct Link (44 MB / TRT 34:01)
Once upon a time, people were prevented from getting married, in some jurisdictions, based on the shade of their skin colors. Once upon a time, a higher-definition image required more pixels on the image sensor and higher-quality optics.
Actually, we still seem to be living in the era indicated by the second sentence above. At the 2012 Hollywood Post Alliance (HPA) Tech Retreat, to be held February 14-17 (with a pre-retreat seminar on “The Physics of Image Displays” on the 13th) at the Hyatt Grand Champions in Indian Wells, California <http://bit.ly/slPf9v>, one of the earliest panels in the main program will be about 4K cameras, and representatives from ARRI, Canon, JVC, Red, Sony, and Vision Research will all talk about cameras with far more pixel sites on their image sensors than there are in typical HDTV cameras; Sony’s, shown at the left, has roughly ten times as many.
That’s by no means the limit. The prototypical ultra-high-definition television (UHDTV) camera shown at the right has three image sensors (from Forza Silicon), each one of which has about 65% more pixel sites than on Sony’s sensor. There is so much information being gathered that each sensor chip requires a 720-pin connection (and Sony’s image sensor is intended for use in just a single-sensor camera, so there are actually about five times more pixel sites). But even that isn’t the limit! As I pointed out last year, Canon has already demonstrated a huge hyper-definition image sensor, with four times the number of pixels of even those Forza image sensors used in the camera at the right <http://www.schubincafe.com/2010/09/07/whats-next/>!
Having entered the video business at a time when picture editing was done with razor blades, iron-filing solutions to make tape tracks visible, and microscopes, and when video projectors utilized oil reservoirs and vacuum pumps, I’ve always had a fondness for the physical characteristics of equipment. Sensors will continue to increase in resolution, and I love that work. At the same time, I recognize some of the problems of an inexorable path towards higher definition.
The standard-definition camera that your computer or smart phone uses for video conferencing might have an image sensor with a resolution characterized as 640×480 or 0.3 Mpel (megapixels), even if that same smart phone has a much-higher-resolution image sensor pointing the other way for still pictures. That’s because video must make use of continually changing information. At 60 frames per second, that 0.3 Mpel camera delivers more pixels in one second than an 18 Mpel sensor shooting a still image.
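The arithmetic behind that comparison is easy to check (the figures below are the nominal ones from the paragraph above):

```python
# Pixels delivered per second by a 640x480 video camera at 60 frames
# per second, versus a single 18-Mpel still photograph.
video_pixels_per_frame = 640 * 480            # 307,200 -- the "0.3 Mpel"
video_pixels_per_second = video_pixels_per_frame * 60

still_pixels = 18_000_000                     # one 18-Mpel exposure

print(video_pixels_per_second)                # 18432000
print(video_pixels_per_second > still_pixels) # True
```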
Common 1080-line HDTV has about 2 Mpels. So-called “4K” has about 8 Mpels. It’s already tough to get a great HDTV lens; how will we deal with UHDTV’s 33-Mpel “8K”?
A frame rate of 60 fps delivers twice as much information as 30 fps, and 120 fps twice as much as 60 fps. How will we ever manage to process high-frame-rate UHDTV?
Perhaps it’s worth consulting the academies. In U.S. entertainment media, the highest awards are granted by the Academy of Motion Picture Arts & Sciences (the Academy Award or Oscar), the Academies (there are two) of Television Arts & Sciences (the Emmy Award), and the Recording Academy (the Grammy Award). Win all three, and you are entitled to go on an EGO (Emmy-Grammy-Oscar) trip!
In the history of those awards, only 33 people have ever achieved an EGO trip. And only two of those also won awards from the Audio Engineering Society (AES), the Institute of Electrical and Electronics Engineers (IEEE), and the Society of Motion Picture and Television Engineers (SMPTE). You’re probably familiar with the last name of at least one of those two, Ray Dolby, shown at left during his induction into the National Inventors Hall of Fame in 2004.
The other was Thomas Stockham. Some in the audio community might recognize his name. He was at one time president of the AES, is credited with creating the first digital-audio recording company (Soundstream), and was one of the investigators of the 18½-minute gap in then-President Richard Nixon’s White House tapes regarding the Watergate break-in.
Those achievements appeal to my sense of appreciation of physical characteristics. The Soundstream recorder (right) was large and had many moving parts. And the famous “stretch” of Nixon’s secretary Rose Mary Woods (left), which would have been required to accidentally cause the gap in the recording, is a posture worthy of an advanced yogi (Stockham’s investigative group, unfortunately for that theory, found that there were multiple separate instances of erasure, which could not have been caused by any stretch). But what impressed (and still impresses) me most about Stockham’s work has no physical characteristics at all. It’s pure mathematics.
On the last day of the HPA Tech Retreat, as on the first day, there will be a presentation on high-resolution imaging. But it will have a very different point of view. Siegfried Foessel of Germany’s Fraunhofer research institute will describe “Increasing Resolution by Covering the Image Sensor.” The idea is that, instead of using a higher-resolution sensor, which increases data-readout rates, it’s actually possible to use a much-lower-resolution image sensor, with the pixel sites covered in a strange pattern (a portion of which is shown at the right). Mathematical processing then yields a much-higher-resolution image — without increasing the information rate leaving the sensor.
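Fraunhofer’s actual approach reportedly uses a single coded mask and sparse-recovery mathematics; as a much-simplified illustration of the underlying idea — that known masking patterns plus linear algebra can recover more scene pixels than the sensor physically has — here is a sketch (Python/NumPy, my own construction, not anything Fraunhofer has published) that uses four known binary mask patterns across four exposures and exact inversion:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64                      # scene pixels we want to recover
m = n // 4                  # the sensor has only a quarter as many pixels
scene = rng.uniform(0.0, 1.0, n)   # unknown high-resolution scene

# Four exposures, each with a different known binary mask: every sensor
# pixel integrates 4 adjacent scene pixels, weighted by one mask row.
W = np.array([[1., 1., 1., 1.],
              [1., 0., 1., 0.],
              [1., 1., 0., 0.],
              [1., 0., 0., 1.]])  # an invertible set of mask patterns

readouts = np.array([[W[e] @ scene[4*g:4*g+4] for g in range(m)]
                     for e in range(4)])

# Reconstruction: each group of 4 scene pixels satisfies a 4x4 linear
# system, which the known masks let us invert exactly.
Winv = np.linalg.inv(W)
recovered = np.concatenate([Winv @ readouts[:, g] for g in range(m)])

print(np.allclose(recovered, scene))   # True
```

Because this toy version uses four exposures, its total readout equals the recovered resolution; the real technique exploits scene sparsity so that a single masked readout suffices, which is what keeps the information rate leaving the sensor low.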
In the HPA Tech Retreat demo room, there should be multiple demonstrations of the power of mathematical processing. Cube Vision and Image Essence, for example, are expected to be demonstrating ways of increasing apparent sharpness without even needing to place a mask over the sensor. Lightcraft Technology will show photorealistic scenes that never even existed except in a computer. And those are said to have gigapixel (thousand-megapixel) resolutions!
All of that mathematical processing, to the best of my knowledge, had no direct link to Stockham, but he did a lot of mathematical processing, too. In the realm of audio, his most famous effort was probably the removal of the recording artifacts of the acoustical horn into which the famous opera tenor Enrico Caruso sang in the era before microphone-based recording (shown at left in a drawing by the singer himself).
As Caruso sang, the sound of his voice was convolved with the characteristics of the acoustic horn that funneled the sound to the recording mechanism. Recovering the original sound for the 1976 commercial release Caruso: A Legendary Performer required deconvolving the horn’s acoustic characteristics from the singer’s voice. That’s tough enough even if you know everything there is to know about the horn. But Stockham didn’t, so he had to use “blind” deconvolution. It wasn’t the first time.
He was co-author of an invited paper that appeared in the Proceedings of the IEEE in August 1968. It was called “Nonlinear Filtering of Multiplied and Convolved Signals,” and, while some of it applied to audio signals, other parts applied to images. He followed up with a solo paper, “Image Processing in the Context of a Visual Model,” in the same journal in July 1972. Both papers have been cited many hundreds of times in more-recent image-processing work.
One image in both papers showed the outside of a building, shot on a bright day; the door was open, but the inside was little more than a black hole (a portion of the image is shown above left, including artifacts of scanning the print article with its half-tone images). After processing, all of the details of the equipment inside could readily be seen (a portion of the image is shown at right, again including scanning artifacts). Other images showed effective deblurring, and the blur could be caused by either lens defocus or camera instability.
Stockham later (in 1975) actually designed a real-time video contrast compressor that could achieve similar effects. I got to try it. I aimed a bright light up at some shelves so that each shelf cast a shadow on what it was supporting. Without the contrast compressor, virtually nothing on the shelves could be seen; with it, fine detail was visible. But the pictures were not really of entertainment quality.
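Stockham’s compressor was built on homomorphic processing: because illumination multiplies reflectance, taking a logarithm turns the product into a sum, where ordinary linear filtering can attenuate the slowly varying illumination while leaving fine reflectance detail alone. A one-dimensional sketch (Python/NumPy; the signal and filter values are illustrative, not Stockham’s):

```python
import numpy as np

n = 512
x = np.arange(n)
illumination = 0.55 + 0.45 * np.cos(2 * np.pi * x / n)   # 0.1 (shadow) to 1.0
reflectance = 1.0 + 0.4 * np.sin(2 * np.pi * x / 16)     # fine detail
signal = illumination * reflectance                      # what the camera sees

# Homomorphic step: the log turns the product into a sum
log_sig = np.log(signal)
spectrum = np.fft.rfft(log_sig)
freqs = np.fft.rfftfreq(n)

# Attenuate the low frequencies (illumination); keep the high (detail)
gain = np.where(freqs < 0.01, 0.3, 1.0)
compressed = np.exp(np.fft.irfft(spectrum * gain, n))

# The overall dynamic range shrinks, but the fine detail survives
before = signal.max() / signal.min()
after = compressed.max() / compressed.min()
print(before > after)   # True
```

The exponentiation at the end returns from the log domain, so the output is still a legitimate (if contrast-compressed) image signal.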
That was, however, in 1975, and technology has marched — or sprinted — ahead since then. The Fraunhofer Institut presentation at the 2012 HPA Tech Retreat will show how math can increase image-sensor resolution. But what about the lens?
A lens convolves an image in the same way that an old recording horn convolved the sound of an acoustic gramophone recording. And, if the defects of one can be removed by blind deconvolution, so might those of the other. An added benefit is that the deconvolution need not be blind; the characteristics of the lens can be identified. Today’s simple chromatic-aberration corrections could extend to all of a lens’s aberrations, and even its focus and mount stability.
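The frequency-domain view makes the parallel concrete: convolution by a lens (or horn) multiplies the signal’s spectrum by the system’s response, so dividing by a measured response undoes it. A minimal non-blind sketch in Python/NumPy (the signal and kernel are made up; true blind deconvolution, as Stockham had to use for Caruso, must first estimate the response from the recording itself):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
original = rng.standard_normal(n)   # stand-in for the true image or voice

# A known smoothing kernel, standing in for the lens (or horn) response.
# Its zeros lie inside the unit circle, so its spectrum never vanishes
# and division is safe.
kernel = np.zeros(n)
kernel[:3] = [0.5, 0.3, 0.2]

# Convolution in space = multiplication of spectra
H = np.fft.fft(kernel)
degraded = np.real(np.fft.ifft(np.fft.fft(original) * H))

# Deconvolution: divide the spectra (tiny floor guards against round-off)
restored = np.real(np.fft.ifft(
    np.fft.fft(degraded) * np.conj(H) / (np.abs(H) ** 2 + 1e-12)))

print(np.allclose(restored, original, atol=1e-6))   # True
```

In practice the division blows up any frequency where the response is near zero, which is why real restoration work replaces the simple floor with proper regularization (Wiener filtering, for example).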
Is it merely a dream? Perhaps. But, at one time, so was the repeal of so-called anti-miscegenation laws.

Tags: 4K, ARRI, blind deconvolution, Canon, convolution, Cube Vision, EGOT, Enrico Caruso, Forza Silicon, Fraunhofer Institut, hdtv, HPA Tech Retreat, hyper-resolution, Image Essence, image processing, image sensor, JVC, lens aberrations, Red, Sony, Soundstream, Super Hi-Vision, Thomas Stockham, UHDTV, Vision Research, Watergate tapes
Last year was a wonderful one for 3D. In terms of worldwide and domestic box-office grosses, six of the top-10 movies released in 2010 were in 3D. And by year’s end there were almost two dozen models of integrated 3D cameras and camcorders and literally dozens of models of two-camera 3D rigs.
There’s just one problem: None of those 3D cameras or camera rigs — not a single one of them — was used to create any of those six top-10 3D movies. Four of the movies were animated, and the other two, including the second-highest grosser of the year, Alice in Wonderland, were converted from 2D to 3D in post production.
That’s not a fact that is frequently mentioned. But it will be mentioned next month at the 17th annual HPA Tech Retreat® in the (perhaps appropriately named) community of Rancho Mirage, California.
The first retreat predates even its sponsoring organization, the Hollywood Post Alliance. And, although it might seem natural that post-production processing of 3D is an appropriate topic for HPA, the retreat is limited to neither post nor Hollywood.
It has featured presenters from locations ranging from New Zealand to Norway and Argentina to Australia and from organizations ranging from broadcast networks to manufacturers, the military, and movie exhibitors. If someone there is from NATO, that could stand for the National Association of Theater Owners or the North Atlantic Treaty Organization (both have made presentations in the past). You’ll find more on the retreat in this earlier post: http://schubincafe.com/blog/2010/01/someone-will-be-there-who-knows-the-answer/
Stereoscopic 3D has been a prominent feature of the retreat for many years. Presenters on the topic have included Professor Martin Banks of the Visual Space Perception Laboratory at the University of California-Berkeley. Topics have included the BBC’s research on virtual stereoscopic cameras. And then there are the demonstrations.
For the 2008 retreat, HPA arranged to convert an auditorium at a local multiplex to 3D so participants could judge for themselves everything from the 3D Hannah Montana movie to different forms of 2D-to-3D conversions prepared by In-Three. JVC demonstrated the technology in its 2D-to-3D converter at the 2009 retreat, long before it turned into a product.
At that same retreat, RabbitHoles Media showed multiple versions of full-motion, full-color, high-detail holography (one is shown above right in a shot taken from Jeff Heusser’s coverage of the 2009 retreat for FXGuide.com http://www.fxguide.com/featured/HPA_Technology_Retreat_2009/; you can see it in motion here http://www.rabbitholes.com/entertainment-gallery/). At last year’s retreat, Dolby demonstrated 3D HD encoded at roughly 7 Mbps.
Virtual 3D and 2D-to-3D conversion are just two forms that will be discussed in a presentation called “Alternatives to Two-Lens 3D.” And here are some of the other 3D sessions that will be on this year’s program: 3D Digital Workflow, Avid 3D Stereoscopic Workflow, Live 3D: Current Workarounds and Needed Tools, 3D Image Quality Metrics, Subtitling for Stereographic Media, Will 3D Become Mainstream?, Single-Lens Stereoscopy, Home 3D a Year Later, Storage Systems for 3D Post, Measurement of the Ghosting Performance of Stereo 3D systems for Digital Cinema and 3DTV, and Photorealistic 3D Models via Camera-Array Capture. Participants will range from 3D equipment manufacturers to 3D distributors to the 3D@Home Coalition.
If the 2011 HPA Tech Retreat seems like a great 3D event, that’s probably because it is. But it’s a lot more, too. If you’re interested in advanced broadcast technology, for example, here are some of the sessions on that topic: ATSC Next-Generation Broadcast Television, Information Theory for Terrestrial DTV Broadcasting, Near-Capacity BICM-ID-SSD for Future DTTB, DVB-T2 in Relation to the DVB-x2 Family, the Application of MIMO in DVB, Hybrid MIMO for Next-Generation ATSC, 3D Audio Transmission, Next-Generation Handheld & Mobile, High-Efficiency Video Coding, Convergence in the UHF Band, Global Content Repositories for Distributed Workflows, Content Protection, Pool Feeds & Shared Origination, Multi-Language Video Description, Consumer Delivery Mayhem, Networked Television Sets, Interoperable Media, FCC Taking Back Spectrum, the CALM Act, Making ATSC Loudness Easy, Media Fingerprinting, Embracing Over-the-Top TV, and Image Quality for the Era of Digital Delivery.
Broadcast-tech presenters will come from, among others: ABC, CBS, Fox, NBC, PBS, Sinclair Broadcast Group, and NAB; ATSC, BBC, Canada’s CRC, China’s Tsinghua University, the European Broadcasting Union, Germany’s Technische Universität Braunschweig, Korea’s Kyungpook National University, and Japan’s NHK Science & Technology Research Labs; AmberFin, DTS, Linear Acoustic, Microsoft, Rohde & Schwarz, Roundbox, Rovi, and Verance; Comcast, Starz, and TiVo.
Not interested in 3D or broadcast? How about reference monitoring, with presentations on LCD, OLED, and plasma, new research results from Gamma Guru Charles Poynton, and an expected major new product introduction from a major manufacturer?
What about workflow? Warner Bros. will present their evaluation of 13 different workflows at a “supersession” on the subject. The supersession will feature major studios and post facilities and is expected to cover everything from scene to screen. If that’s not enough, there will be other sessions on interoperable mastering and interoperable media, file-based workflows, and “Hollywood in the Cloud.”
Interested in archiving? Merrill Weiss and Karl Paulsen will be presenting an update on the Archive Exchange Format, a large panel will discuss (and possibly argue about) the many aspects of LTO-5, and there will even be a session on new technology for archiving on, yes, film. At left are some images from Point.360 Digital Film Labs (left is the original and right is their film-archived version).
There will be much more: hybrid routing, consumer electronics update, Washington update, global content repositories and other storage networks, shooting with HD SLRs, movie restoration (including a full screening of a masterpiece), standards update, new audio technologies for automating digital pre-distribution processes — even surprises about cable bend radius. The full program may be found here: http://www.hpaonline.com/mc/page.do?sitePageId=122447&orgId=hopa
In short, whatever you might want to know about motion-image production and distribution and related fields, there will probably be somebody there who knows the answer. Is this information available elsewhere, at, say, a SMPTE conference? Perhaps it is. But next month, SMPTE’s executive vice president, engineering vice president, and director of engineering will all be at the HPA Tech Retreat.

Tags: 3d, Alice in Wonderland, ATSC, broadcast, Comcast, Dolby, HPA Tech Retreat, In-three, JVC, OLED, Point.360, post, production, RabbitHoles, SMPTE, Warner Bros.
The Oversight Executive for Motion Intelligence of the Office of the Under Secretary of Defense for Intelligence is scheduled to be in the southern California desert next month. So are the chief technology officers (CTOs) of both Panasonic and Sony. So is the head of the Visual Space Perception Laboratory at the University of California – Berkeley. So is one of the developers of Cablecam. So is the CTO of Cable Television Laboratories. So is a co-inventor of MP3. So is the mysterious Mo Henry, whose credit has appeared in movies ranging from Apocalypse Now to Zombieland.
The list could go on and on. Hundreds of top technical executives will be there. CTOs and VPs of Hollywood studios and television networks will be there. So will the head of emerging technologies of the European Broadcasting Union. So will the VP of standards of the Advanced Television Systems Committee (ATSC) and the director of engineering and standards of the Society of Motion Picture and Television Engineers (SMPTE). Where will they be?
It’s the 16th annual Hollywood Post Alliance Tech Retreat, February 16-19 at Rancho Las Palmas conference center in Rancho Mirage, California. But every part of that title can convey a false impression.
HPA, for example, is not yet 16 years old, but the retreat is older. When the organization that created it, the Association for Imaging Technology and Sound, went belly up, HPA’s founders thought the retreat was too important to die, so they took it over. After 9/11, when other events went down in attendance, the retreat went up. It has actually had to turn people away on occasion because it has sold out.
Similarly, “Hollywood” and “Post” are misleading. The event is not (and has never been) in Hollywood. Its participants come from all over the world, from New Zealand to Norway, and from Bombay to Buenos Aires. If someone at the retreat is from NATO, that could be the North Atlantic Treaty Organization or the National Association of Theater Owners (both have sent representatives, sometimes at the same retreat); similarly, there have been representatives from MPEG the Moving Picture Experts Group and MPEG the Motion Picture Editors Guild.

Tags: 3dtv, CableLabs, EBU, European Broadcasting Union, Hollywood Post Alliance, HPA, JVC, Mo Henry, MPEG, NATO, Panasonic, RabbitHoles, Sony, Tech Retreat
IBC is my favorite trade show. I can leave work, catch an evening flight to Amsterdam, and take a train directly from the airport to the convention center. If I’m hungry, some exhibitor will be providing food. Thirsty? Water, various forms of coffee, juices, beer, and wine flow freely. IBC even throws a party to which everyone is invited. But none of that is why I like it so much.
Americans tend to forget that we are not alone. Back in the days of RCA cameras, you needed to come to IBC to see those of the UK-based manufacturer Pye.
Today, we tend to think of NAB as an international show. Cameras are shown there by such Japanese manufacturers as Hitachi, JVC, Panasonic, Sony, and Toshiba. And Grass Valley’s cameras at NAB come from Europe. So why bother with IBC?

Tags: 3-D, 3D-One, Acutelogic, ARRI, Astro, camera, Camera Corps, Dalsa, Gigawave, Grass Valley, HDAVS, IBC, JVC, LMP, Lux Media Plan, Meuser Optik, P+S Technik, Panasonic, Point Grey, Red, ShenZhen Tiger, Silicon Imaging, Skyline:Views, Sony, Thomson, Vaddio, Weisscam