
B4 Long

April 22nd, 2015 | No Comments | Posted in Schubin Cafe


Something extraordinary happened at this month’s annual convention of the National Association of Broadcasters in Las Vegas. Actually, it was more a number of product introductions — from seven different manufacturers — adding up to something extraordinary: the continuation of the B4 lens mount into the next era of video production.

Perhaps it’s best to start at the beginning. The first person to publish an account of a working solid-state television camera knew a lot about lens mounts. His name was Denis Daniel Redmond, his account of “An Electric Telescope” was published in English Mechanic and World of Science on February 7, 1879, and the reason he knew about lens mounts was that, when he wasn’t devising new technologies, he was an ophthalmic surgeon.

It would be almost half a century longer before the first recognizable video image of a human face could be captured and displayed, an event that kicked off the so-called mechanical-television era, one in which some form of moving component scanned the image in both the camera and the display system. At left above, inventor John Logie Baird posed next to the apparatus he used. The dummy head (A) was scanned by a spiral of lenses in a rotating disk.

A mechanical-television camera designed by SMPTE-founder Charles Francis Jenkins, shown at right, used a more-conventional single lens, but it, too, had a spinning scanning disk. There was so much mechanical technology that the lens mount didn’t need to be made pretty.

The mechanical-television era lasted only about one decade, from the mid-1920s to the mid-1930s. It was followed by the era of cathode-ray-tube (CRT) based television: camera tubes and picture tubes. Those cameras also needed lenses.

The 1936 Olympic Games in Berlin might have been the first time that really long television lenses were used — long both in focal length and in physical length. They were so big (left) that the camera-lens combos were called Fernsehkanonen, literally “television cannons.” The mount was whatever was able to support something that large and keep it connected to the camera.

In that particular case, the lens mount was bigger than the camera. With the advent of color television and its need to separate light into its component colors, cameras grew.

At right is an RCA TK-41 camera, sometimes described as being comparable in size and weight to a pregnant horse; its viewfinder, alone, weighed 45 lbs. At its front, a turret (controlled from the rear) carried a selection of lenses of different focal lengths, from wide angle to telephoto. Behind the lens, a beam splitter fed separate red, green, and blue images to three image-orthicon camera tubes.

The idea of hand-holding a TK-41 was preposterous, even for a weight lifter. But camera tubes got smaller and, with them, cameras.

RCA’s TK-44, with smaller camera tubes, was adapted into a “carryable” camera by Toronto station CFTO, but it was so heavy that the backpack section was sometimes worn by a second person, as shown at the left. The next generation actually had an intentionally carryable version, the TKP-45, but, even with that smaller model, it was useful for a camera person to be a weightlifter, too.

At about the same time as the two-person adapted RCA TK-44, Ikegami introduced the HL-33, a relatively small and lightweight color camera. The HL stood for “Handy-Looky.” It was soon followed by the truly shoulder-mountable HL-35, shown at right.

The HL-35 achieved its small form factor through the use of 2/3-inch camera tubes. The outside diameter of the tubes was, indeed, 2/3 of an inch, about 17 mm, but, due to the thickness of the tube’s glass and other factors, the size of the image was necessarily smaller, just 11 mm in diagonal.
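The relationship between that 11 mm diagonal and the actual image area is simple geometry; for the 4:3 pictures of the tube era, it works out to roughly 8.8 mm by 6.6 mm. A quick sketch of the arithmetic:

```python
import math

def frame_from_diagonal(diagonal_mm, aspect_w=4, aspect_h=3):
    """Compute frame width and height from a diagonal and an aspect ratio."""
    hyp = math.hypot(aspect_w, aspect_h)  # 5.0 for a 4:3 ratio
    scale = diagonal_mm / hyp
    return aspect_w * scale, aspect_h * scale

# 2/3-inch-format image area from its 11 mm diagonal
w, h = frame_from_diagonal(11.0)
print(f"{w:.1f} mm x {h:.1f} mm")  # -> 8.8 mm x 6.6 mm
```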

Many 2/3-inch-tubed cameras followed the HL-35. As with cameras that used larger tubes, the lens mount wasn’t critical. Each tube could be moved slightly into the best position, and its scanning size and geometry could also be adjusted. Color-registration errors were common, but they could be dealt with by shooting a registration chart and making adjustments.

The CRT era was followed by the era of solid-state image sensors. They were glued onto color-separation prisms, so the ability to adjust individual tubes and scanning was lost. NHK, the Japan Broadcasting Corporation, organized discussions of a standardized lens-camera interface dealing with the physical mount, optical parameters, and electrical connections. Participants included Canon, Fuji, and Nikon on the lens side and Hitachi, Ikegami, JVC, Matsushita (Panasonic), Sony, and Toshiba on the camera side.

To allow the use of 2/3-inch-format lenses from the tube era, even though they weren’t designed for fixed-geometry sensors, the B4 mount (above left) was adopted. But there was more to the new mount than just the old mechanical connection. There were also specifications of different planes for the three color sensors, types of glass to be used in the color-separation prism and optical filters, and electrical signal connections for iris, focus, zoom, and more.

When HDTV began to replace standard definition, there was a trend toward larger image sensors, again — initially camera tubes. After all, more pixels should take up more space. Sony’s solid-state HDC-500 HD camera used one-inch-format image sensors instead of 2/3-inch. But existing 2/3-inch lenses couldn’t be used on the new camera. So, even though those existing lenses were standard-definition, the B4 mount continued, newly standardized in 1992 as Japan’s Broadcast Technology Association S-1005.

The first 4K camera also sized up — way up. Lockheed Martin built a 4K camera prototype using three solid-state sensors (called Blue Herring CCDs, shown at left), and the image area on each sensor was larger than that of a frame of IMAX film.

As described in a paper in the March 2001 SMPTE Journal, “High-Performance Electro-Optic Camera Prototype” by Stephen A. Stough and William A. Hill, that meant a large prism. And a large prism meant a return to a camera size not easily shouldered (shown above at right).

That was a prototype. The first cameras actually to be sold that were called 4K took a different approach, a single large-format (35 mm movie-film-sized) sensor covered with a patterned color filter.

An 8×8 Bayer pattern is shown at right, as drawn by Colin M. L. Burnett. The single sensor and its size suggested a movie-camera lens mount, the ARRI-developed positive-lock or PL mount.

One issue associated with the color-patterned sensors is the differences in spatial resolution between the colors. As seen at left, the red and blue have half the linear spatial resolution of the sensor (and of the green). Using an optical low-pass filter to prevent red and blue aliases would eliminate the extra green resolution; conversely, a filter that works for green would allow red and blue aliases. And, whether it’s called de-Bayering, demosaicking, uprezzing, or upconversion, changing the resolution of the red and blue sites to that of the overall sensor requires some processing.
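The resolution difference falls straight out of the filter pattern. A small sketch using NumPy (assuming the common RGGB layout) counts the photosites of each color on an 8×8 sensor:

```python
import numpy as np

# An 8x8 sensor with an RGGB Bayer pattern: each 2x2 cell has one red,
# two green, and one blue photosite.
h = w = 8
rows, cols = np.mgrid[0:h, 0:w]
red   = (rows % 2 == 0) & (cols % 2 == 0)
blue  = (rows % 2 == 1) & (cols % 2 == 1)
green = ~(red | blue)

print(red.sum(), green.sum(), blue.sum())  # -> 16 32 16
# Red and blue each sample only a 4x4 grid -- half the linear resolution
# of the 8x8 sensor -- while green samples a quincunx at twice that density.
```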

Another issue is related to the range of image-sensor sizes that use PL mounts. At right is a portion of a guide created by AbelCine showing shot sizes for the same focal-length lens used on different cameras. In each case, the yellowish image is what would be captured on a 35-mm film frame, and the blueish image is what the particular camera captures from the same lens. The windmill at the left, prominent in the Canon 5D shot, is not in the Blackmagic Design Cinema Camera shot.

Whatever their issues, thanks to their elimination of a prism, the initial crop of PL-mount digital-cinematography cameras, despite their large-format image sensors, were relatively small, light, and easily carried. Their size and weight differences from the Lockheed Martin prototype were dramatic.

There was a broad selection of lenses available for them, too — but not the long-range B4-mount zoom lenses needed for sports and other live-event production. It’s possible to adapt a B4 lens to a PL-mount camera, but even an optically perfect adaptor would lose more than 2.5 stops (equivalent to needing about six times more light). Because nothing is perfect, the adaptor would also introduce its own degradations to the images from lenses designed for HD, not 4K (or Ultra HD, UHD). And a large-format long-range zoom lens would be a difficult project. So multi-camera production remained largely B4-mount, three-sensor, prism-based HD, while single-camera production moved to PL-mount single sensors with more photo-sensitive sites (commonly called “pixels”).
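The “more than 2.5 stops” figure converts directly to a light factor, since each stop halves the light reaching the sensor:

```python
# An adaptor that loses 2.5 stops needs 2**2.5 times more light to compensate.
stops_lost = 2.5
light_factor = 2 ** stops_lost
print(f"{light_factor:.2f}x")  # -> 5.66x, i.e. about six times more light
```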

Then, at last year’s NAB Show, Grass Valley showed a B4-mount three-sensor prism-based camera labeled 4K. Last fall, Hitachi introduced a four-chip B4-mount UHD camera. And, at last week’s NAB Show, Ikegami, Panasonic, and Sony added their own B4-mount UHD cameras. And both Canon and Fujinon announced UHD B4-mount long-range zoom lenses.

The camera imaging philosophies differ. The Grass Valley LDX 86 is optically a three-sensor HD camera, so it uses processing to transform the HD to UHD, but so do color-filtered single-sensor cameras; it’s just different processing. The Grass Valley philosophy offers appropriate optical filtering; the single-sensor cameras offer resolution assistance from the green channel.

Hitachi’s SK-UHD4000 effectively takes a three-sensor HD camera and, with the addition of another beam-splitting prism element, adds a second HD green chip, offset from the others by one-half pixel diagonally. The result is essentially the same as the color-separated signals from a Bayer-patterned single higher-resolution sensor, and the processing to create UHD is similar.

Panasonic’s AK-UC3000 uses a single, color-patterned one-inch-format sensor. To use a 2/3-inch-format B4 lens, therefore, it needs an optical adaptor, but the adaptor is built into the camera, allowing the electrical connections that enable processing to reduce lens aberrations. Also, the optical conversion from 2/3-inch to one-inch is much less than that required to go to a Super 35-mm movie-frame size.

Both Ikegami’s UHD camera (left) and Sony’s HDC-4300 (right) use three 2/3-inch-format image sensors on a prism block, but each image sensor is truly 4K, making them the first three-sensor 4K cameras since the Lockheed Martin prototype. By increasing the resolution without increasing the sensor size, however, they have to contend with photo-sensitive sites a quarter of the area of those on HD-resolution chips, reducing sensitivity.
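The quarter-area figure is straightforward: doubling the pixel count in each dimension on the same sensor halves the photosite pitch, which quarters the light-gathering area per site — a loss of about two stops, all else being equal:

```python
import math

# Going from HD (1920x1080) to 4K (3840x2160) on the same 2/3-inch
# sensor halves the photosite pitch in each dimension.
pitch_ratio = 1920 / 3840        # 0.5
area_ratio = pitch_ratio ** 2    # 0.25: a quarter of the light per site
stops_lost = math.log2(1 / area_ratio)
print(area_ratio, stops_lost)    # -> 0.25 2.0
```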

It might seem strange that camera manufacturers are moving to B4-mount 2/3-inch-format 4K cameras at a time when there are no B4-mount 4K lenses, but the same thing happened with the introduction of HD. Almost any lens will pass almost any spatial resolution, but the “modulation transfer function” or MTF (the amount of contrast that gets through at different spatial resolutions) is usually better in lenses intended for higher-resolution applications, and the higher the MTF the sharper the pictures look.
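To first order, the contrast that survives the whole imaging chain at a given spatial frequency is the product of the component MTFs. A toy comparison — with made-up Gaussian MTF curves and illustrative numbers only, not measured data — shows why a lens designed for a higher-resolution format can deliver visibly more contrast even at HD frequencies:

```python
# Hypothetical model: component MTFs multiply; each lens is modeled as a
# Gaussian falling to 50% at some frequency f50 (arbitrary units).
def gaussian_mtf(freq, f50):
    """Toy MTF that falls to 50% contrast at frequency f50."""
    return 0.5 ** ((freq / f50) ** 2)

hd_limit = 1.0                                   # HD's limiting frequency
sd_lens_mtf = gaussian_mtf(hd_limit, f50=0.8)    # lens designed for a lower format
uhd_lens_mtf = gaussian_mtf(hd_limit, f50=2.0)   # lens designed for 4K
sensor_mtf = 0.6                                 # same camera in both cases

# System contrast at the HD limit: the 4K-grade lens more than doubles it.
print(round(sd_lens_mtf * sensor_mtf, 2), round(uhd_lens_mtf * sensor_mtf, 2))
```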

With all five of the major manufacturers of studio/field cameras moving to 2/3-inch 4K cameras, lens manufacturers took note. Canon showed a prototype B4-mount long-range 4K zoom lens, and Fujinon actually introduced two models, the UA80x9 (left) and the UA22x8 (right). The lenses use new coatings that increase contrast and new optical designs that increase MTF dramatically even at HD resolutions.

There is no consensus yet on a shift to 4K production, but 4K B4-mount lenses on HD cameras should significantly improve even HD pictures.  That’s nice!


When Will We Convert to HDTV?

February 28th, 2014 | No Comments | Posted in Schubin Cafe


A few weeks ago I worked on an event television production in New Jersey. Last week I was at the HPA Tech Retreat in California. Yesterday I attended Panasonic’s pre-NAB press conference in New York. What do the three have in common? They made me wonder when we will make the transition to HDTV. That’s right: HDTV, not “4K” or any other form of beyond-HD television.

The event was called Ode to Joy, a concert at Princeton University’s Richardson Auditorium celebrating the 100th birthday of philanthropist and musical scholar William H. Scheide. It was shot in HDTV, which has a picture shape or aspect ratio, worldwide, of 16 units wide to 9 units high, 16:9, wider than the old TV aspect ratio of 12:9 or 4:3.

The producers released an eight-minute, behind-the-scenes, promotional video, which I recommend highly to anyone who wants to see some of what’s involved in such productions, from running cables through the snow in sub-zero temperatures to going over the music and shots with the camera people before the concert. Here’s a link to it:

In addition to its YouTube release, the promo was shown on a number of public television stations. I watched one, via cable television, at a friend’s house. No setting of the friend’s TV or cable box allowed me to see the promo as intended, filling the 16:9 HDTV screen; the sides were chopped off, at either the station or the cable system, back to old TV’s 4:3.
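The arithmetic of the chop is unforgiving: a 4:3 center cut from a 16:9 frame keeps only 12 of the 16 width units, discarding a quarter of the picture.

```python
wide_w, wide_h = 16, 9              # HDTV aspect ratio, in abstract units
cut_w = wide_h * 4 // 3             # widest 4:3 picture that fits: 12 units
fraction_lost = 1 - cut_w / wide_w
print(cut_w, fraction_lost)         # -> 12 0.25
```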

Such chopping is why some broadcasters still want their content configured in “shoot-and-protect” mode, shot to fill the 16:9 frame but with important content and graphics protected to be visible in what remains after the sides of the HDTV image are chopped off. Maybe shoot-and-protect made sense in the early days of HDTV, when most viewers watched narrower screens; today it can mean most viewers watching stretched-out, unnatural pictures as they try to fill their widescreen TVs with chopped-off images.

I know that it’s most viewers because I follow and report on surveys of television households in the U.S. One of the places where I do such reports is at the annual HPA Tech Retreat.


Above is the opening slide of the “Technology Year in Review” that I present there (you can get the whole presentation in the “Get the Download” section of this site). For many years, I’ve been running essentially the same slide, just tweaking the numbers a bit. This year I noted that the Consumer Electronics Association, Leichtman, and Nielsen all agreed that, as of the beginning of 2013, about ¾ of U.S. television households had HDTVs. That’s most.

So, while shoot-and-protect is preventing a minority of viewers from losing important information at the sides of the picture, it is fostering an environment in which the majority of viewers are watching content in the wrong shape and/or, as in the case of my viewing of the broadcast of the Ode to Joy promo, losing important content at the sides. And that’s not the only problem.

Ode to Joy was an event. That’s the type of television show on which I work most often. And events are often newsworthy. Often the event producer will invite members of the press to cover it. When that happens, part of my job is providing the radio and television press with the feeds they need.

For a live transmission, that can be as simple as delivering satellite coordinates or authorizing a carrier to feed a station. Most newsworthy events also require some form of “press bridge,” audio and video distribution amplifiers and connectors. A 32-output press bridge is shown in the image in the slide above. The most I’ve ever fed was about 175 when Solidarity-leader Lech Wałesa spoke at the AFL-CIO convention in 1989 in Washington, D.C. I’d prepared for only 150, so some press daisy-chained off of others.

For most of the analog television era, such daisy-chaining was relatively easy. Video was 4:3 standard-definition NTSC color on a coaxial cable with a BNC. Audio was monaural and used an XL-type connection. Press bridges often had switches to deal with the biggest issues, such as whether the audio desired was to be line level or mic level; those press needing mic level usually brought their own attenuators, just in case.

Today, in the supposed surround-sound HDTV era, press bridges provide… 4:3 standard-definition NTSC color on a BNC and monaural audio on an XL-type connection, as in the Opamp Labs VA-32 shown at left and still being sold. If someone shows up with a recorder that can accept an HD-SDI input with embedded, AES-3, or analog audio, I can usually accommodate it. If there’s an HDMI input, and I know of it in advance, I can usually accommodate that, too. Unfortunately, those are rare. And that brings me to yesterday’s Panasonic press conference.

Among other products, the company is introducing a new HDTV camcorder, the AJ-PX270. It’s relatively low cost, and, based on everything reported about it at the press conference, extremely flexible and high in quality. The company suggested many possible uses for it, including news coverage. Its small size and light weight seem to make it a good choice for shooting a car accident or fire or tornado or for rushing in with the rest of the crowd to get shots of an acquitted or convicted defendant after a trial.

Unfortunately for me and others of my ilk who try to provide press feeds at planned events, it will also likely show up at those, and, like other camcorders of its ilk, it lacks any form of video input, and a news videographer bringing along a separate recorder would cancel the small-size, light-weight, and low-cost advantages. So, as I have done in the past, I will provide a monitor for the camcorder to shoot. And, as I have done in the past, I will tweak the numbers on my first Technology Year in Review slide, the one with the picture of the pre-HDTV-era press bridge still being sold.

I really, really, really look forward to junking that slide some day. That’ll be when we’re truly in the HDTV era.


How Good Is Good Enough?

April 30th, 2011 | No Comments | Posted in 3D Courses, Schubin Cafe

As usual, there were many new, useful products announced at this month’s annual convention of the National Association of Broadcasters (NAB) in Las Vegas. As usual, there were also many new trends, one sparked by the U.S. Congress and another by last month’s earthquake and tsunami in Japan.

At the event’s Digital Cinema Summit, not only 3D but also higher frame rates, greater spatial resolutions, and increased bit depths and color gamuts were discussed. Yet the announcement that startled me most was near the beginning of Panasonic’s press conference.

Normally, I don’t pay much attention to manufacturer sales announcements. They might indicate real interest in a product, but the sales could also be the result of many other factors, including sweetheart deals and existing infrastructure.

Panasonic’s announcement was about the 2012 Olympic Games in London. Like the Super Bowl and other grand events, the quadrennial Olympics are opportunities to showcase new video technologies. At the 1984 Games, for example, Panasonic introduced its fluorescent-discharge-tube-based Astrovision giant color screens with pictures visible in broad daylight.

What new technology might the company provide for the world’s top sporting event, taking place more than a year after NAB 2011? Might it be something to do with 3D? Panasonic introduced a new integrated 3D camcorder at the show, the AG-3DP1, with larger image sensors (1/3-inch format), greater-range lenses (17x), and AVC-Intra recording onto dual P2 solid-state memory cards.

Might it be something to do with AVC-Ultra, the company’s highest-grade video bit-rate-reduction system, capable of dealing with 1080-line HD at 60 progressively-scanned pictures per second or other signal types including 3D and Hollywood’s 2K 4:4:4? Might it be something beyond even that?

Alas, no. The startling (to me) announcement was that “the official recording format for capturing the London 2012 Olympic Games,” as specified by Olympic Broadcasting Services London (OBSL), the host broadcaster, will be–ready?–DVCPRO HD. Next year’s NAB show will be the 13th annual equipment show since that format was introduced (and the 14th since it was announced).

As the image above right indicates, DVCPRO HD was introduced as a tape-cassette-based recording format (although Panasonic noted that OBSL “will also use the P2 HD series with solid-state memory cards”). Like HDCAM before it, DVCPRO HD is also a sub-sampling recording format; it doesn’t capture full horizontal resolution even in luma (brightness detail). But it would appear that OBSL considers it good enough.

“Good enough” was a phrase that came to my mind at many places on the NAB Show exhibit floor this year. Consider Sony’s new OLED reference monitors. The BVM series was introduced at February’s Hollywood Post Alliance (HPA) Tech Retreat. They have a slight color shift with viewing angle but otherwise seem ideal for the production-truck market, where a 42-inch plasma screen in video control is generally out of the question.  And their price is in the range of similarly sized reference monitors using other technologies.

At NAB 2011, Sony expanded its offerings with a PVM OLED series at less than a quarter of the price (a discount of about 77%). Not only that, but the PVM monitors are even thinner than the BVM and include built-in controls.

Obviously, there have to be some drawbacks, given the extreme price difference. The signal processing in the PVM is not as high in quality as in the BVM, the flexibility is limited, and the OLED panels for the PVM are chosen from the manufactured stock after the top-of-the-line BVM panels have been selected and removed.

That might mean a bit less color-shift-free viewing angle. But another flaw was mentioned for the PVM panels: possible dead pixels.

In a sense, that’s no different from what Sony has done since its first chip-based cameras. Perfect image sensors went into the broadcast series, slightly flawed into the professional series, and more flawed into the consumer series. In cameras, however, bad pixels can be effectively “removed” by taking an average of the good pixels around them. In a display panel, there is nothing between the dead pixel and the eye to do any averaging (though Sony promised bad pixels would be off, never on).

In the choice between Sony’s BVM and PVM OLED monitors, the trade-off is clearly between cost and quality. At some other exhibits at NAB 2011, the parameters were less obvious. Consider, for example, the 52-inch “Professional 3D display” from Dimenco shown at the Triaxes Vision booth. It was said to have a “stunning and crystal-clear 3D image.”

From a 3D perspective, the autostereoscopy (glasses-free 3D) was superb. The image could easily be fused into 3D, and there was a broad viewing angle. The reason that part of the viewing experience was so good is that the display used 28 different views created from “2D-plus-Depth” information. Unfortunately, the display starts with ordinary HD resolution of 1920 pixels across. Divide that by 28 views, and you get some idea of how not-exactly-crystal-clear I perceived the resulting image.
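That division is easy to check; spreading a 1920-pixel-wide panel across 28 views leaves each view well under 70 columns as a rough upper bound:

```python
panel_width = 1920          # horizontal pixels in the underlying HD panel
views = 28                  # views synthesized from the 2D-plus-Depth data
px_per_view = panel_width / views
print(px_per_view)          # -> roughly 68.6 columns per view
```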

That might be an extreme example, but there were many others at the show. Almost every 3D display there traded off either spatial resolution (in passive-glasses systems) or temporal resolution (in active glasses) or both.

Almost every display did that. One that did not could be found at the Calibre exhibit in the North Hall. Among other products, Calibre makes scalers, and their PremierViewProHD-IW includes what the company calls “3D Left/Right Extraction & Alignment for Passive 3D Projection Systems.”

In brief, the scalers take the “frame-packed” 3D signal from a Blu-ray disc and convert it to two, separate HD signals, one for the left eye and one for the right. Each signal is fed to its own projector, simple polarizing filters are clamped in front of the projection lenses, and simple passive glasses are used for viewing, with no loss of spatial or temporal resolution.
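The “frame-packed” layout the scalers unpack is a simple vertical stack: for 1080p sources, HDMI 1.4 frame packing carries both eyes in one 1920×2205 picture — 1080 left-eye lines, a 45-line active space, then 1080 right-eye lines. A sketch of the split, using NumPy arrays as stand-ins for decoded video:

```python
import numpy as np

# One frame-packed 1080p 3D picture: 1080 + 45 + 1080 = 2205 lines tall.
packed = np.zeros((2205, 1920, 3), dtype=np.uint8)
left  = packed[0:1080]        # left-eye frame
right = packed[1125:2205]     # right-eye frame starts after 1080 + 45 lines

print(left.shape[0], right.shape[0])  # -> 1080 1080
```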

The system might be used for viewing 3D dailies. That would require a relatively inexpensive way to create 3D Blu-ray discs. That’s what Pico House’s Easy 3D does. It requires only a laptop with a BD-RE drive. The trade-off on this one is that its input format is AVCHD–ideal for a small, relatively inexpensive camcorder like Panasonic’s AG-3DA1, not so good for systems recording on other formats.

Is AVCHD good enough for dailies? Is any bit-rate-reduced format good enough for mastering? I’ll get to those questions in part II.


IBC 2010 – 2D, 3D, 4D, 5D

October 25th, 2010 | No Comments | Posted in 3D Courses, Schubin Cafe

There was plenty of 3D at the International Broadcasting Convention (IBC) in Amsterdam this year.  At the awards ceremony, alone, the audience was frequently asked to don 3D glasses to see clips from the winners (before viewing a portion of the not-yet-released 3D movie Tron: Legacy).  But the first sentence of the first comment on the question “What did you see around at IBC2010?” posted on the LinkedIn Digital TV Professionals Group was “Lots of 3DTV Demos that nobody was looking at” (from Alticast senior vp Anthony Smith-Chaigneau), and two other group members quickly agreed.


In fact, some of the 3D demos were very much viewed, including the ones in Sony’s exhibit, based largely around their MPE200 processor.  Introduced at NAB in April, the MPE200 was then capable primarily of correcting stereoscopic camera-alignment errors, as shown above.  It has become so popular that one announcement of Sony 3D equipment sales at the show (to Presteigne Charter) included 13 MPE200 processors but only 10 HDC1500R cameras (with two required per 3D rig).

At IBC 2010, the MPE200 was joined in that correction function by Advanced 3D Systems’ The Stereographer’s Friend.  Whereas the MPE200 currently has a four-frame latency, The Stereographer’s Friend is said to do its corrections within just one frame and for lower cost.

Some stereoscopic camera rigs are said to be so precise that correction is not necessary.  Although some had seen it previously, 3ality’s small, relatively lightweight TS-5 rig (shown at left) was officially introduced at IBC 2010.  Zepar introduced an even-smaller stereoscopic lens system (shown at right) for a single camera, reducing the need for correction.  Such 3D-lens systems normally raise concerns of resolution and sensitivity loss, but Zepar’s is intended to be mounted on the Vision Research Phantom 65, which has plenty of each.

At IBC 2010, however, Sony’s MPE200 was no longer just a correction box; three more functions were introduced.  One is 2D-to-3D upconversion.  Sony was not alone in that area, either.  One new competitor is SterGen, an Israel-based company with a system intended specifically for sports.  According to their web site, they offer “better quality than real 3D shooting.”


Then there’s graphics insertion.  In that new function, the MPE200 was joined by Screen Subtitling’s 3DITOR, which analyzes not only the depth characteristics of the current frame but also the depth history.  Above, one of the depth-measurement tools is shown (based on an image from Wild Ocean, ©2010 Yes/No Productions Ltd and Giant Screen Films).  The company offers a white paper on the myriad issues of 3D text.

Another new MPE200 function is stitching, the ability to combine pictures from multiple cameras into one big panorama and then derive a stereoscopic camera image from a portion of the result.  The European research lab imec had shown a stereoscopic virtual camera at NAB in April (and brought it to IBC, too), and BBC R&D described a system even earlier.


Much of the interest in stitching at IBC 2010, however, was unrelated to 3D.  It was associated, instead, with the Hego OB1 system, which uses a package of six cameras in one location to create the panorama.  It won awards from Broadcast Engineering and TVBEurope magazines.  Certainly, the system uses interesting technology, but so does Sony’s MPE200.  Perhaps Hego’s winning the awards had something to do with how the OB1 was demonstrated, with bikini-clad beach-volleyball players on the IBC Beach, as shown above in a portion of a photo by Wes Plate.  That’s the camera array at the upper right.


In fact, there was plenty new at the show that was not 3D.  There was more 3D, of course.  In the area of distribution, Dolby pushed its version of 3D encoding and Sisvel brought a new “tile” format, shown above with an image from Maga Animation.  The left-eye view occupies a 1280 x 720 portion of the 1920 x 1080 frame, allowing it to be extracted for 2D HD viewing without necessarily changing existing decoders.
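The backward-friendliness of the tile format comes down to the left-eye view sitting in one contiguous 1280×720 region of the 1920×1080 frame (assumed here to be the top-left corner), so a 2D decoder can crop it out without touching the right-eye tiles packed into the remaining space. A sketch:

```python
import numpy as np

# Stand-in for a decoded 1920x1080 tile-format frame.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)

# Extracting the 720p left-eye view is a plain crop -- no re-layout needed.
left_eye = frame[0:720, 0:1280]
print(left_eye.shape)  # -> (720, 1280, 3)
```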

There were new 3D analyzers from Binocle (DisparityTagger) and Cel-Soft (Cel-Scope).  There was an iPhone/iPod app from Dashwood Cinema Solutions for stereoscopic calculations associated with Panasonic’s 3DA1 camcorder.  There was the Vision 3 camera with toe-in-free convergence that I wrote about just before the show.  There were glasses-free displays (one noting that its correct viewing distance was 4.4 meters).  There was a seven-camera 3D rig for capturing information for such displays.  There was a book about stereoscopic cinematography from 1905.  There was an eye-tracking 3D laser-display system.

That was just in the exhibits.  There were also plenty of 3D conference sessions.  IBC’s best-paper award went to a group from NDS for their paper “Does size matter? The challenges when scaling stereoscopic 3D content,” which showed that not only does apparent depth change with different screen sizes, but it also doesn’t scale.  And stereographer Kommer Kleijn punched holes in “religious” views of toe-in vs. parallel shooting in a presentation about stereoscopic shooting for people experienced in 2D.

Actually, in addition to 2D and 3D, IBC 2010 also had 4D.  It was in a small exhibit in a low-traffic hall.  The full title was Real-Sense 4D, from ETRI, the Korean Electronics and Telecommunications Research Institute.


As shown above, Real-Sense 4D involves more than just an image display.  I tried it out.  When the story involved a fire, I not only saw the flames and heard them crackling but also felt the heat and smelled the smoke.  During a segment on ice skating, I felt the air rushing past and then, in a moment out of Nancy Kerrigan’s career, felt a sudden WHAP! on my legs.


As at many recent professional equipment exhibitions, there was also 5D, specifically the Canon EOS 5D Mark II DSLR camera, capable of shooting HD.  But there was also something characterized by David Fox in the IBC Daily as “the HD DSLR Killer.”  It was Panasonic’s AG-AF100/101 (above), shown only in a display case at the NAB show earlier this year.  It combines the advantages of a large-size image sensor (Micro Four Thirds format) with the features of a video camcorder.  At IBC, there were many operating units, and the IBC press corps reacted far more enthusiastically to them than to Panasonic’s 3DA1 camcorder.

Whereas at IBC 2009 some 25 new camera models were introduced, at IBC 2010, besides the V3i stereoscopic camera and the AF100/101, the main introductions were Canon’s XF100 and XF105 camcorders and IDT’s palm-sized, 2K, high-speed NR5.  There were also compact versions of NHK’s 8K Super Hi-Vision cameras from Hitachi and Ikegami.  But there were significant wide-angle lens introductions from Polecam (HRO69, at left) and Theia (MY125) for 1/3-inch-format cameras, offering horizontal acceptance angles of 69 and 125 degrees, respectively.

There were also introductions in the camera-mount area, such as Bradley Engineering’s multi-axis Gyro 350 (similar looking to the older Axsys V14 but said to be lower in cost), Vinten’s encoding Vector 750i pan head, and SiSLive’s Halibut underwater track.  Vaddio’s Reveal wall-mounted camera systems are invisible until used.  Broadcast Solutions showed a tiny two-seat Smart car equipped as a five-camera studio.  That’s not merely the control equipment; the five cameras were mounted in the car.

Other acquisition-related introductions at IBC included a video whiteboard system from Vaddio that does not require a computer; a version of Sennheiser’s MKE-1 lavalier microphone in which every part, from cable to connector to windscreen, is paintable to precisely match costume color; and a wireless tally system from Brick House Video.  Capable of dealing with up to eight cameras, the Tally Ho! handles both on-air and preview/iso tally, and the charger for the tally modules doubles as the system transmitter.

If IBC 2010 wasn’t about new cameras, it did offer many new introductions in storage and distribution.  There was, for example, AJA’s new, small, lightweight, camera-mountable Ki Pro Mini (left).  Then there was the even smaller and lighter Atomos Ninja (right), intended specifically for use with certain types of cameras.  And Cinedeck Extreme v. 2.0 allows direct use of Avid’s DNxHD codec.  Sonnet’s Qio MR brings the ability to play essentially all popular types of camcorder flash cards (including Panasonic’s P2 and Sony’s SxS) to Windows-based tower computers.

Then there were transportable systems, bigger than those above but still usable in the field.  One was the Globalstor Extremestor Transport.  Comparably sized but serving a very different function was Marvin (left), from Marvin Technologies.  It accepts almost any form of field recording and then, according to preselected options, automatically makes copies, including archival tape cartridges and DVD screening copies.

The tiny storage devices introduced at IBC 2010 were joined by tiny encoders for distribution.  The ViewCast Niagara 4100 was small, the TV1.EU miniCaster smaller, and the Teradek Cube smaller still.  Clear-Com’s HelixNet intercom won an award from TV Technology Europe.  It’s a digital intercom system using microphone-type cables like older analog systems (but also very much like Riedel’s already existing digital Performer series).

Quantel QTube trimmed

There was much more at IBC.  Cloud-based editing (an Internet Explorer screen from Quantel’s QTube is shown above), a new acoustic summing algorithm, a multi-touch video wall — and those were just some of the items in the public exhibits.  In private rooms, one could find such items as TiVo’s integration of YouTube and Sony’s 24-inch OLED and terabyte memory card.

uWand trimmed

Then there was uWand, an unusual remote control from Philips.  Like so many others, it uses infra-red signals.  Unlike those others, it receives those infra-red signals rather than emitting them, so a user can, for example, move an image from a TV screen to a digital picture frame, just by aiming the remote control.

IBC clearly isn’t just about broadcasting anymore.  For more of my take on IBC 2010, see the PowerPoint from the Schubin Cafe IBC review on October 12, available here:


Pico the Hits

July 15th, 2010 | No Comments | Posted in Schubin Cafe
othello-press-2_wedding - small

Salma Shaw & Roger Payano in Synetic Theater’s “Othello”

Roger Payano has a BS in mechanical engineering and an MS in industrial engineering and has worked in the defense industry, but when he was applauded recently at the Kennedy Center in Washington, D.C., it had nothing to do with his engineering prowess.  He played the title role in the Synetic Theater production of Othello, and everyone in the audience knew what his character was thinking, despite the fact that he did not utter a single word during the course of the play.  Neither did any of his fellow actors.

Synetic Theater won two Helen Hayes awards this year for earlier productions.  Prior to Othello this season they presented critically acclaimed versions of such other plays as Antony and Cleopatra and A Midsummer Night’s Dream, also without any spoken words.  But Othello was different.  Audience members could literally see what Othello was thinking, thanks to a video-based technological breakthrough.

Theater has long used technology.  More than two thousand years ago, the ancient Roman poet we call Horace warned writers not to use a “deus ex machina” (a god from a machine) to resolve plots.  He was referring to a practice in ancient Greek theater, at least 500 years older still, of having a crane drop an actor playing a god onto the stage to use supernatural powers to take care of problems.

Nicola Sabbatini 1638 dimmer

Nicola Sabbatini’s 1638 lighting dimmer

The exact dates when the spotlight and the lighting dimmer were invented might never be known, but in 1638 Nicola Sabbatini’s Pratica di fabricar scene e macchine ne‘ teatri, a theatrical-technology instruction book, offered plans for both, not to mention designs for set-changing, flying, and storm- and flame-simulating machinery.  Sweden’s Drottningholms Slottsteater, opened in 1766, still uses the original 18th-century, human-powered stage machinery (and lighting control), which can effect a complete set change — wings, flies, etc. — in a matter of seconds.  You can see it in operation here:

magic lantern 2

One of Zahn’s moving-slide projectors

The use of motion pictures on a screen in theatrical presentations is much older than Sabbatini’s book.  Documentation exists that an 11-piece shadow puppet was used to entertain Emperor Wu of China more than 2000 years ago.  Johann Zahn’s Oculus Artificialis Teledioptricus Sive Telescopium, first published in 1685, showed how to project moving images using mechanical slides in what are today called “magic lantern” or “stereopticon” projectors.  Below is a moving image from one form of motion slide, as shown on the Dutch magic-lantern site de Luikerwaal:


Below this paragraph is another motion picture, said to be the oldest existing long sequence intended to be projected in a theater (older sequences shot by Eadweard Muybridge exist, and some of those were intended to be projected, but they comprise only a few frames each, arranged around a disk).  The motion sequence below was shot by Louis Le Prince in Leeds, England, in 1888.

Le Prince roundhay_animation_small

Was Le Prince trying to invent what we today call “movies”?  There’s no question that he wanted to be able to capture and project live-action sequences, and some say he succeeded before Edison.  But it’s not clear that he intended audiences to enter an auditorium simply to watch his movies.  At least one contemporary document suggests that his invention was intended to provide motion-picture backdrops for live performances.  In 1896, when the Rosabel Morrison Company performed Carmen at the Lyceum Theater in Elizabeth, New Jersey in front of a projected Eidoloscope movie sequence of a bullfight, Le Prince’s goal might have been achieved.

Moving-image projection in the service of theatrical drama has certainly advanced over the past century or so.  When a new production of La Damnation de Faust opened at the Metropolitan Opera in 2008, it utilized multiple high-definition projectors, fed computer graphics generated live from the input of motion-sensing cameras, to provide images that interact with the human performances on the stage.  Thus, a character on stage could pole a gondola, the water rippling in its wake — except for the fact that there was no water, let alone wake and ripples; it was all projected computer graphics.

There are advanced projection systems today that can track moving screens across a stage and not only project on them but also pre-distort the images according to a varying screen angle.  Some can even project on non-flat surfaces.  Here’s one example:

Unfortunately, the more advanced the systems, the less appropriate they seem for live drama.  As Avatar showed, it’s possible to do anything in a movie, from atrophying an actor’s legs to creating an entire non-terrestrial civilization.  And movie tickets, despite recent price hikes, usually cost less than those of live theater.  Might computer-graphics-based images, like amplified voices, make live theater seem more like movies, and, if so, would it be worth paying more than a movie-ticket price to see it?

The wordless Synetic Theater performances don’t use amplified voices, of course.  And the company’s foray into video projection, though it involved such advanced concepts as tracking moving screens and presenting images on irregular surfaces, was remarkably live and human.

Othello_Desdemona_Cassio_Projection small

Othello literally grapples with projected jealousy in Synetic Theater’s production (photo by Graeme B. Shaw)

There were no whirring projector fans.  There was no projection booth.  No illuminated dust motes made the projection beams visible.  At times the images seemed to come directly from Othello’s mind.

In fact, the images came from palm-sized, battery-operated, handheld 3M MPro150 pico-projectors, small enough to be hidden in costumes when not used and barely visible even when they were.  One is shown below on a small tripod.


Introduced for the business-presentation market, pico projectors might not be ideal for that purpose.  With just 15-lumen output, the MPro150 would provide 8 nits of luminance (think brightness) on a plain white screen just 2 feet high.  For comparison, a Panasonic TH-50VX100U 50-inch plasma display, with roughly the same size picture, would offer 1200 nits, 150 times more.  A smaller projected picture would be brighter, but then the business presentation might as well be shown on a laptop screen, also brighter.
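That luminance figure can be checked with simple photometry: an ideal matte (Lambertian) screen has luminance equal to the incident flux divided by π times the screen area.  Below is a minimal sketch of the calculation; the 16:9 aspect ratio, unity screen gain, and zero optical losses are my assumptions, not specifications from the projector.

```python
import math

def screen_luminance_nits(lumens, height_m, aspect=16 / 9, gain=1.0):
    """Luminance (cd/m^2, i.e. nits) of a uniformly lit matte screen.

    For an ideal Lambertian screen: luminance = flux * gain / (pi * area).
    """
    area_m2 = height_m * (height_m * aspect)  # width = height * aspect
    return lumens * gain / (math.pi * area_m2)

# A 2-foot-high (0.61 m) screen lit by the MPro150's 15 lumens:
nits = screen_luminance_nits(15, 0.61)
print(round(nits, 1))  # 7.2 nits for 16:9; a 4:3 screen gives closer to 10
```

Depending on the aspect ratio assumed, the result lands in the 7–10 nit range, consistent with the article’s figure of about 8 nits and roughly 150 times dimmer than a 1200-nit plasma display.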

On a dark stage, however, the images from the pico projectors in Synetic Theater’s production of Othello seemed perfect.  Tracking moving screens was no problem; the actors using the projectors just turned their wrists.  And the human foibles of such tracking seemed to keep the images human, too.  Similarly, projecting hands on the irregular surface of the waist of a character’s dress required only that the actor doing the projecting aim that way.

Perhaps Othello was the first example of what will be an age of on-stage pico projection.  Either way, it was a superb production.


The Elephant in the Room: 3D at NAB 2010

April 30th, 2010 | No Comments | Posted in 3D Courses, Schubin Cafe
implicit range of 3D eyewear at NAB 2010

implicit 3D eyewear range at NAB 2010

As I roamed the exhibits at the NAB show this month, I kept wondering what other year it seemed most like.  And I was not alone.

There were plenty of important issues covered at the show, from citizen journalism to internet-connected TV.  And then there was the elephant in the room.

It would be a lie to say that 3D technologies could be found at every booth on the show floor.  But it was probably the case that there was 3D in at least every aisle.  There was so much 3D that it tended to diminish all other news.

In acquisition technology, for example, LED lighting was nearly ubiquitous, with focusable instruments, such as the Litepanels Sola, sometimes painfully bright.  Panasonic and Sony both showed models of future inexpensive video cameras with large-format imagers, and Aaton joined the ranks of those offering “digital magazines” for film cameras.  In small formats, GoPro’s Hero is a complete HD camcorder weighing just three ounces.

In storage technology, Cache-A, For-A, IBM, and Sony all showed new offerings demonstrating that tape is not dead.  Meanwhile, iVDR removable-hard-drive storage could be seen in several new products, and Canon introduced new camcorders based on CompactFlash cards.

Cinedeck looks like a viewfinder but includes built-in storage and editing capability. NextoDI’s NVS 2525 can copy either P2 or SxS cards.

In processing, Dan Carew’s Indie 2.0 blog said of Blackmagic Design’s DaVinci Resolve 7.0, “this best-in-class color correction software was formerly US$250,000 (for software and hardware) and is now available in a Mac software-only version for US$995.” Immersive Media’s 11-camera spherical views can now be stitched and streamed live.  NewTek’s TriCaster TCXD850 can deal with 22 inputs and virtual sets.  And, though you might not yet be able to figure out why you’d want this capability, Snell’s Kahuna 360 production switcher can deal with up to 16 shows at once.

In wireless distribution, there was VµbIQ’s 60 GHz uncompressed transmitter on a chip and Streambox’s Avenir for bonding up to four cellular modems to create a 20 Mbps channel.  In wired, there was Pleora’s EtherCast palm-sized bidirectional ASI-IP gateways.  And, in technologies that could be applied to either, there were Fraunhofer’s codec with a latency of just one macroblock line and a Harris-LG/Zenith proposal for expanding ATSC mobile transmission to full-channel use.

In presentation, there was a reference picture monitor from Dolby (seen in almost its final form at the HPA Tech Retreat).  Several booths had OLED monitors, from 7-inch at Sony to 15-inch at TVLogic.  Wohler’s Presto router has an LCD video display on each button.  And Ostendo’s CDM43 is a curved monitor with a 30:9 aspect ratio.

That barely scratches the surface of the non-3D news from NAB.  And then there was 3D.

Even All-Mobile Video’s Epic 3D production truck, parked in Sony’s exhibit, wore 3D glasses.  But it was the glasses on visitors to the truck that proved more instructive.

Sony provided RealD circularly polarized glasses to visitors for looking at everything from relatively small monitors to a giant outdoor-type LED display.  As soon as those visitors entered the control room of AMV’s Epic 3D truck and donned their glasses, however, they saw ghosting — crosstalk between the two eye views.  AMV staff were prepared for the shocked looks.  “Sit down,” they said.  “There’s a narrow vertical angle, and you have to be head-on to the monitors.”  Sure enough, that solved the problem — at least for those who could sit.

Another potential 3D problem was mentioned in the two-day 3D Digital Cinema Summit before the show opened.  If 3D is shot for a small screen and blown up to cinema size, it can cause eye divergence.  3ality’s camera rigs indicate when this might happen, but it happened anyway on at least one cinema-sized screen at NAB, leading to some audience queasiness.
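The geometry behind that queasiness is straightforward: background (positive) parallax is stored in the image as a pixel offset, so its physical size on the screen grows with screen width, and once it exceeds the viewer’s roughly 65 mm interocular distance the eyes must diverge to fuse it.  A minimal sketch follows; the pixel counts and screen widths are illustrative assumptions, not figures from the summit.

```python
def parallax_mm(parallax_px, screen_width_m, image_width_px=1920):
    """Physical on-screen separation produced by a stored pixel disparity."""
    return parallax_px * screen_width_m * 1000 / image_width_px

def causes_divergence(parallax_px, screen_width_m, eye_separation_mm=65):
    """True when background parallax exceeds the interocular distance."""
    return parallax_mm(parallax_px, screen_width_m) > eye_separation_mm

# 15 px of background parallax, comfortable on a 1 m-wide TV...
print(parallax_mm(15, 1.0))         # 7.8125 mm: easily fused
# ...forces divergence when the same image fills a 10 m cinema screen:
print(parallax_mm(15, 10.0))        # 78.125 mm: wider than the eyes
print(causes_divergence(15, 10.0))  # True
```

This is why rigs such as 3ality’s flag shots whose background parallax would exceed the interocular distance on the largest intended screen.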

Buzz Hays of the Sony 3D Technology Center says making 3D is easy, but making good 3D is hard.  There was a lot of 3D at NAB, including both easy and hard, good and bad.

It was hard to count the number of side-by-side and beam-splitter dual-camera rigs at the show, but, in addition to those, there were integrated (one-piece) 3D cameras and camcorders, in various stages of readiness, from 17 different brands, both on and off the show floor.  It seems that all of them were said to be “the first.”


Much could be learned about 3D at the two-day Digital Cinema Summit before the show opened.  It began with Sony’s Pete Lude showing that an ordinary 2D picture can seem 3D when viewed with just one eye, leading a later speaker (me) to quip that watching with an eye patch, therefore, is an inexpensive way to get 3DTV.

3ality’s Steve Schklair followed Lude with an on-screen, live demonstration-tutorial on the effects of different 3D rig settings: height, rotation, lens interaxial, convergence, etc.  He was followed by directors, stereographers, and trainers of 3D-convergence operators, among others.

Although 3D would seem to require more equipment (two cameras and lenses plus a stereo rig at each location) and more personnel (a convergence operator per camera in addition to a stereographer), there is seemingly one saving grace.  According to Schklair and others, 3D can get away with fewer cameras and less cutting than 2D.

The same thing was said of HD, however, in its early days.  Sure enough, when I worked on one show in 1989, we used just four HD cameras feeding the HD truck and twice as many non-HD cameras feeding the non-HD truck.  In the early days, it was common practice to do separate HD and SD productions.  Today, of course, one HD production feeds all, and it typically uses as many cameras and as rapid cutting as an SD show.

Atop a tower at Fujinon’s NAB booth, Pace showed something that recognizes the current economics of 3D.  With virtually no 3DTV audience, it’s hard to justify separate 3D productions, but, with such major players as ESPN, DirecTV, Discovery, and Sky involved in 3D, the elephant cannot be ignored, either.  So the Pace Shadow system places a 3D rig atop the long lens of a typical 2D sports camera.  Furthermore, it interconnects the controls (in a variety of selectable ways) so that the operator of the 2D camera need not be concerned about shooting 3D: one camera position, one operator, different 2D and 3D outputs.

Screen Subtitling came up with similarly clever solutions to the problem of 3D graphics.  Unless text is closer to the viewer (in 3D depth) than the portion of the image that it is obscuring, it can be uncomfortable to read.

Traditionally, subtitles are at the bottom of a screen, where 3D objects are closest to the viewer.  Raise the graphics to the top, and they might work in the screen plane.

Then there’s the issue of putting the graphics on the screen.  With left- and right-eye views, it might seem that two keying systems are required.  But with much 3D being distributed in a side-by-side format, a single keyer can place 3D graphics directly into the side-by-side feed.
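The single-keyer trick can be sketched in a few lines: in a side-by-side frame, the graphic is keyed twice, once into each half, with a small horizontal offset between the two copies setting its apparent depth.  This is an illustrative sketch, not Screen Subtitling’s actual implementation; the grayscale arrays, coordinate convention, and sign of the depth offset are all my assumptions.

```python
import numpy as np

def key_into_side_by_side(frame, graphic, alpha, x, y, pop_px=0):
    """Alpha-blend a graphic into both halves of a side-by-side 3D frame.

    frame:   2-D (grayscale) side-by-side image, left eye in the left half.
    x, y:    graphic position within one half, in half-frame coordinates.
    pop_px:  crossed disparity; a positive value shifts the left-eye copy
             right and the right-eye copy left, floating the graphic in
             front of the screen plane so it stays readable over closer
             scene objects.
    """
    half = frame.shape[1] // 2
    gh, gw = graphic.shape
    for eye_x, base in ((x + pop_px // 2, 0), (x - pop_px // 2, half)):
        roi = frame[y:y + gh, base + eye_x:base + eye_x + gw]
        roi[:] = alpha * graphic + (1 - alpha) * roi
    return frame

# Key a 2x2 white caption into a tiny side-by-side frame, slightly in front:
sbs = key_into_side_by_side(np.zeros((4, 16)), np.ones((2, 2)),
                            alpha=1.0, x=4, y=1, pop_px=2)
```

Because the graphic is keyed into the already-squeezed halves, it carries the same anamorphic squeeze as the rest of the picture and unsqueezes correctly on display.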

Screen Subtitling small

copyright 2010 Inition | Niche | Pacific

There was much more 3D at the show, in every field of video technology (and perhaps even audio).  In acquisition, for example, aside from integrated cameras, 3D mounts, and even individual cameras designed specifically for 3D (like Sony’s HDC-P1), there were also 3D lens adaptors, precision-matched lenses, precision lens controls, and even relay optics intended to allow wider cameras to be placed closer together, as in this picture shot by Eric Cheng of

At the other end of the 3D chain, there were both plasma and LCD autostereoscopic (no-glasses) displays using both lenticular and parallax-barrier technology, small OLED displays with active-shutter glasses and giant LED screens with passive circularly polarized glasses.  There were LCD and plasma screens (up to 152-inch at Panasonic) and DLP rear-projectors using active-shutter glasses, and both LCD and laser projection using passive polarized glasses.

There were dual-panel displays with beam splitters, and displays intended to be viewed through long strips of fixed polarized materials (to accommodate all viewers’ heights).  There were many anaglyph displays in the three different primary-and-complement color combinations.  There were 3D viewfinders using glasses and others with displays for each eye.

Japan’s Burton showed a laser-plasma display that creates 3D images in mid-air.  Normally, they’re viewed through laser-protection goggles, as in the image at the right at the top of this post.  But as a safety measure, the images were shown instead inside an amber tube at NAB.

In storage, it seems that everyone who had anything that could record images had a version that could do so in 3D.  Even Convergent Design’s tiny Nano was available in a 3D version.  The Abekas Mira is an eight-channel digital production server — or it’s a four-channel 3D digital production server.  Want an uncompressed 3D field recorder?  Keisoku Giken’s UDR-D100 was just one such product at the show.

In processing, just about every form of editing and processing had a 3D version.  Monogram showed a touch-screen 3D “truck-in-a-box” production system.  Belgium’s Imec research lab even showed licensable technology for stereoscopic virtual cameras.

There was a range of equipment and services for converting 2D to 3D either in real time or not, automatically and with human assistance.  And there was a large range of processing equipment designed to fix 3D problems, such as camera rotation and height variation.

Sony’s MPE200 is one such device, with a U.S. list price of $38,000.  The MPES3D01/01 software to run it, however, is another $22,500.  With the least-expensive 3D camera at the show (Minoru 3D) retailing for under $60, it might be said that 3D is cheap, but good 3D costs.

There was 3D test equipment from many manufacturers.  There was high-speed 3D (Antelope/Vision Research).  There was 3D coax (Belden 1694D, complete with anaglyph color coding).  Ryerson University is doing eye-tracking research on what viewers look at in 3D and whether it differs from HD and 4K.

So why was I wondering what year it was?  At NAB shows there have been many technologies shown that never went anywhere.  We still await voice-recognition production switchers, for example, and also voice-recognition captioning.  But those have generally been shown by only one company or a small number of exhibitors.

Digital video effects were among the fastest technologies to penetrate the industry.  First shown at NAB in 1973, they were commonly seen in homes by the end of the decade.

Then there was HDTV.  Its penetration after NAB introduction took much longer, even if dated only from 1989, when an entire exhibition hall was devoted to the subject (there were many earlier NAB displays).  Estimates vary, but U.S. household penetration of HDTV 21 years later seems to be in the vicinity of half.

At least HDTV did eventually penetrate U.S. households.  Visitors to NAB conventions in the early 1980s could see aisle after aisle of exhibits claiming compatibility with one or both competing standards for teletext.  One standard was being broadcast on CBS and NBC; the other on TBS.  There were professional and consumer equipment manufacturers and services offering support.  Based on the quantity and diversity of promotion at NAB, it was hard to imagine that teletext would not take off in the U.S.

So, will 3DTV emulate digital effects, HDTV, U.S. teletext, or none of the above?  Time will tell.


Panasonic 3D Camcorder: Show Us the Money

February 12th, 2010 | 2 Comments | Posted in 3D Courses, Schubin Snacks

In the 3D-in-the-Home “supersession” at next week’s HPA Tech Retreat, one presentation is titled “Are You Nuts?”  I thought of that at today’s Panasonic pre-NAB press conference.

BT-3DL2550_SLANT small

Panasonic's $9900 BT-3DL2550 monitor uses passive cross-polarized glasses

AW-HS50_front small

The tiny AW-HS50 HD switcher includes a multiviewer

Let me emphasize from the outset that I was not thinking about Panasonic in the “nuts” category.  As best I could tell, no one from the company lied, which is my highest praise at press conferences.  No one avoided questions.  There was legitimate news (such as an inexpensive P2-to-USB adaptor, two tiny HD switchers, and a small HD pan-tilt-zoom camera optimized for IP networks).  I also think Panasonic builds good equipment.


No, the people I thought were nuts were some potential customers for something the company sort of unveiled at the recent Consumer Electronics Show (CES), an integrated (one-piece) 3D camcorder (shown above) to be delivered this fall at a list price of $21,000.  Panasonic said it had received thousands of inquiries about the product, some seeking to buy it sight unseen.

It is those blind-faith customers that I think are nuts.  Here’s why (and also why I said Panasonic only “sort of” unveiled the product at CES):

The camcorder has twin zoom lenses.  What is their widest angle?  Their tightest?  Panasonic representatives at the meeting didn’t avoid the question; they said it hadn’t been determined yet.

The camcorder will be capable of some amount of stereoscopic convergence.  How much?  Again, it has not yet been decided.  Also undecided, for this one-person, compact camcorder, is whether or not there will be any mechanism to tie convergence to focus.

One Panasonic representative did point out that the spacing of the lens centers is smaller than that of an adult human’s pupils and will not be getting bigger.  Based on a rough measurement I made, it appears to be about 57 mm.  That puts an outer limit on the maximum diameter of the lenses, which, coupled with the fact that the system uses 1/4-inch-format image sensors, means it will not be the most sensitive camcorder on the market.

When a journalist at the press conference inquired about using the camcorder for cinema content, a Panasonic representative emphasized that it had those 1/4-inch-format image sensors.  He got high points from me for that answer.

So what is the intended market?  At $21,000, it seems priced too high for most consumers.  At CES, DXG showed a pocket-sized $400 3D camcorder (shown here to the left) with a 3D viewfinder (something Panasonic’s AG-3DA1 lacks), albeit non-HD and with much smaller lens-center spacing.

In the professional, HD realm, 3D-One offers four 3D camcorder models, all with nominal adult-vision lens spacing, 3D viewfinders, larger image sensors, and specified lenses and convergence.  Their CP-20 is shown below.  I wrote about them here in September:

3D-One CP-20

At the press conference, Panasonic indicated receiving inquiries ranging from dental to military applications, including sports.  But a 57-mm lens-center spacing doesn’t lend itself to long-distance 3D shooting in a sports venue.
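The limitation is geometric.  For a parallel rig, the on-sensor disparity difference between two objects at distances d1 and d2 is approximately f·B·(1/d1 − 1/d2), so with the baseline B fixed at 57 mm, the stereo depth separating two subjects collapses rapidly as the camera moves away.  A quick sketch of that standard small-angle stereo formula follows; the 50 mm focal length is an arbitrary assumption for illustration.

```python
def relative_disparity_mm(baseline_mm, focal_mm, d1_m, d2_m):
    """On-sensor disparity difference between objects at d1 and d2
    for an idealized parallel stereo rig (small-angle approximation)."""
    return focal_mm * baseline_mm * (1 / (d1_m * 1e3) - 1 / (d2_m * 1e3))

# Two subjects one meter apart, 57 mm baseline, 50 mm lens (assumed):
near = relative_disparity_mm(57, 50, 5, 6)    # subjects at 5 m and 6 m
far = relative_disparity_mm(57, 50, 50, 51)   # same separation at 50 m
print(near / far)  # ~85: the depth cue is ~85x weaker at stadium distance
```

With the same one-meter separation, the depth cue falls off roughly as the square of the shooting distance, which is why long-distance sports 3D favors wider interaxial spacing than the human-eye-scale 57 mm.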

So, who is really interested in buying what Panasonic says will be a made-to-order product?  At the press conference, the company announced a way to find out.  Starting today, they will accept orders for this device of unknown optical capabilities, but each order is to be accompanied by a non-refundable $1000 deposit.

Panasonic hopes to learn much from this first-generation product.  Maybe we all will.


Someone Will Be There Who Knows the Answer

January 15th, 2010 | No Comments | Posted in 3D Courses, Schubin Cafe

The Oversight Executive for Motion Intelligence of the Office of the Under Secretary of Defense for Intelligence is scheduled to be in the southern California desert next month.  So are the chief technology officers (CTOs) of both Panasonic and Sony.  So is the head of the Visual Space Perception Laboratory at the University of California – Berkeley.  So is one of the developers of Cablecam.  So is the CTO of Cable Television Laboratories.  So is a co-inventor of MP3.  So is the mysterious Mo Henry, whose credit has appeared in movies ranging from Apocalypse Now to Zombieland.

The list could go on and on.  Hundreds of top technical executives will be there. CTOs and VPs of Hollywood studios and television networks will be there.  So will the head of emerging technologies of the European Broadcasting Union.  So will the VP of standards of the Advanced Television Systems Committee (ATSC) and the director of engineering and standards of the Society of Motion-Picture and Television Engineers (SMPTE).  Where will they be?

It’s the 16th annual Hollywood Post Alliance Tech Retreat, February 16-19 at Rancho Las Palmas conference center in Rancho Mirage, California.  But every part of that title can convey a false impression.

HPA, for example, is not yet 16 years old, but the retreat is older.  When the organization that created it, the Association for Imaging Technology and Sound, went belly up, HPA’s founders thought the retreat was too important to die, so they took it over.  After 9/11, when other events went down in attendance, the retreat went up.  It has actually had to turn people away on occasion because it has sold out.

Similarly, “Hollywood” and “Post” are misleading.  The event is not (and has never been) in Hollywood.  Its participants come from all over the world, from New Zealand to Norway, and from Bombay to Buenos Aires.  If someone at the retreat is from NATO, that could be the North Atlantic Treaty Organization or the National Association of Theater Owners (both have sent representatives, sometimes at the same retreat); similarly, there have been representatives from MPEG (the Moving Picture Experts Group) and MPEG (the Motion Picture Editors Guild).


Walkin’ in a Camera Wonderland

September 20th, 2009 | 3 Comments | Posted in 3D Courses, Schubin Cafe
If you want to see products that don’t appear in U.S. trade-press magazines, you need to go beyond NAB, SMPTE, and InfoComm. You need to go to the International Broadcasting Convention.


IBC is my favorite trade show. I can leave work, catch an evening flight to Amsterdam, and take a train directly from the airport to the convention center. If I’m hungry, some exhibitor will be providing food. Thirsty? Water, various forms of coffee, juices, beer, and wine flow freely. IBC even throws a party to which everyone is invited. But none of that is why I like it so much.

Americans tend to forget that we are not alone. Back in the days of RCA cameras, you needed to come to IBC to see those of the UK-based manufacturer Pye.

Today, we tend to think of NAB as an international show. Cameras are shown there by such Japanese manufacturers as Hitachi, JVC, Panasonic, Sony, and Toshiba. And Grass Valley’s cameras at NAB come from Europe. So why bother with IBC?
