Moving Slowly to the Next Miracle
October 4th, 2007 | Posted in Schubin’s Greatest Hits by sfelix
Originally published in Videography October 2007

Who makes small professional HDTV camcorders?  That’s easy: Canon, JVC, Panasonic, and Sony.  Who else makes larger HDTV cameras?  Grass Valley, Hitachi, and Ikegami.  How about tiny HDTV cameras?  Easylook, Iconix, Lux Media Plan, and Toshiba.  Cameras with resolutions beyond HDTV?  ARRI, Dalsa, Olympus, Red, and Panavision.  Then who are the digital moving-image camera manufacturers AOS, CPL, DRS, Fastec, NAC, Photo-Sonics, PCO, Redlake, Shimadzu, and SVSi?  Perhaps you’ve come across i-Movix, Kinor, Photron, Vision Research, Weinberger, and Weisscam at a broadcasting convention, but even they’re not common names — at least not yet.

Perhaps they should be.  They seem to outclass traditional video cameras in many characteristics.

Some videographers complain about too much depth of field on video cameras, an inability to direct viewer attention by focusing on only one character or object in a picture.  With the same framing and aperture, depth of field shrinks as imager size grows.  The photosensitive chips in Sony’s HVR-V1 have a 3.9-mm image width.  The ones in a standard 2/3-inch-format HDTV camera are 9.6-mm wide.  But those in Vision Research’s Phantom 65 cameras have a 25.6-mm image width, more than six-and-a-half times as wide as those in the V1, allowing extraordinary direction of a viewer’s attention through limiting what’s in focus.
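As a back-of-the-envelope sketch (assuming, as a rough rule of thumb, that with framing and f-number held constant, depth of field scales inversely with imager width), the widths quoted above compare like this:

```python
# Rough comparison of the imager widths quoted above.
# Assumption: with framing and f-number held constant, depth of
# field scales roughly inversely with imager width, so a wider
# chip gives proportionally shallower focus.

hvr_v1_mm = 3.9      # Sony HVR-V1 image width
two_thirds_mm = 9.6  # standard 2/3-inch HDTV format
phantom65_mm = 25.6  # Vision Research Phantom 65

ratio = phantom65_mm / hvr_v1_mm
print(f"Phantom 65 vs. HVR-V1 width ratio: {ratio:.2f}x")  # ~6.56x
print(f"Approximate depth-of-field ratio: 1/{ratio:.2f}")
```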

Consider camera size.  Iconix Video’s HD-RH1 camera head is a tiny 33.5 x 38 mm; NAC has a camera that’s just 21 x 21 mm.  What about resolution?  The Red One camera has an astounding 2540 rows of pixels, more than twice as many as 1080-line HDTV; the Photo-Sonics SIR2 has 2688.

Then there’s sensitivity.  One measure of a camera’s sensitivity is the size of its pixel sensors.  A standard 2/3-inch 1920 x 1080 camera has pixel sensors about five microns (millionths of a meter) on a side, considerably larger than those in the HVR-V1.  But in NAC’s Memrecam fx K4 they’re roughly 21.7 microns on a side, about 18.8 times the area, or roughly the difference between f/1.8 and f/8 (or between f/8 and f/35).  And, before you get too impressed by that 18.8 figure, consider that Photo-Sonics has a camera with an image intensifier that can increase the effective light level by a factor of up to 7,000!
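The arithmetic behind those figures is simple: light gathered per pixel scales with pixel area, and an area ratio R corresponds to log2(R) photographic stops.  A quick sketch:

```python
import math

# Back-of-the-envelope check of the sensitivity figures above.
# Light gathered per pixel scales with pixel area; an area ratio R
# corresponds to log2(R) photographic stops.

std_pitch = 9.6 / 1920 * 1000  # 2/3-inch 1920x1080 pixel pitch, microns (= 5.0)
nac_pitch = 21.7               # NAC Memrecam fx K4 pixel pitch, microns

area_ratio = (nac_pitch / std_pitch) ** 2
stops = math.log2(area_ratio)
print(f"Area ratio: {area_ratio:.1f}x")  # ~18.8x
print(f"Equivalent stops: {stops:.1f}")  # ~4.2 (f/1.8 to f/8 is ~4.3 stops)
```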

Why would anyone need that much sensitivity?  Suppose you’re shooting a commercial in a well-lit studio.  You’ve adjusted the lighting so that you’re shooting at f/4, probably about the sharpest aperture for a 2/3-inch format.  Then the director calls for something in slow motion.

Some say that slow motion was invented — literally — by August Musger.  He applied for an Austrian patent on December 3, 1904, and it was issued as number 23608 on August 15 of the next year.

Musger was a priest.  He was also a big fan of the new moving-image art form.

There’s a reason movies are called “flicks.”  In the medium’s early days, with picture-repetition rates ranging roughly between 15 and 20 per second, pictures flickered horrendously.  Musger wanted to do something about that, so he developed an image-blending optical system (a version of which is still used today in some film-editing equipment) that allowed movies to be projected at essentially any rate without flicker.

A side-effect of Musger’s invention was that it allowed slow motion to be achieved by slowing the projected frame rate.  Unfortunately, it did nothing to improve temporal or dynamic resolution.  Musger-style slow motion was blurry and jerky.

In an era of hand-cranked cameras, cinematographers soon discovered a different way to achieve slow motion.  If they cranked faster than normal (“overcranking”) when acquiring the images, motion would appear to slow down when the film was projected at normal speed.

“Overcranking” is still the term used for slow-motion capture, long after motors replaced cranks in movie cameras.  And with motors much higher frame rates could be achieved — thousands of frames per second (fps).  That might not be too useful in a movie (imagine spending two hours watching just a single character move across the screen), but it was very useful for scientific motion analysis, even if the camera could hold only enough film to capture a fraction of a second.

In 1948, a definition associated with the Society of Motion-Picture Engineers (which later became SMPTE) said slow motion required a frame rate of at least 128 fps with at least three consecutively captured frames.  But there was a problem, the problem of that well-lit commercial shoot.

If the appropriate exposure at 24 fps was f/8, then at 48 fps it would be f/5.6, at 96 fps f/4, and so on; each doubling of the frame rate halves the exposure time, costing one stop of aperture.  Long before a thousand fps, a cinematographer would run out of aperture.  Solutions included putting much more light on the subject (as long as it wouldn’t melt from the associated increased heat) and using more sensitive film.
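That stop-per-doubling progression can be sketched in a few lines (assuming a fixed shutter angle, so exposure time is inversely proportional to frame rate):

```python
import math

# Sketch of the exposure arithmetic above: at a fixed shutter angle,
# doubling the frame rate halves exposure time, costing one stop.
# Starting from f/8 at 24 fps, the aperture needed at rate r is:
#   f_new = f_base / sqrt(r / 24)

def aperture_for_rate(rate_fps, base_f=8.0, base_rate=24.0):
    return base_f / math.sqrt(rate_fps / base_rate)

for rate in (24, 48, 96, 192, 1000):
    # 48 fps prints f/5.7; the nominal stop marking for 5.66 is f/5.6
    print(f"{rate:5d} fps -> f/{aperture_for_rate(rate):.1f}")
```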

Unlike film, with variable-rate cranking or motors, video traditionally had a fixed frame rate.  But much television programming was shot on film.

In 1962, CBS introduced slow-motion sports replays by shooting them with a movie camera, rapidly developing the film, and then playing the film back slowly via a Musger-like film-to-video scanning system.

Even before that, in 1959 Toshiba showed the VTR-1, a videotape recorder that could do much the same thing, minus the film-development time.  By 1965, Precision Instrument brought out a variable-speed videotape recorder.  Early in 1967, MVR’s VDR 250 offered slow-motion video with a disk recorder, and later that year ABC broadcast full-color slow-motion instant replay using Ampex’s HS-100 disk recorder.

The disk recorders offered glitch-free video, but those sequences had the same problem as Musger’s (and CBS’s film-based) slow motion.  The images were shot to be seen at about 30 fps, so they looked blurry and jerky at slower frame rates.

ABC wanted something different for its coverage of the 1984 Olympic Games in Los Angeles.  They worked with Sony to develop what became known as Super Slo-mo.

Sony’s BVP-3000 camera could shoot at roughly 90 fps instead of 30 fps, and their BVH-2700 videotape recorder could also run at three times normal speed, with three video-recording heads on its drum instead of one.  When the tape was played back at normal speed, the result was smooth, clear slow motion, as though ordinary video had been shot in a world in which everyone and everything moved at one-third the normal rate.  Meanwhile, motion analysts continued to use very-high-frame-rate film cameras for their work.

The BVP-3000 camera used imaging tubes, not solid-state sensors, but the latter would soon replace the former.  And then there was a development seemingly unrelated to slow motion.

Video in much of the Americas and in some other countries has a 29.97-fps frame rate; in the rest of the world, it’s 25.  Treating the first as a nominal 30 fps: if the two rates share a frame at a moment in time, the next five 30-fps frames will occur at moments different from those of the next four 25-fps frames before there’s another frame coincidence, a fifth of a second later.
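Using exact fractions and the nominal rates, a few lines of Python (an illustrative sketch, not anything a converter actually runs) confirm that pattern:

```python
from fractions import Fraction

# Frame times for the nominal rates (treating 29.97 as 30 fps).
# Frames coincide only when a multiple of 1/30 s equals a multiple
# of 1/25 s, i.e. every 1/5 second: five intermediate 30-fps frames
# and four intermediate 25-fps frames fall between coincidences.

t30 = {Fraction(n, 30) for n in range(7)}  # frames 0..6 at 30 fps
t25 = {Fraction(n, 25) for n in range(6)}  # frames 0..5 at 25 fps

shared = sorted(t30 & t25)
print("Coincident frame times:", [str(t) for t in shared])  # ['0', '1/5']
```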

As a result, viewers of sequences converted from one rate to another traditionally saw jerky motion, double images, blurring, or some combination of the three.  With digital processing technology, however, there was another option.  It was, at least in theory, possible to analyze the motion of every object in the image and predict where it would be at any moment in time.  It would then be possible to create a frame for a particular frame rate that matched what a frame would have looked like had it been shot at that frame rate in the first place.
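The prediction idea can be reduced to a toy one-dimensional sketch.  This is purely illustrative (the function name and the single-object, constant-speed assumption are mine, and real converters estimate motion per region of the image), but it shows the principle: place the object where it would have been at the target frame’s timestamp, rather than blending the two source frames.

```python
# Toy illustration of motion-compensated frame creation, assuming a
# single object moving at constant speed between two source frames.

def predicted_position(p0, p1, t):
    """Object position at fractional time t (0..1) between frames."""
    return p0 + (p1 - p0) * t

# Object at pixel 10 in frame A, pixel 30 in frame B.
# A new frame is needed 40% of the way between them:
print(predicted_position(10, 30, 0.4))  # 18.0
```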

When used to transfer video between 29.97- and 25-fps, such technology is called motion-compensated frame-rate conversion.  But why do those have to be the only frame rates?

Snell & Wilcox, a manufacturer of motion-compensating standards converters, long ago showed a prototype of something called Gazelle, a box that could deal with just such arbitrary frame rates.  It could remove the jerkiness from essentially any rate of slow motion.  Although it couldn’t remove image blur, shuttered cameras could achieve sufficiently high dynamic resolution without having to operate at faster frame rates.

Unfortunately, Gazelle didn’t become a product.  It would have been too expensive.  Cost was also a reason why Super Slo-Mo didn’t spread to every video camera and recorder.  But motion analysts sometimes had more money than did videographers, so they could drive the development of non-film-based systems for their work.

As high-frame-rate motion imaging moved from film to electronic cameras, solutions to the sensitivity problem included larger pixel-sensor boundaries (to gather more light) and the use of image intensifiers.  As you’ve probably deduced by now, the unfamiliar manufacturers listed at the beginning of this column make high-frame-rate cameras, largely for that scientific motion analysis.  But they’ve found an interesting recent serendipity.

Motion analysts want spatial detail as well as high frame rates; that sounds like HDTV or beyond.  Large pixel-sensor boundaries with a large number of pixel sensors means a large imager, just the thing for those seeking limited depth of field.  And, for sports replay and commercial production, detailed, silky-smooth slow motion can be very attractive.

Tech Imaging Services adapted a Photron motion-analysis system for CBS’s SwingVision (i-Movix showed another Photron adaptation for videographers at the National Association of Broadcasters convention this spring).  Fletcher Chicago has done something similar with a NAC system adapted for use by ESPN.  And some slow-motion systems, such as the Vision Research Phantom HD, the Weisscam HS-1, and the Weinberger Cine SpeedCam, are specifically intended for use by cinematographers and videographers, not motion analysts.

Does that mean videographers will soon be more familiar with Fastec, NAC, Photron, and Vision Research than with Grass Valley, Ikegami, Panasonic, and Sony?  Perhaps not.

There are some areas in which traditional cameras surpass the high-speed newcomers.  They include controls, image processing, system connectivity, and, perhaps most significantly, cost.

Grass Valley’s big announcement at the International Broadcasting Convention last month was the addition of a 100- or 120-fps frame rate to the LDK-8000; one high-speed camera can shoot an effective 200,000,000 fps (for cyclical events), but it’s not inexpensive.  The same economic consideration that has kept Gazelle off the market has also kept slow motion out of the hands of most videographers.

FrameFree Technologies (described here last year) offers a new form of video processing that, as its name suggests, is independent of frame rate.  Sequences between key frames can last any duration.  It could be the system that brings processing-based slow motion to the masses.

As for camera-based slow motion, Casio has announced an Exilim camcorder that will shoot high-resolution images at up to 300 frames per second.  Pricing hasn’t yet been announced, but Casio’s other Exilim products are well within consumer price ranges.

All high-frame-rate cameras suffer from the light-sensitivity problem, but processor-based slow motion systems like Gazelle or a hypothetical FrameFree version have to predict the missing frames.  Sometimes those predictions are correct; sometimes they aren’t.  At a Gazelle demonstration, when an image of the tire of a car speeding along a gravel road was slowed by a factor of 50:1, some of the gravel appeared to move the wrong way.

Of course, that was in an early prototype many years ago.  There’s little doubt that slow-motion capability — whether by processing or by high-frame-rate cameras — will soon be spreading.

Pictures are looking better, sensitivity is greater, there’s more spatial (and even temporal) resolution, cameras are smaller and more rugged, recording times are longer, and prices are dropping.  It’s just happening in slow motion.

###
