Here are two questions currently facing videographers: What is the size of a Super 35 mm film frame? And, more important, who cares?
Throughout the history of video, there has been a trend towards the smaller. The first video camera occupied substantial portions of two rooms. The first all-electronic camera to be sold was much smaller, but it still utilized a camera tube with an image the size of an index card — about 144 mm in diagonal. That giant tube was followed by versions with image diagonals of about 46 mm, 21 mm, 16 mm, and 11 mm. The last matched the image size of so-called 2/3-inch solid-state cameras.
Those, in turn, were followed by versions with roughly 8-mm, 6-mm, 4-mm, and 3-mm diagonal image sizes. At the same time, videotape dropped from 2-inch to 1-inch, 3/4-inch, 1/2-inch, 1/3-inch, 1/4-inch, and 1/6-inch sizes. A “portable” video camera and recorder went from something requiring a truck to transport to a shoulder-mountable package to something hand-held to something wearable.
Lighting has evolved from giant, smoking arc-lights through ten-thousand-watt incandescent bulbs to fluorescent lamps small enough to hide behind a pinky finger. Editing gear has moved from multi-room facilities to sub-notebook computers.
There’s no doubt that there has been an unstoppable trend towards the smaller. There’s just one problem. The unstoppable just went into reverse.
In an era of 3-mm-diagonal imagers in video cameras, Panavision was recently acclaimed for introducing one with a 28-mm diagonal. That followed an exciting ARRI prototype video camera with an imager with a 30-mm diagonal, a thrilling Dalsa camera with an imager with a 38-mm diagonal, and a remarkable Lockheed Martin model with three imagers, each with a 79-mm image diagonal — bigger than any to appear in a new camera for more than half a century!
Why? The simple answer is that all of those large-imager cameras are intended to use the same lenses used for 35-mm film-based movie cameras. As for why that is important, there are a number of reasons: Cinematographers are accustomed to using those lenses. Some say they are superior to traditional video-camera lenses. And then there’s the mathematical formula for hyperfocal distance.
The hyperfocal distance is the distance that, when focused upon, causes everything from half that distance and farther to be in focus. The formula says hyperfocal distance is calculated by dividing the square of the lens focal length by the product of the f-stop and the diameter of the circle of confusion. The circle of confusion is the largest circle that cannot be distinguished from a point in any imaging system. For video, it may be said to be about 1.5 scanning lines (because if something is in just one scanning line, it cannot be distinguished from a point, and in two it can).
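The formula above can be sketched in a few lines of code. This is an illustration only; the 0.025-mm circle of confusion is an assumed value commonly cited for 35-mm-style frames, not a figure from this article.

```python
def hyperfocal_distance(focal_length_mm, f_stop, coc_mm):
    """Hyperfocal distance: the square of the focal length divided by
    the product of the f-stop and the circle-of-confusion diameter."""
    return focal_length_mm ** 2 / (f_stop * coc_mm)

# Example: a 50-mm lens at f/2.8, assuming a 0.025-mm circle of confusion.
h = hyperfocal_distance(50, 2.8, 0.025)
print(round(h / 1000, 1), "meters")  # roughly 35.7 meters
```

Everything from about half that distance (roughly 18 m) to infinity would be in focus.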
The smaller an imager is, the smaller the lens focal length needed to fill the image with any particular scene. So, the smaller the imager, the closer the hyperfocal distance. That’s good news for amateur photographers using tiny point-&-shoot cameras. It’s less good for cinematographers accustomed to directing viewer attention based on a narrow depth of field.
Depth of field is the range of distances that will be in focus when the lens is focused closer than the hyperfocal distance. In a tight shot looking down a row of faces, a cinematographer or videographer can make just one stand out in focus while the others are blurry, or can shift focus from one to another.
That is, a cinematographer or videographer can do so if the depth of field is sufficiently narrow. The depth of field is determined by the distance between the subject (the faces) and the camera, the focal length, and the hyperfocal distance.
A longer lens focal length means a farther hyperfocal distance, and, therefore, less depth of field. But, on a small-format camera, using a sufficiently long focal length might mean putting the camera so far away that the row of faces loses all perspective (i.e., the farthest face looks about as big as the closest). Depth of field can also be reduced by opening the lens aperture, but there’s a limit to how far it can be opened.
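The relationship between subject distance, hyperfocal distance, and depth of field can be sketched with a common thin-lens approximation (valid when distances are much larger than the focal length); the formulas and example numbers here are illustrative assumptions, not figures from the article.

```python
def depth_of_field_limits(hyperfocal_mm, subject_mm):
    """Approximate near and far limits of acceptable focus
    (thin-lens approximation: distances much greater than focal length)."""
    near = hyperfocal_mm * subject_mm / (hyperfocal_mm + subject_mm)
    if subject_mm >= hyperfocal_mm:
        far = float("inf")  # focused at or beyond the hyperfocal distance
    else:
        far = hyperfocal_mm * subject_mm / (hyperfocal_mm - subject_mm)
    return near, far

# A face 3 m away, shot with a lens whose hyperfocal distance is 36 m:
near, far = depth_of_field_limits(36000, 3000)
# near is about 2.77 m, far about 3.27 m -- a narrow band of focus
```

Push the hyperfocal distance farther out (longer lens, wider aperture, or a bigger imager) and that band narrows; pull it closer (as a tiny imager does) and the band widens until everything is sharp.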
There’s almost no getting around the problem. To perfectly achieve the depth-of-field look of 35-mm film, a videographer using an electronic camera effectively needs to shoot with 35-mm-sized imagers and lenses designed for them.
That might seem easy. After all, every 35-mm movie-equipment rental facility has a broad range of lenses already designed for 35-mm movie cameras. Why shouldn’t such video-camera manufacturers as Grass Valley Thomson, Ikegami, Panasonic, or Sony just increase the size of their imagers so those lenses can be used?
There are, unfortunately, a number of issues to be considered. When HDTV cameras began to be sold, they typically had one-inch imagers (with 16-mm image diagonals). Manufacturers shifted from one-inch to 2/3-inch (11-mm image diagonal) to allow cameras and lenses to be smaller and lighter and to allow purchasers to use their existing 2/3-inch-format lenses. Bigger imagers mean bigger, heavier, more-expensive cameras and lenses.
Then there’s the color problem. High-end color video cameras use three imaging chips, one each for the red, green, and blue primary color ranges. That necessitates a color-separation mechanism (typically prism based), which adds to the distance between lens and imager. The rear of a film-camera lens practically touches the film; the rear of a video lens is much farther from its imagers.
Lockheed Martin kept the traditional, prism-based color-separation approach in its digital-cinematography work. In order to be able to use lenses for 35-mm movie cameras, therefore, they had to add yet more optical elements, in something called a telecentric lens, to move the image from where a film lens would put it to where the imagers need it (changing its size at the same time).
ARRI, Dalsa, and Panavision took a different approach. A Super 35 mm film frame (essentially what used to be called a “silent” film frame, utilizing as much area as possible within the limits of the perforations) has a 4 x 3 shape and is a little smaller than 25 x 19 mm with about a 31-mm image diagonal.
The ARRI D20 project uses a single imaging chip with almost exactly the same dimensions: 24 x 18 mm with a 30-mm image diagonal. The beyond-HDTV-resolution imager is covered with tiny color filters. The single imager allows 35-mm movie-camera lenses to be used as they normally would be, with an image formed right behind the rear optical element.
Based on comments from the American Society of Cinematographers and the Directors Guild of America during the digital-television approval process, Dalsa wanted the single imaging chip in its Origin digital-cinematography camera to have a 2 x 1 shape. That meant it couldn’t exactly match a Super 35 frame. They chose to come close vertically, with a 17.2 mm image height in their prototype. That made the width 34 mm and the image diagonal 38 mm. The larger size means that some wide-angle lenses intended for 35-mm movie cameras might not quite cover the imager.
Like ARRI’s, Dalsa’s imager has beyond-HDTV detail resolution (so do the Lockheed Martin imagers). And, like ARRI, Dalsa uses on-chip color filters.
The single Sony-made imaging chip in Panavision’s Genesis camera also uses color filters, though of a different sort. Given Panavision’s and Sony’s experience in the digital-cinematography field, the image shape in this case is HDTV’s 16 x 9, which (as in Dalsa’s case) precludes exactly matching the size and shape of a Super 35 frame. Dalsa opted to come close to matching the height of the film frame; Panavision and Sony went for closely matching the width. The sensor image area is 24 x 13.5 mm — about 27.5 mm in diagonal. Again, the detail resolution level is well beyond HDTV.
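The diagonals quoted above follow directly from the Pythagorean theorem, as a quick check shows (dimensions are the image areas cited in the text):

```python
from math import hypot

# Image areas cited above (width x height, in millimeters)
sensors = {
    "ARRI D20": (24.0, 18.0),
    "Dalsa Origin": (34.0, 17.2),
    "Panavision Genesis": (24.0, 13.5),
}

for name, (width, height) in sensors.items():
    # hypot(w, h) is sqrt(w**2 + h**2), the image diagonal
    print(f"{name}: {hypot(width, height):.1f}-mm diagonal")
# ARRI D20: 30.0 mm, Dalsa Origin: 38.1 mm, Panavision Genesis: 27.5 mm
```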
One reason for the higher resolutions on the imaging chips is to get closer to the detail offered by a 35-mm film frame. Another is to offset any losses caused by the color filter.
Yet another color-separation technology could eliminate the latter issue. In the film used in movie cameras, the three color-sensitive layers are stacked; the same is true of the Foveon imaging-chip technology. A single chip still allows film lenses to be used, and the stacked-color technology eliminates any on-chip optical-filter reduction of detail. A Foveon-technology-based digital-cinematography camera, however, has yet to be introduced.
Would it be better to have the extra detail anyway? It does bring videography closer to what’s available in a film frame, and it should make pictures appear sharper even on lower-resolution displays. On the other hand, Panasonic’s Varicam, with half the number of picture elements (per chip) of even 1080-line HDTV cameras, has been widely acclaimed by cinematographers. And the super-high-detail signals from the beyond-HDTV imaging chips are more difficult to record than standard HDTV, let alone ordinary video.
If detail resolution isn’t that important an issue, is it possible to achieve film’s depth of field (and the ability to use film lenses) without having to go to a camera with large, high-detail imagers? The answer may be yes.
P+S Technik’s PRO35 Digital is a six-inch-long, under-five-pound optical adapter. One end connects to a standard, 2/3-inch-format video camera (HDTV or otherwise) and the other to a 35-mm-format film lens. In between is a moving ground-glass screen. The film lens places its image on the ground-glass screen, and the video camera shoots that image (with an internal lens designed for no other purpose). The motion keeps any artifacts of the screen from being seen. A Mini35 Digital model performs a similar function for 1/3-inch-format video cameras.
Like all of the other film-depth-of-field systems listed here, P+S Technik’s “image converters” have been praised by cinematographers. Unlike all of the others, they require no special recording systems. A tape-based camcorder can continue to record its signals on tape.
All of the technologies have been appropriately praised for providing what had been lacking in modern, tiny-imager videography. And the new electronic cinematography may help bring the charm of the star of Edward Scissorhands and Pirates of the Caribbean to other actors. After all, the purpose of all of the work with film lenses and larger imagers is achieving the best Depp of field.
SI’s Size Sighs
Although the U.S. hasn’t fully adopted the Système International d’Unités (SI), better known as the metric system, American children are taught some of the basics. There are 25.4 mm per inch, for example.
That should make 2/3 of an inch just under 17 mm. But a 2/3-inch image sensor has an image diagonal of 11 mm. Why?
It’s simple. Camera tubes became known by their outside diameters, not their image sizes. The four-and-a-half-inch image orthicon actually had a smaller image area than its three-inch cousin.
The image area on a 2/3-inch tube had an 11-mm diagonal. Solid-state camera manufacturers didn’t want to make existing lenses obsolete, so they kept the same image size and called it “2/3-inch” so their customers would know what they were getting.
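The gap between the name and the glass is easy to quantify; the arithmetic below simply restates the figures above:

```python
MM_PER_INCH = 25.4

nominal = (2 / 3) * MM_PER_INCH          # what the "2/3-inch" name implies
print(f"nominal: {nominal:.1f} mm")      # 16.9 mm
actual_diagonal = 11.0                   # what a 2/3-inch sensor delivers
print(f"ratio: {actual_diagonal / nominal:.0%}")  # about 65%
```

The image diagonal, in other words, is roughly two-thirds of the figure the format name suggests.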
As chip sizes dropped, the relationship between the image diagonal and name remained relatively constant. Perhaps we should expect the same as image sizes grow.
If the move to the ever larger is ongoing, a camera manufacturer may one day experience the thrill of victory and diagonal of the feet.